durusmail: durus-users: My wish-list for Durus 3.6 (20061031)
Jesus Cea
2006-10-31

David Binger wrote:
>> 1. Be able to play with threads in the same "safe" ways that Durus
>> allowed pre-3.5 (it would be nice to have this in Durus 3.5.1 :-). The
>> patch David sent when 3.5 came out seems to work fine.
>
> I don't remember the details of that, but I'll assume that we have this
> covered for the next release.

Hope so. Patch details in thread at
http://mail.mems-exchange.org/durusmail/durus-users/698/

>> 2. Fully implement "self.storage.sync()" support in
>> serverstorage/filestorage.
>
> I think I have this working now in the devel code.

Nice to know. This single feature alone would make this Durus release
worthy of a 4.0 version number :-).

> In particular,
> I can run a StorageServer whose underlying storage is itself
> a ClientStorage connected to another StorageServer, and I can run
> the stress test on connections to a master and multiple secondary
> storage servers.

Then a "caching client storage" would be nice };-). Caching at this
level is trivial compared to "connection" caching. In fact we could do
"disk" caching to avoid needing to shrink the cache. The cache would
only vanish when the caching proxy<->server connection breaks.

We would get load sharing with a single sheet of paper's worth of code :-)

This approach also allows for a "per client" serverstorage, making it
possible to serve multiple read requests in parallel (if the data is
cached in the leaf storage servers). Good!
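
A minimal sketch of such a caching proxy, assuming a simplified
load()/sync() interface (the real Durus storage API differs; names here
are illustrative only):

```python
import threading

class CachingStorageProxy:
    """Sketch of a "caching client storage": it sits between a local
    StorageServer and a remote master, caching records so repeated
    reads never leave the box.  load()/sync() are simplified
    stand-ins for the real Durus storage interface."""

    def __init__(self, backend):
        self.backend = backend
        self.cache = {}                  # oid -> record
        self.lock = threading.Lock()

    def load(self, oid):
        with self.lock:
            if oid in self.cache:
                return self.cache[oid]   # hit: no upstream round-trip
        record = self.backend.load(oid)  # miss: ask the master
        with self.lock:
            self.cache[oid] = record
        return record

    def sync(self):
        # Invalidations from the master evict only the stale records;
        # the rest of the cache survives across transactions.
        invalid = self.backend.sync()
        with self.lock:
            for oid in invalid:
                self.cache.pop(oid, None)
        return invalid

    def connection_lost(self):
        # The cache only vanishes when the proxy<->server link breaks.
        with self.lock:
            self.cache.clear()
```

Unlike a connection cache there is no need to shrink it: evict on
invalidation, drop everything on disconnect.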

>> - Notify clients of garbage-collected objects, so they can update their
>> cache to a) reclaim memory early and b) avoid "resurrecting" a dead
>> object (for example, by incorrectly keeping a nonpersistent reference
>> around).
>
> I have FileStorage reporting removed oids on the first call after a pack,
> so we will have this early notification in the "normal" durus client/server
> configuration.

OK. I will update my storage backend when beta code is available.

This single issue solves a lot of problems in my current backend,
especially "dangling" references when a client deletes a persistent
object but keeps an (illegal and forbidden) "nonpersistent" reference
across transactions. Such a client is able to "resurrect" a dead object
already queued for deletion by the background garbage collector.

With this new feature, any commit involving that "zombie" object will
generate a conflict, and any attempt to use its state will generate a
read conflict.

My code will be cleaner. Good!

>> - Several serverstorage instances in a multithreaded server could share
>> a single backend instance (if it is thread-safe). You could use a server
>> instance per client, for example, allowing multiple read requests in
>> parallel (if the backend allows). The synchronization point would be the
>> backend (inside the "begin" method), so commits would be serialized
>> correctly.
>
> You can work out those "ifs".

Sure I will :-).

>> - Be able to "replicate" a durus storage via the backend's ability to
>> propagate changes to other (remote) processes. For example, BerkeleyDB
>> supports replication natively.
>
> Replication does not really require any code changes.
> A replication process can transfer objects from one storage
> to another, and use invalidations from the master to replicate
> to the mirror.  This has always been possible in Durus, and
> pretty simple.

Yes, but that replicated copy usually can't accept writes. That is, it
is read-only (from its clients' perspective). This is more about
backup/archiving than load sharing.
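
The one-way scheme David describes (copy the records named by the
master's invalidations into the mirror) could be sketched like this;
`DictStorage` and its method names are illustrative stand-ins, not the
Durus API:

```python
class DictStorage:
    """Minimal in-memory stand-in for a storage (illustrative only)."""
    def __init__(self):
        self.records = {}
        self.invalid = set()     # oids changed since the last sync()

    def store(self, oid, record):
        self.records[oid] = record
        self.invalid.add(oid)

    def load(self, oid):
        return self.records[oid]

    def sync(self):
        changed, self.invalid = self.invalid, set()
        return changed

def replicate(master, mirror):
    # Invalidation-driven mirroring: copy every record the master
    # reports as changed since the last poll.  The mirror stays
    # read-only from its own clients' point of view.
    for oid in master.sync():
        mirror.store(oid, master.load(oid))
```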

New code would allow true read/write replication: multiple servers, all
of them accepting write requests.

BerkeleyDB already does replication at the library level, managing
low-level issues like resynchronization, master election, etc.

>> - Be able to implement multithreaded filestorage processes, sharing a
>> single backend but keeping separate caches/connections to it. Remember
>> that a single storage pool can only be accessed by a single filestorage
>> instance. This improvement would allow opening several filestorages to
>> the same data. Useful if you have a multithreaded application and can't
>> share a single filestorage instance (very problematic with the current
>> durus 3.5).
>
> That sounds dangerous to me.

Actually it is simple to do.

Mainly proxying a filestorage with an object that "mutexes" almost all
filestorage method calls. You can then use those proxies, all sharing a
single mutex, to share a single filestorage between threads.
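
A sketch of that proxying trick, assuming a generic storage object
(no specific Durus method names): every proxy funnels its calls
through one shared mutex.

```python
import threading

class LockedStorageProxy:
    """Sketch of "mutexing" almost all filestorage method calls:
    the proxy forwards attribute access to the wrapped storage, but
    runs every method call while holding a lock shared by all the
    proxies around that storage."""

    def __init__(self, storage, lock):
        self._storage = storage
        self._lock = lock

    def __getattr__(self, name):
        method = getattr(self._storage, name)
        def locked_call(*args, **kwargs):
            with self._lock:
                return method(*args, **kwargs)
        return locked_call

# One filestorage, one mutex, one proxy per thread:
#   shared = threading.Lock()
#   proxy_for_thread = LockedStorageProxy(the_filestorage, shared)
```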

> I think we've changed it so that get() only takes one load.

Good!

> Don't use connection.get() like a weak reference.  If you want
> weak references, use your own application's identifiers.
> Don't bet your application that Durus identifiers won't change.

I see your point, David. I documented the feature in my "DURUS KNOW
HOW", writing very clearly that the Durus developers don't like it.
There are other "problems", like not being able to use that "weak
reference" when first creating an object, since it has no OID until the
commit.

I understand the tradeoffs and the risks.

>> 4. "connection" objects should provide a method to query how many cache
>> misses we have had since that connection's instantiation. With this
>> functionality, the client could tune its cache size "automagically".
>> Being able to see the "accumulated" idle time spent waiting for remote
>> objects would be nice too.
>
> Nothing has been done on any of this, but it seems possible that
> it might get done.

Seems fairly simple to do.
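
A sketch of what the instrumentation could look like;
`InstrumentedCache` and its counters are hypothetical, not the Durus
Connection API:

```python
class InstrumentedCache:
    """Sketch of per-connection cache statistics: count hits and
    misses since instantiation so a client can tune its cache size
    "automagically".  The loader callable stands in for fetching a
    remote object from the storage server."""

    def __init__(self, loader):
        self.loader = loader
        self.cache = {}
        self.hits = 0
        self.misses = 0

    def get(self, oid):
        if oid in self.cache:
            self.hits += 1
            return self.cache[oid]
        self.misses += 1                 # this would also be the place
        obj = self.loader(oid)           # to accumulate idle/wait time
        self.cache[oid] = obj
        return obj

    def miss_rate(self):
        total = self.hits + self.misses
        return self.misses / total if total else 0.0
```

A client could grow its cache whenever miss_rate() stays above some
threshold, shrinking it again when memory pressure demands.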

>> 5. A "mutable data changed" notification mechanism for BTree.
>
> Done.

:-).

>> 6. Add a "close" method to the storage backend interface. It could do
>> things like file-lock cleanup, background-thread stopping, or
>> replication exclusion.
>
> Done.

I need this to be able to shut down my database storage cleanly,
considering that my storage backend now has -optionally- a few threads
doing background work like garbage collection or database checkpointing.
Or, in the future, database replication.
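
What close() enables in a backend like that might look like the
following sketch (the class and its internals are illustrative; the
point is signalling background workers and joining them before
releasing resources):

```python
import threading

class BackendWithWorkers:
    """Sketch of a storage backend with an optional background thread
    (garbage collection, checkpointing...) that close() must stop
    cleanly before releasing file locks."""

    def __init__(self):
        self._stop = threading.Event()
        self._worker = threading.Thread(target=self._background_work)
        self._worker.daemon = True
        self._worker.start()

    def _background_work(self):
        # wait() doubles as the shutdown check and the work interval.
        while not self._stop.wait(timeout=0.01):
            pass  # garbage collection / checkpointing would go here

    def close(self):
        self._stop.set()      # ask the worker to finish...
        self._worker.join()   # ...and wait until it actually has
        # ...then release file locks, flush, leave the replication group
```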

>> 7. "Factorize" the durus server socket management (in particular,
>> socket creation for incoming connections and the socket "select") to be
>> able to reuse the server code over other communication media, like
>> shared memory, intraprocess queues or mmapped files.
>
> Nothing done here.

Let it go into the 3.7 release (or beyond). Durus 3.6 already seems a
more than decent upgrade. I wouldn't like to delay it a single week to
implement this feature.

>> 8. A precompiled Durus distribution for Windows users. Please!
>
> It would be nice if someone else would provide this.
> I don't think we'll offer binary distributions for any platform
> from here.

Maybe somebody could do a Windows release and post the link to this
list. I currently have a few "prospective" clients interested in Durus,
but they require Windows deployment, and we are a Microsoft-free shop
here :-(.

>> 9. Be able to raise a "read only" exception when a client requests new
>> OIDs or tries a commit with changed objects. This would allow for
>> read-only connections, without setting the entire storage "read-only".
>> Also, currently if a storage is read-only, clients are disconnected
>> when trying to commit changes, with no real indication of the problem.
>
> I'll try to act on this.

Mainly cosmetic, but seems simple to do.
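
A sketch of the behaviour; `ReadOnlyError` and the method names are
hypothetical, not what Durus will actually ship:

```python
class ReadOnlyError(Exception):
    """Raised instead of silently disconnecting the client."""

class ReadOnlyConnection:
    """Sketch of a per-connection read-only mode: reads pass through,
    while OID allocation and commits of changed objects raise a clear
    exception.  The storage itself stays writable for other
    connections."""

    def __init__(self, storage):
        self.storage = storage

    def load(self, oid):
        return self.storage.load(oid)        # reads are always fine

    def new_oid(self):
        raise ReadOnlyError("read-only connection: cannot allocate OIDs")

    def commit(self, changed_oids):
        if changed_oids:
            raise ReadOnlyError("read-only connection: cannot commit changes")
        # an empty commit (no changed objects) is harmless
```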

> Thanks for your ideas.

Thanks to all of you for implementing them. Durus is wonderful. Durus
developers rock!

--
Jesus Cea Avion                         _/_/      _/_/_/        _/_/_/
jcea@argo.es http://www.argo.es/~jcea/ _/_/    _/_/  _/_/    _/_/  _/_/
jabber / xmpp:jcea@jabber.org         _/_/    _/_/          _/_/_/_/_/
                               _/_/  _/_/    _/_/          _/_/  _/_/
"Things are not so easy"      _/_/  _/_/    _/_/  _/_/    _/_/  _/_/
"My name is Dump, Core Dump"   _/_/_/        _/_/_/      _/_/  _/_/
"El amor es poner tu felicidad en la felicidad de otro" - Leibniz