On May 17, 2006, at 11:30 AM, Jesus Cea wrote:

> Some of my write-intensive (or read-intensive with poor hit ratio)
> applications are already using multiple Durus backends to improve
> parallelism. For example, the mailbox server has about 15-20 Durus
> instances. Each incoming mail is stored in the "home" Durus storage
> server for that user.

That sounds good.

> And you already know that I would work on a more "parallel" Durus
> implementation :). For example, reads can be trivially parallelized.
>
>> You could, however, keep track of the oids consumed during the
>> serialization, and, in the case of a conflict, these oids could be
>> restored to the client's pool of allocated oids.
>
> That was my idea. When you do the abort, instead of simply clearing
> the OIDs in the newly created objects, you can do the clearing as
> usual but keep a list of the "thrown away" OIDs, which can be reused
> in a future transaction, probably for unrelated objects.

Yes, but you suggested this be done by inspecting and clearing
attributes on the newly created objects. It seems better to record the
oids given out as the transaction is serialized in the end() call.

>> It seems like your applications are network file systems, except
>> that the file system has no directories and only a very particular
>> type of file.
>
> You could see it that way. I call it an object store :P
>
> In fact, I have also "patched" Durus to allow clients to be notified
> when certain objects are modified, so I also provide a
> publish-and-subscribe paradigm :-).

Your clients are "smarter" than mine. Do you have a procedure for
managing schema/code change?
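The "home" storage idea above amounts to sharding users across several
Durus instances. A minimal sketch of one way to do that, assuming a
fixed list of backend addresses (the hostnames, port, and the
home_backend() helper are hypothetical, not part of Durus):

```python
import hashlib

# Hypothetical backend list; real deployments would read this from
# configuration. 2972 is used here as a placeholder port.
BACKENDS = [("durus%d.example.net" % i, 2972) for i in range(16)]

def home_backend(user):
    """Deterministically map a user onto one of the Durus instances."""
    digest = hashlib.md5(user.encode("utf-8")).hexdigest()
    return BACKENDS[int(digest, 16) % len(BACKENDS)]

# The same user always lands on the same backend, so every piece of
# that user's mail goes to one storage server:
assert home_backend("alice") == home_backend("alice")
```

A hash over the user name keeps the mapping stable without any shared
lookup table, at the cost of rehashing users if the backend list grows.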
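The OID-recycling scheme discussed above can be sketched as a
connection-side allocator that records the oids handed out while the
transaction is serialized and, on a conflict, returns them to a local
free pool. This is an illustration of the idea, not Durus's actual
API; the class and method names are invented:

```python
class OidAllocator:
    def __init__(self, batch=64):
        self._pool = []       # OIDs available for (re)use
        self._pending = []    # OIDs consumed by the in-flight commit
        self._next = 0        # stands in for a server-side counter
        self._batch = batch

    def _fetch_from_server(self):
        # A real client would ask the storage server for fresh OIDs;
        # here we simulate it with a monotonically increasing counter.
        start = self._next
        self._next += self._batch
        return list(range(start, start + self._batch))

    def allocate(self):
        """Hand out one OID, recording it as pending for this commit."""
        if not self._pool:
            self._pool = self._fetch_from_server()
        oid = self._pool.pop()
        self._pending.append(oid)
        return oid

    def commit_succeeded(self):
        # The OIDs are now durable on the server; forget them.
        self._pending = []

    def commit_conflicted(self):
        # Instead of leaking the consumed OIDs, return them to the
        # pool so a later, probably unrelated, transaction reuses them.
        self._pool.extend(self._pending)
        self._pending = []

alloc = OidAllocator()
a, b = alloc.allocate(), alloc.allocate()
alloc.commit_conflicted()          # conflict: both OIDs go back
assert alloc.allocate() in (a, b)  # and are reused next time
```

Recording the oids as they are consumed during end() avoids having to
inspect and clear attributes on the newly created objects afterwards.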
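The publish-and-subscribe patch mentioned above could work along these
lines: the server keeps, per oid, the set of clients that asked to be
notified, and pings them after a commit touches that oid. This is a
guess at the shape of the patch, not its actual code; the class name
and callbacks are invented:

```python
from collections import defaultdict

class NotifyingServer:
    def __init__(self):
        self._subscribers = defaultdict(set)  # oid -> set of callbacks

    def subscribe(self, oid, callback):
        """Register interest in modifications to one object."""
        self._subscribers[oid].add(callback)

    def commit(self, changed_oids):
        # After a successful commit, notify every subscriber of each
        # modified object.
        for oid in changed_oids:
            for callback in self._subscribers[oid]:
                callback(oid)

server = NotifyingServer()
seen = []
server.subscribe(42, seen.append)
server.commit([7, 42])   # only oid 42 has a subscriber
assert seen == [42]
```

In a networked setting the callback would be a message pushed down the
client connection rather than a direct function call, but the
bookkeeping is the same.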