On May 16, 2006, at 11:29 AM, Jesus Cea wrote:
>
> A big improvement. Now I understand.

I guess you are assuming that the cache holds pretty much everything, all the time.

> In fact yesterday I was thinking about pipelining "new_oid" requests,
> to allow full compatibility with current durus deployments. That is,
> instead of requesting a new oid and waiting for the answer, simply put
> as many requests as you need on the wire and read the answers later.
> This procedure would be 100% compatible with the current one without
> needing any upgrade. The issue, nevertheless, is deadlocking: if you
> have a lot of requests to do, your TCP window can become full because
> the server is not reading it, because its sending window is full
> sending you OIDs you are not reading because you are still making
> requests... :-). So I decided on a new "new_oids" command.
>
> I would hate doing bulk OID assignments as routine, because clients
> would lose most of the OIDs, most of the time.

I don't see why. Couldn't the client just maintain a local "cache" of OIDs to draw from, and ask for, say, 100 more when the cache runs out? You would waste no more than 100, and that only when a client is terminated.

> In fact I already hate that durus "loses" OIDs when a commit conflicts
> (why not reuse those OIDs in the following commits?). But we are using
> 64-bit OIDs, so perhaps improving that would be overengineering.

So it seems.

> In fact, the other day I was thinking about using 128/256-bit OIDs and
> giving "random" OIDs to clients when they request a new object. Why?
> Because of security: if your OID space is sparse and "random", you can
> allow access to hostile clients, since they can only read/write/delete
> objects if they know their OIDs, and OIDs would not be predictable.
> Just some food for the mind :p

That's interesting.

You still haven't said why you want your storage server 50ms away from your clients. I know there must be a reason.
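By the way, the client-side cache I have in mind is only a few lines. Here is a rough sketch, assuming the bulk "new_oids" command you describe; the class and attribute names are illustrative, not the current Durus API:

    class OidPool:
        # Hypothetical client-side pool of preallocated OIDs. The
        # storage.new_oids() bulk call is the command Jesus proposes,
        # not an existing Durus method.
        def __init__(self, storage, batch_size=100):
            self.storage = storage        # connection to the storage server
            self.batch_size = batch_size  # OIDs fetched per round trip
            self.pool = []                # locally cached, unused OIDs

        def new_oid(self):
            # Refill from the server only when the local pool is empty,
            # so at most batch_size OIDs are ever wasted, and only when
            # a client terminates without using them.
            if not self.pool:
                self.pool = list(self.storage.new_oids(self.batch_size))
            return self.pool.pop()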
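And your random-OID idea sketches just as easily, assuming 128-bit OIDs drawn from os.urandom (random_oid is my name for it, not anything in Durus):

    import os

    def random_oid(nbytes=16):
        # A 128-bit OID drawn from the OS's cryptographic RNG. With a
        # sparse, unpredictable OID space, knowing an OID works like a
        # capability: a hostile client cannot guess valid OIDs. By the
        # birthday bound, a collision is not expected before roughly
        # 2**64 allocations.
        return os.urandom(nbytes)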