durusmail: durus-users: latency improvement in "new_oid"
latency improvement in "new_oid"
Jesus Cea
2006-05-16

David Binger wrote:
> I think ZODB gets new oids in blocks, and we could
> do it too.  Would that cut the time of a distant commit
> of 100 new instances in half?  I think it would still
> be too slow.

No idea about ZODB.

I have a 50 ms round-trip time (fairly low for an internet long haul), so
committing 100 new objects requires at least 100*0.05 = 5 seconds.

If I could issue a single "new_oids" command that returns all 100 new
OIDs at once, I would get them in only 50 ms.

So the time goes from 5 seconds down to 50 ms for the OID request, 50 ms
for the SYNC and 50 ms for the commit, 150 ms in total. If I could
pipeline commands, I could do the OID request and the sync in a single
round trip, and the commit in a second one, for 100 ms.

A big improvement.
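
To put numbers on it, a throwaway sketch (plain Python, not Durus code;
the 50 ms round trip and the hypothetical batched "new_oids" call are
just the assumptions above):

    ROUND_TRIP = 0.05   # 50 ms measured round-trip time

    def commit_time_one_by_one(new_objects):
        # One "new_oid" round trip per object, plus one SYNC and one commit.
        return new_objects * ROUND_TRIP + 2 * ROUND_TRIP

    def commit_time_batched(new_objects):
        # A single hypothetical "new_oids" round trip covers every object.
        return ROUND_TRIP + 2 * ROUND_TRIP

    print(commit_time_one_by_one(100))   # ~5.1 seconds
    print(commit_time_batched(100))      # ~0.15 seconds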

In fact, yesterday I was thinking about pipelining "new_oid" requests, to
keep full compatibility with current Durus deployments. That is, instead
of requesting a new OID and waiting for the answer, simply put as many
requests as you need on the wire and read the answers later. This
procedure would be 100% compatible with the current one without needing
any upgrade. The issue, nevertheless, is deadlock: if you have a lot of
requests to send, your TCP window can become full because the server is
not reading it, because its send window is full of OIDs that you are not
reading, because you are still busy sending requests... :-). So I
decided on a new "new_oids" command instead.
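
For what it's worth, the deadlock risk looks roughly like this (a
made-up one-byte request and 8-byte reply, NOT the real Durus wire
protocol):

    import socket

    def read_exact(sock, n):
        # socket.recv() may return fewer bytes than requested.
        data = b''
        while len(data) < n:
            chunk = sock.recv(n - len(data))
            if not chunk:
                raise EOFError('connection closed')
            data += chunk
        return data

    def pipelined_new_oids(host, port, count):
        sock = socket.create_connection((host, port))
        try:
            # Send every request before reading any reply.  With a large
            # count, the server's replies fill its send window because we
            # are not reading them yet, our own requests fill ours because
            # the server is still writing, and both sides block: deadlock.
            sock.sendall(b'O' * count)
            return [read_exact(sock, 8) for _ in range(count)]
        finally:
            sock.close()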

I would hate doing bulk OID assignments as routine, because clients would
lose most of the OIDs most of the time. In fact, I already hate that
Durus "loses" OIDs when a commit conflicts (why not reuse those OIDs in
the following commits?). But we are using 64-bit OIDs, so perhaps
improving that would be overengineering.

In fact, the other day I was thinking about using 128/256-bit OIDs and
giving "random" OIDs to clients when they request a new object. Why?
Security: if your OID space is sparse and "random", you can allow access
to hostile clients, since they can only read/write/delete objects if
they know their OIDs, and the OIDs would not be predictable. Just some
food for thought :p
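
A minimal sketch of that kind of allocation (nothing Durus-specific,
just a sparse OID drawn from the OS random source):

    import os

    def random_oid(bits=128):
        # Draw an OID uniformly from a sparse 128-bit (or 256-bit) space,
        # so valid OIDs cannot be guessed by a hostile client.
        return os.urandom(bits // 8)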

--
Jesus Cea Avion                         _/_/      _/_/_/        _/_/_/
jcea@argo.es http://www.argo.es/~jcea/ _/_/    _/_/  _/_/    _/_/  _/_/
jabber / xmpp:jcea@jabber.org         _/_/    _/_/          _/_/_/_/_/
                               _/_/  _/_/    _/_/          _/_/  _/_/
"Things are not so easy"      _/_/  _/_/    _/_/  _/_/    _/_/  _/_/
"My name is Dump, Core Dump"   _/_/_/        _/_/_/      _/_/  _/_/
"El amor es poner tu felicidad en la felicidad de otro" - Leibniz