durusmail: durus-users: latency improvement in "new_oid"
latency improvement in "new_oid"
Jesus Cea
2006-05-17
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

David Binger wrote:
> I guess you are assuming that the cache holds pretty much everything,
> all the time.

Currently I'm using this mostly for write-heavy applications, in this
case data collection. The little data that is read is, yes, mostly
cached. Data analysis is done locally.

In any case, a future improvement request could be cache persistence &
revalidation };-). We're not there yet, relax :-))

>> I would hate doing bulk OID assignments as a routine, because clients
>> would lose most of the OIDs, most of the time.
>
> I don't see why.  Couldn't the client just maintain a local "cache" of
> oids to
> draw from, and ask for, say 100 more when the cache runs out?  You would
> waste no more than 100, and that only when a client is terminated.
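David's scheme could be sketched in a few lines of Python. This is only an
illustration of the idea, not real Durus code: `new_oids` is a hypothetical
batched server call that doesn't exist in the actual Durus API.

```python
class OidPool:
    """Client-side OID pool as David suggests: refill in batches of 100.

    Sketch only; ``storage.new_oids(n)`` is a hypothetical batched
    request, not part of the real Durus Storage interface.
    """
    BATCH = 100

    def __init__(self, storage):
        self.storage = storage
        self._pool = []

    def next_oid(self):
        if not self._pool:
            # One server round trip reserves the next 100 OIDs.
            self._pool = list(self.storage.new_oids(self.BATCH))
        return self._pool.pop()
```

At most `BATCH` OIDs are wasted, and only when the client terminates with
a partially used pool.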

You assume that clients are long-running applications. My antispam
filter exports a durus server interface for other applications to query
statistics, update whitelists, etc. Such an application executes for
half a second and commits only a couple of objects, if any.

Or take my mailbox storage server: the LMTP server creates a new durus
connection per email stored, with no reuse. The same goes for the POP3
server.

>> In fact I already hate that
>> durus "loses" OIDs when a commit conflicts (why not reuse those OIDs
>> in the following commits?). But we are using 64-bit OIDs, so perhaps
>> improving that would be overengineering.
>
> So it seems.

:-)

Doing 1000 transactions per second and losing 100 OIDs per transaction,
you run out of OID space (64 bits) in about 5.8 million years :-). Yes,
I did the math :).
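For the curious, the back-of-the-envelope arithmetic checks out in a
couple of lines of Python:

```python
# 1000 transactions/s, each wasting 100 OIDs, against a 64-bit OID space.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365
oids_burned_per_second = 1000 * 100
years_to_exhaustion = 2 ** 64 / (oids_burned_per_second * SECONDS_PER_YEAR)
print(round(years_to_exhaustion / 1e6, 1))  # millions of years -> 5.8
```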

But the OID pool left over when a commit conflicts is useful to avoid
latency when you redo the transaction: those OIDs are already reserved
for you, so just reuse them instead of requesting new ones from the
server.

Implementation would be trivial: when a commit conflicts, durus
currently walks the newly created objects and deletes their connection
and oid attributes. I propose storing the deleted OIDs in a private
list. When you later do a new commit, use the OIDs from that list, if
available. If no OID is available, ask the server for one, just like
now.

Probably less than eight lines of code :-p
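A minimal sketch of the idea in Python. The class and attribute names here
are illustrative only (`_p_oid` and `_p_connection` in the style of Durus
persistent objects, a `storage` with a `new_oid()` method); this is not the
actual Durus internals.

```python
class Connection:
    """Sketch of the proposed OID recycling (illustrative names only)."""

    def __init__(self, storage):
        self.storage = storage   # assumed to expose new_oid()
        self._spare_oids = []    # OIDs salvaged from conflicting commits

    def _new_oid(self):
        # Reuse an OID already reserved during a failed commit, if any;
        # otherwise pay the round trip to the storage server, as now.
        if self._spare_oids:
            return self._spare_oids.pop()
        return self.storage.new_oid()

    def _abort_new_objects(self, new_objects):
        # On conflict, salvage the assigned OIDs before detaching the
        # objects, instead of discarding them.
        for obj in new_objects:
            if obj._p_oid is not None:
                self._spare_oids.append(obj._p_oid)
            obj._p_oid = None
            obj._p_connection = None
```

The only state added is the private list; the fallback path to the server
is unchanged.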

> You still haven't said why you want your storage server 50ms away
> from you clients.  I know there must be a reason.

Currently I'm using durus over USA<->Europe links to store statistics
and logs, and for things like configuration. That is, instead of reading
configuration files from local disk, daemons read their configuration
from a durus object.

Yes, I've modified servers like squid, apache, or sendmail to embed a
Python interpreter, with "call to Python" patches spread through the
code to read configuration or write logs via durus objects. Easier to do
than to explain :-)

- --
Jesus Cea Avion                         _/_/      _/_/_/        _/_/_/
jcea@argo.es http://www.argo.es/~jcea/ _/_/    _/_/  _/_/    _/_/  _/_/
jabber / xmpp:jcea@jabber.org         _/_/    _/_/          _/_/_/_/_/
                               _/_/  _/_/    _/_/          _/_/  _/_/
"Things are not so easy"      _/_/  _/_/    _/_/  _/_/    _/_/  _/_/
"My name is Dump, Core Dump"   _/_/_/        _/_/_/      _/_/  _/_/
"El amor es poner tu felicidad en la felicidad de otro" - Leibniz
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.2.2 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iQCVAwUBRGsYeplgi5GaxT1NAQIW5QP5AbqrHT3Ibmv15wA3MnTReuZJPrlzbylk
S97fnixciutbOt95YNCVtzr3FQ5uY1VFXMlvS5vwsKhzTkPmixCEbovFDVbpviUf
kqGYg+PeuDLvjG/2s5vS1jwnlioWLfcQy/+MEQ0KC/ddVhK9mZZNTdeAJVPgss+o
iwgoMtlruow=
=uX4k
-----END PGP SIGNATURE-----