durusmail: durus-users: A doc preview
A doc preview
Paolo Losi
2006-04-22
Very good and useful work.... Thank you very much...

Some more points:
- passing references between transaction boundaries
- threading support (do not exchange references between threads)

Another gotcha for ZODB is:

never try to "auto-register" objects in the database from the __init__
method. This could cause conflict resolution code to deadlock in some
cases. Does this apply to Durus as well?
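
For illustration, here is the anti-pattern in miniature. These are
plain-Python stand-ins (the `Persistent` class and `root` dict here are
not the real ZODB/Durus objects), just to show the shape of the problem:

```python
# Minimal sketch of the "auto-register in __init__" anti-pattern.
# "Persistent" and "root" are plain-Python stand-ins, not the real
# durus.persistent.Persistent or a connection's root mapping.

class Persistent:
    pass

root = {}  # stand-in for the database root mapping

class BadUser(Persistent):
    def __init__(self, name):
        self.name = name
        # Anti-pattern: the object registers itself in the database as a
        # side effect of construction.  Every caller of BadUser() now
        # writes to a shared container, inviting write conflicts (and,
        # reportedly, deadlocks in ZODB conflict resolution code).
        root[name] = self

class GoodUser(Persistent):
    def __init__(self, name):
        self.name = name  # construction has no database side effects

# The caller decides when (and whether) to register:
u = GoodUser("paolo")
root[u.name] = u
```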

I'm willing to help, but I'm very busy until May...

        Ciao e Grazie
        Paolo

Jesus Cea wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
> David, please, look at it.
>
> I'm interested in errors and English quality :-p
>
> This document is very preliminary. Work in progress.
>
>
>
> =====
>
> $Id: KNOW_HOW-DURUS 108 2006-04-21 21:47:29Z jcea $
> #
> # (c) 2006 jcea@argo.es / jcea@hispasec.com
> #
> # PGP/GPG Public Key:
> # pub   1024R/9AC53D4D 1995-01-01
> #       Key fingerprint = F4 07 90 C2 58 86 8A 75  45 40 33 1C 72 4C E5 E1
> # uid                  Jesus Cea Avion 
> # uid                  Jesus Cea Avion 
> #
> # This product is covered by the GNU GENERAL PUBLIC LICENSE, VERSION 2.
> # For more details, read the file "LICENSE" in the distribution.
>
>
> This document is not about the BerkeleyDB storage engine for
> Durus; it tries to clarify Durus operation and inner workings.
>
> For easy navigation, each section of this document begins with
> three "#" characters ("###"). You can use that sequence to jump
> around this text. Each section also documents the date of its
> last modification.
>
>
> ### Concurrency using the Durus Storage Server (20060421)
>
> The Durus Storage Server allows a single storage to be shared
> by several (remote) clients. So you can access the storage
> remotely, and writes by one client will be visible to the
> others.
>
> The Durus Storage Server listens for requests from all connected
> clients, but when a request arrives, the server is busy ONLY
> with that client. Other requests are queued until the current
> one finishes. If that client is very slow, or disk access is
> slow, the server sits idle even while other clients are
> demanding attention.
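>
> As a toy illustration of this serialized service loop (plain
> Python, no Durus API involved; the client names and costs are
> made up):

```python
import queue

# Toy model of the one-request-at-a-time behaviour described above:
# the server pops one request, serves it to completion, and only then
# looks at the queue again.  A slow client therefore delays everyone
# queued behind it.

requests = queue.Queue()
for client, cost in [("fast-1", 1), ("slow", 10), ("fast-2", 1)]:
    requests.put((client, cost))

clock = 0
finish_times = {}
while not requests.empty():
    client, cost = requests.get()
    clock += cost            # the server is busy ONLY with this client
    finish_times[client] = clock

# "fast-2" finished at t=12 even though its own request cost 1,
# because it had to wait behind "slow".
print(finish_times)          # → {'fast-1': 1, 'slow': 11, 'fast-2': 12}
```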
>
> Hopefully a future Durus release will be able to process multiple
> read requests in parallel. Each client would wait less, and the
> disk would be better utilized (it is better to sort multiple
> seeks to serve several requests than to perform a long seek to
> serve only one).
>
> Remember, nevertheless, that Durus clients keep a local cache to
> avoid hitting the storage server. Sizing that cache, and
> learning how to use it effectively, are important issues in any
> demanding Durus deployment.
>
>
> ### ACID using the Durus Storage Server (20060421)
>
> ACID = Atomicity, Consistency, Isolation, Durability
>
> DSS = Durus Storage Server
>
> Since the DSS only processes a request from a single client at a
> time, commits are atomic. No other client will be served until
> the commit completes.
>
> Durability is guaranteed by the Storage Backend used by Durus.
> Some backends (for example, my BerkeleyDB Storage backend) can be
> configured to not guarantee Durability in exchange for (vastly)
> improved performance. Some applications can take advantage of
> that. Others require durability.
>
> Transactions under the DSS are Isolated. If you don't play any
> dirty tricks, the DSS guarantees "degree 3 isolation". That is,
> you only see committed data, and reads are repeatable.
>
> You shouldn't do it, but if you manually request a cache shrink,
> the DSS only guarantees "degree 2 isolation". That is, you could
> get different data in two reads of the same object.
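>
> A toy model of the two isolation degrees (plain dicts standing in
> for the server and the client cache; none of this is real Durus
> API):

```python
# Toy model of the isolation degrees described above.  "server" and
# "cache" are plain dicts standing in for the storage server and the
# client cache.

server = {"counter": 1}
cache = {}

def read(oid):
    # A client reads through its cache; a cache hit never touches
    # the server.
    if oid not in cache:
        cache[oid] = server[oid]
    return cache[oid]

first = read("counter")
server["counter"] = 2          # another client commits a change

# Degree 3: within one transaction the cached value is repeatable.
second = read("counter")
assert first == second == 1

# Degree 2: manually shrinking the cache forgets the object, so the
# next read can return different data inside the same transaction.
cache.clear()                  # stand-in for a manual cache shrink
third = read("counter")
assert third == 2
```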
>
> Consistency is also provided by the Storage Backend used by
> Durus. It implies that no transaction can leave the Storage in a
> physically inconsistent state. If the application logic has
> integrity constraints, they must be enforced by the application.
>
>
> ### Durus Storage Server conflicts (20060421)
>
> Durus clients implement a local cache to improve performance by
> avoiding DSS accesses. Objects fetched or written are kept in
> the cache. The cache size is configurable, and evictions are
> transparent.
>
> The eviction routine can be called directly (losing "degree 3
> isolation") or, better, run automatically when you do a
> transaction commit or abort (keeping "degree 3 isolation").
>
> Cache consistency is checked when you do a commit or abort.
>
> If you do an abort, locally modified objects are purged. If the
> cache has objects that another client modified, they are also
> purged. So, after an abort, your cache only keeps objects that
> are unmodified both locally and remotely.
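>
> A toy model of the purge an abort performs (plain Python, not
> Durus API; the object ids and sets are made up for illustration):

```python
# Toy model of what an abort does to the client cache, as described
# above: both locally modified and remotely invalidated objects are
# purged; only clean objects survive.

cache = {"a": "clean", "b": "locally-modified", "c": "clean"}
locally_modified = {"b"}
remotely_invalidated = {"c"}   # another client committed a change to "c"

def abort():
    for oid in locally_modified | remotely_invalidated:
        cache.pop(oid, None)   # purge both kinds of stale object
    locally_modified.clear()
    remotely_invalidated.clear()

abort()
print(sorted(cache))           # → ['a']
```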
>
> If you do a commit, it will fail if your cache holds any object
> remotely modified by another client, even if you didn't use that
> object in the current transaction. That can be an issue, and I
> would like to improve it. If your commit conflicts, the eviction
> procedure is the same as in the abort case.
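>
> A common way to cope with a conflicting commit is an
> abort-and-retry loop. The sketch below simulates the conflict
> with a fake connection class (in real Durus code the exception
> would come from durus.error, e.g. ConflictError; the fake class
> and its behaviour are made up for illustration):

```python
# Sketch of the usual abort-and-retry pattern around a conflicting
# commit.  FakeConnection simulates a connection whose first commit
# conflicts and whose second commit succeeds; the retry loop is the
# point of the example.

class ConflictError(Exception):
    pass

class FakeConnection:
    def __init__(self):
        self.failures = 1      # first commit conflicts, then succeeds
        self.committed = False

    def abort(self):
        # As described above: locally modified and remotely
        # invalidated objects would be purged from the cache here.
        pass

    def commit(self):
        if self.failures:
            self.failures -= 1
            raise ConflictError("cache holds remotely modified objects")
        self.committed = True

connection = FakeConnection()
for attempt in range(5):
    try:
        # ... modify persistent objects here ...
        connection.commit()
        break                  # commit went through
    except ConflictError:
        connection.abort()     # purge, then redo the work and retry
```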
>
> If your commit succeeds, your cache is untouched. <- VERIFY THIS
>
> TODO: document the reasons behind the "degree 2" and "degree 3" isolation levels.
>
> TODO: doing an "abort" before beginning a transaction.
>
> =====
>
> - --
> Jesus Cea Avion                         _/_/      _/_/_/        _/_/_/
> jcea@argo.es http://www.argo.es/~jcea/ _/_/    _/_/  _/_/    _/_/  _/_/
> jabber / xmpp:jcea@jabber.org         _/_/    _/_/          _/_/_/_/_/
>                                _/_/  _/_/    _/_/          _/_/  _/_/
> "Things are not so easy"      _/_/  _/_/    _/_/  _/_/    _/_/  _/_/
> "My name is Dump, Core Dump"   _/_/_/        _/_/_/      _/_/  _/_/
> "El amor es poner tu felicidad en la felicidad de otro" - Leibniz
> -----BEGIN PGP SIGNATURE-----
> Version: GnuPG v1.4.2.2 (GNU/Linux)
> Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org
>
> iQCVAwUBRElVRJlgi5GaxT1NAQLElgP/SBO9/eVnGdg1l2FlaRUpY+c6gcPyWFFN
> eZ/IfP2sOhkJ7pp+s1IzufLPQr1Y8SjuQ6Q7q/0mqAzR2U9BGgtGPvzYAd0Pzbkc
> 97K5FL/QovhlM6JDfsKtphFoAxH85mDy3R2J0NIfID4BVfGIUzF5/ab21w5ljVY+
> pJYiiwPohHw=
> =4BVe
> -----END PGP SIGNATURE-----
> _______________________________________________
> Durus-users mailing list
> Durus-users@mems-exchange.org
> http://mail.mems-exchange.org/mailman/listinfo/durus-users
