Durus basics
David Binger
2005-10-09
On Oct 9, 2005, at 4:51 PM, Oleg Broytmann wrote:

> And the next round, this time I am closer to Durus.
>
>    If I have a deep hierarchy a.b.c.d, and fetch the object "a", pickle
> (and hence Durus and ZODB) brings me all the objects. If I need a lazy

"all the objects"? Not if a.b is an instance of Persistent (or a
subclass).

> attribute that does not fetch the referenced object until I specifically
> say "a.b", I have to implement it myself using properties and store an oid
> or other index of "b", right?

No.  You just assign the instance to the attribute.
Make b an instance of some subclass of Persistent, and its state won't
be loaded (it comes back as a "ghost") until you ask for b.c.
a = PersistentThing()
a.b = PersistentThing()
a.b.c = PersistentThing()
commit()
If you restart and evaluate "a.b", the state of b is not loaded, and neither
is the state of c.
When you evaluate "a.b.c", the state of b is loaded, but not the state of c.
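
Roughly like this, assuming a FileStorage at /tmp/test.durus ("Thing" here is
just an illustrative Persistent subclass, not part of Durus):

from durus.file_storage import FileStorage
from durus.connection import Connection
from durus.persistent import Persistent

class Thing(Persistent):
    pass

connection = Connection(FileStorage("/tmp/test.durus"))
root = connection.get_root()     # the root persistent mapping

a = Thing()
a.b = Thing()
a.b.c = Thing()
root["a"] = a                    # reachable from the root, so commit() saves it
connection.commit()

In a fresh process, root["a"] loads the state of a on first use; a.b comes
back as a ghost whose state is only loaded when you touch it, for example by
evaluating a.b.c.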

>    Are btrees, PersistentDicts, PersistentLists lazy in that sense? Do they
> fetch all objects at once, or fetch them one by one as needed?

All Persistent instances, including BTrees and the nodes of BTrees, are
lazy-loading.  The __dict__ of each instance is not loaded until you
try to find something in it (or set something in it).
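
For example (a sketch reusing the connection, root, and Thing from the sketch
above; durus.btree provides the BTree class):

from durus.btree import BTree

index = BTree()
root["index"] = index
for i in range(1000):
    index[i] = Thing()
connection.commit()

Later, looking up index[42] loads only the BTree nodes on the path to that
key, and the value itself comes back as a ghost until its attributes are
touched.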

>
> How can I serialize parallel writes? Two processes want to do
>    btree = root[btree_name]
>    btree[key] = value
>    connection.commit()
> in parallel with different keys; it could be a PersistentDict or a
> PersistentList instead of a btree. Should I write my own server that accepts
> connections from all processes and serializes writing to the DB?

The Durus server process serializes writes.
A connection gets a conflict error when there is a conflict, and in that
case the application should call abort() to invalidate its outdated
instances and retry the computation.
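
A retry loop might look something like this (a sketch, assuming a client
Connection to the Durus server, a BTree already stored at the root under
"btree", and that durus.error.ConflictError is the exception raised on a
conflicting commit):

from durus.error import ConflictError

def add_entry(connection, key, value, attempts=5):
    for attempt in range(attempts):
        try:
            btree = connection.get_root()["btree"]
            btree[key] = value
            connection.commit()
            return
        except ConflictError:
            connection.abort()   # drop outdated state; it reloads on next access
    raise RuntimeError("still conflicting after %d attempts" % attempts)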


>
>    Can I have a serial (autoincremented) counter? Should I? They are useful
> in cases where I don't have any interesting distinguishing features in my
> objects. For example, if I want to store a list of FTP servers I can
> distinguish them by their URLs; hence I can use URLs as indices. But if I
> want to store access_log elements, there are no such distinguishing
> elements. Even the full tuple (time, client IP, URL) can occur many times
> (many concurrent queries from a program like ApacheBenchmark, or a huge
> network behind a NAT with a single external IP). An autoincremented counter
> seems to be the best way to generate names (indices).

I think I would do that as a BTree whose values are Persistent instances
with a single "int" attribute.  Counters that change value frequently
should be isolated, whenever possible, on separate persistent instances
so that conflicts are less likely and less expensive.


>    The only counter I've found is .new_oid()...

That's not what you want.  Durus doesn't provide counters.
That would be provided by an application-level object.
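
A sketch of such an application-level counter (reusing the connection, root,
and Thing from the earlier sketches; keeping the counter on its own Persistent
instance means a bump conflicts only with other bumps, not with the data
itself):

from durus.persistent import Persistent
from durus.btree import BTree

class Counter(Persistent):
    def __init__(self):
        self.value = 0
    def next(self):
        self.value += 1
        return self.value

# one-time setup
root["access_log"] = BTree()
root["access_log_counter"] = Counter()
connection.commit()

# appending one log entry
key = root["access_log_counter"].next()
root["access_log"][key] = Thing()   # or your own Persistent log-entry class
connection.commit()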

>
>    How can I know programmatically if packing is finished? Necessary for an
> administration script that initiates a pack, waits, and backs up the
> .prepack file.

Not sure.  We just back up the main file directly by copying it.
I think the existence of the prepack file may indicate that the pack is
complete.


