And the next round; this time I am closer to Durus.

If I have a deep hierarchy a.b.c.d and fetch the object "a", pickle (and hence Durus and ZODB) brings in all the referenced objects. If I need a lazy attribute that does not fetch the referenced object until I explicitly say "a.b", I have to implement it myself using properties and store an oid or some other index of "b", right? (I sketch what I mean in the P.S. below.) Are BTrees, PersistentDicts and PersistentLists lazy in that sense? Do they fetch all their objects at once, or one by one as needed?

How can I serialize parallel writes? Two processes want to do

    btree = root[btree_name]
    btree[key] = value
    connection.commit()

in parallel with different keys; it could be a PersistentDict or a PersistentList instead of a BTree. Should I write my own server that accepts connections from all the processes and serializes writing to the DB? (The scenario is also spelled out in the P.S.)

Can I have a serial (autoincremented) counter? Should I? Counters are useful when my objects have no interesting distinguishing feature. For example, if I want to store a list of FTP servers, I can distinguish them by their URLs and hence use the URLs as keys. But if I want to store access_log entries, there is no such distinguishing element: even the full tuple (time, client IP, URL) can occur many times (many concurrent requests from a program like ApacheBenchmark, or a huge network behind a NAT with a single external IP). An autoincremented counter seems to be the best way to generate names (keys). The only counter I've found is .new_oid()... (The kind of counter I mean is sketched in the P.S., too.)

How can I know programmatically that packing has finished? I need this for an administration script that initiates a pack, waits for it, and then backs up the .prepak file.

Oleg.
-- 
     Oleg Broytmann            http://phd.pp.ru/            phd@phd.pp.ru
           Programmers don't die, they just GOSUB without RETURN.
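
P.S. Here is roughly what I mean by the lazy attribute. It is only a sketch: I am guessing at the _p_oid and _p_connection attribute names and assuming connection.get(oid) loads a single object by its oid without pulling in its sub-objects.

    from durus.persistent import Persistent

    class A(Persistent):
        # Store only the oid of "b"; load "b" the first time a.b is read.

        def __init__(self, b):
            # assumes "b" has already been committed, so it has an oid
            self._b_oid = b._p_oid

        def _get_b(self):
            # get(oid) should return just that one object from the storage
            return self._p_connection.get(self._b_oid)

        b = property(_get_b)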
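
The parallel-write scenario, spelled out. Each of the two processes would do roughly the following; I assume the usual ClientStorage/Connection pair talking to a running durus storage server, and since I am not sure of the exact constructor arguments I just use the defaults. The names btree_name, key and value are placeholders.

    from durus.client_storage import ClientStorage
    from durus.connection import Connection

    # each process opens its own connection to the storage server
    connection = Connection(ClientStorage())
    root = connection.get_root()

    btree_name = 'access_log'          # placeholder
    key, value = 42, 'some record'     # placeholders; each process uses different keys

    btree = root[btree_name]
    btree[key] = value
    connection.commit()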
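
And the kind of counter I have in mind, again only a sketch. I expect two processes incrementing it concurrently would conflict at commit time, which is really what I am asking about.

    from durus.persistent import Persistent

    class Counter(Persistent):
        # A persistent autoincrement counter for generating keys.

        def __init__(self):
            self.value = 0

        def next(self):
            # setting the attribute should mark the object as changed,
            # so the new value gets written out on the next commit
            self.value = self.value + 1
            return self.value

    # usage ("root" and "connection" as in the previous sketch;
    # 'counter' and 'access_log' are placeholder key names):
    try:
        counter = root['counter']
    except KeyError:
        counter = root['counter'] = Counter()
    key = counter.next()
    root['access_log'][key] = value
    connection.commit()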