On Oct 3, 2006, at 10:37 AM, Mike Orr wrote:

> How big is small?  Each value might be 5 KB.  There are 6100 values
> total.  It's an import once, read often database.

The answer depends on your objectives.

When you load a node of a BTree, you get, by default, a maximum of 16
values.  If they are non-persistent, they will all get loaded in one
trip to the storage.  If they are persistent, it will take 17 trips,
but each one has a smaller load.  I think one big trip will *always*
be faster, but there are other considerations: client requests to the
storage server are handled one by one, so huge, time-consuming
requests are something we like to avoid.

The best way to find the combination you like is to set the durus
logging level to 5 (or maybe 4) and watch the loading behavior.  Set
the client cache size as small as you need and time the iterations
over the values.

It seems like it would be worth 31 MB of RAM just to keep all of this
in memory, but I'm sure you have a reason for being so frugal.
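For what it's worth, here is the back-of-envelope arithmetic behind the
trip counts and the 31 MB figure.  This is just a sketch: it assumes
16-value leaf nodes and ignores interior BTree nodes, and the sizes are
the rough ones from Mike's message.

```python
import math

NUM_VALUES = 6100        # values in the database (from the thread)
VALUE_SIZE = 5 * 1000    # ~5 KB each (assuming decimal KB)
FANOUT = 16              # default maximum items per Durus BTree node

# Rough resident size if every value stays in the client cache.
total_mb = NUM_VALUES * VALUE_SIZE / 1e6

# Leaf nodes needed to hold all values (interior nodes ignored).
leaf_nodes = math.ceil(NUM_VALUES / FANOUT)

# Non-persistent values: one storage round trip loads a node
# together with its (up to) 16 embedded values.
trips_nonpersistent = leaf_nodes

# Persistent values: one trip for the node itself, plus one
# trip per value it refers to.
trips_persistent = leaf_nodes + NUM_VALUES

print(total_mb, leaf_nodes, trips_nonpersistent, trips_persistent)
```

So a full iteration costs on the order of 400 round trips with
non-persistent values versus roughly 6500 with persistent ones, while
the whole data set is only about 30 MB.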
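To turn up the load logging, something like the following should do.
This assumes Durus routes its messages through the standard library's
logging module under a "durus" logger; level 5 sits below DEBUG (10),
so it exposes the per-load storage messages.

```python
import logging

# Assumption: Durus logs via the stdlib logging module under the
# "durus" logger name.  Level 5 is more verbose than DEBUG (10).
logging.basicConfig()
logging.getLogger("durus").setLevel(5)
```

Run your timing loop with this in place and count the object loads per
iteration.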