On Fri, May 20, 2005 at 02:32:46PM -0600, Neil Schemenauer wrote:
> around.  For your case, I'm wondering if maybe your performance can
> be improved by just increasing 'cache_size'.  Did you already
> increase it?

No, thanks!  My cache instrumentation was wrong, so I was off on a
wild-goose chase.  Bumping up the cache helps a lot, execution time
going from 3.5 sec to <.2 sec.  Well, there's a day down the drain...

> There is another optimization that would really pay off but I can't
> figure out how to do it.  The latency of Connection.load_state() is
> really a killer.  When you un-ghost a 4000 item dictionary Durus
> does 4000 client-server round trips.  If a client could send a
> request to unghost a batch of objects that would help a lot.  Jeremy
> had a term for this but I can't remember what it was.

Tricky; you'd need some way to figure out what object ID will be
accessed next.  You could try guessing that there's some locality by
OID and return the data for some nearby OIDs (if you're accessing item
N, you're likely to access N+1 and N+2 soon).  ClientStorage would
then have to hang on to those pickles and use them if N+1 was
requested.

Perhaps ClientStorage could special-case certain types such as lists,
and batch-request all of their contents once any element was required.
Probably not worth the complexity...

--amk
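
P.S. A rough sketch of that locality guess, in Python.  Everything
here is made up for illustration: PrefetchingClient and load_batch()
aren't real Durus API (a batch-load request is exactly the missing
piece), and OIDs are treated as plain integers to keep it short.

    class PrefetchingClient:
        """Guess that nearby OIDs will be wanted soon and hang on to them."""

        def __init__(self, storage, window=8):
            self.storage = storage   # something with a per-OID round trip
            self.window = window     # how many consecutive OIDs to guess at
            self.cache = {}          # oid -> raw record (pickle), kept until used

        def load(self, oid):
            # If an earlier guess already fetched this record, no round trip.
            if oid in self.cache:
                return self.cache.pop(oid)
            # Otherwise ask for oid plus the next few OIDs in one request,
            # assuming the server grew a hypothetical load_batch() call
            # that returns a dict mapping oid -> record.
            wanted = [oid + i for i in range(self.window)]
            records = self.storage.load_batch(wanted)
            self.cache.update(records)
            return self.cache.pop(oid)

The window size trades wasted fetches against saved round trips, and
it only pays off if the 4000 dictionary entries really do sit at
roughly consecutive OIDs.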