> > > i guess we have a misunderstanding here. Why would one want to persist
> > > *cached* data? In fact for my case every bit of cached data is
> > > extracted/computed from a 200 GB readonly reiserfs filesystem.
> > > I don't see the point to persist these computations and go
> > > through the hell of file-locking and/or using a persistence layer.
> >
> > Yes, we definitely have a misunderstanding. We all know that caching
> > works at many levels -- from a multi-gigabyte squid cache making web
> > access faster to a 512 kB level-1 cache making RAM access from my CPU
> > faster. I don't understand what level you want to cache at!

My app has a similar attitude toward caching. Currently I run everything
in a single FastCGI process. The app loads users' mailboxes and messages
on demand over IMAP or POP or whatever. The actual state lives on the
IMAP server, but IMAP requests for large, complicated messages are slow,
so I keep the parsed messages around in memory. If the app crashes and
all the cached messages are lost, it's no big deal, since they'll be
fetched from the server again when they're needed. On the other hand,
running multiple copies of the app would be slow (every time you do
anything with a message it has to be reloaded from the server). So it's
not really a *persistence* scheme per se, just a caching scheme.

Two things I can think of to allow multiple processes: a shared-memory
scheme like Jeff suggested, or some kind of "session persistence" in
SCGI which would always route the same session to the same process. I
doubt that the latter will happen. It would be cool if you could stick
an object in a shared slot and then other processes could access it
directly.
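For what it's worth, the fetch-on-miss scheme I'm describing boils down to something like the sketch below. This is a toy illustration, not my actual code: `slow_fetch`, `MessageCache`, and the message ids are made up, and the real loader would fetch and parse over `imaplib`. The point is just that losing the cache costs nothing but a refetch.

```python
class MessageCache:
    """Keep parsed messages in memory; refetch from the server on a miss.

    Losing the whole cache (e.g. a process crash) is harmless -- the
    next access simply goes back to the server again.
    """

    def __init__(self, loader):
        self._loader = loader   # the slow fetch+parse, e.g. over IMAP
        self._cache = {}        # message id -> parsed message

    def get(self, msg_id):
        if msg_id not in self._cache:        # miss: hit the server
            self._cache[msg_id] = self._loader(msg_id)
        return self._cache[msg_id]           # hit: no network round-trip


# Usage: count how often the "server" actually gets hit.
calls = []

def slow_fetch(msg_id):
    calls.append(msg_id)                     # stands in for an IMAP round-trip
    return {"id": msg_id, "body": "parsed body of %s" % msg_id}

cache = MessageCache(slow_fetch)
cache.get("uid-1")
cache.get("uid-1")                           # second call served from memory
assert len(calls) == 1
```

The downside, of course, is exactly what I said above: each process has its own `_cache` dict, so every extra process pays the refetch cost independently.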
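To make the "shared slot" idea concrete, here's roughly what I mean, sketched with Python's `multiprocessing.Manager` (this is purely illustrative -- SCGI provides nothing like it, and a Manager proxies objects over IPC rather than giving true shared memory, so reads copy data between processes):

```python
from multiprocessing import Manager, Process

def worker(shared, msg_id):
    # A second process reads the parsed message out of the shared
    # slot instead of refetching it from the IMAP server.
    shared["hit"] = shared["messages"][msg_id]["body"]

if __name__ == "__main__":
    mgr = Manager()
    shared = mgr.dict()
    # One process parses the message and sticks it in the shared slot...
    shared["messages"] = {"uid-1": {"body": "parsed body"}}
    # ...and another process pulls it out directly.
    p = Process(target=worker, args=(shared, "uid-1"))
    p.start()
    p.join()
    assert shared["hit"] == "parsed body"
```

Whether the pickling overhead of something like this beats just refetching from the IMAP server is an open question for big parsed messages.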