On Tue, Sep 15, 2009 at 09:58, Binger David wrote:

> On Sep 15, 2009, at 12:02 PM, Matthew Scott wrote:
>
>> This could pose a "handoff" problem, wherein process A (an application)
>> starts the Durus server, process B (a Python shell inspecting some things
>> under the hood) connects to the server started by process A, then A shuts
>> down, taking the Durus server with it; now process B is left hanging
>> without Durus.
>
> I think this could be addressed by forking and having the parent run the
> server, so that the server still has a live parent when the client quits.

I'll keep this in mind; your solution sounds good for that particular
scenario.

>> So, rather than read and invalidate on each file length change, we'd just
>> "pretend" that the file isn't growing at all, and perform an analysis on a
>> snapshot of the database. When the client code was finished, it would call
>> continue(), at which point the database state would be allowed to sync
>> with the latest changes -- the client code wouldn't care, though, since it
>> would be done with its analysis.
>
> That behavior is okay, as long as the client has nothing to write, and
> doesn't really care about being perfectly up-to-date.
> If the thing that I miss is the sale of my seat to someone else, I'm
> unhappy.

Understandable. The client would be forbidden to write while paused.
Doing so would have undefined results or (even better) would raise an
exception.

This is more for a scenario where you want to continue allowing writes to a
certain database, but where you also want to generate some sort of report or
historical record based on a consistent snapshot in time.

Question: Is the client/server connection at a low-enough level that the
protocol could be extended to support this "pause/resume" behavior?

(Packing would not be permitted while any client had "paused" its
connection, so that invalidated records in use by a paused connection would
be preserved.)

--
Matthew R. Scott
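P.S. A minimal sketch of the forking idea David describes, purely to
illustrate the process relationship -- run_server() and run_application()
are placeholders, not Durus API; in practice run_server() would start the
actual Durus StorageServer loop:

```python
import os

def run_server():
    # Placeholder: in real code, start the Durus StorageServer here and
    # serve until shut down explicitly.
    print("server: serving")

def run_application():
    # Placeholder: the application connects to the server as a client.
    print("application: connected as a client")

pid = os.fork()
if pid == 0:
    # Child: the original application becomes just another client, so
    # its exit no longer takes the server down with it.
    run_application()
    os._exit(0)
else:
    # Parent: outlives the child and keeps the server available for
    # other clients (e.g. a Python shell poking around).
    os.waitpid(pid, 0)  # in reality the parent would serve indefinitely
    run_server()
```

The point is only that the server ends up in the longer-lived parent, so a
late-connecting client such as process B is not orphaned when A quits.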
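P.P.S. To make the pause/resume semantics concrete, here is a toy model of
what I have in mind. None of these names (SnapshotConnection, PausedWriteError,
pause, resume, invalidate) exist in Durus; the dict stands in for cached
object records. While paused, invalidations are buffered rather than
applied, and writes raise:

```python
class PausedWriteError(Exception):
    """Raised when a client tries to write while its connection is paused."""

class SnapshotConnection:
    def __init__(self):
        self.paused = False
        self.pending_invalidations = []
        self.data = {}  # stands in for the client's view of object records

    def pause(self):
        # Freeze the view: from here on, buffer invalidations instead of
        # applying them, so analysis sees a consistent snapshot.
        self.paused = True

    def resume(self):
        # The analysis is done: apply everything that arrived while paused.
        for oid, value in self.pending_invalidations:
            self.data[oid] = value
        self.pending_invalidations.clear()
        self.paused = False

    def invalidate(self, oid, value):
        if self.paused:
            self.pending_invalidations.append((oid, value))
        else:
            self.data[oid] = value

    def write(self, oid, value):
        if self.paused:
            raise PausedWriteError("connection is paused for snapshot analysis")
        self.data[oid] = value

conn = SnapshotConnection()
conn.write("seat-42", "available")
conn.pause()
conn.invalidate("seat-42", "sold")          # buffered, not yet visible
assert conn.data["seat-42"] == "available"  # snapshot is stable
conn.resume()
assert conn.data["seat-42"] == "sold"       # now synced with latest changes
```

This also shows why packing must wait for paused clients: the buffered
records are exactly the ones a pack would otherwise discard.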