Jason Sibre wrote:
> I think I'll plan on going the multi-process route. Should be simpler to
> keep robust, as you wrote, and the only downside (which probably isn't much
> of a downside) is that I have to find a good, multi-process friendly session
> persistence mechanism. I think that DirMapping may actually be fine, but I
> have a feeling there's a better (higher performance/more scalable) solution
> out there. Perhaps ZODB.

(If I'm rambling, or stating the obvious, below, I apologize in advance. ;-)

When you say 'multi-process', do you mean running one or more Quixote/Medusa processes (I think you mentioned Medusa as your front-end)? Or do you mean calling external processes to do the heavy lifting? The latter seems cleaner, IMO.

Here's one way to do it: create a pool of processes, each connected to your DB and ready to go, and dispatch the query to the process pool, returning a JobStatus object to your app. Jobs are submitted to the processes via a Queue, and their state is updated when the job is taken and again when it is completed.

Upon queuing the job, redirect the end-user (immediately) to a URL representing the JobStatus object. The end-user can refresh until the job is ready; in the meantime, he just gets a status report. When it is ready, your Web app fetches the data from the job-process, renders it, and the job-process is put back in the pool for a subsequent request. Results could be piped back between processes, passed via shared memory, or what have you.

You get to keep a single-process Web app, so session persistence is a no-brainer, yet you can scale and distribute to your heart's content.

Or, instead of a process pool, you could use a tuple space, connecting JobProcessor agents to the space which are hungry for new jobs to complete. It could make your concurrency issues much easier to implement.
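The pool-plus-Queue idea can be sketched with Python's standard multiprocessing module. This is just a rough illustration, not a real library: JobPool, submit, poll, and the "QUEUED"/"RUNNING"/"DONE" states are all made-up names, and the real DB query is stood in for by a trivial string operation.

```python
# Minimal sketch of the process-pool pattern (hypothetical names throughout).
import multiprocessing as mp
import uuid

def worker(jobs, results):
    """Job process: pull queries off the queue and do the heavy lifting."""
    for job_id, query in iter(jobs.get, None):   # None is the shutdown sentinel
        results.put((job_id, "RUNNING"))
        data = query.upper()                     # stand-in for the real DB work
        results.put((job_id, "DONE:" + data))

class JobPool:
    """Dispatches jobs to a fixed pool of pre-started worker processes."""
    def __init__(self, size=2):
        self.jobs = mp.Queue()
        self.results = mp.Queue()
        self.status = {}                         # job_id -> latest known state
        self.procs = [mp.Process(target=worker, args=(self.jobs, self.results))
                      for _ in range(size)]
        for p in self.procs:
            p.start()

    def submit(self, query):
        """Queue a job; return an id the Web app can redirect to, e.g. /status/<id>."""
        job_id = uuid.uuid4().hex
        self.status[job_id] = "QUEUED"
        self.jobs.put((job_id, query))
        return job_id

    def poll(self, job_id):
        """Drain any state updates from the workers, then report this job's state."""
        while not self.results.empty():
            jid, state = self.results.get()
            self.status[jid] = state
        return self.status.get(job_id, "UNKNOWN")

    def shutdown(self):
        for _ in self.procs:
            self.jobs.put(None)
        for p in self.procs:
            p.join()
```

The Web app would call submit() on form submission, redirect to the status URL, and have that URL's handler call poll() on each refresh, rendering the result once the state reads DONE.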
(I guess you could use a ZODB instead of a tuple space, and use ZEO to connect each of your agents to it; I'm not experienced with ZEO, so I don't know what the overhead would be like.)

--
Graham