I'm getting ready to test my chemical application on a server. It has two Durus databases, one for the data (read-only) and one for sessions (read/write). I'll be using SCGI with multiple children, so I'll have to have two Durus daemons running. That's a lot of processes to manage, especially if I start using this model for other applications.

I'm running Linux, so I first tried making a separate init.d script for each Durus server, each with its own PID file and Unix domain socket, using the 'durus' command-line tool and start-stop-daemon. It refuses to start and gives no error message, though that could be my fault (a file permission error somewhere?). But my path configuration is in Python modules, so I then tried writing a program that calls durus.run_durus.start_durus. I'd still have to have start-stop-daemon manage three PID files, though. Now I'm trying to do it all in my top-level Python program, doing my own fork()s for the Durus servers.

This seems like it would be a common task, especially at the MEMS Exchange, which has lots of Quixote programs with Durus databases, so I'm wondering if there's any existing code for it.

Also, how do you manage logfiles? I'd like to get Quixote and the two Durus servers writing to the same logfile if possible.

What about signals? Should I just override SIGTERM to stop the Durus servers on exit, or do I have to handle the dozen other signals, like SIGPIPE, that might be raised? Also, stop_durus just sends a "Q" to the server and hopes it quits rather than killing it. Has that been reliable in practice?

-- Mike Orr
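
P.S. In case it helps to show what I mean by doing my own fork()s, here's a rough sketch of what I'm trying. It's untested, and run_storage_server is just a stand-in for however one actually starts a Durus storage server in-process (durus.run_durus.start_durus, presumably, though I'm not sure of its exact arguments); the cleanup sends SIGTERM to the children instead of the "Q" that stop_durus uses:

    import atexit
    import os
    import signal
    import sys

    children = []

    def spawn(run_server, *args):
        """Fork a child process that runs one Durus storage server."""
        pid = os.fork()
        if pid == 0:
            # Child: drop the parent's handlers, run the server, and use
            # os._exit() so the parent's atexit cleanup never runs here.
            signal.signal(signal.SIGTERM, signal.SIG_DFL)
            signal.signal(signal.SIGINT, signal.SIG_DFL)
            try:
                run_server(*args)
            finally:
                os._exit(0)
        children.append(pid)
        return pid

    def shutdown(signum=None, frame=None):
        """Send SIGTERM to each child and wait for it to exit."""
        for pid in children:
            try:
                os.kill(pid, signal.SIGTERM)
                os.waitpid(pid, 0)
            except OSError:
                pass  # child already gone
        if signum is not None:
            sys.exit(0)

    atexit.register(shutdown)
    signal.signal(signal.SIGTERM, shutdown)
    signal.signal(signal.SIGINT, shutdown)

    # run_storage_server is hypothetical: a stand-in for whatever starts a
    # Durus storage server on a given socket for a given database file.
    # spawn(run_storage_server, '/tmp/data.durus.sock', '/var/db/data.durus')
    # spawn(run_storage_server, '/tmp/sessions.durus.sock', '/var/db/sessions.durus')

    # ...then start the SCGI/Quixote server here in the parent...

The idea is to register the cleanup both as a signal handler and with atexit, so the children get stopped whether the parent exits normally or is killed.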
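
P.P.S. On the shared logfile, the best I've come up with so far is having every process append to the same file via the logging module, on the theory that short line-buffered appends from separate processes come out whole on Linux. The path and logger names below are just examples:

    import logging

    def open_shared_log(name, path='/var/log/myapp.log'):
        """Point this process's logger at a logfile shared by all processes.

        Each process opens the file itself in append mode; as long as each
        record goes out in one short write, lines from the Quixote process
        and the two Durus children shouldn't get interleaved mid-line.
        """
        handler = logging.FileHandler(path)   # default mode is append
        handler.setFormatter(logging.Formatter(
            '%(asctime)s %(process)d %(name)s %(levelname)s %(message)s'))
        logger = logging.getLogger(name)
        logger.addHandler(handler)
        logger.setLevel(logging.INFO)
        return logger

    # In the parent: log = open_shared_log('quixote')
    # In each Durus child, after the fork: log = open_shared_log('durus.data')

But if there's an established way of doing this, I'd rather copy that.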