Thanks David,

On 31/05/2006, at 11:33 PM, David Binger wrote:

> Is the database very large?
> Do all of your sites have a similar access pattern?

All the sites run on the same base software, customised per site. The databases can contain up to 500k objects, most around a third of that.

> The critical performance factor, for sites with access patterns like
> ours, anyway, is having the objects you need already loaded into the
> durus client connection cache. Your architecture should keep that
> in mind. Ideally, every client can have a cache big enough to hold
> all of the objects it might ever use. Obviously, we don't always
> have that option, but adding RAM may be the cheapest way to
> improve performance.

Definitely. This is one thing we find very useful about Durus: our access pattern is driven largely by per-site customisation done via templates, and Durus handles this much better than SQL, which suffers dramatically if you can't optimise or plan the access pattern beforehand.

We currently run Medusa and are very happy with what a single process running asynchronously can do to ease memory pressure. When QP runs with its own HTTPRequestHandler, presumably each of the workers it creates gets its own client connection cache. Is this correct?

> If all of your sites use the same data with a similar
> access pattern, then you might want to use just one QP site and
> have your Publisher sort out one from another, rather like
> (I assume) your Medusa application does now.
>
> Running a single, external Durus process with multiple QP sites
> as you suggest should also work. You could have a local QP site that
> runs the shared Durus database server and maybe offers an administrator
> view of the database, maybe something like the 'browse' feature in
> the proto demo. You would need to do some customization, though, in
> your sites' Publisher constructors to make sure that they
> get connections to the right Durus server.
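As a rough illustration of that Publisher-side customization, something like the following might work. The names here (DURUS_SERVERS, durus_address, connect) are my own invention, not part of QP's API; the Durus imports reflect the ClientStorage/Connection classes Durus ships, and the port is just an example:

```python
# Hypothetical sketch: each QP site's Publisher is handed the address of
# the one shared Durus StorageServer, instead of starting its own storage.
# Every site below points at the same database ("DB A").
DURUS_SERVERS = {
    'admin':   ('localhost', 2972),
    'www':     ('localhost', 2972),
    'another': ('localhost', 2972),
}

def durus_address(site_name):
    """Return the (host, port) of the Durus server this site should use."""
    return DURUS_SERVERS[site_name]

def connect(site_name):
    """Open a Durus client connection for the given site.

    Imported lazily so the routing table above is usable without Durus
    installed; requires a running Durus StorageServer at the address.
    """
    from durus.client_storage import ClientStorage
    from durus.connection import Connection
    host, port = durus_address(site_name)
    return Connection(ClientStorage(host=host, port=port))
```

Each Publisher constructor would then call something like `connect('www')` rather than opening a per-site FileStorage.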
> Also, note that it is possible to run an application with connections
> to two or more independent Durus databases.

I think what I might try to do is:

* For each customer, have a set of QP sites as you mention, with one as the admin site.
* Allow QP to specify whether a site should create its own db or connect to another site's db. This looks like it would be very simple: a couple of changes to bin/qp and a subclass of Site to add some extra options.

This way we should have something like:

admin.example.com   -> QP site A -> Durus DB A
www.example.com     -> QP site B -> Durus DB A
another.example.com -> QP site C -> Durus DB A

With this arrangement we shouldn't need to run an external Durus instance: QP site A can manage that for us, we get the benefits of being able to manage sites individually, and we don't require any changes to QP to fake the domains we sit in, request paths, etc.

Can you see any problems with that kind of setup?

Peter W