On Sep 12, 2007, at 11:37 PM, Peter Wilkinson wrote:

> One point to add to that, which was part of my motivation for
> messing with SQLite as a backend, was simple replication of the
> database for disaster recovery.  Using SQL and its locking seems in
> my mind much less prone to problems than something like pausing the
> main server and running rsync.  Getting reliable and simple
> replication going with the standard storage engines is something
> I'd like to work on at some point - if anyone has gone down this
> path I'd be very interested in the various solutions.  I did bring
> this up a while ago and got some good pointers but haven't really
> had a chance to dig into it much.

I think rsync can be used on a Durus database without pausing the
main server.  Here, we just use a cron job with scp from a remote
machine to pull a copy at regular intervals.

To test the idea, I remember writing a replicator that worked more on
a record-by-record basis.  You connect to the (possibly remote)
storage server that you want to replicate, and also to a local
FileStorage.  You iterate over the oid-records of the remote database
and copy them into the new FileStorage.  That establishes the base
copy.  After that, you use the invalidation lists to find out which
records to re-retrieve, and do this in a polling loop, sleeping a
little to give the remote server a chance to do something useful.

Even though this is fairly simple, rsync or scp seems even simpler.
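
For what it's worth, the replicator was roughly along these lines.  This
is a sketch from memory, assuming Durus's Storage interface exposes
gen_oid_record(), begin()/store()/end(), load(), and a sync() that returns
the invalidated oids; check the ClientStorage and FileStorage classes in
your Durus version before relying on the exact names.

    from time import sleep
    from durus.client_storage import ClientStorage
    from durus.file_storage import FileStorage

    def replicate(host, port, filename, poll_interval=5):
        # Storage server we want to replicate, and the local replica.
        remote = ClientStorage(host=host, port=port)
        local = FileStorage(filename)

        # Base copy: iterate over every (oid, record) pair on the
        # remote storage and store it in the local FileStorage.
        local.begin()
        for oid, record in remote.gen_oid_record():
            local.store(oid, record)
        local.end()

        # Incremental updates: poll the remote server's invalidation
        # list and re-retrieve just the records that changed.
        while True:
            invalid_oids = remote.sync()
            if invalid_oids:
                local.begin()
                for oid in invalid_oids:
                    local.store(oid, remote.load(oid))
                local.end()
            # Sleep a little so the remote server can do useful work.
            sleep(poll_interval)
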