Peter Wilkinson wrote:
> Underlying a lot of my ongoing experimentation is growing databases,
> many GBs so far and continuing to grow. Getting foolproof, fast
> replication running against big databases is a priority. I currently
> use full rsyncs but have been thinking of ways to be more efficient.

I'm using the following script in combination with rsync. It makes the
synchronization much faster, since rsync only has to check the filename
and mtime for most chunks.

http://python.ca/nas/python/split_durus_fs.py

The test for a packed DB is the only weakness, AFAIK. I had requested
that a pack counter be added to the storage file header, but David did
not go for it. The inode check I use should be very safe. While Durus
packs the DB, both the old and the new file are present, guaranteeing
that they have different inode numbers. The only way this check could
be fooled is if multiple packs were done between script runs and, by
chance, the file ended up with its original inode number.

Regards,

  Neil
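
P.S. For the curious, here is a rough sketch of what the splitting
scheme does. This is not the actual script linked above, just a minimal
illustration; the chunk size, the chunk naming scheme, and the function
names are all my own choices here:

# Sketch only (NOT split_durus_fs.py): copy the storage file into
# fixed-size chunk files, but rewrite a chunk only when its bytes
# actually changed, so untouched chunks keep their old mtime and
# rsync's quick name+mtime check skips them.
import os

CHUNK_SIZE = 64 * 1024 * 1024  # 64 MB per chunk; tune to taste

def chunk_changed(path, data):
    # A chunk needs rewriting if it doesn't exist yet or differs.
    if not os.path.exists(path):
        return True
    with open(path, 'rb') as f:
        return f.read() != data

def split_file(src_path, dest_dir):
    # Removing stale chunks after a pack shrinks the file is left out.
    if not os.path.isdir(dest_dir):
        os.makedirs(dest_dir)
    index = 0
    with open(src_path, 'rb') as src:
        while True:
            data = src.read(CHUNK_SIZE)
            if not data:
                break
            chunk_path = os.path.join(dest_dir, 'chunk.%06d' % index)
            if chunk_changed(chunk_path, data):
                with open(chunk_path, 'wb') as out:
                    out.write(data)
            index += 1

Run split_file() before each rsync; only the chunk files whose contents
changed get a fresh mtime, so rsync transfers just those.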
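P.P.S. The inode test amounts to something like the following (the
state-file name here is made up):

# Sketch of the inode check: a pack writes a new storage file and
# swaps it into place, so a pack always gives the file a new inode
# number.  Compare against the inode recorded on the previous run.
import os

def packed_since_last_run(storage_path, state_path='last_inode'):
    inode = os.stat(storage_path).st_ino
    previous = None
    if os.path.exists(state_path):
        with open(state_path) as f:
            previous = int(f.read())
    with open(state_path, 'w') as f:
        f.write('%d\n' % inode)
    return previous is not None and inode != previous

As noted above, only multiple packs between script runs that happen to
recycle the original inode number could fool this check.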