durusmail: durus-users: Re: Backup, redundancy etc.
David Binger
2007-04-11
On Apr 11, 2007, at 12:51 PM, Andrew Bettison wrote:

> Peter Wilkinson  wrote:
>> I've been thinking about how best to get backup and redundancy
>> working with Durus and was wondering how other people deal with these
>> issues.

One backup method would be to use rsync.  Since changes are just
appended (except for packs), I think this is efficient and easy.
If your database is small enough, you can also just scp the file
on a regular schedule.
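The copy-on-a-schedule idea can be sketched in a few lines of Python. This is a hedged local sketch, not Durus code: the storage file name and backup naming are placeholders, and a real setup would substitute scp or rsync for the remote transfer.

```python
import shutil
import time

def backup_storage(src="data.durus", dest_dir="."):
    """Copy the storage file to a timestamped backup.

    Copying the live file is reasonable for an append-style storage:
    committed transactions are appended, so a copy taken mid-commit is
    at worst missing the final, incomplete record.
    """
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = "%s/%s.%s.bak" % (dest_dir, src, stamp)
    shutil.copy2(src, dest)  # copy2 preserves mtime, which helps rsync-style comparisons
    return dest
```

Because the file only grows between packs, rsync's delta transfer makes repeated remote copies cheap even for large databases.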

Another method is to run a StorageServer on another machine that
defers to the primary StorageServer as a "master".  This method is
interesting in that the secondary server can actually be used by an
application.  (See "durus -s -h" for MASTERPORT and MASTERHOST options).

>
> I have recently implemented a variant of FileStorage, which I call
> SharedFileStorage, which allows more than one process to have a single
> storage file open for reading and writing at once.  It uses flock(2)
> advisory locks to prevent racing and to maintain integrity.  It also
> supports "live" packing while other processes are still using the
> file,
> which was no mean feat.  It only runs on Posix systems, and I haven't
> checked if it is thread-safe.  Its file format is almost identical to
> the FileStorage format, but with a few extra fields to support the
> concurrency.  When you have several processes running on the same
> machine, sharing the same storage, SharedFileStorage is more efficient
> than using a Durus server, but I haven't made any formal
> measurements to
> support this assertion.

SharedFileStorage does sound fast, and interesting.
One tricky part must be in making sure that every process detects
and reads every new transaction to figure out what records to
invalidate before continuing.  I guess you use frequent stat calls to
determine if the file has changed?
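The stat-polling idea might look something like this minimal sketch (the class and method names are hypothetical, not part of Durus or SharedFileStorage):

```python
import os

class TailWatcher:
    """Detect appended transactions by watching the storage file's size.

    A shared-file client could call poll() at each transaction boundary
    and then read and invalidate records from the newly appended range.
    """

    def __init__(self, path):
        self.path = path
        self.offset = os.stat(path).st_size  # start at the current end of file

    def poll(self):
        """Return the (start, end) byte range of new data, or None."""
        size = os.stat(self.path).st_size
        if size > self.offset:
            new_range = (self.offset, size)
            self.offset = size
            return new_range
        return None
```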

A downside is that every "client" process would need to hold the
offset index in RAM.  That can be a significant hit when you get
into the millions of persistent objects.

I'm working on a file storage variant that
uses the packed index straight from the disk, and only keeps the
index of changed records in RAM.  It seems like that would be
a useful feature to have for a future SharedFileStorage.
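One way to picture that split between a packed on-disk index and an in-RAM overlay of changed records is the sketch below. The file layout is an assumption for illustration (fixed-size, sorted (oid, offset) pairs); none of these names come from Durus.

```python
import struct

RECORD = struct.Struct(">QQ")  # one packed entry: (oid, offset), 8 bytes each

class DiskIndex:
    """Resolve oids via binary search over a packed index file on disk,
    keeping only records changed since the last pack in an in-RAM dict."""

    def __init__(self, index_file):
        self.f = index_file            # file of (oid, offset) pairs sorted by oid
        self.f.seek(0, 2)
        self.n = self.f.tell() // RECORD.size
        self.changed = {}              # oid -> offset for post-pack writes

    def _entry(self, i):
        self.f.seek(i * RECORD.size)
        return RECORD.unpack(self.f.read(RECORD.size))

    def get(self, oid):
        if oid in self.changed:        # recent writes shadow the packed index
            return self.changed[oid]
        lo, hi = 0, self.n             # binary search the on-disk pairs
        while lo < hi:
            mid = (lo + hi) // 2
            key, off = self._entry(mid)
            if key == oid:
                return off
            if key < oid:
                lo = mid + 1
            else:
                hi = mid
        raise KeyError(oid)

    def set(self, oid, offset):
        self.changed[oid] = offset
```

The RAM cost then scales with the number of records changed since the last pack, not with the total number of persistent objects.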

Another potential problem is oid allocation.  Be careful there.
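For illustration, one careful approach to shared oid allocation is a counter file guarded by the same flock(2) locking that SharedFileStorage already relies on. This sketch is hypothetical and POSIX-only; the file name is a placeholder.

```python
import fcntl
import os

def allocate_oid(counter_path="oid.counter"):
    """Atomically allocate the next oid from a shared counter file.

    flock(2) serializes allocators across processes, so two writers
    can never be handed the same oid.
    """
    fd = os.open(counter_path, os.O_RDWR | os.O_CREAT, 0o644)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX)   # block until we own the counter
        data = os.read(fd, 32)
        oid = int(data) if data else 0
        os.lseek(fd, 0, os.SEEK_SET)
        os.write(fd, b"%d" % (oid + 1))  # monotonic, so it never shrinks in place
        return oid
    finally:
        fcntl.flock(fd, fcntl.LOCK_UN)
        os.close(fd)
```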

A similar idea we've thought about is to use a storage server
to manage writes and invalidations, and to have clients use
the disk-based-index storage, with offsets delivered from the server,
to read records directly from the filesystem.

>
> I have contemplated using SharedFileStorage to implement support for
> live "mirror" file storages, which would receive incremental
> changes in
> batches from a central, "master" storage, and update themselves
> accordingly.  "Mirror" storages would only permit read-only access in
> order to not violate transactional integrity.
>
> The problem I would hope to solve with this master/mirror scheme is a
> small business who keep their stock database on a local, in-house
> server, and occasionally connect to the Internet, whereupon all
> changes
> would be transmitted to a live mirror of their database which lives on
> their web presence provider's server, and is used in read-only mode by
> the web server to provide online shopping services.

I'm not sure what the real advantage of this system would be over
having rsync maintain a constant mirror of the storage file on
another server.

>
> I suppose some kind of mechanism could be worked out to handle
> switching
> a "mirror" into "master" mode in the event that the master file were
> irretrievably lost and a mirror needed to be used as a backup.

It will be hard to tell automatically whether the loss of the master
file is temporary or permanent.


>
> --
> Andrew Bettison 
