[OpenAFS] any hack to get multiple read/write mirrors
Paul Blackburn
mpb@est.ibm.com
Fri, 19 Oct 2001 15:27:53 +0100
Hello Zachary,
We have a cell with 3 separate offices (in different countries).
Each office has its own fileserver.
We locate RW volumes for $HOME on the server nearest the user.
We have one RW volume with shared data located at one office,
and access is OK: users at that office get faster access.
The other offices don't miss out too much once files are cached
by the AFS Cache Manager on their client machines.
We replicate the key volumes (root.afs, root.cell, root.othercells,
and others) at all three sites.
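For example, adding RO sites and releasing a volume goes roughly
like this (the server and partition names here are just placeholders):

    # add a read-only site for root.cell at each office's fileserver
    vos addsite fs-london /vicepa root.cell
    vos addsite fs-paris  /vicepa root.cell
    vos addsite fs-berlin /vicepa root.cell
    # push the current RW contents out to all the RO sites
    vos release root.cell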
I have a script that sets the fileserver preferences on clients
to optimise access, based on ping times to the three fileservers;
a rough sketch follows.
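Something along these lines (a sketch only; the hostnames and the
rank formula are made up, and ping output format varies by OS):

    #!/bin/sh
    # Rank each fileserver by average ping time; a lower rank
    # makes the Cache Manager prefer that server.
    for server in fs-london fs-paris fs-berlin
    do
        # average round-trip time in ms over 3 pings
        rtt=`ping -c 3 $server | awk -F/ '/min.avg.max/ {print int($5)}'`
        # turn the RTT into a preference rank (smaller is better)
        fs setserverprefs -servers $server `expr 1000 + $rtt \* 100`
    done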
Regarding file locking: I think you should be careful about this.
Don't assume AFS will lock a file for you. I would establish some
method/convention for gaining exclusive access to a file.
I don't know what your needs are, but you could do something
simple like changing the name (or location) of a file when you want
exclusive access to update it, and restoring the name/location
to release the file for others.
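A sketch of that idea (the file name is just an example; a rename
within the same directory happens on the fileserver, so only one
client should be able to win it):

    # try to claim the file by renaming it to a private name
    if mv shared/report.txt shared/report.txt.$USER 2>/dev/null
    then
        # we have exclusive access; update the file
        vi shared/report.txt.$USER
        # restore the name to release it for others
        mv shared/report.txt.$USER shared/report.txt
    else
        echo "report.txt is busy; try again later"
    fi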
I don't believe you need any special hacks but you do need a reliable
network.
--
cheers
paul http://acm.org/~mpb
#include <standard_disclaimer>
Zachary Denison wrote:
>Is there any hack one can use to get OpenAFS to work
>so that I could have multiple read/write servers? I
>would like to set up a global company-wide shared
>filesystem, where each local office gets a local
>mirror, the filesystem propagates all changes to
>all other machines whenever an update is made, and when
>a file is locked on one machine, it is locked on all
>the mirrors.
>
>Can you make an AFS volume both a master and slave at
>the same time, so you could replicate in both
>directions?
>
>I am not too worried about bandwidth problems because
>we expect only about 40 MB/hour of data moving
>between servers. I should also mention
>that the volume we have in mind is about 1000 GB.
>
>Thank you for any hints you could give me on such a setup.
>