[OpenAFS] Holy Grail of High Availability
Paul Blackburn
mpb@est.ibm.com
Tue, 08 Oct 2002 14:15:23 +0100
I am not sure what you mean by "the Holy Grail".
AFS does have some good features that help provide a more highly
available service than you would get from most other network filesystems.
For example, the AFS database servers, which provide several key
information services (volume location, protection, and so on) to AFS
clients, can be "replicated" in the sense that you can have multiple
AFS db servers.
A good number is three: the db servers keep their databases in sync
and only need a majority of servers to be up, so if you have 3 AFS db
servers for your cell then the services provided will continue even if
one db server is out-of-service (perhaps for a maintenance upgrade).
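For illustration, the CellServDB entry for a cell with three db servers
might look something like this (the cell name and addresses are invented):

    >your.cell.name              # your example cell
    10.0.1.10                    # afsdb1.your.cell.name
    10.0.2.10                    # afsdb2.your.cell.name
    10.0.3.10                    # afsdb3.your.cell.name

Clients simply try another db server if the one they are talking to
stops responding.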
The data served from AFS _file_ servers can also be replicated
and this works very well for what I would call "mostly static" data.
Example: we have an AFS cell to share Linux and open source
resources. This is reference data and mostly ReadOnly.
So we have three fileservers in different countries with RO copies
of the data files (which are updated on a regular basis from the
RW "master" volume using AFS "vos release" magic).
We had a situation where one fileserver was disconnected and upgraded,
but users were still able to access the replicated files (automagically,
from the other fileservers). Users did not have to know that one
fileserver was down; they just continued to access the filesystem for
that cell.
If I had the resources, I would have two "cloned" fileservers at each
site, so that even if one was down the second would continue to serve
files to local users at that site.
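If you want to see this from the client side, there are commands for
exactly that (the volume name is again a placeholder):

    fs checkservers          # lists any fileservers the client believes are down
    vos listvldb linux.tools # shows which servers hold the RW and RO sites

When a fileserver holding one RO copy goes down, the cache manager
simply switches to another RO site listed in the VLDB.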
As for replicating ReadWrite data, that is not done in AFS.
If you think about it, it's a pretty tough problem to solve:
how do you keep multiple RW copies of the same data consistent
across multiple servers on different networks?
Another good feature of AFS is scalability. You can grow
the number of database and/or file servers to meet your needs.
A good example of this is a cluster of web servers all serving
their content out of AFS.
see also:
http://web.archive.org/web/19961227000628/http://www.ncsa.uiuc.edu/InformationServers/Conferences/CERNwww94/www94.ncsa.html
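As a sketch, each web server in such a cluster just runs an AFS client
and points its document root into /afs, something like this in
httpd.conf (the cell path is invented):

    DocumentRoot /afs/your.cell.name/www/htdocs

All the web servers then serve identical content, and you publish an
update to every one of them with a single "vos release".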
I have also run a mail system that delivered to mailboxes
in /afs $HOME directories. It worked OK for me,
but it takes quite a bit of configuration and setup.
You may be better off considering IMAP servers.
I hope this helps.
--
cheers
paul http://acm.org/~mpb
Chris Dos wrote:
> I've been looking at distributed file systems lately, and I may have
> been under the wrong impression that OpenAFS or Coda could be the Holy
> Grail for High Availability for the hosting company that I work for.
> The part that worries me right now is the server replication.
> According to the documentation that I've been reading, server
> replication is only good for volumes that don't see many files
> changing. So, replicating a database such as Oracle or MySQL that
> changes data often would be a bad idea, but web sites or mail might be
> good? Would it replicate the entire changed file, or just the pieces
> of the file that have changed? I'm looking at putting three high-end
> terabyte servers next to each other via Gigabit Ethernet, and having
> replication take place. Would the replicas be read-only? So all
> the write changes still have to take place on one server. If that
> server goes down, will write changes go to a server that is still up
> and running? Also, how does the client know which server to go to if
> there are three servers with identical data on the same subnet? Is
> there any type of load balancing going on to help distribute the load?
>
> Am I totally off my rocker in thinking AFS might be able to provide
> all these things? And if I am loony in thinking AFS or another
> distributed file system might be my holy grail, are there any other
> alternatives I should be looking at?
>
> Thank you for any insight you might be able to provide. I sincerely
> appreciate it.
>
> Chris Dos
>
> _______________________________________________
> OpenAFS-info mailing list
> OpenAFS-info@openafs.org
> https://lists.openafs.org/mailman/listinfo/openafs-info