[OpenAFS] Some AFS Architectural Questions
Christopher D. Clausen
cclausen@acm.org
Fri, 27 Oct 2006 17:52:05 -0500
Daniel Clark <dclark@pobox.com> wrote:
> On 10/27/06, Leggett, Jeff <jeffrey.leggett@etrade.com> wrote:
>> How does AFS's communication model compare with the stateless one of
>> NFS? We have had problems with vendors (EMC comes to mind) using NFS
>> to maintain links to storage arrays, links that fail often and hard.
>
> I'm not quite sure what you mean by "using NFS to maintain links", but
> in any case AFS (like NFSv4 and CIFS) maintains state; in practice the
> behavior is roughly analogous to hard NFS mounts (i.e. the client
> blocks until the server comes back).
One of the reasons we moved to AFS from NFS is that we would need to
completely reboot machines if NFS ever became wedged. (Well, that and
better Windows support.) AFS has always recovered from accidental
server reboots (don't name your servers afs1, afs2, afs3; it's too easy
to reboot the wrong one by typo :-) and from forced restarts of the fs
instance for whatever reason. Sometimes the client machine needs to be
helped along with the fs checkservers and fs checkvolumes commands
(abbreviated fs checks and fs checkv), but a reboot has never been
required to date.
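For reference, the two client-side recovery commands mentioned above
would be run on the client after a server comes back, along these lines:

```shell
# Ask the Cache Manager to re-probe the file servers it knows about,
# clearing any lingering "server down" state from the outage.
fs checkservers

# Discard cached volume location data so the client re-fetches it
# from the Volume Location (VL) servers.
fs checkvolumes
```

Both are safe to run at any time; they only refresh client-side state.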
> However for read-only volumes with copies on multiple servers, if one
> server goes down, the client automatically fails over to another
> server.
Yes, this is quite useful. You can also use the RW volume as a staging
area and then vos release out to the RO volumes once tests have been
done. We use it for our main website: with a copy on every server, the
site should stay up even if one server goes down. I assume you have
much more data, of greater importance, than the web pages of a student
organization.
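As a sketch of that staging workflow (the volume, server, and partition
names here are made up for illustration):

```shell
# Define read-only replication sites for the web volume on two servers.
vos addsite webfs1 /vicepa web.root
vos addsite webfs2 /vicepa web.root

# ... edit content in the read-write volume and test it ...

# Push the current read-write contents out to all RO sites at once.
vos release web.root
```

Clients mounting the RO path keep serving the last released snapshot
until the next vos release, so a broken edit never goes live by itself.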
If you want to keep multiple separate environments, you can trivially
copy volumes (vos copy), or dump (vos dump) and restore (vos restore)
them to different volumes. This should be significantly easier to
maintain long-term than NFS. Dumps can even be moved between separate
AFS cells, with certain caveats.
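A sketch of that copy and dump-and-restore cycle (all volume, server,
and partition names below are hypothetical):

```shell
# Clone an environment directly to a new volume within the cell.
vos copy -id proj.dev -fromserver fs1 -frompartition /vicepa \
    -toname proj.qa -toserver fs2 -topartition /vicepb

# Or dump to a file and restore it elsewhere, even in another cell.
vos dump -id proj.dev -file proj.dev.dump
vos restore -server fs2 -partition /vicepb \
    -name proj.qa -file proj.dev.dump
```

One commonly cited cross-cell caveat: ACLs in the dump reference the
source cell's protection database IDs, so they generally need to be
reset after restoring into a different cell.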
<<CDC
--
Christopher D. Clausen
ACM@UIUC SysAdmin