[OpenAFS] OpenAFS on MacOS X

Patrick J. LoPresti <patl@curl.com>
10 Jul 2001 17:27:14 -0400


Derek Atkins <warlord@MIT.EDU> writes:

> Two ways around this:
> 
> 	a) Make two AFS cells with cross-realm authentication.  This
> 	   is more difficult to set up but is probably the better
> 	   long-term solution.

The offices are not that independent administratively.  The west coast
office is very small, and the people there are as likely to be
visitors from the east coast as to be permanent residents.

>          Each cell can act independently, but people have access
>          across the board.  You could even (if you wanted to) run a
>          single Krb5 realm so that there is even a single namespace
>          across both cells (similarly to how at MIT there is
>          'athena', 'dev', 'sipb', and a bunch of other cells that
>          all use the 'ATHENA.MIT.EDU' kerberos realm).

This is what I will do if we go with separate cells.  Multiple
Kerberos realms is just too much work :-).
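
If we do go that route, my understanding is that a single realm mostly
comes down to pointing both cells at the same KDC and telling aklog
which realm backs each cell.  A rough, untested sketch, with made-up
cell names:

    # one realm, two cells: get a ticket once, then tokens for each cell
    kinit patl@CURL.COM
    aklog -c east.curl.com -k CURL.COM
    aklog -c west.curl.com -k CURL.COM

    # I think the servers in each cell also want the realm name listed
    # in /usr/afs/etc/krb.conf, but I have not verified that.

The cell names above are invented; the realm would just be whatever we
already run.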

> 	b) You can make sure that user's homedirs are located on
> 	   'local' servers.  This way you will be assured that users
> 	   can always get to their own homedir when they're sitting at
> 	   their home site.  Obviously this doesn't help the
> 	   'unavailable' situation, but quite honestly there is no
> 	   practical difference between 'down' and 'unavailable' as
> 	   far as AFS is concerned.

This was my actual plan.  The basic structure of the AFS tree would
consist of replicated RO volumes, with the individual users' home
directories living on a server close to them.  This would work
reasonably well even during outages, I think, except that someone
using Windows or Mac could get stuck just browsing down to their own
home directory.
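
Concretely, I picture the layout looking something like the commands
below.  The server, partition, volume, and path names are all
placeholders, and I have not actually run any of this yet:

    # replicate the top-level structure at both sites so browsing the
    # tree survives a WAN outage
    vos addsite east-fs1 /vicepa root.cell
    vos addsite west-fs1 /vicepa root.cell
    vos release root.cell

    # each user's home volume lives (read-write) on a server at their
    # home site; mount it under the read-write path and re-release the
    # parent so the new mount point shows up in the replicas
    vos create east-fs1 /vicepa user.patl
    fs mkmount /afs/.curl.com/users/patl user.patl
    vos release users    # assuming users/ is itself a replicated volume

In principle everything above the home directories then comes from a
local replica; the catch is exactly the one above, that Finder and
Explorer insist on stat()ing every mount point they display.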

> In neither case would a single 'users' directory work in the case of a
> downed connection between the offices.

It would work all right if it were on a replicated volume and if
stat() did not need to talk to the file server for the mountpoints.
Ultimately, this really is almost exactly the same problem as the root
volume has, but within a single cell.
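
(For what it is worth, a mount point is really just a magic symlink
stored in the parent directory; something like

    $ fs lsmount /afs/curl.com/users/patl
    '/afs/curl.com/users/patl' is a mount point for volume '#user.patl'

so reading it works fine from a local replica, but stat()ing it makes
the cache manager chase the target volume and fetch attributes from
whichever file server holds it, and that is the part that hangs when
the server is on the wrong side of a dead link.  The path above is just
an illustration.)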

> This implies that if you're on the 'wrong' side you can't perform
> any administrative tasks such as volume changes, group changes,
> password changes, etc.

That is OK; we already have exactly this limitation for our NT domain,
where we have a BDC in the remote office but the PDC back here.  As I
said, our connectivity is pretty good, just not perfect.  It would be
fine for some operations to require that connectivity, but it would
occasionally be annoying to need it just to browse around a bit.

I suppose there are other tricks we could play, like dividing up the
/users tree (among others) according to geographic location.
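
Something along these lines, say, with users.east and users.west homed
at their respective sites (names invented, nothing tested):

    vos create east-fs1 /vicepa users.east
    vos create west-fs1 /vicepa users.west
    fs mkmount /afs/.curl.com/users/east users.east
    fs mkmount /afs/.curl.com/users/west users.west
    vos release users

Then browsing under users/east only ever has to stat volumes that live
on east-coast servers, and likewise for the other side; only the top of
/users still mixes the two.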

> The difference is that 'root.afs' is considered "special" in the cache
> manager (well, the 'root volume' is considered special, and 'root.afs'
> is the default root volume).  This means it's a bit easier to deal
> with the root volume than anything else.  You would basically have to
> change the definition of a mountpoint in order to add this
> functionality, and I'm not sure how one could do that in a
> backwards-compatible way.

That is unfortunate.  It would be nice if there were a networked file
system that actually worried about things like the Mac Finder and the
Windows Explorer.  Doing that requires keeping the metadata for the top
level of a volume outside the volume; but that is not the way any of
these systems work, probably because it is not very Unix-like.  "The
permissions are in the root inode, of course; where else would they
be?"

> The question is: what kind(s) of loss are you willing to live with?

We could live with nothing but the ability for people to work with
their files.  And except for the #$%@^% Finder and Explorer hanging,
we can get there using just one AFS cell.

 - Pat