[OpenAFS] OpenAFS on MacOS X

Derek Atkins warlord@MIT.EDU
10 Jul 2001 18:25:05 -0400


"Patrick J. LoPresti" <patl@curl.com> writes:

> This was my actual plan.  The basic structure of the AFS tree would
> consist of replicated RO volumes, with the individual users' home
> directories living on a server close to them.  This would work
> reasonably well even during outages, I think, except that someone
> using Windows or Mac could get stuck just browsing down to their own
> home directory.

Well, they would only hang when the network drops, and only for as
long as it takes to time out the unavailable fileserver(s).  Note that
the client doesn't have a timeout per mountpoint, but rather a timeout
per fileserver.  And that timeout will only happen ONCE (per
fileserver, per outage) on a single client until the network comes
back.
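
If you want to watch this happen, `fs checkservers' asks the cache
manager which fileservers it currently considers down (a sketch; the
hostname is hypothetical and the exact wording varies by release):

    $ fs checkservers
    These servers unavailable due to network or server problems:
    fs1.example.com.

    $ fs checkservers          # after the network comes back
    All servers are running.

Only entries served by a listed server hang; everything else keeps
working.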

Note: perhaps the real solution here is to NOT have users "browse" to
their own homedir, but rather to provide a pointer to it directly,
either via another drive mapping or via a top-level shortcut.
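
On Windows that could be as simple as a login-script drive mapping
straight into the homedir (hypothetical cell and username, and
assuming the Windows client's \\afs\all submount for /afs):

    net use H: \\afs\all\example.com\users\ma\cam\j\jdoe

Then Explorer starts inside the user's own volume instead of
enumerating (and stat'ing) every sibling directory on the way down.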

> It would work all right if it were on a replicated volume and if
> stat() did not need to talk to the file server for the mountpoints.
> Ultimately, this really is almost exactly the same problem as the root
> volume has, but within a single cell.

Well, the problem per se is the same, except that root.afs is a KNOWN
volume whereas you want this for arbitrary volumes.  Currently there
is no way to signal to the client that any one volume is _special_.

> fine for some operations to require that connectivity, but it will
> occasionally be annoying if you need that connectivity just to browse
> around a bit.

Keep in mind that it will time out, and then browsing will continue
to work.  One reason /afs is SO BAD is that there are multiple
timeouts PER ENTRY.  In your case I doubt there would be more than 10
entries en masse; more likely a half-dozen servers per side of the
network.  So you pay at most a half-dozen timeouts, once per outage,
rather than one per directory entry.  I think you are highly
overestimating the time it would take to time out (especially if you
break down the users' mountpoints into relatively small
subdirectories).

> I suppose there are other tricks we could play, like dividing up the
> /users tree (among others) according to geographic location.

That would work, too.  You could have users/ma/cam/[a-z] and
users/ca/pao/[a-z], or something like that.  Then you barely lose at
all, because everything down through users/ma/cam/[a-z] can be
replicated; you only "lose" once you cross the mountpoint into a
particular user's (unreplicated) volume.
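
A sketch of how that might be set up (server names, the partition,
and volume names are all hypothetical):

    # Create the directory volume, mount it via the read-write path
    vos create fs1.example.com /vicepa users.ma.cam
    fs mkmount /afs/.example.com/users/ma/cam users.ma.cam

    # Replicate it on a server on each side of the network
    vos addsite fs1.example.com /vicepa users.ma.cam
    vos addsite fs2.example.com /vicepa users.ma.cam
    vos release users.ma.cam

    # Individual homedirs are mountpoints to unreplicated user volumes
    mkdir /afs/.example.com/users/ma/cam/j
    fs mkmount /afs/.example.com/users/ma/cam/j/jdoe user.jdoe
    vos release users.ma.cam

Note the edits happen under the read-write path (/afs/.example.com)
and only become visible to clients after the `vos release'.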

Another alternative is to provide a users/<username> symlink pointing
into users/path/to/username.  That would also solve your problem, if
you train people to always use the symlink path.
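
For example (hypothetical names again, and assuming the symlinks
live in the replicated top-level users volume):

    # A relative symlink inside the users volume, via the RW path
    ln -s ma/cam/j/jdoe /afs/.example.com/users/jdoe
    vos release users

The symlinks themselves live in a replicated volume, so you only
touch a user's fileserver once you actually follow their link.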

> That is unfortunate.  It would be nice if there were a networked file
> system that actually worried about things like the Mac Finder and the
> Windows Explorer.  It requires keeping the metadata for the top-level
> of a volume outside the volume; but that is not the way any of these
> systems work, probably because it is not very Unix-like.  "The
> permissions are in the root inode, of course; where else would they
> be?"

Well, consider that most Unix file browsers don't have this problem
(yes, there are exceptions) and that most non-Unix systems weren't
meant to be networked (at least until VERY recently).

-derek

-- 
       Derek Atkins, SB '93 MIT EE, SM '95 MIT Media Laboratory
       Member, MIT Student Information Processing Board  (SIPB)
       URL: http://web.mit.edu/warlord/    PP-ASEL-IA     N1NWH
       warlord@MIT.EDU                        PGP key available