[OpenAFS] Red Hat Linux beta kernel implements AFS?
Rudolph T Maceyko
rtm@cert.org
Tue, 20 Aug 2002 16:56:02 -0400
--On Monday, August 19, 2002 16:16:52 -0400 Derrick J Brashear
<shadow@dementia.org> wrote:
> http://www.dementia.org/~shadow/linux-2.4.18-afs.patch
Perhaps this will be slightly off-topic, but I think it will be of
interest to some folks. I've taken a quick stab at using this AFS
implementation, and have it half working. There's really no
documentation. I'll also bring this up on the limbo-list.
There are no userland utilities, so you'd still need vos, fs, etc. on a
box using it.
Caching is disabled (compiled out with #if 0).
No references to *krb* anywhere (so presumably no authenticated access).
No references to *res* anywhere (so no AFSDB record lookups via DNS).
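(For reference, AFSDB is the DNS record type an AFSDB-aware client
uses to locate a cell's VL servers; if a cell published them, a query
along these lines would show them, with example.com standing in for
a real cell name:
# dig -t AFSDB example.com
Since kafs never touches the resolver, it can't do this.)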
Client configuration is *much* different:
The local cell and servers are specified as options to the kafs kernel
module. Excerpt from /etc/modules.conf:
options kafs rootcell=example.com:192.168.0.1:192.168.0.2:192.168.0.3
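A bare modprobe picks that up from modules.conf; for a one-off test I
believe the same option can also be passed directly, e.g.
# modprobe kafs rootcell=example.com:192.168.0.1:192.168.0.2:192.168.0.3
though I haven't verified that.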
It is also possible to add cells dynamically by writing to
/proc/fs/afs/cells:
# echo add sub.example.com 192.168.1.1:192.168.1.2:192.168.1.3 > /proc/fs/afs/cells
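Reading the file back should list the cells the module knows about
(an assumption on my part, based on the usual /proc conventions):
# cat /proc/fs/afs/cells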
There is no concept of a CellServDB otherwise, but there is the
following comment:
mkafscache preloads /etc/sysconfig/kafs/cell-serv-db into the cache
Presumably "mkafscache" is a utility that would be packaged separately.
For each known cell, a directory /proc/fs/afs/<cellname> appears,
containing:
servers
vlserver
volumes
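Each of these appears to be a readable proc file; I'd expect something
like the following to dump the module's current idea of the cell's
servers (the exact output format is anyone's guess):
# cat /proc/fs/afs/example.com/servers
# cat /proc/fs/afs/example.com/vlserver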
OK, so I added the following entry to /etc/fstab:
none /afs afs rwpath,vol=#root.afs 0 0
(I don't know at this point whether I need rwpath.)
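Mounting by hand, without the fstab entry, should be equivalent to the
following, assuming mount passes the type and options straight through:
# mount -t afs none /afs -o 'rwpath,vol=#root.afs'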
Then:
# modprobe kafs
seems to load the module successfully, and it starts the following
kernel threads:
krxtimod - Rx RPC timeout daemon
krxiod - Rx RPC I/O kernel interface
krxsecd - Rx RPC security kernel thread interface
kafstimod - timeout daemon
kafsasyncd - async daemon
kafscmd - cache manager daemon
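Easy enough to confirm they're all running with something like:
# ps ax | grep -e krx -e kafs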
/afs isn't mounted yet, so I issue the following command:
# mount /afs
mount: Not a directory
Oh darn. /afs *is* there, and *is* a directory. This error appears to
be the result of a failed fetch-status on the RW vol for root.afs.
Note that the volume name parser required me to prefix the actual
volume name in fstab with either '#' for RO or '%' for RW, and I
*did* specify '#' to get the RO.
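In other words, the two fstab forms for this mount would be:
none /afs afs rwpath,vol=#root.afs 0 0    (RO)
none /afs afs rwpath,vol=%root.afs 0 0    (RW)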
A simple comparison of tcpdump output between the Red Hat kernel
module and the one from OpenAFS 1.2.6 shows a couple of differences
(a sample capture invocation follows the list):
- The first thing both implementations do at startup is a VLDB lookup
on "root.afs". Red Hat uses get-entry-by-name instead of
get-entry-by-name-u (OK, I guess)
- OpenAFS issues a get-addrs-u call and then another
get-entry-by-name-u but this time with the VID of root.afs.readonly.
Then it does 2 more get-addrs-u calls.
- Red Hat does a fetch-status fid <RWID>/1/1 while OpenAFS does a
fetch-status fid <ROID>/1/1. The server returns a -1 to the Red Hat
box, but does a whoareyou callback to the OpenAFS box, then sends the
answer. For some reason the OpenAFS box does yet another
get-entry-by-name-u on "root.afs".
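For the record, a capture along these lines would show all of the
above, assuming the standard AFS port assignments (7000/udp file
server, 7001/udp client callback, 7003/udp VL server); my exact
invocation may have differed:
# tcpdump -n udp port 7000 or udp port 7001 or udp port 7003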
The servers in this case are running Transarc AFS, and we've
occasionally had problems with them blacklisting clients. Perhaps
that is all that's in my way at the moment.
FYI,
Rudy