[OpenAFS] Deploying OpenAFS on a working server?

Matthew Weigel unique@idempot.net
Thu, 28 Apr 2005 20:58:36 -0500


Madhusudan Singh wrote:

>Part of what had weakened my arguments 
>earlier was a realization that openafs had, for a period of time (which I 
>assume has ended), become incompatible with linux kernels 2.6.x.

Someone will surely correct me, but I think that's just for the AFS 
client. I don't know if it's entirely solved or not yet.

>above, I do not wish to disrupt the workings of the server above (currently 
>accessible only through ssh/sftp). Partition / above is of type ext2 (which I 
>know OpenAFS does support). Is it possible for me to work with a tree such 
>as /afs/servername.domain.edu/u/m/s/msingh -> /home/msingh (arrow indicates 
>symlink) and make OpenAFS pick up its passwords from /etc/passwd ?
>

Yes, 'sort of.'

First, you can't simply export an existing filesystem space through AFS 
the way you can with NFS and Samba; even the namei server, which stays 
above the normal filesystem level, uses the underlying files in a 
'different' way to translate disk space into files in the AFS 
namespace. However, once it is exported, clients can link it around 
on their end however they like. Put another way, you would probably need 
to add a disk, get OpenAFS running, create OpenAFS volumes for users, 
and migrate files into OpenAFS - at which point the files would *only* 
be accessible to OpenAFS clients.
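For illustration, the volume-per-user step might look something like 
this (a sketch only, assuming an already-configured cell with a /vicepa 
partition; the server and path names are made up to match the example 
above):

```shell
# Create a volume for the user on the file server's /vicepa partition
vos create fs1.domain.edu /vicepa user.msingh

# Mount it into the AFS namespace under the cell's home tree
fs mkmount /afs/servername.domain.edu/u/m/s/msingh user.msingh

# Give the user full rights on their own directory
fs setacl /afs/servername.domain.edu/u/m/s/msingh msingh all
```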

Second, as I understand it Kerberos (which OpenAFS uses) is a 'shared 
secret' authentication mechanism, meaning kaserver (or whatever KDC you 
use) needs to derive its keys from the actual passwords; the one-way 
hashes in /etc/passwd or /etc/shadow can't be converted back, so 
/etc/passwd would not provide everything required. You would have to 
migrate users over, with each user setting a new password.
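With MIT Kerberos, for instance, migrating one user looks roughly like 
this (a sketch; 'msingh' and the realm name just follow the example 
above, and addprinc prompts for a fresh password precisely because the 
old hash can't be reused):

```shell
# On the KDC, create a Kerberos principal for the user
kadmin.local -q "addprinc msingh@SERVERNAME.DOMAIN.EDU"

# Create the matching entry in the AFS protection database
pts createuser msingh
```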

Finally, using Linux 2.6 as a client may involve a bit more work; a 
quick google suggests some people have gotten it to work at some points 
in time. Getting it to work would be required in order for users to ssh 
to the server and access their home directories there.
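If you do try the 2.6 client, the usual routine is to build the kernel 
module against your running kernel's build tree (a sketch; the headers 
path and the module name vary by distribution and OpenAFS version):

```shell
# Build the OpenAFS client against the running kernel's headers
./configure --with-linux-kernel-headers=/lib/modules/$(uname -r)/build
make
make install

# Load the client module and start the cache manager
modprobe libafs            # module name depends on how it was packaged
/etc/init.d/openafs-client start
```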

My general understanding of what 'works' - and the approach I am taking 
in my own cell - is to have a back-end server that users never see, 
which provides Kerberos authentication and runs the OpenAFS server. You 
then provide another system, a Kerberos and OpenAFS client, to which end 
users can actually connect via ssh or whatever (it would also be where 
mail is stored if it's a mail server, where the web server would run if 
it were a web server, and so on). Note, however, that users running 
their own AFS client bypass that intermediary system entirely and talk 
straight to the file server.
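On the front-end machine, pointing the client at the back-end cell is 
just two small files (a sketch; the locations are the traditional 
/usr/vice/etc ones, and the host name and address are hypothetical):

```
# /usr/vice/etc/ThisCell - the cell this client belongs to
servername.domain.edu

# /usr/vice/etc/CellServDB - database server(s) for the cell
>servername.domain.edu      # our cell
10.0.0.2                    #afsdb1.domain.edu
```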

The good news is that such a backend server is not as compute-intensive 
as a shell server, so aside from reliability concerns even the cheapest 
server system from Dell could serve a limited number of clients just 
fine (the general assumption is that as your client base and usage go 
up, so too does your budget - and you can easily *add* OpenAFS servers 
as time goes on). This basically lets you separate your servers into 
computation servers and file servers.
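Adding capacity later is mostly a matter of bringing up another 
fileserver in the same cell and shuffling volumes onto it, which is 
transparent to clients (a sketch; the server and partition names are 
hypothetical):

```shell
# Move a user's volume from the original server to a newly added one;
# clients keep working through the move
vos move user.msingh fs1.domain.edu /vicepa fs2.domain.edu /vicepa

# Verify where the volume now lives
vos examine user.msingh
```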

To put that all together, my recommendation would be to
1- leave things on that server as they are for now; budget time and 
money for a second server (focusing on nothing but disks), and 
configure that as an OpenAFS server;
2- set up your current server as an OpenAFS client;
3- create Kerberos principals for all users and slowly get them all to 
set their Kerberos passwords;
4- and then have a flag day where the server is 'down' while you copy 
home directories into AFS and modify /etc/passwd to authenticate against 
OpenAFS and point to OpenAFS directories for $HOME.
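The /etc/passwd edit in step 4 is mechanical enough to script. A 
minimal sketch, assuming the u/<first>/<second>/<user> layout from the 
example above and that regular users have uid >= 1000 (adjust the 
cutoff for your system):

```shell
# Rewrite the home-directory field of regular users (uid >= 1000) to the
# AFS u/<first>/<second>/<user> layout; system accounts pass through
# unchanged. Reads passwd lines on stdin, writes rewritten lines to stdout.
afs_passwd() {
  awk -F: -v OFS=: -v cell="$1" '
    $3 >= 1000 { $6 = "/afs/" cell "/u/" substr($1,1,1) "/" substr($1,2,1) "/" $1 }
    { print }
  '
}

# Preview the result before the flag day:
# afs_passwd servername.domain.edu < /etc/passwd > /tmp/passwd.afs
```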
-- 
 Matthew Weigel