[OpenAFS] Deploying OpenAFS on a working server ?

Madhusudan Singh singh.madhusudan@gmail.com
Fri, 29 Apr 2005 12:23:24 -0400


Thanks for your long and useful response.

> Madhusudan Singh wrote:
> >Part of what had weakened my arguments
> >earlier was a realization that openafs had, for a period of time (which I
> >assume has ended), become incompatible with linux kernels 2.6.x.
>
> Someone will surely correct me, but I think that's just for the AFS
> client. I don't know if it's entirely solved or not yet.

Oh, I am sorry - I had assumed that this was a problem on the server side 
as well. In any case, only three of our users (including myself) run Linux 
with 2.6.x kernels; the rest are Windows users, plus one Mac user.

>
> >above, I do not wish to disrupt the workings of the server above
> > (currently accessible only through ssh/sftp). Partition / above is of
> > type ext2 (which I know OpenAFS does support). Is it possible for me to
> > work with a tree such as /afs/servername.domain.edu/u/m/s/msingh ->
> > /home/msingh (arrow indicates symlink) and make OpenAFS pick up its
> > passwords from /etc/passwd?
>
> Yes, 'sort of.'

As they say, the devil is in the details.

>
> First, you can't simply export an existing filesystem space through AFS
> the way you can with NFS and Samba; even the namei server, which
> doesn't do anything below the normal filesystem level, uses the files in
> a 'different' way to translate disk space into files in the AFS
> namespace. However, once it is exported, clients can link it around
> on their end however they like. Put another way, you would probably need
> to add a disk, get OpenAFS running, create OpenAFS volumes for users,
> and migrate files into OpenAFS - at which point the files would *only*
> be accessible to OpenAFS clients.

I see.
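
So, just to make sure I understand the mechanics: I would dedicate a 
partition to AFS (mounted as /vicepa, say), create a volume per user, and 
then mount each volume into the AFS tree. Guessing at the syntax from the 
documentation (the volume name user.msingh and the partition letter are my 
own inventions):

    # on the file server: create a user volume on partition /vicepa
    vos create servername.domain.edu a user.msingh

    # from any client with admin tokens: graft it into the namespace
    fs mkmount /afs/servername.domain.edu/u/m/s/msingh user.msingh
    fs setacl /afs/servername.domain.edu/u/m/s/msingh msingh all

Is that roughly the shape of it?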

>
> Second, as I understand it Kerberos (which OpenAFS uses) is a 'shared
> secret' authentication mechanism, meaning kaserver (or whatever) needs
> access to the unencrypted passwords: the one-way crypt hashes in
> /etc/passwd cannot be converted into Kerberos keys, so /etc/passwd
> would not provide everything required. You would have to migrate
> users over.

Hmm. This could raise some hackles, but I guess it cannot be helped.
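
If I understand correctly, "migrating a user over" would then mean creating 
a Kerberos principal for each account and having the user pick a fresh 
password - with kaserver, something like this for each user (jdoe is a 
made-up name):

    kas create jdoe    # prompts for jdoe's new Kerberos password

rather than converting anything from the existing password file.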

Second question - isn't storing passwords unencrypted a serious security 
weakness? I ask as someone who does not know a whole lot about Kerberos.

>
> Finally, using Linux 2.6 as a client may involve a bit more work; a
> quick Google search suggests some people have gotten it to work at some
> points in time. Getting it to work would be required for users to ssh
> to the server and access their home directories there.
>
> My general understanding of what 'works' - and the approach I am taking
> in my own cell - is to have a back-end server, which users never see,
> that provides Kerberos authentication and runs the OpenAFS server. You
> then provide another system, a Kerberos and OpenAFS client, to which
> end users actually connect via ssh or whatever (it would also be where
> mail is stored if it's a mail server, where the web server would run if
> it were a web server, and so on). However, if users run an AFS client
> themselves, they bypass the intermediary server entirely.

Aha. This is a little more complicated than the simple picture I had, in 
which the client connects directly to the OpenAFS server.
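
Let me see if I have the topology right (my sketch, not yours):

    users (ssh/mail/web) ---> front-end server --------> back-end server
                              (Kerberos + OpenAFS        (Kerberos KDC +
                               *client*)                  OpenAFS file server)

    users running an AFS client -----------------------------^

Is that about right?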

Now, if the OpenAFS client needs to run on a different server (to which 
users connect via ssh), two questions arise:

1. Can Windows users (who cannot be bothered with the internal details) 
mount their directories as drives on their machines?

2. If there are issues with kernel 2.6, running this client on a second 
machine (which I have, and which is going to be running a mail server and 
a web server) could be a problem. One possible "bright" spot is that the 
second server runs not Linux but FreeBSD 5.3-RELEASE. However, the disk 
space available on that machine for any non-web, non-mail task is severely 
limited (about 10 GB) - I just partitioned it that way.

>
> The good news is that such a back-end server is not as compute-intensive
> as a shell server, so aside from reliability concerns even the
> cheapest server system from Dell could be a fine server for a limited
> number of clients (the general assumption is that as your client base
> and usage go up, so too does your budget - and you can easily *add*
> OpenAFS servers as time goes on). This basically lets you separate your
> servers into computation servers and file servers.

Hmm. Well, my idea was to run the OpenAFS server on the fastest machine 
that I have.

Let us say I have two machines (each with its own FQDN), A and B.

A is a "slow" machine with less memory; it has to run a web server and a 
mail server. It runs FreeBSD 5.3-RELEASE (as I indicated earlier). It also 
has very little hard disk space left over for users.

B is a "fast" machine with a lot of memory; it has to run a Zope server 
(which the web server above connects to), has a lot of hard disk space (I 
listed it in my initial email), and will also be used for running intensive 
computations. This is the machine I was planning to deploy my OpenAFS 
server on.
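
(One thing that makes me less nervous about picking the "wrong" machine: if 
I read the vos man page correctly, a volume can be shuffled between servers 
later with something like

    vos move user.msingh B.domain.edu a A.domain.edu a

where the hostnames are invented - so the initial placement would not be 
final. Please correct me if that is wrong.)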

>
> To put that all together, my recommendation would be to
> 1- leave things on that server as they are for now; budget time and
> money for a second server (focusing on nothing but disks), and
> configure that as an OpenAFS server;

Hmm.
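
Just so I know what step 1 would involve: from the quickstart guide, I 
gather that bringing up the file server processes on the new machine - once 
the cell and its key are set up, which I am glossing over - looks roughly 
like this:

    bosserver &
    bos create newserver.domain.edu fs fs \
        -cmd /usr/afs/bin/fileserver \
        -cmd /usr/afs/bin/volserver \
        -cmd /usr/afs/bin/salvager

(newserver.domain.edu is a placeholder, and the paths are the traditional 
Transarc ones; packaged builds may put things elsewhere.)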

> 2- set up your current server as an OpenAFS client;

Ok. So the machine where the files are currently stored becomes the 
client, and a second machine (say A above) serves as the OpenAFS 
authentication server.
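
For step 2, I take it the client side mostly amounts to pointing the cache 
manager at the cell and starting it - something like this, if I am reading 
the standard layout correctly:

    echo servername.domain.edu > /usr/vice/etc/ThisCell
    # list the cell's database server(s) in /usr/vice/etc/CellServDB
    afsd    # start the cache manager (kernel module loaded first)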

> 3- create Kerberos principals for all users and slowly get them all to
> set their Kerberos passwords;

On B I presume.
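
If it helps anyone else reading along, I imagine this step could be 
scripted over the existing accounts - a sketch, assuming kaserver's kas 
(MIT's kadmin would differ) and that ordinary accounts here have UIDs >= 
500:

    for u in $(awk -F: '$3 >= 500 {print $1}' /etc/passwd); do
        kas create $u        # prompts for an initial Kerberos password
        pts createuser $u    # register the user in the protection database
    done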

> 4- and then have a flag day where the server is 'down' while you copy
> home directories into AFS, switch authentication over to Kerberos, and
> point /etc/passwd at OpenAFS directories for $HOME.

Ok.
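
And for the flag day itself, I picture something like this per user (the 
UID, GID, and shell are invented for illustration), plus whatever PAM 
changes are needed for Kerberos logins:

    # copy the old home directory into the user's AFS volume
    cp -a /home/msingh/. /afs/servername.domain.edu/u/m/s/msingh/

    # then point the passwd entry at AFS
    msingh:x:1004:1004:Madhusudan Singh:/afs/servername.domain.edu/u/m/s/msingh:/bin/bash

Does that match what you have in mind?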