[OpenAFS] Currently correct info for Debian sarge OpenAFS install?

Eric Bennett eric@umbralservices.com
Tue, 03 May 2005 12:52:41 +1000


Now this is an interesting one. I did run afs-newcell, since the quick and 
dirty Debian guide advised that it needed to be done; however, here is 
the exact output of the command:

raven:/usr/share/doc/openafs-fileserver# afs-newcell
                            Prerequisites

In order to set up a new AFS cell, you must meet the following:

1) You need a working Kerberos realm with Kerberos4 support.  You
   should install Heimdal with Kth-kerberos compatibility or MIT
   Kerberos5.

2) You need to create the single-DES AFS key and load it into
   /etc/openafs/server/KeyFile.  If your cell's name is the same as
   your Kerberos realm then create a principal called afs.  Otherwise,
   create a principal called afs/cellname in your realm.  The cell
   name should be all lower case, unlike Kerberos realms which are all
   upper case.  You can use asetkey from the openafs-krb5 package, or
   if you used AFS3 salt to create the key, the bos addkey command.

3) This machine should have a filesystem mounted on /vicepa.  If you
   do not have a free partition, then create a large file by using dd
   to extract bytes from /dev/zero.  Create a filesystem on this file
   and mount it using -oloop.

4) You will need an administrative principal created in a Kerberos
realm.  This principal will be added to susers and
system:administrators and thus will be able to run administrative
commands.  Generally the user is a root instance of some administrative
user.  For example if jruser is an administrator then it would be
reasonable to create jruser/root and specify jruser/root as the user
to be added in this script.

5) The AFS client must not be running on this workstation.  It will be
at the end of this script.

Do you meet these requirements? [y/n] y
If the fileserver is not running, this may hang for 30 seconds.
/etc/init.d/openafs-fileserver stop
Stopping AFS Server: bosserver.
What administrative principal should be used? eric
echo \>umbralservices.com >/etc/openafs/server/CellServDB
/etc/init.d/openafs-fileserver start
Starting AFS Server: bosserver.
bos addhost raven raven -localauth ||true
bos: could not find entry (can't find cell '<default>' in cell database)
bos adduser raven eric -localauth
bos: could not find entry (can't find cell '<default>' in cell database)
Failed: 256
bos: could not find entry (can't find cell '<default>' in cell database)
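
(As an aside, for anyone following this later: prerequisites 2 and 3 above 
can be satisfied roughly like so. This is only a sketch for my setup; the 
principal name, keytab path, image size, and the <KVNO> placeholder are 
mine, and the kvno must match what kadmin reports for the afs principal.)

# Prerequisite 2: create the single-DES AFS key and load it (MIT krb5)
kadmin -q "addprinc -randkey afs/umbralservices.com"
kadmin -q "ktadd -k /tmp/afs.keytab -e des-cbc-crc:normal afs/umbralservices.com"
asetkey add <KVNO> /tmp/afs.keytab afs/umbralservices.com

# Prerequisite 3: loopback filesystem on /vicepa if there is no spare partition
dd if=/dev/zero of=/var/vicepa.img bs=1M count=512
mke2fs -F /var/vicepa.img
mkdir /vicepa
mount -oloop /var/vicepa.img /vicepa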

As you can see from the bos errors above, it's not getting which server 
it ought to be adding to. As far as I can see, this is due to

echo \>umbralservices.com >/etc/openafs/server/CellServDB

overwriting the correct content, which should be:

>umbralservices.com    # cell
69.60.123.88           # raven
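
In other words, to put CellServDB right by hand, something like this 
should do it (cell name and IP are of course mine):

cat > /etc/openafs/server/CellServDB <<EOF
>umbralservices.com    # cell
69.60.123.88           # raven
EOF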

Those automatically executed commands, however, are actually included in 
the configuration documentation, so it appears the author expected us to 
figure out how to make CellServDB work on our own and then execute those 
commands individually. I daresay you're right about the ptserver not 
having an initialised DB, but as you can see, afs-newcell is not doing 
the job there. Is there a direct way to do it, or a way to fix 
afs-newcell? I might actually try commenting out the line in the perl 
script which botches my CellServDB right now.
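
For reference, my reading of afs-newcell suggests the by-hand equivalent, 
once CellServDB is correct, would be roughly the following. Treat the 
prdb.DB0 path and the pt_util input format as assumptions reconstructed 
from the Debian script rather than gospel:

/etc/init.d/openafs-fileserver stop
# Seed an initial PRDB without ever running ptserver in noauth mode,
# the way the Debian scripts do it with pt_util:
pt_util -p /var/lib/openafs/db/prdb.DB0 -w <<EOF
eric 128/20 1 -204 -204
system:administrators 130/20 -204 -204 -204
 eric 1
EOF
/etc/init.d/openafs-fileserver start
# Then the steps afs-newcell was trying to run:
bos addhost raven raven -localauth
bos adduser raven eric -localauth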

Regards
Eric


Jeffrey Hutzelman wrote:

>
>
> On Monday, May 02, 2005 07:30:50 PM -0700 Russ Allbery 
> <rra@stanford.edu> wrote:
>
>> Eric Bennett <eric@umbralservices.com> writes:
>>
>>> I've been having a nightmare of a time trying to get OpenAFS installed
>>> under Debian. I've gotten to the point where you create a volume (I
>>> assume) with the command vos create (host) a root.afs -localauth, and it
>>> just hangs. I've tried stracing the fileserver process as well as the
>>> bosserver process; it appears to be hanging on
>>
>>
>>> [pid  7169] connect(3, {sa_family=AF_INET, sin_port=htons(2040),
>>> sin_addr=inet_addr("127.0.0.1")}, 16) = -1 ECONNREFUSED (Connection
>>> refused)
>>
>>
>> Well, this particular problem is because of /etc/hosts, as previously
>> mentioned.  (That's the only reason why something would be connecting to
>> 127.0.0.1.)
>
>
> Actually, no.  Port 2040/tcp is the fssync interface, which is the 
> communication channel between the fileserver, volserver, and other 
> volume utilities running on the same machine.  It listens _only_ on 
> 127.0.0.1, and connections via that address are perfectly normal.
>
> The connection is being refused because the fileserver hasn't finished 
> initializing yet.  This is also perfectly normal, for a time, but not 
> indefinitely.  In this case, the reason the fileserver has not 
> finished initializing is called out clearly in the logs -- it can't 
> get a CPS for system:anyuser, because that entry doesn't yet exist in 
> the PRDB (error 267268 is PRNOENT, "User or group doesn't exist").
>
> The reason for the PRNOENT is also called out clearly in the logs.  
> The ptserver was started with _no_ database, and because it is not 
> running in noauth mode, it will not construct one from scratch.  This 
> is somewhat expected; running the ptserver in noauth mode has 
> significant security implications, and so it's desirable to avoid ever 
> doing so.  The Debian scripts accomplish this by using pt_util to spin 
> an initial PRDB from scratch before starting the ptserver for the 
> first time.  Since these scripts were apparently not used in this 
> case, there is no PRDB.
>
>
> IMHO the simplest solution would be to use the afs-newcell script 
> provided with the Debian packages to emit a new PRDB.
>
> -- Jeffrey T. Hutzelman (N3NHS) <jhutz+@cmu.edu>
>   Sr. Research Systems Programmer
>   School of Computer Science - Research Computing Facility
>   Carnegie Mellon University - Pittsburgh, PA