[OpenAFS-devel] no access during afs-rootvol

Josh Huber huber+keyword+openafs-devel.8b4f6e@alum.wpi.edu
Sat, 24 Aug 2002 12:00:48 -0400


I'm using the Debian packages from testing (not the packages available
from openafs.org) to set up a new cell.  I've never set up AFS before,
so this has been quite the challenge :)

I successfully created the cell.  Here is the output from afs-newcell:

If the fileserver is not running, this may hang for 30 seconds.
/etc/init.d/openafs-fileserver stop
Stopping AFS Server: bosserver.
What administrative principal should be used? huber
echo \>paradoxical.net >/etc/openafs/server/CellServDB
/etc/init.d/openafs-fileserver start
Starting AFS Server: bosserver.
bos addhost fs.paradoxical.net fs.paradoxical.net -localauth ||true
bos adduser fs.paradoxical.net huber -localauth
pt_util: /var/lib/openafs/db/prdb.DB0: Bad UBIK_MAGIC. Is 0 should be 354545
Ubik Version is: 2.0
Error while creating system:administrators: Entry for id already exists
pt_util: Ubik Version number changed during execution.
Old Version = 2.0, new version = 33554432.0
bos create fs.paradoxical.net ptserver simple /usr/lib/openafs/ptserver -localauth
bos create fs.paradoxical.net vlserver simple /usr/lib/openafs/vlserver -localauth
bos create fs.paradoxical.net fs fs -cmd /usr/lib/openafs/fileserver -cmd /usr/lib/openafs/volserver -cmd /usr/lib/openafs/salvager -localauth
Waiting for database elections: done.
vos create fs.paradoxical.net a root.afs -localauth
Volume 536870912 created on partition /vicepa of fs.paradoxical.net
echo paradoxical.net >/etc/openafs/ThisCell
/etc/init.d/openafs-client force-start
Starting AFS services: afsd: All AFS daemons started.
 afsd.
Now, get tokens as huber in the paradoxical.net cell.  Then, run
afs-rootvol.
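
One thing in that output that worries me is the pt_util complaint
about UBIK_MAGIC and the Ubik version changing during execution.  If
the protection database really is corrupt, would the right fix be to
stop the server and remove the database before re-running afs-newcell,
e.g. something like:

    /etc/init.d/openafs-fileserver stop
    rm /var/lib/openafs/db/prdb.DB0 /var/lib/openafs/db/prdb.DBSYS1

(Just a guess at the file names; prdb.DB0 is the one mentioned in the
error above.)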


Here is the output showing the AFS volume mounted:

Filesystem           1k-blocks      Used Available Use% Mounted on
/dev/hda1               964500    680760    234744  75% /
/dev/hdb2               735252    621444     76460  90% /usr
/dev/evms/afs         10321140     32884   9763972   1% /vicepa
AFS                    9000000         0   9000000   0% /afs

Next, I get my Kerberos ticket for the principal I specified in
afs-newcell (huber), and get a token using aklog:

fs:/# kinit huber
Password for huber@PARADOXICAL.NET: 
fs:/# aklog paradoxical.net -k PARADOXICAL.NET
fs:/# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: huber@PARADOXICAL.NET

Valid starting     Expires            Service principal
08/24/02 11:51:13  08/24/02 21:51:12  krbtgt/PARADOXICAL.NET@PARADOXICAL.NET
08/24/02 11:51:49  08/24/02 21:51:12  afs@PARADOXICAL.NET


Kerberos 4 ticket cache: /tmp/tkt0
klist: You have no tickets cached
fs:/# tokens

Tokens held by the Cache Manager:

User's (AFS ID 1) tokens for afs@paradoxical.net [Expires Aug 24 21:51]
   --End of list--


/vicepa is on a volume managed by EVMS (http://evms.sf.net/), but I
don't think that's a problem.
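
If EVMS could matter after all, I suppose I could confirm that the
fileserver sees the partition with something like:

    vos listpart fs.paradoxical.net -localauth

(just guessing at the right check).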

When I run afs-rootvol to create the root.afs volume, I get the
following output:

You will need to select a server (hostname) and AFS
partition on which to create the root volumes.
What AFS Server should volumes be placed on? fs
What partition? [a] a
fs sa /afs system:anyuser rl
fs: You don't have the required access rights on '/afs'
Failed: 256
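
Is there something obvious I should check before that fs sa step,
e.g. the existing ACL on /afs with something like:

    fs listacl /afs
    fs examine /afs

(assuming those are the right commands to see who currently has
access)?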


Here is the protection database information:

fs:/# pt_util -u
Ubik Version is: 33554432.0
huber      128/20 1 -204 -204
anonymous  128/20 32766 -204 -204
fs:/# pt_util -g
Ubik Version is: 33554432.0
system:backup 2/0 -205 -204 -204
system:administrators 130/20 -204 -204 -204
system:ptsviewers 2/0 -203 -204 -204
system:authuser 2/0 -102 -204 -204
system:anyuser 2/0 -101 -204 -204
fs:/# pt_util -m
Ubik Version is: 33554432.0
system:backup 2/0 -205 -204 -204
system:administrators 130/20 -204 -204 -204
   huber    1
system:ptsviewers 2/0 -203 -204 -204
system:authuser 2/0 -102 -204 -204
system:anyuser 2/0 -101 -204 -204
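
(If it would help, I can also query the running ptserver directly,
e.g. with:

    pts examine huber -cell paradoxical.net
    pts membership huber -cell paradoxical.net

assuming authenticated pts queries work at this point.)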


I am running krb524d, but I'm only using krb5 tickets for
authentication of other services (mostly ssh, but also LDAP).
Kerberos is MIT krb5 1.2.5, and it has been working fine with several
machines for a few weeks now.
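
One guess: could this be a key version mismatch between the KDC and
the server's KeyFile?  If so, I assume the server side could be
checked with something like:

    bos listkeys fs.paradoxical.net -localauth

and compared against the kvno of the afs principal on the KDC.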

Does anyone have any ideas about what could be going wrong here?

Thanks,

-- 
Josh Huber