[OpenAFS] Debian - openafs -noauth problems
Russ Allbery
rra@stanford.edu
Fri, 19 Aug 2005 21:20:49 -0700
--=-=-=
Madhusudan Singh <singh.madhusudan@gmail.com> writes:
> On Friday 19 August 2005 11:28 pm, Russ Allbery wrote:
>> I have new instructions and new copies of the scripts if you'd like to
>> give them a try instead as a test. They're the ones that will be in
>> the next release of the Debian packages.
> I would like to try them out.
Attached. (Review from anyone else would also be welcome.) I still want
to add a check to afs-newcell to be sure that the client CellServDB
contains an entry for the new cell, but other than that, they should
hopefully be nearly done.
> My /etc/openafs/cacheinfo contains :
> /afs:/usr/vice/cache/openafs:10000
[...]
> Is there something in Debian packages that *assumes* that the cache is
> located in /var ?
Nope. That should be fine.
--=-=-=
Content-Disposition: inline; filename=README.servers
Setting up a Debian OpenAFS Server
Introduction
This document describes how to set up an OpenAFS server using the Debian
packages. If you are not already familiar with the basic concepts of
OpenAFS, you should review the documentation at:
<http://www.openafs.org/doc/index.htm>
particularly the AFS Administrator's Guide. This documentation is
somewhat out of date (it doesn't talk about how to use a Kerberos v5 KDC
instead of the AFS kaserver, for example), but it's a good introduction
to the basic concepts and servers you will need to run.
The Debian OpenAFS packages follow the FHS and therefore use different
paths than the standard AFS documentation or the paths that experienced
AFS administrators may be used to. In the first column below are the
traditional paths, and in the second column, the Debian paths:
/usr/afs/etc        /etc/openafs/server
/usr/afs/local      /etc/openafs/server-local
/usr/afs/db         /var/lib/openafs/db
/usr/afs/logs       /var/log/openafs
/usr/afs/bin        /usr/lib/openafs
/usr/vice/etc       /etc/openafs
The AFS kaserver (a Kerberos v4 KDC) is not packaged for Debian. Any
new OpenAFS installation should use Kerberos v5 for authentication in
conjunction with either the tools packaged in the openafs-krb5 package
or the Heimdal KDC. When setting up a new cell, you should therefore
not set up a kaserver as described in the AFS Administrator's Guide, and
you will need to follow a slightly different method of setting the cell
key.
Creating a New Cell
For documentation on adding a server to an existing cell, see below.
These instructions assume that you are using MIT Kerberos and the
openafs-krb5 package. If you are using Heimdal instead, some of the
steps will be slightly different (Heimdal can write the AFS KeyFile
directly, for example, so you don't have to use asetkey). The
afs-newcell and afs-rootvol scripts are the same, however.
/usr/share/doc/openafs-dbserver/configuration-transcript.txt.gz has a
transcript of the results of these directions, which you may want to
follow along with as you do this.
1. If you do not already have a Kerberos KDC (Key Distribution Center,
the daemon that handles Kerberos authentication) configured, do so.
You can run the KDC on the same system as your OpenAFS db server,
although if you plan on using Kerberos for other things, you may
eventually want to use separate systems. If you do not have a
Kerberos realm set up already, you can do so in Debian with:
apt-get install krb5-admin-server
krb5_newrealm
This will install a KDC and kadmind server (the server that handles
password changes and account creations) on the local system. Please
be aware that the security of everything that uses Kerberos for
authentication, including AFS, depends on the security of the KDC.
The name of your Kerberos realm should, for various reasons, be in
all uppercase and be a domain name that you control, although
neither is technically required.
Right now, for the aklog from openafs-krb5 to work, you need to
enable krb4 support (either full or preauth) and run krb524d.
Eventually this will no longer be necessary.
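As a sketch (option names are from MIT Kerberos as packaged in Debian
at the time of writing; check them against your version), krb4 preauth
support is enabled in the [kdcdefaults] section of
/etc/krb5kdc/kdc.conf:

    [kdcdefaults]
        v4_mode = preauth

and krb524d can then be started alongside the KDC, on Debian by
setting RUN_KRB524D=true in /etc/default/krb5-kdc and restarting
krb5-kdc, if your version of the krb5-kdc package supports that
variable.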
2. It is traditional (and recommended) in AFS (and for Kerberos) to
give administrators two separate Kerberos principals, one regular
principal to use for regular purposes and a separate admin principal
to use for privileged actions. This is similar to the distinction
between a regular user and the root user in Unix, except that
everyone can have their own separate root identity. Kerberos
recommends username/admin as the admin principal for username, and
this will work for AFS as well.
If you have not already created such an admin principal for yourself
in your Kerberos realm, do so now (using kadmin.local on your KDC,
unless you have a local method that you prefer). Also create a
regular (non-admin) principal for yourself if you have not already;
this is the identity that you'll use for regular operations, like
storing files or reading mail.
If the KDC is not on the same system that the OpenAFS db server will
be on, you will also need to give your admin principal the rights to
download the afs keytab by adding a line like the following to
/etc/krb5kdc/kadm5.acl:
username/admin@REALM *
where REALM is your Kerberos realm and username/admin is the admin
principal that you created. That line gives you full admin access
to the Kerberos v5 realm. You can be more restrictive if you want;
see the kadmind man page for the syntax.
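As a concrete sketch (username is a placeholder for your account
name), both principals can be created from a root shell on the KDC
with:

    kadmin.local -q "addprinc username/admin"
    kadmin.local -q "addprinc username"

kadmin.local will prompt for a password for each new principal.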
3. Install the OpenAFS db server package on an appropriate system with:
apt-get install openafs-dbserver openafs-krb5
The openafs-krb5 package will be used to create the AFS KeyFile.
As part of this installation, you will need to configure
openafs-client with the cell you are creating as the local cell name
and the server on which you're working as the db server. This name
is technically arbitrary but should, for various reasons, be a valid
domain name that you control; unlike Kerberos realms, it should be
in all lowercase. Enter the name of the local system when prompted
for the names of your OpenAFS db servers. Don't start the client;
that will happen below. For right now, say that you don't want it
to start at boot. You can change that later with dpkg-reconfigure
openafs-client.
If you have already installed openafs-client and configured it for
some other cell, you will need to reconfigure it to point to your new
cell for these instructions to work. Stop the AFS client on the
system with /etc/init.d/openafs-client stop and then run:
dpkg-reconfigure openafs-client
pointing it to the new cell you're about to create instead.
Remember, your cell name should be in lowercase. If you have had to
do this several times, double-check /etc/openafs/CellServDB when
you're done and make sure that there is only one entry for your new
cell at the top of that file and that it lists the correct IP
address for your new db server.
In order to complete the AFS installation, you will also need a
working AFS client installed on that system, which means that you
need to install an OpenAFS kernel module. Please see:
/usr/share/doc/openafs-client/README.modules
for information on how to do that.
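For example, if you use module-assistant, something like the
following should build and install a module matching your running
kernel (the exact module package name may vary between releases;
README.modules is authoritative):

    apt-get install module-assistant
    m-a prepare
    m-a auto-install openafs-modules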
4. Create an AFS principal in Kerberos. This is the AFS service
principal, used by clients to authenticate to AFS and for AFS
servers to authenticate to each other. It *must* be a DES key; AFS
does not support any other encryption type. Run kadmin.local on
your KDC and then, at the kadmin.local prompt, run:
addprinc -randkey -e des-cbc-crc:v4 afs
If your Kerberos realm name does not match your AFS cell name (if,
for instance, you have one Kerberos realm with multiple AFS cells),
use "afs/cell.name" as the name of the principal above instead of
just "afs", where cell.name is the name of your new AFS cell.
5. On the db server, download this key into a keytab. If this is the
same system as the KDC, you can use kadmin.local again. If not, you
should use kadmin (make sure that krb5-user is installed), and you
may need to pass -p username/admin to kadmin to tell it what
principal to authenticate as. Whichever way you get into kadmin,
run:
ktadd -k /tmp/afs.keytab -e des-cbc-crc:v4 afs
(or afs/cell.name if you used that instead). In the message that
results, note the kvno number reported, since you'll need it later
(it will normally be 3).
Don't forget the -e des-cbc-crc:v4 option to force the afs key to be DES.
You can verify this with:
getprinc afs
and checking to be sure that the only key listed is a DES key. If
there are multiple keys listed, delprinc the afs principal, delete
the /tmp/afs.keytab file, and then start over with addprinc, making
sure not to forget the -e option.
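For reference, a correctly created principal shows exactly one key,
and that key is DES; the relevant getprinc output looks something
like this (the exact wording varies between MIT Kerberos versions):

    Number of keys: 1
    Key: vno 3, DES cbc mode with CRC-32, no salt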
6. Create the AFS KeyFile with:
asetkey add <kvno> /tmp/afs.keytab afs
(or afs/cell.name if you used that instead). <kvno> should be
replaced by the kvno number reported by kadmin. This tells AFS the
Kerberos key that it should use, making it match the key in the
Kerberos KDC.
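For example, if kadmin reported kvno 3:

    asetkey add 3 /tmp/afs.keytab afs

You can double-check the result with asetkey list, which prints the
keys now in /etc/openafs/server/KeyFile. Once the KeyFile is correct,
delete /tmp/afs.keytab; it contains your cell key and should not be
left lying around.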
7. If the name of your Kerberos realm does not match the name of your
AFS cell, tell AFS what Kerberos realm to use with:
echo REALM > /etc/openafs/server/krb.conf
where REALM is the name of your Kerberos realm. If your AFS cell
and Kerberos realm have the same name, this is unnecessary.
8. Create some space to use for AFS volumes. You can set up a separate
AFS file server on a different system from the Kerberos KDC and AFS
db server, and for a larger cell you will want to do so, but when
getting started you can make the db server a file server as well.
For a production cell, you will want to create a separate partition
devoted to AFS and mount it as /vicepa (and may want to make
multiple partitions mounted as /vicepb, /vicepc, etc.), but for
testing purposes, you can use the commands below to create a
zero-filled file, create a file system in it, and then mount it:
dd if=/dev/zero of=/var/lib/openafs/vicepa bs=1024k count=32
mke2fs /var/lib/openafs/vicepa
mkdir /vicepa
mount -oloop /var/lib/openafs/vicepa /vicepa
mke2fs will ask you if you're sure you want to create a file system
on a non-block device; say yes.
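If you want this test file system mounted automatically at boot, you
can also add a matching line to /etc/fstab (a sketch, assuming the
file and mount point created above):

    /var/lib/openafs/vicepa  /vicepa  ext2  loop  0  0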
9. Run afs-newcell. This will prompt you to be sure that the above
steps have been completed and will ask you for the Kerberos principal
to use for AFS administrative access. You should use the
username/admin principal discussed above.
At the completion of this step, you should see bosserver and several
other AFS server processes running, and you should be able to see
the status of those processes with:
bos status localhost -local
Now, you should be able to run:
kinit username/admin@REALM
aklog cell.name -k REALM
where username/admin is the admin principal discussed above, REALM
is the name of your Kerberos realm, and cell.name is the name of
your AFS cell. This will obtain Kerberos tickets and AFS tokens in
your Kerberos realm and new AFS cell. You should be able to see
your AFS tokens by running:
tokens
Finally, you should be able to see the status of the AFS server
processes with:
bos status <hostname>
where <hostname> is the hostname of the local system. This tests
authenticated bos access as your admin principal (rather than using
the local KeyFile to authenticate).
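As a rough illustration, a successful verification session looks
something like this (principal, cell, AFS ID, and expiration will
differ for your cell):

    % kinit jruser/admin@EXAMPLE.COM
    Password for jruser/admin@EXAMPLE.COM:
    % aklog example.com -k EXAMPLE.COM
    % tokens
    Tokens held by the Cache Manager:
    User's (AFS ID 1) tokens for afs@example.com [Expires Aug 20 07:20]
       --End of list--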
10. Run afs-rootvol. This creates the basic AFS volume structure for
your new cell. It will prompt you to be sure that the above steps
are complete and then will ask you what file server and partition to
create the volume on. If you were following the above instructions,
use the local hostname and "a" as the partition (without the
quotes), which will use /vicepa.
After this command completes, you should be able to /bin/ls /afs and
see your local cell (and, if you aren't using dynroot, mount points
for several other cells). Note that if you're not using dynroot,
run /bin/ls rather than just ls to be sure that ls isn't aliased to
ls -F, ls --color, or some other option that would stat each file in
/afs, since this would require contacting lots of foreign cells and
could take a very long time.
You should now be able to cd to /afs/cell.name where cell.name is
the AFS cell name that you used. Currently, there isn't anything in
your cell. To make modifications, cd to /afs/.cell.name (note the
leading period) and make changes there. To make those changes show
up at /afs/cell.name, run vos release root.cell. For more details
on what you can do now, see the AFS Administrator's Reference.
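For example, to create a directory that all clients can then see (a
sketch using a hypothetical directory name):

    cd /afs/.cell.name
    mkdir test
    vos release root.cell

After the release, the new directory should be visible as
/afs/cell.name/test.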
11. While this is optional, you probably want to add AFSDB records to
DNS for your new AFS cell. These special DNS records let AFS
clients find the db servers for your cell without requiring local
configuration. To do this, create a DNS record like:
<cell>. 3600 IN AFSDB 1 <server>.
where <cell> is the name of your AFS cell and <server> is the name
of your db server. Note the trailing periods to prevent the DNS
server from appending the origin. You can, of course, choose what
you prefer for the lifetime. The 1 is not a priority; it's a
special indicator saying that this record is for an AFS database
server.
If you have multiple db servers (see below for adding new ones), you
should create multiple records of this type, one per db server.
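As a concrete sketch, a cell example.com with two db servers would
have zone file entries like:

    example.com.  3600  IN  AFSDB  1  afsdb1.example.com.
    example.com.  3600  IN  AFSDB  1  afsdb2.example.com.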
Congratulations! You now have an AFS cell. If any of the above steps
failed, please check the steps carefully and make sure that you've done
them all in order. If that doesn't reveal the cause of the problem,
please feel free to submit a bug report with reportbug. Include as many
details as possible on exactly what you typed and exactly what you saw
as a result, particularly any error messages.
Adding Additional Servers
If you decide one server is not enough, or if you're adding a server to
an existing cell, here is roughly what you should do:
1. Copy securely (using scp, encrypted Kerberos rcp, or some other
secure method) all of /etc/openafs/server to the new server.
2. Install the openafs-fileserver package on the new server.
3. If the machine is to be a file server, create an fs instance using
bos create:
bos create <host> fs fs -cmd /usr/lib/openafs/fileserver \
-cmd /usr/lib/openafs/volserver \
-cmd /usr/lib/openafs/salvager -localauth
For a file server, this is all you have to do.
4. For database servers, also install openafs-dbserver and then use bos
addhost to add the new server to /etc/openafs/server/CellServDB:
bos addhost <server> <new-server>
for each db server <server> in your cell (including the new one).
Then, create ptserver and vlserver instances on the new server:
bos create <host> ptserver simple /usr/lib/openafs/ptserver \
-localauth
bos create <host> vlserver simple /usr/lib/openafs/vlserver \
-localauth
The existing servers should then propagate the database to the new
server. Note that you do not need to run a file server on a db
server if you don't want to (and larger sites probably will not want
to), but you always need to have the openafs-fileserver package
installed on db servers. It contains the bosserver binary and some
of the shared infrastructure.
5. If you added a new db server, configure your clients to use it. If
you are using AFSDB records in DNS, you can just add a new record
(see point 11 in the instructions for creating a new cell).
Otherwise, clients will need to have the new server IP address added
to their /etc/openafs/CellServDB file (or /usr/vice/etc/CellServDB
for non-Debian clients using the standard AFS paths), and the client
will have to be restarted before it will know about the new db
server.
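For reference, CellServDB entries look like the following (a sketch
for a hypothetical cell with two db servers):

    >example.com            # Example cell
    192.0.2.10              # afsdb1.example.com
    192.0.2.11              # afsdb2.example.com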
The standard rule of thumb is that all of your database servers and file
servers should ideally be running the same version of OpenAFS. However,
in practice OpenAFS is fairly good at backward compatibility and you can
generally mix and match different versions. Be careful, though, to
ensure that all of your database servers are built the same when it
comes to options like --enable-supergroups (enabled in the Debian
packages).
Upgrades
Currently, during an upgrade of the openafs-fileserver package, all
services will be stopped and restarted. If openafs-dbserver is upgraded
without upgrading openafs-fileserver, those server binaries will not be
stopped and restarted; that restart will have to be done by hand.
It is possible that future versions of this package will install, for
example, /usr/lib/openafs/fileserver.package instead of
/usr/lib/openafs/fileserver and then create links to the actual binaries
in postinst. Upgrades would then not replace the old binaries; instead,
a script would be provided to roll the links forward to the new
versions. The intent is that people could install the new package on
all their servers and then quickly move the links before restarting the
bosserver. This has not yet been implemented.
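As a sketch of the intended workflow (entirely hypothetical, since
none of this is implemented yet): install the new package on each
server, then roll the links forward and restart, roughly:

    ln -sf fileserver.1.4.0 /usr/lib/openafs/fileserver
    bos restart <host> -all -localauth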
--=-=-=
Content-Disposition: attachment; filename=afs-newcell
#!/usr/bin/perl -w
# Copyright (C) 2000 by Sam Hartman
# This file may be copied either under the terms of the GNU GPL or the IBM
# Public License either version 2 or later of the GPL or version 1.0 or later
# of the IPL.
use Term::ReadLine;
use strict;
use Debian::OpenAFS::ConfigUtils;
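# Debian::OpenAFS::ConfigUtils provides run(), which runs a shell command
# and dies if it fails, and unwind(), which pushes a rollback command onto
# @unwinds for the END block below to run on failure.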
use Getopt::Long;
use Socket qw(inet_ntoa);
use vars qw($admin $server $requirements_met $shutdown_needed);
my $rl = new Term::ReadLine('afs-newcell');
=head1 NAME
afs-newcell - Set up initial database server for AFS cell
=head1 SYNOPSIS
B<afs-newcell> [B<--requirements-met>] [B<--admin> admin_user]
=head1 DESCRIPTION
This script sets up the initial AFS database and configures the first
database/file server.
The B<--requirements-met> option specifies that the initial requirements have
been met and that the script can proceed without displaying the initial
banner or asking for confirmation.
The B<--admin> option specifies the name of the administrative user. This
user will be given system:administrators and susers permission in the cell.
=head1 AUTHOR
Sam Hartman <hartmans@debian.org>
=cut
# Flush all output immediately.
$| = 1;
GetOptions ("requirements-met" => \$requirements_met, "admin=s" => \$admin);
unless ($requirements_met) {
print <<eoreqs;
Prerequisites
In order to set up a new AFS cell, you must meet the following:
1) You need a working Kerberos realm with Kerberos v4 support. You
should install Heimdal with KTH Kerberos compatibility or MIT
Kerberos 5.
2) You need to create the single-DES AFS key and load it into
/etc/openafs/server/KeyFile. If your cell's name is the same as
your Kerberos realm then create a principal called afs. Otherwise,
create a principal called afs/cellname in your realm. The cell
name should be all lower case, unlike Kerberos realms which are all
upper case. You can use asetkey from the openafs-krb5 package, or
if you used AFS3 salt to create the key, the bos addkey command.
3) This machine should have a filesystem mounted on /vicepa. If you
do not have a free partition, then create a large file by using dd
to extract bytes from /dev/zero. Create a filesystem on this file
and mount it using -oloop.
4) You will need an administrative principal created in a Kerberos
realm. This principal will be added to susers and
system:administrators and thus will be able to run administrative
commands. Generally the user is a root or admin instance of some
administrative user. For example, if jruser is an administrator, then
it would be reasonable to create jruser/root (or jruser/admin) and
specify that as the user to be added in this script.
5) The AFS client must not be running on this workstation. It will be
started at the end of this script.
eoreqs
$_ = $rl->readline("Do you meet these requirements? [y/n] ");
unless (/^y/i ) {
print "Run this script again when you meet the requirements\n";
exit(1);
}
if ($> != 0) {
die "This script should almost always be run as root. Use the\n"
. "--requirements-met option to run as non-root.\n";
}
}
# Make sure the AFS client is not already running.
open(MOUNT, "mount |") or die "Failed to run mount: $!\n";
while(<MOUNT>) {
if (m:^AFS:) {
print "The AFS client is currently running on this workstation.\n";
print "Please restart this script after running"
. " /etc/init.d/openafs-client stop\n";
exit(1);
}
}
close MOUNT;
# Make sure there is a keyfile.
unless ( -f "/etc/openafs/server/KeyFile") {
print "You do not have an AFS keyfile. Please create this using asetkey"
. " from openafs-krb5\n";
print "or the bos addkey command\n";
exit(1);
}
# Stop the file server.
print "If the fileserver is not running, this may hang for 30 seconds.\n";
run("/etc/init.d/openafs-fileserver stop");
# Get the local hostname. Use the fully-qualified hostname to be safer.
$server = `hostname -f`;
chomp $server;
my $ip = gethostbyname $server;
if (inet_ntoa($ip) eq '127.0.0.1') {
print "Your hostname $server resolves to 127.0.0.1, which AFS cannot\n";
print "cope with. Make sure your hostname resolves to a non-loopback\n";
print "IP address. (Check /etc/hosts and make sure that your hostname\n";
print "isn't listed on the 127.0.0.1 line. If it is, removing it from\n";
print "that line will probably solve this problem.)\n";
exit(1);
}
# Determine the admin principal.
$admin = $rl->readline("What administrative principal should be used? ")
unless $admin;
print "\n";
die "Please specify an administrative user\n" unless $admin;
my $afs_admin = $admin;
$afs_admin =~ s:/:.:g;
if ($afs_admin =~ /@/) {
die "The administrative user must be in the same realm as the cell and\n"
. "no realm may be specified.\n";
}
# Determine the local cell. This should be configured via debconf, from the
# openafs-client configuration, when openafs-fileserver is installed.
open(CELL, "/etc/openafs/server/ThisCell")
or die "Cannot open /etc/openafs/server/ThisCell: $!\n";
my $cell = <CELL>;
chomp $cell;
# Write out a new CellServDB for the local cell containing only this server.
if (-f "/etc/openafs/server/CellServDB") {
print "/etc/openafs/server/CellServDB already exists, renaming to .old\n";
rename("/etc/openafs/server/CellServDB",
"/etc/openafs/server/CellServDB.old")
or die "Cannot rename /etc/openafs/server/CellServDB: $!\n";
}
open(CELLSERVDB, "> /etc/openafs/server/CellServDB")
or die "Cannot create /etc/openafs/server/CellServDB: $!\n";
print CELLSERVDB ">$cell\n";
print CELLSERVDB inet_ntoa($ip), "\t\t\t#$server\n";
close CELLSERVDB or die "Cannot write to /etc/openafs/server/CellServDB: $!\n";
# Now, we should be able to start bos and add the admin user.
run("/etc/init.d/openafs-fileserver start");
$shutdown_needed = 1;
run("bos adduser $server $afs_admin -localauth");
unwind("bos removeuser $server $afs_admin -localauth");
# Create the initial protection database using pt_util. This is safer than
# the standard mechanism of starting the cell in noauth mode until the first
# user has been created.
if (-f "/var/lib/openafs/db/prdb.DB0") {
die "Protection database already exists; cell already partially created\n";
}
open(PRDB, "| pt_util -p /var/lib/openafs/db/prdb.DB0 -w")
or die "Unable to start pt_util: $!\n";
print PRDB "$afs_admin 128/20 1 -204 -204\n";
print PRDB "system:administrators 130/20 -204 -204 -204\n";
print PRDB " $afs_admin 1\n";
close PRDB;
unwind("rm /var/lib/openafs/db/prdb*");
# We should now be able to start ptserver and vlserver.
run("bos create $server ptserver simple /usr/lib/openafs/ptserver -localauth");
unwind("bos delete $server ptserver -localauth");
run("bos create $server vlserver simple /usr/lib/openafs/vlserver -localauth");
unwind("bos delete $server vlserver -localauth");
# Create a file server as well.
run("bos create $server fs fs"
. " -cmd /usr/lib/openafs/fileserver"
. " -cmd /usr/lib/openafs/volserver"
. " -cmd /usr/lib/openafs/salvager -localauth");
unwind("bos delete $server fs -localauth");
# Pause for a while for ubik to catch up.
print "Waiting for database elections: ";
sleep(30);
print "done.\n";
# Past this point we want to control when bos shutdown happens.
$shutdown_needed = 0;
unwind("bos shutdown $server -localauth");
run("vos create $server a root.afs -localauth");
# We should now be able to bring up the client (it may need root.afs to exist
# if not using dynroot). We override whatever default cell was configured for
# the client, just in case it was pointing to some other cell.
open(THIS, "> /etc/openafs/ThisCell")
or die "Cannot create /etc/openafs/ThisCell: $!\n";
print THIS "$cell\n";
close THIS or die "Cannot write to /etc/openafs/ThisCell: $!\n";
run("/etc/init.d/openafs-client force-start");
# Verify that AFS has managed to start.
my $afs_running = 0;
open(MOUNT, "mount |") or die "Failed to run mount: $!\n";
while(<MOUNT>) {
if (m:^AFS:) {
$afs_running = 1;
}
}
unless ($afs_running) {
print "The AFS client failed to start.\n";
print "Please fix whatever problem kept it from running.\n";
exit(1);
}
print "\n";
print "Now, get tokens as $admin in the $cell cell.\n";
print "Then, run afs-rootvol.\n";
# Success, so clear the unwind commands.
@unwinds = ();
# If we fail before all the instances are created, we need to back out of
# everything we did as much as possible.
END {
system("bos shutdown $server -localauth") if $shutdown_needed;
run(pop @unwinds) while @unwinds;
}
--=-=-=
Content-Disposition: attachment; filename=afs-rootvol
#!/usr/bin/perl -w
# Copyright (C) 2000 by Sam Hartman
# This file may be copied either under the terms of the GNU GPL or the IBM
# Public License either version 2 or later of the GPL or version 1.0 or later
# of the IPL.
use strict;
use Debian::OpenAFS::ConfigUtils;
use Term::ReadLine;
use Getopt::Long;
use vars qw($rl $server $part $requirements_met);
=head1 NAME
afs-rootvol - Generate and populate root volumes for new AFS cells.
=head1 SYNOPSIS
B<afs-rootvol> [B<--requirements-met>] [B<--server> I<server-name>]
[B<--partition> I<partition-letter>]
=head1 DESCRIPTION
This script sets up an AFS cell's root volumes. It assumes that you already
have a fileserver and database servers. The fileserver should have an empty
root.afs. This script creates root.cell, user, and service and populates
root.afs.
=head1 AUTHOR
Sam Hartman <hartmans@debian.org>
=cut
# This subroutine creates a volume, mounts it and then sets the access
# to allow read by anyuser. The volume is scheduled for deletion in
# case of error.
sub mkvol($$) {
my ($vol, $mnt) = @_;
run("vos create $server $part $vol -localauth");
unwind("vos remove $server $part $vol -localauth");
run("fs mkm $mnt $vol ");
run("fs sa $mnt system:anyuser rl");
}
# Main script. Flush all output immediately.
$| = 1;
$rl = new Term::ReadLine('AFS');
GetOptions ("requirements-met" => \$requirements_met,
"server=s" => \$server,
"partition=s" => \$part);
unless ($requirements_met) {
print <<eotext;
Prerequisites
In order to set up the root.afs volume, you must meet the following
pre-conditions:
1) The cell must be configured, running a database server with a
volume location and protection server. The afs-newcell script will
set up these services.
2) You must be logged into the cell with tokens for a user in
system:administrators and with a principal that is in the UserList
file of the servers in the cell.
3) You need a fileserver in the cell with partitions mounted and a
root.afs volume created. Presumably, it has no volumes on it,
although the script will work so long as nothing besides root.afs
exists. The afs-newcell script will set up the file server.
4) The AFS client must be running pointed at the new cell.
eotext
$_ = $rl->readline("Do you meet these conditions? (y/n) ");
unless (/^y/i ) {
print "Please restart the script when you meet these conditions.\n";
exit(1);
}
if ($> != 0) {
die "This script should almost always be run as root. Use the\n"
. "--requirements-met option to run as non-root.\n";
}
}
# Get configuration information we need.
open(CELL, "/etc/openafs/server/ThisCell")
or die "Unable to find out what cell this machine serves: $!\n";
my $cell = <CELL>;
close CELL;
chomp $cell;
unless ($server) {
print <<eotext;
You will need to select a server (hostname) and AFS partition on which to
create the root volumes.
eotext
$server = $rl->readline("What AFS Server should volumes be placed on? ");
die "Please select a server.\n" unless $server;
}
unless ($part) {
$part = $rl->readline("What partition? [a] ");
$part = "a" unless $part;
}
# Figure out where root.afs is. There are two possibilities: either we aren't
# running with dynroot, and root.afs is therefore accessible as /afs, or we
# are running with dynroot, in which case we have to create root.cell first
# and then mount root.afs under it.
#
# Always create root.cell first; we may need it if running with dynroot, and
# it doesn't hurt to do it now regardless.
my $rootmnt = "/afs";
run("vos create $server $part root.cell -localauth");
unwind("vos remove $server $part root.cell -localauth");
my $dynroot = (-d "$rootmnt/$cell/.");
if ($dynroot) {
run("fs mkm /afs/$cell/.root.afs root.afs -rw");
unwind("fs rmm /afs/$cell/.root.afs");
$rootmnt = "/afs/$cell/.root.afs";
}
run("fs sa $rootmnt system:anyuser rl");
# Scan CellServDB and create the cell mount points for every cell found there.
# Force these commands to succeed, since it's possible to end up with
# duplicate entries in CellServDB (and the second fs mkm will fail).
open(CELLSERVDB, "/etc/openafs/CellServDB")
or die "Unable to open /etc/openafs/CellServDB: $!\n";
while (<CELLSERVDB>) {
chomp;
if (/^>\s*([a-z0-9_\-.]+)/) {
run("fs mkm $rootmnt/$1 root.cell -cell $1 -fast || true");
unwind("fs rmm $rootmnt/$1 || true");
}
}
# Now, create the read/write mount points for root.cell and root.afs and set
# root.cell system:anyuser read.
run("fs sa /afs/$cell system:anyuser rl");
run("fs mkm $rootmnt/.$cell root.cell -cell $cell -rw");
unwind("fs rmm $rootmnt/.$cell");
run("fs mkm $rootmnt/.root.afs root.afs -rw");
unwind("fs rmm $rootmnt/.root.afs");
# Create the user and service mount point volumes to fit the semi-standard AFS
# cell layout.
mkvol("user", "/afs/$cell/user");
mkvol("service", "/afs/$cell/service");
# Strip the domain off of the cell name and create the short symlinks.
$cell =~ /^([^.]+)/;
my $cellpart = $1;
if ($cellpart && $cellpart ne $cell) {
run("ln -s $cell $rootmnt/$cellpart");
unwind("rm $rootmnt/$cellpart");
run("ln -s .$cell $rootmnt/.$cellpart");
unwind("rm $rootmnt/.$cellpart");
}
if ($dynroot) {
run("fs rmm /afs/$cell/.root.afs");
unwind("fs mkm /afs/$cell/.root.afs root.afs -rw");
}
# Now, replicate the infrastructure volumes.
run("vos addsite $server $part root.afs -localauth");
run("vos addsite $server $part root.cell -localauth");
run("vos release root.afs -localauth");
run("vos release root.cell -localauth");
unwind("vos remove $server $part root.cell.readonly -localauth");
unwind("vos remove $server $part root.afs.readonly -localauth");
# Success, so clear the unwind commands.
@unwinds = ();
# If we fail before all the instances are created, we need to back out of
# everything we did as much as possible.
END {
run(pop @unwinds) while @unwinds;
}
--=-=-=
--
Russ Allbery (rra@stanford.edu) <http://www.eyrie.org/~eagle/>
--=-=-=--