[OpenAFS] restoring volumes

Dexter 'Kim' Kimball dhk@ccre.com
Mon, 8 Aug 2005 14:59:29 -0600


Hi Ron,

Shut down on Friday.

User accounts can be moved in a similar fashion.

I'm assuming that the fileserver is also the AFS DB server in a single-node
cell?

And that you are not going to change the cell name?

If these two assumptions are correct, when you bring up the second
fileserver bring it up as an AFS DB server as well.

Refer to the AFS Admin guide/Quick Start "bringing up a second DB
server/fileserver" as I'm pretty sure the following is not complete.

Make sure the CellServDB files in /usr/vice/etc and /usr/afs/etc on the new
server contain the server info for both the old DB server and the new DB
server.
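For reference, the CellServDB format is one cell header line followed by one
line per DB server; the IP addresses and hostnames below are placeholders,
not your actual servers:

```
>csc.depauw.edu         #DePauw CS cell
10.0.0.1                #old-db.csc.depauw.edu
10.0.0.2                #new-db.csc.depauw.edu
```

Both the server copy (/usr/afs/etc/CellServDB) and the client copy
(/usr/vice/etc/CellServDB) use this same format.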

The server encryption keys on the two nodes will need to match.  The easiest
way to ensure this is to copy the KeyFile (howls of protest from everyone
who ever took one of my classes :) from one machine to the other, using sftp
or some other secure transfer (howls of protest diminish somewhat).

(After your two-node cell is working properly, I suggest changing the
encryption key.  Add the new key to the KeyFile first ("bos addkey" on each
server machine, same secret word), and then, using the same password/secret
word, update the password for the user "afs" to match -- making sure to
specify the same key version number (kvno) for "bos addkey" and "kas
setpassword".  Again: only after the cell is functioning correctly.)
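A sketch of that key-rotation sequence; the server name, cell name, and kvno
below are made up, and "bos addkey" must be run on every server machine
before the "afs" password is changed:

```shell
# On EACH server machine: add the new key under the next unused kvno.
# bos addkey prompts twice for the new key string (the "secret word").
bos addkey -server fs1.example.com -kvno 3 -cell example.com

# Then, with the SAME secret word and the SAME kvno, update the
# password for the "afs" principal in the kaserver database:
kas setpassword -name afs -kvno 3 -cell example.com
```

If the kvno given to kas setpassword doesn't match the one in the KeyFile,
clients holding tokens sealed with the new key can't be verified by the
servers, so double-check it on both sides.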

Once the KeyFile and CellServDB are in place on the new server, start the
bosserver and add yourself to the UserList on the new node (bos adduser).
Then "bos create" away.
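For a node running the full DB server set plus the fileserver, the "bos
create" commands look roughly like this (hostname and cell are placeholders;
paths assume a Transarc-style install under /usr/afs/bin):

```shell
# Database server processes, each a "simple" bnode:
bos create -server fs2.example.com -instance kaserver -type simple \
    -cmd /usr/afs/bin/kaserver -cell example.com
bos create -server fs2.example.com -instance buserver -type simple \
    -cmd /usr/afs/bin/buserver -cell example.com
bos create -server fs2.example.com -instance ptserver -type simple \
    -cmd /usr/afs/bin/ptserver -cell example.com
bos create -server fs2.example.com -instance vlserver -type simple \
    -cmd /usr/afs/bin/vlserver -cell example.com

# The fileserver trio runs as a single instance of type "fs":
bos create -server fs2.example.com -instance fs -type fs \
    -cmd /usr/afs/bin/fileserver /usr/afs/bin/volserver /usr/afs/bin/salvager \
    -cell example.com
```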

Add the new AFS DB server info to the /usr/vice/etc/CellServDB on the old
AFS DB node, and then as root run "fs newcell", making sure to specify the
complete list of DB servers for the cell.
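Something like the following, with hypothetical DB server hostnames:

```shell
# As root on the client side; list ALL DB servers, old and new:
fs newcell -name csc.depauw.edu \
    -servers old-db.csc.depauw.edu new-db.csc.depauw.edu
```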

Then add the new AFS DB server info to the /usr/afs/etc/CellServDB on the
old AFS DB node and use "bos addhost" to update the server CellServDB.  I
believe you have to restart all of the AFS DB server processes on the old DB
node so that they will see the changes, but this may no longer be required.
(I've been doing this a long time and things change.)  If you want, you can
wait five minutes or so and look in /usr/afs/db on the new node -- "ls -l"
should show the same sizes as in /usr/afs/db on the old node.  And/or, more
conventionally, you can use "udebug" against each of the DB server ports on
the new machine -- it should report a DB version number that matches the old
machine's.
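Sketched below with hypothetical hostnames; the ports are the standard AFS
DB server ports:

```shell
# Tell the old DB node's server CellServDB about the new DB server:
bos addhost -server old-db.csc.depauw.edu -host new-db.csc.depauw.edu \
    -cell csc.depauw.edu

# Compare DB versions across the two nodes on each DB port:
#   7002 = ptserver, 7003 = vlserver, 7004 = kaserver, 7021 = buserver
udebug -server new-db.csc.depauw.edu -port 7003
udebug -server old-db.csc.depauw.edu -port 7003
```

The udebug output also shows which node holds the Ubik sync site, which is
handy when you're checking that quorum formed with the new server included.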

Make sure to add your admin account(s) to the UserList on the new node --
"bos adduser".

After the DB servers synchronize you'll have the user accounts (PTS and KAS
entries) on both machines.

Once you get this far you should be able to "vos move" volumes from the old
fileserver to the new fileserver.
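The move itself, sketched with placeholder server names (the volume name is
the one from earlier in this thread):

```shell
vos move -id homestaff.cowboy \
    -fromserver old-fs.csc.depauw.edu -frompartition /vicepa \
    -toserver new-fs.csc.depauw.edu -topartition /vicepa \
    -cell csc.depauw.edu
```

vos move keeps the volume online during the copy, which is why users aren't
disturbed and no changes are lost.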

How many AFS clients do you have?  Their CellServDB files will be out of
date.  Choices: update the CellServDB file on each client (the conventional
approach), leave the old AFS DB server running so that clients can find the
DB services, or change the IP/hostname of the new fileserver to match the
old one and shut down/retire the old if retirement is what you're up to.

If you shut down the existing DB server before you make sure your AFS
clients can find the new one, AFS will stand for the "Ain'tNo File System"
-- the clients won't be able to find volumes, users won't be able to log in,
and things in general will gradually (within an hour or so) succumb to
entropy.

Do follow the Quick Start procedures where applicable.

Let me know if you get stuck.

Kim


================================
Kim (Dexter) Kimball
CCRE, Inc.
kim<dot>kimball<at>jpl.nasa.gov
dhk<at>ccre.com



     -----Original Message-----
     From: Ron Croonenberg [mailto:ronc@depauw.edu]
     Sent: Saturday, August 06, 2005 4:29 PM
     To: dhk@ccre.com; openafs-info@openafs.org
     Subject: RE: [OpenAFS] restoring volumes


     Sounds like a plan.

     Can user accounts be moved too?

     >>> "Dexter 'Kim' Kimball" <dhk@ccre.com> 08/05/05 1:56 PM >>>
     Another way to do this is to make the new fileserver a member of
     the current cell, use "vos move" to get the volumes across, and
     retire/shoot/fix the current fileserver.

     The advantage of this is that users won't be disturbed and you'll
     get all of the data, including changes made while the move is in
     progress.

     If you dump the .backup volumes, any changes made between
     "vos backup" and "vos dump | vos restore" will be lost.

     Another caution: if you're using the AFS kaserver, the new cell
     name will invalidate all your passwords, since the cell name is
     used in key encryption.  IOW you can't copy the KADB to the new
     cell and use existing passwords.  You may have already taken care
     of this some other way, but thought I'd mention it.

     You'll also have to update the CellServDB on all the clients so
     that they'll see the new cell (afs-1).

     Is there a reason for the new cell name?  If not, I'd bring up the
     new fileserver as a member of the existing cell.
    =20
     Otherwise:


     1. Get admin tokens in both cells.
        a. One of the cells will have to have CellServDB entries for
           both cells.
           i.  If not, update the CellServDB and use "fs newcell"
               (as root).
           ii. I'm assuming that the csc.depauw.edu client you're
               using has CSDB info for afs-1.csc.depauw.edu.

     2. Get admin tokens for both cells.
        a. klog ..., klog ... -cell

     3. Recommended: in the old cell, issue "vos backupsys".
        a. If you dump the RW volumes in the old cell, they'll be
           unusable during the dump.
        b. "vos backupsys" gives a fresh snapshot to dump from.
           Alternatively you might want to issue "vos backup
           <volname>" just before dumping <volname.backup>,
           especially for volumes that are in use.

     4. vos dump <volname.backup> | vos restore <server> <part>
        <volname> -cell afs-1.csc.depauw.edu

     This restores the .backup snapshot from csc.depauw.edu (users
     won't lose access) to a RW volume in cell afs-1.csc.depauw.edu.

     Assuming that UIDs map across the two cells (AFS PTS UIDs), the
     ACLs will be OK in the new cell.

     If AFS accounts (PTS UIDs) don't match in the new cell, access
     will not be what you intend: numeric PTS UIDs are stored on
     ACLs/in PTS groups.
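     One way to sanity-check that mapping, sketched with a hypothetical
     user name:

```shell
# Compare the numeric PTS UID for the same user in each cell;
# "rcroonenberg" is a made-up example user name.
pts examine -nameorid rcroonenberg -cell csc.depauw.edu
pts examine -nameorid rcroonenberg -cell afs-1.csc.depauw.edu
```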


     Kim



          btw the 2 servers are not in the same cell.

          the old cell is called csc.depauw.edu, the new cell is
          called afs-1.csc.depauw.edu

          >>> "Dexter 'Kim' Kimball" <dhk@ccre.com> 08/05/05 1:16 PM >>>
          Let's see.

          If the two servers are in the same cell, the vos restore
          will fail -- you can't have 2 instances of a RW volume, and
          the example you give would leave homestaff.cowboy as a RW
          on two different servers.

          If the two servers are in different cells, then you want to
          get admin tokens for both cells and use "vos dump .... |
          vos restore .... -cell <othercell>".

          If you want to replicate the volume within a given cell,
          use "vos addsite" and "vos release".

          If you want to replicate the volume within a given cell but
          want default access to be to the RW volume, create a RW
          mount point (fs mkm .... -rw).
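          Sketched below; the RO site server and the mount-point path
          are placeholders:

```shell
# Replicate homestaff.cowboy within the cell:
vos addsite -server ro-fs.csc.depauw.edu -partition /vicepa \
    -id homestaff.cowboy
vos release -id homestaff.cowboy

# Create a mount point that always resolves to the RW volume:
fs mkmount -dir /afs/.csc.depauw.edu/staff/cowboy.rw \
    -vol homestaff.cowboy -rw
```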

          Not sure what you're after.  Probably a case I didn't cover :)

          Kim


          ================================
          Kim (Dexter) Kimball
          CCRE, Inc.
          kim<dot>kimball<at>jpl.nasa.gov
          dhk<at>ccre.com


         =20
               -----Original Message-----
               From: Ron Croonenberg [mailto:ronc@depauw.edu]
               Sent: Friday, August 05, 2005 12:05 PM
               To: dhk@ccre.com; openafs-info@openafs.org
               Subject: RE: [OpenAFS] restoring volumes


               Ahh... ok...

               Well, let me explain what I am trying to do; at least
               it's a plan I have.

               - I want to dump a volume with vos on the old server,
                 let's say "homestaff.cowboy"
               - move the dumpfile to the new server
               - restore the dumpfile with vos on the new server to a
                 volume called homestaff.cowboy

               So maybe I need a bit different approach?

               Ron


               >>> "Dexter 'Kim' Kimball" <dhk@ccre.com> 08/05/05 1:02 PM >>>
               Ron,

               Specify a different value after "-name".

               The volume will be restored to the specified server
               and partition -- as a read-write volume.

               The ".backup" volume name extension won't work (it's
               reserved) and is causing your "restore name" to exceed
               the 22-character limit.

               If you want to restore over the existing RW volume,
               put it on the same server and partition, use the same
               name, and specify -overwrite.

               Otherwise give it a new name (homestaff.cowboy.R,
               e.g.), mount it, fix the existing volume ... etc.
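               Concretely, using the server, partition, and dump file
               from your command (the -overwrite argument shown is the
               "full" option):

```shell
# Restore over the existing RW volume in place:
vos restore -server afs-1.csc.depauw.edu -partition /vicepa \
    -name homestaff.cowboy -file /vicepb/homestaff.cowboy.backup \
    -overwrite full

# ...or restore under a new name and mount it for inspection:
vos restore -server afs-1.csc.depauw.edu -partition /vicepa \
    -name homestaff.cowboy.R -file /vicepb/homestaff.cowboy.backup
```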

               Kim



               ================================
               Kim (Dexter) Kimball
               CCRE, Inc.
               kim<dot>kimball<at>jpl.nasa.gov
               dhk<at>ccre.com


              =20
                    -----Original Message-----
                    From: openafs-info-admin@openafs.org
                    [mailto:openafs-info-admin@openafs.org] On Behalf
                    Of Ron Croonenberg
                    Sent: Friday, August 05, 2005 11:28 AM
                    To: openafs-info@openafs.org
                    Subject: [OpenAFS] restoring volumes


                    Hello,

                    I dumped a volume on an old afs server and am
                    trying to restore it on the new server.  This is
                    what I see:

                    [root@afs-1 vicepb]# vos restore -server
                    afs-1.csc.depauw.edu -partition /vicepa -name
                    homestaff.cowboy.backup -file
                    /vicepb/homestaff.cowboy.backup -cell
                    afs-1.csc.depauw.edu

                    vos: the name of the volume homestaff.cowboy.backup
                    exceeds the size limit

                    Does vos restore create the volume?  I didn't see
                    the volume homestaff.cowboy.backup.  There is a
                    volume called homestaff.cowboy on the new server
                    though.

                    Ron




                    _______________________________________________
                    OpenAFS-info mailing list
                    OpenAFS-info@openafs.org
                    https://lists.openafs.org/mailman/listinfo/openafs-info