[OpenAFS] restoring volumes

Ron Croonenberg ronc@depauw.edu
Sat, 06 Aug 2005 17:29:14 -0500


Sounds like a plan.

Can user accounts be moved too?

>>> "Dexter 'Kim' Kimball" <dhk@ccre.com> 08/05/05 1:56 PM >>>
Another way to do this is to make the new fileserver a member of the
current cell, use vos move to get the volumes across, and retire/shoot/fix
the current fileserver.
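
For example, a minimal sketch of the move -- the old server's hostname and
the partitions below are only placeholders:

  # move one volume; the data stays available to users throughout
  vos move homestaff.cowboy -fromserver oldserver.csc.depauw.edu \
      -frompartition /vicepa -toserver afs-1.csc.depauw.edu -topartition /vicepa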

The advantage of this is that users won't be disturbed and you'll get all of
the data including changes made while the move is in progress.

If you dump the .backup volumes, any changes made between "vos backup" and
"vos dump | vos restore" will be lost.

Another caution: if you're using the AFS kaserver, the new cell name will
invalidate all your passwords, since the cell name is used in key encryption.
IOW you can't copy the KADB to the new cell and use existing passwords.  You
may have already taken care of this some other way, but thought I'd mention
it.

You'll also have to update the CellServDB on all the clients so that they'll
see the new cell (afs-1).
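
A client-side CellServDB entry for the new cell might look like the sketch
below; the IP address is only a placeholder for your real database server,
and "fs newcell" (run as root) tells an already-running client about it
without a restart:

  >afs-1.csc.depauw.edu        #DePauw, new cell
  10.1.2.3                     #afs-1.csc.depauw.edu

  fs newcell afs-1.csc.depauw.edu 10.1.2.3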

Is there a reason for the new cell name?  If not, I'd bring up the new
fileserver as a member of the existing cell.

Otherwise:


1. Get admin tokens in both cells.
   a. One of the cells will have to have CellServDB entries for both cells.
      i.  If not, update the CellServDB and use "fs newcell" (as root)
     ii.  I'm assuming that the csc.depauw.edu client you're using has CSDB
          info for afs-1.csc.depauw.edu

2. Get admin tokens for both cells
   a. klog ..., klog ... -cell

3. Recommended:  in old cell issue "vos backupsys"
   a. If you dump the RW volumes in the old cell they'll be unusable during
      the dump.
   b. vos backupsys gives a fresh snapshot to dump from.
      Alternatively you might want to issue "vos backup <volname>" just
      before dumping <volname.backup>, especially for volumes that are in use.

4. vos dump <volname.backup> | vos restore <server> <part> <volname> -cell
afs-1.csc.depauw.edu

This restores the .backup snapshot from csc.depauw.edu (users won't lose
access) to a RW volume in cell afs-1.csc.depauw.edu
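
Put together, steps 1-4 might look something like the sketch below.  The
"admin" principal and the /vicepa partition are only examples; substitute
your own:

  # admin tokens in both cells
  klog admin -cell csc.depauw.edu
  klog admin -cell afs-1.csc.depauw.edu

  # fresh .backup snapshots in the old cell
  vos backupsys -cell csc.depauw.edu

  # copy one volume's snapshot into the new cell as a RW volume
  vos dump homestaff.cowboy.backup -cell csc.depauw.edu | \
      vos restore afs-1.csc.depauw.edu /vicepa homestaff.cowboy \
          -cell afs-1.csc.depauw.edu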

Assuming that UIDs map across the two cells (AFS PTS UIDs), the ACLs will be
OK in the new cell.

If AFS accounts (PTS UIDs) don't match in the new cell, access will not be
what you intend:  numeric PTS UIDs are stored on ACLs/in PTS groups.
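
A quick way to check is "pts examine" in each cell; the username here is
only an example -- what matters is that the numeric ids agree:

  pts examine ronc -cell csc.depauw.edu
  pts examine ronc -cell afs-1.csc.depauw.edu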



Kim




     btw the 2 servers are not in the same cell.

     the old cell is called csc.depauw.edu; the new cell is
     called afs-1.csc.depauw.edu

     >>> "Dexter 'Kim' Kimball" <dhk@ccre.com> 08/05/05 1:16 PM >>>
     Let's see.

     If the two servers are in the same cell the vos restore
     will fail -- can't have 2 instances of a RW volume, and the
     example you give would leave homestaff.cowboy as a RW on
     two different servers.

     If the two servers are in different cells then you want to
     get admin tokens for both cells and use
     "vos dump .... | vos restore .... -cell <othercell>"

     If you want to replicate the volume within a given cell
     use "vos addsite" and "vos release".

     If you want to replicate the volume within a given cell
     but want default access to be to the RW volume, create a
     RW mount point (fs mkm .... -rw).
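
     For example (the mount point path is only an illustration):

       fs mkmount /afs/.csc.depauw.edu/staff/cowboy homestaff.cowboy -rw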

     Not sure what you're after.  Probably a case I didn't cover :)

     Kim

     ===================================
     Kim (Dexter) Kimball
     CCRE, Inc.
     kim<dot>kimball<at>jpl.nasa.gov
     dhk<at>ccre.com
          -----Original Message-----
          From: Ron Croonenberg [mailto:ronc@depauw.edu]
          Sent: Friday, August 05, 2005 12:05 PM
          To: dhk@ccre.com; openafs-info@openafs.org
          Subject: RE: [OpenAFS] restoring volumes

          Ahh...ok...

          Well let me explain what I am trying to do, at least it's
          a plan I have.

          - I want to dump a volume with vos on the old server,
            let's say "homestaff.cowboy"
          - move the dumpfile to the new server
          - restore the dumpfile with vos on the new server to a
            volume called homestaff.cowboy

          So maybe I need a bit different approach?

          Ron

          >>> "Dexter 'Kim' Kimball" <dhk@ccre.com> 08/05/05 1:02 PM >>>
          Ron,

          Specify a different value after "-name".

          The volume will be restored to the specified server and
          partition -- as a read-write volume.

          The ".backup" volume name extension won't work (it's
          reserved) and is causing your "restore name" to exceed
          the 22-character limit.

          If you want to restore over the existing RW volume, put
          it on the same server and partition, use the same name,
          and specify -overwrite.

          Otherwise give it a new name (e.g. homestaff.cowboy.R),
          mount it, fix the existing volume ... etc.
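
          For example, reusing the values from your original command
          (this assumes the existing homestaff.cowboy really lives on
          /vicepa of afs-1):

            vos restore afs-1.csc.depauw.edu /vicepa homestaff.cowboy \
                -file /vicepb/homestaff.cowboy.backup \
                -cell afs-1.csc.depauw.edu -overwrite full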

          Kim

          =====================================
          Kim (Dexter) Kimball
          CCRE, Inc.
          kim<dot>kimball<at>jpl.nasa.gov
          dhk<at>ccre.com
               -----Original Message-----
               From: openafs-info-admin@openafs.org
               [mailto:openafs-info-admin@openafs.org] On Behalf Of Ron Croonenberg
               Sent: Friday, August 05, 2005 11:28 AM
               To: openafs-info@openafs.org
               Subject: [OpenAFS] restoring volumes

               Hello,

               I dumped a volume on an old afs server and tried to
               restore it on the new server. This is what I see:
              =20
               [root@afs-1 vicepb]# vos restore -server afs-1.csc.depauw.edu \
                   -partition /vicepa -name homestaff.cowboy.backup \
                   -file /vicepb/homestaff.cowboy.backup -cell afs-1.csc.depauw.edu

               vos: the name of the volume homestaff.cowboy.backup
               exceeds the size limit

               Does vos restore create the volume?  I didn't see the
               volume homestaff.cowboy.backup.  There is a volume
               called homestaff.cowboy on the new server though.

               Ron


_______________________________________________
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info