[OpenAFS] Changes for Mosaic's AFS cell...

Marcus Watts mdw@umich.edu
Thu, 06 Apr 2006 02:19:11 -0400

Various wrote:
> Message-ID: <04fd01c65936$9523b850$1f2016ac@ad.uiuc.edu>
> From: "Christopher D. Clausen" <cclausen@acm.org>
> To: <openafs-info@openafs.org>
> References: <>
> Subject: Re: [OpenAFS] Changes for Mosaic's AFS cell...
> Sender: openafs-info-admin@openafs.org
> Date: Wed, 5 Apr 2006 23:54:55 -0500
> Rodney M Dyer <rmdyer@uncc.edu> wrote:
> > Does it matter whether the cell servers are upgraded
> > first?  Obviously not, since our existing test server already works. 
> > I've never upgraded a cell server myself, and the person who last
> > upgraded our cell servers has "left the building".
> Other than having servers that support whatever features you need, no, 
> it shouldn't matter.  I had a problem with "foreign users" from other 
> Kerberos realms not working b/c my servers were too old.  If you are not 
> intending to use any new functionality you should be fine upgrading 
> either the clients or the servers in any order.
> I would recommend doing the server upgrades as close together as 
> possible, though, to minimize problems that may occur.
> > Our current
> > back-end systems guy just wanted some indication about the sequence
> > of events in which things should take place.  Because of issues with
> > the UBIK quorum, if no accounts, or volumes are being added, removed,
> > or replicated during an upgrade, is the sequence of cell server
> > upgrades important?  I mean our cell is fairly small so can we just
> > upgrade each one without worry right?
> I've updated to various dev builds and rc versions in random order, one 
> AFS server at a time, on the three AFS DB servers for the acm.uiuc.edu 
> cell without issues (at least not issues related to upgrade order.) 
> Just vos move all volumes off, shut it down, do whatever system upgrades 
> at the same time, and restart with the newer version.
> I will say that 1.4.1-rc10 appears to be running just fine on sun4x_510 
> after crashes with previous versions forced upgrades.
> I'd say to minimize the amount of time that the server is down and of 
> course do it during off-peak times.

You should always check the release notes for any exceptions.
In general however, you can run with db servers that are not
particularly close in software versions to the file servers,
and different file servers need not all be at the same version number.

For the file servers, it's usually possible to upgrade fileserver
software without moving volumes.  I haven't found many production
sites willing to risk this with real data.  There have been and
probably will be upgrades where this is not true.  For instance
the current namei fileserver has a 3 bit integer field which can't
be easily enlarged without breaking the on-disk format.
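For cells that do evacuate first, the usual sequence looks something like the sketch below. All server, partition, and target names are placeholders, and the `vos listvol` field positions are from memory -- check the output of your version before trusting the awk expression.

```shell
# Sketch: evacuate RW volumes from fileserver "oldfs" /vicepa onto
# "newfs" /vicepa before upgrading.  Needs AFS admin tokens; all
# names here are placeholders.
for vol in $(vos listvol oldfs a -quiet | awk '$3 == "RW" {print $1}'); do
    vos move -id "$vol" -fromserver oldfs -frompartition a \
             -toserver newfs -topartition a
done
vos listvol oldfs a       # verify nothing is left behind
bos shutdown oldfs -wait  # then upgrade binaries and restart
```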

For the DB servers the rules are a bit different; there are some ubik
issues you should be very careful about.

* The first rule is that for a given server (ptserver, vlserver,
  buserver), all the servers that are up should be running exactly the
  same version.  There have been various changes going from transarc to
  modern openafs in the ubik inter-server protocol; often these don't
  happen between versions, but unless you know for certain that your old
  & new version can interoperate, you should not assume that they will.
  { many versions of openafs may be compatible.  Still best to ask first. }
	- you should be able to update ptserver on all your db servers,
	then vlserver, then buserver.  You can update the underlying
	OS in any old way you please and that need not be in sync;
	you can even run different hardware, but you want the
	server binaries in your varied environment all built
	from the same afs source at the same revision level.
	- update means shutting *all* old binaries down, then starting
	the new binaries.  The absolute outage will be short, but there
	will be a "read-only" period until a new sync site is elected.
* If you are adding or deleting ubik servers, and restarting or running
  them with server side CellServDB files that are out of sync, you should
  be very careful that you do not end up with more than one sync site.
  Older versions of ubik could scramble the database if this happened.
  Newer versions will probably just drop changes and confuse the users.
	See below for sequence.
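As an illustrative sketch of the update rule above -- stop *all* the old binaries for one db service, install and restart, then confirm exactly one sync site. Server names, the install path, and the bnode name are placeholders; adjust to your cell.

```shell
# Update one db service (here ptserver) across all db servers.
# Stop the old binaries everywhere first...
for srv in db1 db2 db3; do bos shutdown "$srv" ptserver -wait; done
# ...then install the new binary and start it everywhere.
for srv in db1 db2 db3; do
    bos install "$srv" /tmp/new/ptserver -dir /usr/afs/bin
    bos start "$srv" ptserver
done
# Verify with udebug: exactly one server should claim to be sync site.
# (7002 = ptserver; use 7003 for vlserver, 7021 for buserver.)
for srv in db1 db2 db3; do
    echo "== $srv =="; udebug "$srv" 7002 | grep -i "sync site"
done
```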

> > 2.  We need to shut down an older cell server and bring up a new one
> > in another building.
> >
> > For issue 2, we have set the vlserver prefs on each client so that the
> > clients won't select the cell server we want to move to another
> > building (or it will be last in the pref list).  Can we just shut
> > down the old cell server and bring up another (in another building)
> > without much worry about UBIK issues?  This is somewhat similar to
> > issue 1.
> I have done this as well without any issues.
> I assume you have two other AFS DB servers to maintain quorum?
> Is the IP address of the new server going to match the old server?

If you're changing a DB server's IP address -- server side:
	add the new IP address to all the other servers' CellServDB.
	restart each other server, separately.  wait several minutes
	between restarts, until they start voting "yes" (udebug).
	start the new server last.  The new host will not be the sync
	site, even if it normally would be.  (usually that's the lowest
	IP address.)

	wait.  see "client side" below.

	later on -- delete the old IP address on all servers.
	shut down the old server.
	if the host to be deleted was the sync site (udebug),
	then you'll get a short read-only outage.
	restart each other server.  wait between each as before.
	if you do weekly restarts, you can let the weekly restart
	do this for you.  it won't hurt (too much) to leave the old
	host "dead" ubik-wise, though if you have active writes to
	ubik going on when you shut it off, there will be a pause
	until the sync site decides the old host is really dead.
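The "lowest ip address" default mentioned above can be illustrated with nothing but shell sorting (the addresses are made up):

```shell
# Illustration: among the voting db servers, ubik's default
# preference for sync site is the numerically lowest IP address.
printf '%s\n' 141.211.1.5 141.211.1.2 141.211.1.9 |
    sort -t. -k1,1n -k2,2n -k3,3n -k4,4n | head -1
# prints 141.211.1.2 -- the host ubik would normally prefer
```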

Client side.

	By doing the server side in two steps as above,
	you have a window in the middle where you can find all
	clients, update cellservdb, and either reboot or do
	"fs newcell".  No rush.  Take a week.  Take a month.
	When do you have to be out of your old space?

	Note that if the old or new server is the "sync" site,
	clients that don't have the sync site in their CellServDB
	won't be able to make changes.  This can be finessed too.
	The easiest way is to start by turning the old server off
	long enough for a new sync site to be elected.
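On each client, the no-reboot path is "fs newcell" after its CellServDB is updated. The cell name and addresses below are placeholders:

```shell
# Point a running cache manager at the new db server list without
# rebooting.  Cell name and addresses are placeholders.
fs newcell your.cell 10.0.0.2 10.0.0.5 10.0.0.9
fs listcells | grep -i your.cell   # confirm the client's view
```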

> > 3.  We'd like to turn off the old KAS from Transarc and rely totally
> > on Kerb 5 (finally).  We are already using Kerb 5 everywhere and none
> > of our AFS clients use KAS anymore, but we've never actually disabled
> > it.
> I set up Kerberos 5 only from the start, so I can't comment on this other 
> than we don't have any Transarc stuff and everything appears to be 
> working.

You're already using kerberos 5?  So no kaserver, right?  Or are you
running both in parallel, separate databases?  That would be scary.
Anyway, if you're running mit kerberos 5, you should be able to run
fakeka if you want to support "klog".  Unless you have a strong need to
run fakeka and do klog, you probably shouldn't.  I presume you're using
kadmin to do all your kerberos administration, not kas.

I vaguely recall kaserver kept some counters and things that you could
use to see if it's being used - also you can use udebug to see
if the database is being changed.  Or you could just turn it off
and see who screams.  You may want to perform the proper ritual
sacrifices first.

> > 4.  We'd like to try real K5 AFS service tickets without using the 5
> > to 4 daemon.
> >
> > For issue 4, I am under the impression (from my conversation at the
> > last BPW) that we can disable our 5 to 4 daemon that AKLOG uses and
> > AKLOG will just take the K5 encrypted part and just stuff it into the
> > AFS cred manager.  The only thing we need to do is update our key
> > files on the file servers right?  Can AKLOG do what it needs to do
> > without having access to a 5 to 4 daemon?
> If you have an aklog that uses pure krb5, yes, it should just work 
> without a krb524d running.
> AFAIK, you shouldn't need to update your AFS key files, but it's possible 
> that mine are new enough not to need to be refreshed to a new enc_type.

Any existing distribution of openafs can only use plain des-cbc-crc keys,
just like always.  { there is reason to hope though - I've seen
a version of ptserver run with aes.  :-) }

If you already have kerberos 5 running with afs, then you must already
have used asetkey or the equivalent and have an AFS keyfile that
matches what's in your kerberos 5 database; you don't need to get a new
key again.
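If you ever do have to (re)load the key, the usual MIT-flavored sequence is roughly the following. Note that `ktadd` re-keys the principal (it bumps the kvno), and the kvno, paths, cell, and realm here are all placeholders:

```shell
# Extract a fresh single-DES afs service key into a keytab, then
# load it into the AFS KeyFile.  WARNING: ktadd bumps the kvno and
# re-keys the principal; coordinate this across all your servers.
kadmin -q "ktadd -k /tmp/afs.keytab -e des-cbc-crc:v4 afs/your.cell@YOUR.REALM"
asetkey add 3 /tmp/afs.keytab afs/your.cell@YOUR.REALM  # 3 = new kvno
bos listkeys your-dbserver   # verify the new kvno on each server
```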

When openafs uses kerberos 5 today, it only does des.  It can use
any of three ticket types:
* regular plain kerberos 4, as created by kaserver, fakeka, krb5kdc, or krb524d.
* kerberos 5 tickets -- ONLY with des-cbc-crc keys.
* 2b tickets.  These are kerberos 5 tickets with some "useless"
	wrappings removed (such as the bit that says it's
	using des-cbc-crc.)

I think there may be versions of aklog that do all three of these
things.  You want either the "2b" or the full "kerberos 5" thing;
it makes no difference which.  You can use "tcpdump", "truss", "snoop",
"strace" or maybe even "nm" or "gdb" to find out if you have a version
that does the 524 thing; or you might be able to examine various kerberos
logs to see what's happening there.
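For example, one quick check with tcpdump: krb524d conventionally listens on UDP port 4444, so an aklog that still does the 524 conversion will generate traffic there. The cell name is a placeholder.

```shell
# Watch for krb524 traffic while aklog runs; if nothing shows up on
# UDP 4444, your aklog is going straight to kerberos 5.
tcpdump -n -l udp port 4444 &
aklog -d -c your.cell    # -d makes aklog narrate what it's doing
kill %1
```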

				-Marcus Watts

While I was writing this Jeffrey Hutzelman <jhutz@cmu.edu> said
lots of good stuff including:
> AFS key files can store only single-DES keys.  The thing you have to do to 
> make pure V5 tickets work is upgrade the server _software_.  I believe 
> 1.2.10 is the oldest version that will work, but don't hold me to that (and 
> don't run 1.2.10 after January 10, 2004 if you have more than one dbserver).

I think what I said above is still useful so I'm going to post
it anyway.  I have a patch for really ancient versions of transarc
afs if you want to run them after Jan 10 2004.  For a bit we ended
up running hand-patched binaries.  As I recall, the aix compiler had
split the immediate constant across two widely separated instructions,
so it was a bit interesting to figure out what to patch.