[OpenAFS] newbie admin question
Mon, 04 Apr 2005 11:22:55 +0100
I think this is simpler than you imagine.
AFS fileservers have partitions.
AFS partitions house AFS volumes.
AFS volumes are mounted (as you choose) under /afs/@cell/.
(Remember, it's AFS so the volumes all get mounted under /afs/).
You choose how many fileservers, partitions, volumes you need to use.
If you want to place a particular AFS volume in a partition of a
particular fileserver, you can do that.
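For example, placement and mounting might look like this (the server, partition, and volume names here are illustrative, not from this thread):

```shell
# Create a volume on a chosen fileserver and partition:
vos create fs1.example.com /vicepa data.projects
# Mount it somewhere under /afs/@cell/ from an AFS client:
fs mkmount /afs/example.com/projects data.projects
```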
You can move AFS volume "live" between fileservers with minimal impact
on users. Typically, users have no idea a volume is being moved.
Try doing that with NFS!
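A live move is a single command; a sketch with hypothetical server names:

```shell
# Move a volume between fileservers while it stays online;
# clients keep accessing it during the move.
vos move data.projects fs1.example.com /vicepa fs2.example.com /vicepb
```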
You might want (and I personally think it's a good idea) to have
dedicated AFS fileservers which use RAID5 "below" the partitions.
In general, you get higher data availability by having more fileservers
(ideally with identical hardware configurations). You can have a
"standby" fileserver used to off-load an active fileserver, thus allowing
re-install/upgrade of the (previously) active fileserver. The "standby"
fileserver is also useful if you find hardware problems starting
to appear on one of your active fileservers: you can move the AFS volumes
off the failing fileserver and onto the "standby" fileserver.
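Draining a failing server can be scripted around "vos listvol" and "vos move"; a minimal sketch, assuming a standby server named standby.example.com and moving only the RW volumes (RO/backup clones are recreated by release, not moved):

```shell
# Move every RW volume off fs1's /vicepa onto the standby server.
# Inspect "vos listvol fs1.example.com /vicepa" output first!
vos listvol fs1.example.com /vicepa -quiet | awk '$2 == "RW" {print $1}' |
while read vol; do
    vos move "$vol" fs1.example.com /vicepa standby.example.com /vicepa
done
```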
Wherever you have readonly (or mostly readonly) data, you should use
AFS ReadOnly volume replication across two (or more) fileservers.
Certainly, the "principal" AFS volumes at the top of your AFS cell
should be replicated RO (e.g. root.afs, root.cell, root.othercells, etc.).
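Replication is a matter of "vos addsite" plus "vos release"; a sketch for root.cell, assuming two fileservers with the example names below:

```shell
# Define two read-only replication sites for root.cell:
vos addsite fs1.example.com /vicepa root.cell
vos addsite fs2.example.com /vicepa root.cell
# Push the current RW contents out to the RO sites:
vos release root.cell
```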
Paul Blackburn http://acm.org/~mpb
IBM Managed Security Services Delivery EMEA
> On Sun, 3 Apr 2005, Lars Schimmer wrote:
>> firstname.lastname@example.org wrote:
>> | This is probably a question answered in the documentation, but I've
>> | been Googling and looking through the docs for a few hours now with
>> | no luck. Hopefully someone has the quick seven-command answer for me.
>> |
>> | I have two fileservers, identically configured running Linux. Both
>> | are 16-disk machines with 3Ware 3w9500-8 hardware RAID cards,
>> | configured as 1.0TB RAID10 arrays. Their filesystems are laid out
>> | as such:
>> |
>> | sfs0 root # df -h
>> | Filesystem Size Used Avail Use% Mounted on
>> | /dev/sda2 3.8G 1.8G 2.1G 47% /
>> | none 2.0G 0 2.0G 0% /dev/shm
>> | /dev/sda3 913G 32K 913G 1% /vicepa
>> | /dev/sdb3 913G 32K 913G 1% /vicepb
>> |
>> | What I would like is for all 4 of the data partitions (sfs0:/vicepa,
>> | sfs0:/vicepb, sfs1:/vicepa, sfs1:/vicepb) to be part of one
>> | AFS volume (so I can mount it as, say, /data on my client servers).
>> | I've not found what is needed to add additional partitions to a
>> A volume can only live on one partition.
>> So for your setup you would first need to make one big partition out
>> of those 4, put /vicepa on it, and create a volume there.
>> But that's really a bad idea.
>> Better to make more, smaller volumes: a volume "data" on top, then
>> volumes data.cd1, data.cd2, data.cd3 mounted under it, and so on.
>> In this case you've got more volumes, and if a HD/RAID fails, the
>> others are still online and available. And speed is better, too.
>> With AFS and this kind of volume mounting there is no need for
>> really big volumes, as long as you don't actually need a directory
>> with 4TB of data in it :-)
> Unfortunately, yes, I do.
> Since the 4 partitions are on two physical controllers in two different
> machines, I can't "combine them" (otherwise I'd just use NFS, as we
> already do). I was apparently under the misguided impression that AFS
> could do this for me. There are problems with distributing data into
> multiple volumes. The largest is making sure the read load is
> properly distributed - overwhelming a single partition is too easy,
> and avoiding it requires lots of data shuffling. The next is unified
> access - I need to be able to present a single, unified directory of
> data to my application to serve to customers.
> I can get around these problems with symlinks and load distribution
> scripts, but at that point I might as well use NFS (which we're already
> doing extensively), and save myself the authentication and additional
> software headaches. I get the disturbing feeling that tomorrow and
> Monday I will be neck-deep in ugly perl scripts.
> I guess AFS won't do what I need. But thank you for the quick response.
> OpenAFS-info mailing list