[OpenAFS] newbie admin question

jdisher@parad.net
Sun, 3 Apr 2005 04:18:14 -0700 (PDT)


On Sun, 3 Apr 2005, Lars Schimmer wrote:

> jdisher@parad.net wrote:
> |  This is probably a question answered in the documentation, but I've been
> |  Googling and looking through the docs for a few hours now with no love.
> |  Hopefully someone has the quick seven-command answer for me.
> | 
> |  I have two fileservers, identically configured and running Linux.  Both are
> |  16-disk machines with 3Ware 3w9500-8 hardware RAID cards, configured as
> |  1.0TB RAID10 arrays.  Their filesystems are laid out like this:
> | 
> |  sfs0 root # df -h
> |  Filesystem            Size  Used Avail Use% Mounted on
> |  /dev/sda2             3.8G  1.8G  2.1G  47% /
> |  none                  2.0G     0  2.0G   0% /dev/shm
> |  /dev/sda3             913G   32K  913G   1% /vicepa
> |  /dev/sdb3             913G   32K  913G   1% /vicepb
> | 
> |  What I would like is for all 4 of the data partitions (sfs0:/vicepa,
> |  sfs0:/vicepb, sfs1:/vicepa, sfs1:/vicepb) to be part of one distributed
> |  afs volume (so I can mount it as, say, /data on my client servers).
> |  I've not found what is needed to add additional partitions to a volume.
>
> A volume can only live on one partition.
> So for your setup you would first have to combine those four partitions into
> one big partition, make /vicepa on it, and create the volume there.
> But that's really a bad idea.
> Instead, create more smaller volumes: a volume "data" on top, then volumes
> data.cd1, data.cd2, data.cd3 and so on mounted underneath it.
> That way you have more volumes, and if a disk/RAID fails, the others are
> still online and available.  And the speed is better, too.
> With AFS and this kind of volume mounting there is no need for really big
> volumes, as long as you don't actually need a directory with 4TB of data
> in it :-)
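
If I understand the suggestion, that layout would be built with something like
the following on my servers (just a sketch; the cell path /afs/example.com is
made up, and the volume names follow your example):

   # one small volume per partition, plus a parent volume to hang them from
   vos create sfs0 /vicepa data
   vos create sfs0 /vicepb data.cd1
   vos create sfs1 /vicepa data.cd2
   vos create sfs1 /vicepb data.cd3

   # mount the parent somewhere in the cell, then mount each piece under it
   fs mkmount /afs/example.com/data data
   fs mkmount /afs/example.com/data/cd1 data.cd1
   fs mkmount /afs/example.com/data/cd2 data.cd2
   fs mkmount /afs/example.com/data/cd3 data.cd3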

Unfortunately, yes, I really do need a single directory with that much data in it.

Since the 4 partitions are on two physical controllers in two different 
machines, I can't "combine them" (otherwise I'd just use NFS, as we 
already do).  I was apparently under the misguided impression that AFS 
could do this for me.  There are problems with distributing the data across 
multiple volumes.  The biggest is keeping the read load properly balanced: 
overwhelming a single partition is too easy, and avoiding that means a lot of 
data shuffling.  The next is unified access: I need to be able to present a 
single, unified directory of data for my application to serve to customers.
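
The shuffling itself isn't the hard part; as I understand it, a single volume
can be moved between partitions with something like this (names from the
sketch above):

   # move one piece off an overloaded partition to a quieter one
   vos move data.cd1 sfs0 /vicepb sfs1 /vicepa

The hard part is the tooling to decide what gets moved where, and when.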

I can get around these problems with symlinks and load distribution 
scripts, but at that point I might as well use NFS (which we're already 
doing extensively), and save myself the authentication and additional 
software headaches.  I get the disturbing feeling that tomorrow and Monday 
I will be neck-deep in ugly perl scripts.

I guess AFS won't do what I need.  But thank you for the quick response.

-j