[OpenAFS] Suitability of OpenAFS for distributed FS needs
James Black
jfb@iparadigms.com
Mon, 09 Feb 2004 13:05:35 -0800
Hello, all,
I've been chosen, here at work, to find out as much as I can about the
state of AFS, based on a prior life when I was an admin on an AFS cell
at the University of Chicago.
We have specific needs: the filesystem isn't large (only about 400 GB),
but it does have lots of small files, in the 10-50 KB range. And by
lots, read "millions" -- probably tens of millions, in thousands of
directories. Our workload mix is at least 80/20 reads to writes,
probably closer to 90/10. As currently constituted, the server as
well as the clients read and write to disk. All machines run Linux
2.4, Debian distro.
We're currently running naked: ReiserFS backed up via rsync every 24
hours. We can't actually sync more often than that, as rsync pegs the
CPU on the machine and takes more than five hours to complete. The
filesystem is remotely mounted (NFS) by a group of machines for a
primitive, application-layer sort of load balancing. For all the
reasons I needn't go into here, we're desperate to ditch NFS.
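(As an aside, in case the rsync CPU load is the delta algorithm: the
standard --whole-file flag disables the rolling-checksum delta transfer,
which usually buys nothing on a fast LAN and can cut CPU substantially.
A sketch only -- the paths and host below are placeholders, not our
real layout:

```shell
# Mirror the data tree to a backup host. Paths/host are hypothetical.
# --whole-file: skip the rolling-checksum delta algorithm (CPU-heavy,
#               little benefit over a fast local network)
# --delete:     keep the mirror exact by removing vanished files
rsync -a --whole-file --delete /export/data/ backuphost:/export/data/
```

No idea yet whether that gets us under 24 hours with tens of millions
of files, since the stat traversal itself is also a cost.)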
It seems that OpenAFS running on a ReiserFS partition, with the client
file cache in RAM, would offer not only the structural advantages of
AFS but would also beat the average performance of the current setup.
Does this seem reasonable?
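(For concreteness, the in-RAM cache I mean is afsd's memory cache
rather than a disk cache. Something along these lines -- the sizes are
illustrative guesses, not tuned recommendations:

```shell
# Start the AFS cache manager with a memory cache instead of a disk cache.
# -memcache:  cache in RAM rather than on a cache partition
# -blocks:    cache size in 1 KB blocks (131072 = ~128 MB here)
# -stat:      stat cache entries -- worth sizing up with tens of
#             millions of small files
# Values below are illustrative, not tuned.
afsd -memcache -blocks 131072 -stat 25000 -daemons 6
```

If anyone has real-world numbers for memcache vs. disk cache on a
small-file, read-mostly workload, I'd love to hear them.)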
TIA,
'jfb
--
C++: an octopus made by nailing extra legs onto a dog.