[OpenAFS] Summary of comments about experiences with large AFS volumes

dhk@ccre.com dhk@ccre.com
Thu, 25 Aug 2005 11:10:24 -0600

Thanks very much for all the responses to my inquiry about large AFS volumes.

I expect I'll be doing some experimenting of my own and, if so, will post the results here.

Here's a summary of the comments received thus far.

We have a software repository in one volume which exceeds 600 GB. Last
time I had to dump and restore it, it was 500 GB and I encountered no
problems. Running 1.3.86 servers on Linux 2.6 and 1.3.87 clients. In the
same cell we also have 2 volumes of around 100 GB and 3000 small
volumes (<100 MB).
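
For reference, a dump-and-restore cycle like the one described above is
normally done with the `vos dump` and `vos restore` commands. The server,
partition, volume, and file names below are made up, and the commands assume
admin tokens in a live cell; this is a sketch of the general procedure, not
the poster's exact steps.

```shell
# Full dump (-time 0) of a volume to a file; all names are illustrative.
vos dump -id sw.repo -time 0 -file /backup/sw.repo.dump -verbose

# Restore the dump into a volume on a chosen server and partition.
vos restore fs2.example.com /vicepb sw.repo -file /backup/sw.repo.dump
```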


Dimitris Zilaskos

Department of Physics @ Aristotle University of Thessaloniki, Greece
PGP key : http://tassadar.physics.auth.gr/~dzila/pgp_public_key.asc
MD5sum  : de2bd8f73d545f0e4caf3096894ad83f  pgp_public_key.asc

I've been using 45 GB volumes for a few years on Transarc AFS servers;
it just worked, including the backup system.  I've now started using
OpenAFS on my newer fileservers, and I've gone up to a couple hundred
GB volumes, and they've just worked so far, too, although I don't yet
have anything working that can back them up.  Let me know if you'd
like any other information.



We run a mirror site for software distributions (mirror.cs.wisc.edu) that is
backed by AFS, with many volumes in the 10-50 GB range.  We've seen very few
operational issues, although we do have annoyance problems with vos
operations timing out.

Dave Thompson
Computer Systems Lab
Computer Sciences Department


There are 2 50 GB volumes here. Storing data in large volumes is not a good
idea, but the Windows boxes the data came from were set up that way and 3
years ago I wasn't experienced enough to set up the correct mount points for
smaller volumes. Now all volumes are set at 8 GB to fit on a DVD.

Using a disk cache with 1.3.87 had some problems; they have now gone away.
Perhaps changing the cache partition from ext2 to ext3 had something to do
with it, but I don't know. See the "stress testing" thread.
ftp://creedon.dhs.org/afs_stress_test/ contains a shell script that will
build a volume of any size (change the dd line to create a test file of any
size) with real-life Windows long filenames. The filenames all work on
both Linux and Windows. There's good troubleshooting information in the
"stress test" thread.
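
The actual script is at the FTP URL above; as a rough illustration of the
idea (not the original script), something like the following fills a
directory with files carrying long, Windows-style names — every name and
size here is invented:

```shell
#!/bin/sh
# Hypothetical sketch of a volume stress test: fill a directory with
# files that have long, Windows-style names. The real script lives at
# the FTP URL above; names and sizes below are made up.
set -e

dir=${1:-stress_test_dir}   # target directory (would be an AFS volume mount)
count=${2:-100}             # number of files to create
mkdir -p "$dir"

i=1
while [ "$i" -le "$count" ]; do
    # Long filename with spaces and mixed case, like real Windows documents
    name="Quarterly Report - Final Revision (copy $i) of the Budget.doc"
    # dd creates the payload; change bs/count for larger test files
    dd if=/dev/zero of="$dir/$name" bs=1024 count=1 2>/dev/null
    i=$((i + 1))
done

# Report how many files were created
ls "$dir" | wc -l
```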

All systems are now 1.4.0-rc1/krb5 and seem to be running fine. 3 servers
and 8 clients. 200 GB total.



On 8/24/05, Russ Allbery <rra@stanford.edu> wrote:
> Tracy Di Marco White <gendalia@gmail.com> writes:
> > I've had a vos move run about 36 hours without timing out running recent
> > versions of OpenAFS on the servers, and I have had very small volumes
> > time out on vos moves using Transarc AFS on the servers.  We also saw
> > vos releases failing when vos moves were failing.
> The latter is the standard problem with a single-threaded volserver.  I
> have great hopes for 1.4 finally putting that one to bed.

Things got much better when I upgraded all my general AFS fileservers
to OpenAFS 1.2.11, actually.



We've been running a few large (>20GB) volumes with no problem,
but those volumes have a manageable number of files in them.
Our backup medium is 100/200/400GB LTO-2 tape.

Recently, one of our researchers created a 40GB volume that has 5.5
million small files in it.  The dump transaction for this volume
has been running for 23 hours so far and looks like it might
take another day.



Hi Kim,

We've had volumes as large as 300 GB in the past during phases of rapid data
collection. While they work for accessing data, vos operations tend to time
out on excessively large (somewhere above 100 GB IIRC) volumes. Backups
become problematic past this point, and performance seems to take a hit as
well.

I try to keep volumes under 20 GB these days, and prefer to keep them smaller
if possible. The largest volume we currently have online is about 60 GB. We
don't have a problem backing it up, although I'd probably copy the data to a
new volume rather than attempting a "vos move" on it if I had to take the
partition offline.

Lester Barrows
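
The copy-rather-than-move approach Lester describes would look roughly like
the following. All server, partition, volume, and path names are invented,
the commands assume admin tokens in a live cell, and this is only a sketch of
one way to do it, not his exact procedure.

```shell
# Create the replacement volume on the target server/partition
# (-maxquota 0 = unlimited quota); assume the old volume is named
# data.old and is mounted at /afs/.example.com/data.
vos create fs2.example.com /vicepb data.new -maxquota 0
fs mkmount /afs/.example.com/data-new data.new

# Copy the contents of the old volume into the new one.
cp -a /afs/.example.com/data/. /afs/.example.com/data-new/

# Once the copy is verified, swap the mount points and retire the old volume.
fs rmmount /afs/.example.com/data
fs mkmount /afs/.example.com/data data.new
fs rmmount /afs/.example.com/data-new
vos remove fs1.example.com /vicepa data.old
```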