[OpenAFS] Summary of comments about experiences with large AFS volumes
dhk@ccre.com
Thu, 25 Aug 2005 11:10:24 -0600
Thanks very much for all the responses to my inquiry about large volumes.
I expect I'll be doing some experimenting of my own and if so will post
results.
Here's a summary of the comments received thus far.
==============
We have a software repository in one volume which exceeds 600 GB. Last
time I had to dump and restore it, it was 500 GB and I encountered no
problems. Running 1.3.86 servers on Linux 2.6 and 1.3.87 clients. In the
same cell we also have 2 volumes of around 100 GB and 3000 small
volumes (<100 MB).
--
Dimitris Zilaskos
Department of Physics @ Aristotle University of Thessaloniki , Greece
PGP key : http://tassadar.physics.auth.gr/~dzila/pgp_public_key.asc
http://egnatia.ee.auth.gr/~dzila/pgp_public_key.asc
MD5sum : de2bd8f73d545f0e4caf3096894ad83f pgp_public_key.asc
==============
I've been using 45 GB volumes for a few years on Transarc AFS servers;
they just worked, including the backup system. I've now started using
OpenAFS on my newer fileservers, and I've gone up to a couple hundred
GB volumes, and they've just worked so far, too, although I don't yet
have anything working that can back them up. Let me know if you'd
like any other information.
-Tracy
==============
We run a mirror site for software distributions (mirror.cs.wisc.edu) that is
backed by AFS, with many volumes in the 10-50 GB range. We've seen very few
operational issues, although we do have annoyance problems with 'vos move's
timing out.
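For reference, those timeouts show up during plain "vos move" operations,
which take the volume name plus the source and destination server and
partition. A minimal sketch, with hypothetical volume and server names:

    # Move volume mirror.data from fs1's /vicepa to fs2's /vicepb.
    vos move mirror.data fs1 a fs2 b -verbose

A move of a large volume holds volserver transactions open for the whole
copy, which is where the timeouts bite.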
Dave Thompson
Computer Systems Lab
Computer Sciences Department
UW-Madison
==============
There are two 50 GB volumes here. Storing info in large volumes is not a good
idea, but the Windows boxes the data came from were set up that way, and 3
years ago I wasn't experienced enough to set up the correct mount points to
smaller volumes. Now all volumes are set at 8 GB to fit on a DVD.
Using a disk cache with 1.3.87 had some problems, but they have now gone
away. Perhaps changing the cache partition from ext2 to ext3 had something
to do with it, but I don't know. See the "stress testing" thread.
ftp://creedon.dhs.org/afs_stress_test/ contains a shell script that will
build a volume of any size (change the dd line to create a test file of any
size) with real-life Windows long filenames. The filenames all work under
both Linux and Windows. There's good troubleshooting information in the
"stress test" thread.
All systems are now 1.4.0-rc1/krb5 and seem to be running fine: 3 servers
and 8 clients, 200 GB total.
tedc
==============
On 8/24/05, Russ Allbery <rra@stanford.edu> wrote:
> Tracy Di Marco White <gendalia@gmail.com> writes:
>
> > I've had a vos move run about 36 hours without timing out running recent
> > versions of OpenAFS on the servers, and I have had very small volumes
> > time out on vos moves using Transarc AFS on the servers. We also had
> > vos releases failing when vos moves were failing.
>
> The latter is the standard problem with a single-threaded volserver. I
> have great hopes for 1.4 finally putting that one to bed.
Things got much better when I upgraded all my general AFS fileservers
to OpenAFS 1.2.11, actually.
-Tracy
==============
We've been running a few large (>20 GB) volumes with no problem,
but those volumes have a manageable number of files in them.
Our backup medium is 100/200/400 GB LTO-2 tape.
Recently, one of our researchers created a 40 GB volume that has 5.5
million small files in it. The dump transaction for this volume
has been running for 23 hours so far and looks like it might
take another day.
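A dump like that is a single long-running "vos dump" transaction; a minimal
sketch, with hypothetical volume and file names:

    # Full dump of the volume to a local file, restorable with vos restore.
    vos dump research.data -file /backup/research.data.dump -verbose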
---Bob.
==============
Hi Kim,
We've had volumes as large as 300 GB in the past during phases of rapid data
collection. While they work for accessing data, vos operations tend to time
out on excessively large (somewhere above 100 GB IIRC) volumes. Backups can
become problematic past this point, and performance seems to take a hit as
well.
I try to keep volumes under 20 GB these days, and prefer to keep them under
2 GB if possible. The largest volume we currently have online is about 60 GB.
We don't have a problem backing it up, although I'd probably copy the data to
a new volume rather than attempting a "vos move" on it if I had to take the
partition offline.
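Sketched with hypothetical volume, server, and mount point names, that
copy-instead-of-move approach might look like:

    # Create a replacement volume on the destination partition and mount it.
    vos create fs2 a data.new
    fs mkmount /afs/.example.com/data.new data.new
    # Copy the tree; note cp won't carry AFS ACLs, fs copyacl handles those.
    cp -a /afs/.example.com/data/. /afs/.example.com/data.new/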
Regards,
Lester Barrows