[OpenAFS] Performance Questions / Optimization
Penney Jr., Christopher (C.)
cpenney@ford.com
Fri, 16 Jan 2004 09:48:40 -0500
I'm experimenting with OpenAFS in an environment where large files (~500MB
frequently, but sometimes quite a bit larger) would be moved around. I
understand that this is not an optimal use of AFS, but we are interested in
the security of AFS. I want to get a sense of whether the performance I'm
seeing in my little sandbox is low, high, or normal.
Right now I have a couple of Sun 480R boxes with dual CPUs, 4GB RAM, and
GigE. They are running Solaris 9 with recent patches. I've bumped the TCP
window size to 64k. Using ttcp (ttcp -r on one host and ttcp -t -n20000 on
the other) I can get into the 60MB/s range of TCP throughput. For disk, all
I have right now is a two-disk stripe. When I run 'iozone -c -e -i 0 -s
512m -r 8k -f /vicepa/tmp/testfile' I can create a 512MB file at about
75MB/s (ufs logging off).
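For reference, the 64k window bump above was done along these lines (a
sketch using the standard Solaris 9 ndd tunables; whether these were the
exact parameters changed is an assumption on my part):

```shell
# Raise the default TCP send/receive high-water marks to 64KB
# (Solaris 9; these do not persist across reboots, so add them to
# an init script if they help).
ndd -set /dev/tcp tcp_xmit_hiwat 65536
ndd -set /dev/tcp tcp_recv_hiwat 65536

# Verify the new values:
ndd -get /dev/tcp tcp_xmit_hiwat
ndd -get /dev/tcp tcp_recv_hiwat
```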
With NFS, the same iozone test reads and writes at about 45MB/s. That seems
a bit low, but I'm not running on an optimal disk setup.
With AFS, the client has a 1GB AFS cache on /var, which is mirrored. I'm
using the following options to afsd: -stat 2000 -chunksize 19 -daemons 6
-volumes 128 -afsdb -nosettime. The main difference from the built-in
defaults is the chunk size of 512KB. Using the above iozone command I get
about 33MB/s writing to /var.
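Put together, the client startup looks roughly like this (a sketch; the
binary path is the conventional /usr/vice/etc location and may differ on a
given install, and -chunksize 19 means 2^19 = 512KB chunks):

```shell
# Start the AFS cache manager with the options listed above.
# The 1GB disk cache size itself comes from the cacheinfo file,
# not from the command line.
/usr/vice/etc/afsd -stat 2000 -chunksize 19 -daemons 6 \
    -volumes 128 -afsdb -nosettime
```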
When I run the iozone command against my AFS space I get about 16MB/s, as
long as the file fits in the cache. I notice the client periodically
pushing data from cache to the server at about 20-25MB/s. When I try a
memory cache I only get 8MB/s writing, and I get about the same if the
cache is full (e.g. a 100MB cache and a 512MB file being written). If I
ensure the cache is empty and run the iozone read test, I get about 12MB/s.
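One way to make sure the read really comes from the file server is to
invalidate the cached copy first with the standard fs subcommands (a
sketch; the /afs path below is just an example, not the one I actually
used):

```shell
# Drop the cached chunks for the test file so the next read has to
# fetch from the file server (path is a placeholder).
fs flush /afs/example.com/tmp/testfile

# Or invalidate everything cached from that file's volume:
fs flushvolume /afs/example.com/tmp
```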
So what I'm observing is:
* Read performance when a file isn't in cache is only about 12MB/s
* Write performance when cache is full is about 8MB/s
* When using a memory cache, write() doesn't return until the data is
committed to the file server, mimicking a full disk cache
* Write performance to cache seems substantially worse than the raw disk
performance (of course some overhead is to be expected)
My questions:
Are the out-of-cache throughput numbers (especially for reads) something I
can improve with tweaks, or is that more or less the limit of AFS when
files are not in cache? I'm not sure I can live with only 12MB/s when
reading a file from AFS space.
If we used AFS we'd have huge caches, so that even several gigabytes of
writes would fit in cache. What kind of write performance can we expect to
the AFS cache with a really nice disk subsystem (I assume there is a
limit)? I'd like to be able to write at least 50MB/s to the AFS cache... is
that possible?
Thanks,
Chris Penney
cpenney@ford.com