[OpenAFS-devel] Default params for afsd?

Troy Benjegerdes hozer@hozed.org
Wed, 18 May 2005 20:07:44 -0500


> There's also another issue, which is that setting the transfer size too 
> large may impact fileserver performance on large files for which there is 
> lots of demand, because currently only one client can be fetching from a 
> given file at a time.  While it's probably a good idea to change that in 
> the long term, doing so would probably mean non-trivial changes to the way 
> the volume package works, so it's not going to be a quick fix.
> 
> In addition, a large transfer size means that when you access a large file, 
> you have to wait longer before you get to access any of it.  So while the 
> average performance goes up if you are transferring entire large files, you 
> lose if you are only interested in a couple of pages out of a large 
> database.

Hrrm.. how often is it actually the case that someone is running a database
out of AFS?

For gigabit and other fast networks (e.g., InfiniBand), I'm finding that
a chunksize of 20 limits me to about half a gigabit, and a chunksize of
18 is just way too small.
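For reference, afsd's -chunksize argument is the base-2 logarithm of the
chunk size in bytes, so the two settings above work out like this:

```shell
# chunksize is log2 of the chunk size in bytes
echo $(( 1 << 18 ))   # chunksize=18 -> 262144 bytes (256KB)
echo $(( 1 << 20 ))   # chunksize=20 -> 1048576 bytes (1MB)
```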

I just ran a couple of tests on what might very well be the world's first
InfiniBand-connected AFS server. The server is a Dell 2650 running
Debian Linux (kernel 2.6.11-1-686-smp), with a 5-disk RAID set for the
/vicepa partition, with an XFS filesystem. Local benchmarks (using plain
old 'dd') show around 100MB/sec read throughput on a 4GB file.

The client machine is a dual Opteron, with kernel 2.6.11, and a 1.3.79
or so AFS client. I'm running the IPoIB InfiniBand drivers in the 2.6.11
kernel release. The client machine has a 100MB memcache.
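For anyone wanting to reproduce a setup roughly like this, the client
could be started with something along these lines (the -blocks value is
a placeholder, not my exact setting; check the afsd man page for your
version's cache-size flags):

```shell
# Sketch of an afsd invocation with a memory cache and 1MB chunks.
# -memcache  : cache in RAM instead of on disk
# -blocks    : cache size in 1KB blocks (placeholder value)
# -chunksize : log2 of the chunk size, so 20 -> 1MB chunks
afsd -memcache -blocks 100000 -chunksize 20
```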

Reading a 9GB file via NFS with wsize=32k,rsize=32k results in
approximately 65MB/sec, as reported by 'dd if=file of=/dev/null bs=65536'
(65563500 bytes/sec).
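For comparison purposes, the NFS mount behind those numbers would look
something like the following (the server and mount-point names here are
made up):

```shell
# Hypothetical NFS mount with the 32KB read/write sizes used above
mount -t nfs -o rsize=32768,wsize=32768 server:/export/bigfiles /mnt/nfs
```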

Reading a 9GB file via AFS with chunksize=18 gives around 5-7MB/sec.

Reading the same file via AFS with chunksize=20 gives around 55MB/sec
(55155504 bytes/sec).