[OpenAFS] concurrent requests to AFS
Ken Dreyer
ktdreyer@ktdreyer.com
Fri, 9 Sep 2011 17:58:44 -0600
I'm doing some benchmarking with Apache and OpenAFS, and I ran
across an unexpected problem. The client is a RHEL 6 VM in KVM, 24
3GHz cores, 95GB RAM.
Using a MaxClients setting in Apache of 1, 2, or 4, I get OK
performance (1600-2400 requests/sec). I'm using Apache's prefork MPM,
and in theory I should have enough RAM to bump this limit up to
around 1800.
For benchmarking, I'm using ab, requesting the same 10k file many
times simultaneously. When I set Apache's MaxClients parameter beyond
4, I can see with strace that Apache's open() calls to the files in
AFS start to run slower and slower. When there are between 100 and 200
child processes serving concurrent requests, each open() call takes
several seconds. 4 concurrent children seems to be the sweet spot; the
performance degrades quickly even going from 4 to 8 child processes.
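For reference, the test looks roughly like this (the URL, file name, and
child PID are placeholders, not my exact setup):

```shell
# 200 concurrent clients repeatedly fetching the same 10k file:
ab -n 10000 -c 200 http://localhost/testfile.html

# In another terminal, time the open() calls of one Apache child;
# -T appends the time spent in each syscall:
strace -T -e trace=open -p <httpd-child-pid>
```

With -c at 4 or below the open() times stay in the microsecond range;
above that they climb into whole seconds.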
Apache is only opening the same files over and over for each request.
The files reside on a read-only volume in AFS: one is a plain-text
file, and three are nonexistent .htaccess files. Is there a way to
avoid the degradation that happens when I've got a lot of processes
opening the same files for reading at the same time?
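For context, the .htaccess probes come from the usual AllowOverride
setup; my config is along these lines (the directory path is a
placeholder):

```apache
<Directory /afs/example.com/www>
    # Any AllowOverride other than None makes httpd look for a
    # .htaccess file in this directory and each parent directory
    # on every request, even when none of those files exist.
    AllowOverride All
</Directory>
```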
When I run the same test against a 10k file on the local disk, I can
get a fairly steady 7000 reqs/sec. I'm using OpenAFS 1.4 with
memcache. I've tried increasing -daemons to 12, but that did not seem
to have an effect. I don't know what else to try besides upgrading
to 1.6.
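For reference, the client options I'm running with are roughly the
following (the variable name and cache size here are placeholders;
the exact sysconfig variable differs between distributions):

```shell
# /etc/sysconfig/afs or equivalent: memory cache instead of disk
# cache, with the background daemon count raised from the default.
AFSD_ARGS="-memcache -daemons 12"
```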
- Ken