[OpenAFS] Regarding OpenAFS performance (on a small / home single node deployment)

Ciprian Dorin Craciun ciprian.craciun@gmail.com
Fri, 8 Mar 2019 22:28:15 +0200


[I've changed the subject to reflect the new topic.]


On Fri, Mar 8, 2019 at 9:58 PM Mark Vitale <mvitale@sinenomine.net> wrote:
> >>> (I'm struggling to get AFS to go over the 50MB/s, i.e. half a GigaBit,
> >>> bandwidth...  My target is to saturate a full GigaBit link...)
> >
> > Perhaps you know:  what is the maximum bandwidth that one has achieved
> > with OpenAFS?  (Not a "record" but in the sense "usually in enterprise
> > deployments we see zzz MB/s".)
>
> I think this may be a question like "how long is a piece of string?".
> The answer is "it depends".  Could you be more specific about your use cases,
> and what you are seeing (or need to see) in terms of OpenAFS performance?


So my use-case is pretty simple:
* small (home / office) single node deployment on Linux (OpenSUSE Leap
15.0) running OpenAFS 1.8;
* three `/vicepX` partitions on the same Ext4 file-system over RAID5,
backed by rotational HDDs, capable of ~300 MiB/s sequential I/O (in
total for the whole RAID);  (these were migrated from three old disks,
and I intend to merge them into a single partition;  a rough way to
sanity-check that baseline figure is sketched right after this list;)
* 1x GigaBit network, 32 GiB RAM, Core i7, currently not used for anything else;
* I have around 600 GiB of personal files, in ~20 volumes;  about half
of these files are largish, ~20 MiB each (in one volume), while the
rest range from smallish ~128 KiB to mediumish ~4 MiB (these last
figures are an estimate) (all in 2 or 3 volumes);
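
(A rough way I sanity-check that ~300 MiB/s baseline;  the RAID device
name below is illustrative, substitute the actual one:)

# sequential read straight off the RAID backing the `/vicepX`
# partitions;  `iflag=direct` bypasses the page cache so the figure
# reflects the disks themselves
dd if=/dev/md0 of=/dev/null bs=4M count=2048 iflag=direct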


My intention is to saturate the GigaBit link from one client (in the
same LAN, both ends supporting 9k jumbo frames), while accessing these
files read-only.  (The client has a 6 GiB cache over TMPFS, with 8 GiB
RAM and 64 GiB swap.  I know this last part is not "advisable", but
the cache is not actually swapped out, so it does not impact
performance.)
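
(For reference:  a saturated GigaBit link is 1000 Mbit/s, i.e. 125
MB/s raw;  after Ethernet, IP/UDP and Rx header overhead I'd expect
the practical ceiling to be somewhere around 110-115 MiB/s of payload,
so that is the figure I'm actually aiming for.)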


I've tried reading all the available files both sequentially and in
parallel (from 8 to 64 processes), either sorted by path or in random
order, and I never get over 40-50 MiB/s of network traffic.  (I've run
this test both on the server itself, thus over `lo`, and from the
networked client, with almost the same result.)
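
(For completeness, the parallel variant of the test is roughly the
following;  the AFS path is illustrative:)

# read every file under the volume with 16 parallel readers,
# discarding the data;  pipe the file list through `shuf -z` before
# `xargs` for the random-order variant
find /afs/example.com/data -type f -print0 \
    | xargs -0 -n 16 -P 16 cat > /dev/null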


The following is my current configuration:

* for the `fileserver`:
/usr/lib/openafs/fileserver -syslog -sync always -p 128 -b 524288
-l 524288 -s 1048576 -vc 4096 -cb 1048576 -vhandle-max-cachesize 32768
-jumbo -udpsize 67108864 -sendsize 67108864 -rxmaxmtu 8192 -rxpck 4096
-busyat 65536
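
(One caveat I'm still double-checking:  as far as I understand, on
Linux the `-udpsize` / `-sendsize` requests above are silently capped
by the kernel socket buffer limits, so I also raise those, roughly:)

# allow the fileserver's 64 MiB UDP buffer requests to actually take
# effect instead of being capped at the kernel defaults
sysctl -w net.core.rmem_max=67108864
sysctl -w net.core.wmem_max=67108864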

* for the `volserver`:
/usr/lib/openafs/volserver -syslog -sync always -p 16 -jumbo -udpsize 67108864

* for the server `afsd`:
-memcache -blocks 4194304 -chunksize 17 -stat 524288 -volumes 4096
-splitcache 25/75 -afsdb -dynroot-sparse -fakestat-all -inumcalc md5
-backuptree -daemons 8 -rxmaxfrags 8 -rxmaxmtu 8192 -rxpck 4096
-nosettime

* for the LAN client `afsd`:
-blocks 7864320 -afsdb -chunksize 20 -files 262144
-files_per_subdir 1024 -dcache 128 -splitcache 25/75 -volumes 256
-stat 262144 -dynroot-sparse -fakestat-all -backuptree -daemons 8
-rxmaxfrags 8 -rxmaxmtu 8192 -rxpck 4096 -nosettime
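
(For the record, if I read the `afsd` man page correctly, `-chunksize`
is a power of two:  17 on the server-local client means 2^17 = 128 KiB
chunks in the memcache, while 20 on the LAN client means 2^20 = 1 MiB
chunks in the disk cache.)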



> > (I think my issue is with the file-server not the cache-manager...)
>
> It is easy to get bottlenecks on both.  One way to help characterize this
> is to use some of the OpenAFS test programs and see how they perform against your fileservers:
> - afscp  (tests/afscp)
> - afsio  (src/venus/afsio)
>
> There is also the test server/client pair for checking raw rx network throughput:
> - rxperf  (src/tools/rxperf)


I'll try to look at them.  (None of them seem to be part of the
OpenSUSE RPMs, so I'll have to build them from source.)
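
(For anyone interested, my plan is roughly the usual source build,
with the paths Mark mentioned above:)

# from a checkout of the matching 1.8.x source tree;  `regen.sh` is
# only needed for a git checkout, release tarballs ship `configure`
./regen.sh && ./configure && make
# afterwards the binaries should be somewhere under src/tools/rxperf/,
# src/venus/ (afsio) and tests/afscp/ (that last one might need a
# separate make in tests/)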

Thanks,
Ciprian.