Linux client performance - was Re: [OpenAFS] OpenAFS 1.6.5 Linux client cache sizes

Volkmar Glauche volkmar.glauche@uniklinik-freiburg.de
Fri, 17 Apr 2015 16:02:29 +0200


Dear all,

for the sake of completeness, I would like to revive this thread and
share some of my findings. As I wrote below, I intended to make better
use of large disks for AFS cache partitions, but my initial attempts
actually made the AFS clients unusable for our requirements.

One problem we faced was the high memory consumption of the libafs
module. This is related to the number of cache files OpenAFS creates
for a given size of the cache device. In my default installation, one
cache file was created per 32 1k-blocks. For a ~250GB cache, this
amounted to ~7900000 cache files, tying up ~8GB of RAM. According to
fs getcacheparms, it turned out that with our typical data all cache
blocks were already full when only ~10% of the cache files were in
use. I therefore decided to set both the -blocks and -files parameters
such that I get one cache file per 384 1k-blocks. With this setting,
block and cache file usage correspond very well.
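
For illustration (the numbers here are only a rough sketch, not my
exact configuration): with a ~250GB cache partition, one cache file
per 384 1k-blocks works out to something like

  /usr/bin/afsd -fakestat -blocks 244000000 -files 635000

i.e. -files set to roughly -blocks divided by 384. Block and cache
file usage can then be watched with

  fs getcacheparms

while the client is in use.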

Another problem was caused by setting the -chunksize parameter. In
many of our use cases, we only write portions of a large file at a
time. After increasing the -chunksize value, performance of fseek and
write operations dropped from ~15MB/s to ~70kB/s for partial access to
large files. Here, the best solution I found was to leave the default
-chunksize setting unmodified.
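
(For reference: as far as I understand, -chunksize is interpreted as a
power of two, so the

  /usr/bin/afsd -chunksize 30 ...

line quoted below asks for 2^30-byte, i.e. 1GB, chunks, which is
presumably why partial writes into large files suffered; my working
configuration simply omits the flag.)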

Although I'm fine now with my settings, I'd be interested in some
explanations for my findings.

Best regards,

Volkmar

Quoting Volkmar Glauche <volkmar.glauche@uniklinik-freiburg.de>:

> Dear all,
>
> we are running OpenAFS clients with up to 1TB space available for cache
> partitions. These are multiuser cluster nodes where cache space should
> be as large as possible. Until now, a limit of ~200GB was
> (inadvertently) imposed on the cache size by restrictive afsd options.
> I have now removed most of these options; my current command line for
> afsd looks like
>
> /usr/bin/afsd -chunksize 30 -fakestat -blocks <SPACE_ON_DEVICE>
>
> Now it takes a very long time (~hours) to start up afsd, and in some
> cases the AFS cache scan even fails with a kernel panic. Is there any
> way to make efficient use of ~1TB cache partitions?
>
> Volkmar
>
> --
> Freiburg Brain Imaging
> http://fbi.uniklinik-freiburg.de/
> Tel. +761 270-54783
> Fax. +761 270-54819
>
>
> _______________________________________________
> OpenAFS-info mailing list
> OpenAFS-info@openafs.org
> https://lists.openafs.org/mailman/listinfo/openafs-info



--
Freiburg Brain Imaging
http://fbi.uniklinik-freiburg.de/
Tel. +761 270-54110
Fax. +761 270-53100