Linux client performance - was Re: [OpenAFS] OpenAFS 1.6.5 Linux client cache sizes
Rich Sudlow
rich@nd.edu
Fri, 17 Apr 2015 10:11:42 -0400
On 04/17/2015 10:02 AM, Volkmar Glauche wrote:
> Dear all,
>
> for the sake of completeness, I would like to revive this thread and share some
> of my findings. As I wrote below, I intended to make better use of large disks
> for AFS cache partitions, but my initial attempts actually made the AFS clients
> unusable for our requirements.
> One problem we faced was high memory consumption by the libafs module. This is
> related to the number of cache files OpenAFS creates for a given size of the
> cache device. In my default installation, one cache file was created per 32
> 1k-blocks. For a ~250GB cache, this amounted to ~7,900,000 cache files, tying up
> ~8GB of RAM. According to fs getcacheparms, with our typical data all cache
> blocks were already filled when only ~10% of the cache files were in use.
> Therefore I now set both the -blocks and -files parameters such that I get one
> cache file per 384 1k-blocks. With this setting, block usage and cache file
> usage correspond very well.
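>
> To put rough, illustrative numbers on that (rounded, not exact values):
>
>    ~250GB cache                  ~  250,000,000 1k-blocks
>    default, 1 file / 32 blocks:     250,000,000 / 32  ~ 7,800,000 cache files
>    now,     1 file / 384 blocks:    250,000,000 / 384 ~   650,000 cache files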
>
> Another problem was caused by setting the -chunksize parameter. In many of our
> use cases, we only write portions of a large file at a time. After increasing
> the -chunksize value, performance of fseek and write operations dropped from
> ~15MB/s to ~70kB/s for partial access to large files. The best solution I found
> was to leave -chunksize at its default.
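>
> For reference, the -chunksize argument is interpreted as log2 of the chunk
> size in bytes, so the -chunksize 30 from my afsd line quoted below asks for
> 2^30 bytes = 1GiB chunks:
>
>    -chunksize 18  =  2^18 bytes = 256KiB per chunk
>    -chunksize 20  =  2^20 bytes =   1MiB per chunk
>    -chunksize 30  =  2^30 bytes =   1GiB per chunk
>
> which presumably matters when only small portions of a chunk are touched.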
>
> Although I'm fine now with my settings, I'd be interested in some explanations
> for my findings.
>
> Best regards,
>
> Volkmar
Thanks Volkmar
This is very informative! I've been looking at increasing our
AFS cache sizes (not as large as yours), so this is great info to have.
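
If I read your numbers right, a rough sketch of an afsd line for a large
cache (illustrative values only - the block and file counts depend on the
actual partition, and the ~10% headroom is just my guess) might be:

   # ~1TB cache partition, one cache file per 384 1k-blocks,
   # -chunksize left at its default
   /usr/bin/afsd -fakestat -blocks 900000000 -files 2343750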
Rich
>
> Quoting Volkmar Glauche <volkmar.glauche@uniklinik-freiburg.de>:
>
>> Dear all,
>>
>> we are running OpenAFS clients with up to 1TB space available for cache
>> partitions. These are multiuser cluster nodes where cache space should
>> be as large as possible. Until now, the cache size was (inadvertently)
>> limited to ~200GB by restrictive afsd options.
>> I now removed most of these options, my current command line for afsd
>> looks like
>>
>> /usr/bin/afsd -chunksize 30 -fakestat -blocks <SPACE_ON_DEVICE>
>>
>> Now it takes a very long time (~hours) to start up afsd, and in some cases
>> the AFS cache scan even fails with a kernel panic. Is there any way to
>> make efficient use of ~1TB cache partitions?
>>
>> Volkmar
>>
>> --
>> Freiburg Brain Imaging
>> http://fbi.uniklinik-freiburg.de/
>> Tel. +761 270-54783
>> Fax. +761 270-54819
>>
>>
>> _______________________________________________
>> OpenAFS-info mailing list
>> OpenAFS-info@openafs.org
>> https://lists.openafs.org/mailman/listinfo/openafs-info
>
>
>
--
Rich Sudlow
University of Notre Dame
Center for Research Computing - Union Station
506 W. South St
South Bend, In 46601
(574) 631-7258 (office)
(574) 807-1046 (cell)