[OpenAFS] Re: Client Cache Question
Andrew Deason
adeason@sinenomine.net
Mon, 24 Jun 2013 15:39:09 -0500
On Fri, 21 Jun 2013 16:26:22 -0700
Timothy Balcer <timothy@telmate.com> wrote:
> This seems counterintuitive... the 100 or so files do not go over the
> 500,000 block cache size. They are fairly small (10's to 100's of
> kilobytes). Why would increasing cache size impact performance
> negatively in such a case?
When you say 500,000 or 50,000, etc., do you mean KiB (1K blocks)? So, a
500MiB vs 50MiB cache? And roughly how much data in total is being
pushed to AFS, compared to the cache size?
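For reference, the client cache size is given in 1-kilobyte blocks, so
something like the following will show and set what the client is
actually using (the 50000 below is just an example value):

    # show the current cache usage and size, in 1K blocks
    $ fs getcacheparms

    # resize the cache to 50,000 1K blocks (~50MiB) until restart
    $ fs setcachesize 50000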
Anyway, one _guess_ as to why a larger cache may be slower here is
that you're invalidating/overwriting a larger amount of data in the
cache. That is, with the 50M cache you're writing and overwriting <=50M
of data on disk; with the 500M cache you're writing and overwriting >50M
of data, possibly scattered all over the disk as we kick different
things out of the cache. If we're limited to overwriting 50M of disk
data, the disk i/o may perform better, since the i/o is able to stay
inside various caches at lower levels (OS page cache, disk or controller
caches, etc.).
If you're not actually using the cached data, the cache can easily be a
hindrance to performance, and a larger cache can make that worse.
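One way to sanity-check this while the copies run is to watch the i/o
on the cache partition, e.g. with iostat from sysstat:

    # report extended per-device i/o stats every 5 seconds; with the
    # larger cache, look for more writes spread across the device
    $ iostat -x 5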
That's just a guess, but I think it's one way the larger cache could
appear to perform more slowly. If you want to gather more information,
you could run fstrace while the copies are running and provide the
output. And as Jeffrey said, details of the platforms and versions in
question would be useful, though as I recall you are running Linux. The
filesystems in use would be good to know, too.
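For the fstrace part, something along these lines (run as root on the
client) should capture a trace; the option spellings here are from
memory, so check 'fstrace help' against your version:

    # clear and activate the cache manager trace set
    $ fstrace clear cm
    $ fstrace setset cm -active

    # ...run the copies, then dump the accumulated trace to a file
    $ fstrace dump cm > fstrace.out

    # turn tracing back off when done
    $ fstrace setset cm -inactive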
--
Andrew Deason
adeason@sinenomine.net