[OpenAFS-devel] PATCH: limit afsd auto-tuning to 500000 files

Jeffrey Hutzelman <jhutz@cmu.edu>
Tue, 23 Aug 2005 19:43:14 -0400


On Tuesday, August 23, 2005 05:43:16 PM -0500 Troy Benjegerdes 
<hozer@hozed.org> wrote:

> The following patch limits the auto-tuning code to an upper limit of
> 500K files. Is this a reasonable upper limit? Should it be
> smaller/larger? I have tested this with 20GB and 80GB caches. (The
> 80GB cache machine happens to have a 1M file upper limit.)
>
> At any rate, if the user really wants something larger, they should
> specify '-files' on the command line.


Ugh Ugh Ugh.

This seems extremely arbitrary.  Instead of imposing an arbitrary limit on 
the number of cache files, we should adjust the autotuning function, if 
that's necessary, so it doesn't produce absurd values.  Perhaps the 
67%-full assumption is incorrect.  Perhaps we were too conservative in 
bumping the average filesize assumption from 10K to 32K, and it should 
really be bigger.
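
For concreteness, here is one plausible reading of how those two 
assumptions combine, together with the clamp from Troy's patch.  This is 
an illustrative sketch only, not the code from afsd.c; the constant names 
are invented, and only the 32K, 67%, and 500K figures come from this 
thread:

/*
 * Sketch only -- NOT the autotuning code from afsd.c.  It assumes the
 * average cache file ends up holding about 67% of 32K of data, and
 * shows where the proposed 500000-file clamp would bite.
 */
#include <stdio.h>

#define AVG_FILE_SIZE_KB    32      /* assumed average file size (bumped from 10K) */
#define CHUNK_FULLNESS_PCT  67      /* assumed average chunk fullness */
#define PROPOSED_MAX_FILES  500000  /* clamp from the proposed patch */

/* cacheBlocks: cache size in 1K blocks, afsd's usual unit. */
static long long
autotune_files(long long cacheBlocks)
{
    /* If each cache file holds about 67% of 32K of useful data, this
     * many files are needed to use the whole cache: */
    long long files =
        cacheBlocks * 100 / (AVG_FILE_SIZE_KB * CHUNK_FULLNESS_PCT);

    if (files > PROPOSED_MAX_FILES)    /* the proposed clamp */
        files = PROPOSED_MAX_FILES;
    return files;
}

int
main(void)
{
    /* The two cache sizes Troy tested: */
    printf("20GB cache -> %lld files\n", autotune_files(20LL * 1024 * 1024));
    printf("80GB cache -> %lld files\n", autotune_files(80LL * 1024 * 1024));
    return 0;
}

Under this reading, both test caches come out near or well above a 
million files before the clamp ever applies, which illustrates why the 
assumptions, not a hard ceiling, are the thing to examine.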

We won't know unless we can collect and analyze some data.  Until we do 
that, repeatedly making changes to the autotuning algorithm isn't going to 
make things better; it's just going to make its behavior unpredictable.


So, let's hear from people who are actually using large caches (and no, "I 
just upped my cache to 80GB 10 minutes ago" is not useful)...

a. How large is your cache?
b. What's your chunk size?
c. How full is the average chunk (that is, what is the average size of
   V-files, excluding those whose size is zero)?
d. What's the average _file_ size in your working set?

We probably should be tuning the 32K number toward c, and the 67% number 
toward c/b.  I'd be very interested to see how either of these numbers 
varies with cache size, working set size, average file size, or "site 
size", whatever that means....

-- Jeff