[OpenAFS-devel] PATCH: limit afsd auto-tuning to 500000 files

Troy Benjegerdes hozer@hozed.org
Wed, 24 Aug 2005 11:28:10 -0500


On Wed, Aug 24, 2005 at 10:36:40AM -0400, chas williams - CONTRACTOR wrote:
> In message <20050824034146.GA1685@kalmia.hozed.org>,Troy Benjegerdes writes:
> >>fs getcacheparms
> >AFS using    64% of cache blocks (12751138 of 20000000 1k blocks)
> >              2% of the cache files (8242 of 500000 files)	      
> 
> this is really cool!  a step in the right direction.  can you also
> compute/print out the average size of the cache files?  a short histogram
> based on chunksizes (4k, 8k, 16k, ...CURRENT_CHUNKSIZE) would be helpful
> as well.  this should help people decide which chunksize is "right"
> for them.  this would tend to make the histogram small enough to send
> across the kernel/user space boundary.
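
Something like this is roughly what I'd picture for the bucketing. It's
just a sketch with made-up names (MIN_BUCKET_LOG2, CHUNKSIZE_LOG2, and the
256k chunk size are all invented for illustration, nothing like this is in
the tree), but it shows the shape of it:

#include <stdio.h>

/* Sketch only: bucket each cache file's size into power-of-two bins
 * from 4k up to the configured chunk size. */
#define MIN_BUCKET_LOG2 12                          /* 4k bucket    */
#define CHUNKSIZE_LOG2  18                          /* e.g. 256k    */
#define NBUCKETS (CHUNKSIZE_LOG2 - MIN_BUCKET_LOG2 + 1)

static void add_to_histogram(long hist[NBUCKETS], long bytes)
{
    int b = 0;
    long limit = 1L << MIN_BUCKET_LOG2;

    /* find the smallest power-of-two bucket that holds this file;
     * anything bigger than the chunk size lands in the top bucket */
    while (bytes > limit && b < NBUCKETS - 1) {
        limit <<= 1;
        b++;
    }
    hist[b]++;
}

static void print_histogram(const long hist[NBUCKETS])
{
    int b;

    for (b = 0; b < NBUCKETS; b++)
        printf("<= %ldk: %ld files\n",
               1L << (MIN_BUCKET_LOG2 + b - 10), hist[b]);
}

int main(void)
{
    long hist[NBUCKETS] = {0};

    /* pretend we saw three cache files of various sizes */
    add_to_histogram(hist, 3000);
    add_to_histogram(hist, 70000);
    add_to_histogram(hist, 300000);
    print_histogram(hist);
    return 0;
}

A fixed array of bucket counts like that stays the same size no matter how
many cache files there are, which is the part that matters for getting it
across the kernel/user space boundary.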

Hrrm, I seem to have some issues: the numbers below don't agree. Am I
looking at the wrong flags?

avg.pl /var/cache/openafs | head
72068 files
1824410029 total bytes
25315 avg bytes

troy@talia:/afs/hozed.org$ fs getcache
AFS using    10% of cache blocks (2061671 of 20000000 1k blocks)
              2% of the cache files (9991 of 500000 files)
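
(For reference, avg.pl just walks the cache directory and prints the file
count, total bytes, and average size. Roughly the equivalent of this C
sketch; the names are made up and it's not the actual script:)

#define _XOPEN_SOURCE 500
#include <ftw.h>
#include <stdio.h>
#include <sys/stat.h>

/* Walk the cache directory, count regular files, sum their sizes. */
static long long nfiles, nbytes;

static int tally(const char *path, const struct stat *sb,
                 int type, struct FTW *ftwbuf)
{
    if (type == FTW_F) {        /* regular files, i.e. mostly V-files */
        nfiles++;
        nbytes += sb->st_size;
    }
    return 0;
}

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <cachedir>\n", argv[0]);
        return 1;
    }
    if (nftw(argv[1], tally, 64, FTW_PHYS) != 0) {
        perror("nftw");
        return 1;
    }
    printf("%lld files\n%lld total bytes\n%lld avg bytes\n",
           nfiles, nbytes, nfiles ? nbytes / nfiles : 0);
    return 0;
}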

Also, I'm a little concerned that an unprivileged user call can trigger
that 'for (i = 0; i < cacheFiles; i++)' loop in kernel code, although I
don't think it's that big a deal since it holds no locks.
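
Roughly the kind of scan I mean (cache_slot and its field are invented for
illustration here, not the real cache manager structures):

/* Sketch of the per-call scan triggered by 'fs getcacheparms'. */
struct cache_slot {
    int in_use;             /* nonzero if this cache file holds data */
};

static long count_used_files(const struct cache_slot *slots, long cacheFiles)
{
    long i, used = 0;

    /* O(cacheFiles) work on every call: with the 500000-file cap this
     * walks half a million entries each time, but it takes no locks. */
    for (i = 0; i < cacheFiles; i++)
        if (slots[i].in_use)
            used++;
    return used;
}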

How much of an issue is backward compatibility if we change the output?
Most users are going to run 'fs getcacheparms' with no arguments, so
would it be okay to add a '-old' option to get the old behavior for
people with scripts that depend on the old output?