[OpenAFS] File server memory requirements

Russ Allbery rra@stanford.edu
Sun, 19 Jun 2011 13:54:14 -0700


Jaap Winius <jwinius@umrk.nl> writes:
> Quoting Chaz Chandler <clc31@inbox.com>:

>> A memory cache is a client-side thing.  So although technically you
>> could have one on a server (if you also had the AFS cache manager, aka
>> client, on there), it would probably not do what you are thinking.

> Okay, so a memory cache, like a disk cache, is just client stuff. But,
> this still leaves me wondering how to tell when an OpenAFS file server
> is happy with the memory that it has and when it is better to give it
> more.

Basically, if you run the file server with the default parameters, or even
just -L, it will be happy with a fairly small amount of memory.  It will
also be ridiculously resource-constrained for any typical large workload.
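
A quick way to see what a given file server is actually running with is
to ask the bosserver for the fs instance's command lines (a sketch; the
server name here is just a placeholder):

bos status fs1.example.com -instance fs -long

The -long output includes the full fileserver, volserver, and salvager
command lines, so you can tell at a glance which tuning flags, if any,
are already in place.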

If you have a typical site, you probably want to crank a bunch of the
parameters way up.  We're currently using:

/usr/lib/openafs/fileserver -L -l 1000 -s 1000 -vc 1000 -cb 200000 \
    -rxpck 800 -udpsize 1048576 -busyat 200 -vattachpar 4

and a bunch of those are probably still too small.  Once you increase
those parameters, the file server will indeed use a fair bit more
memory, but it uses that memory effectively: it no longer has to drop
callbacks because it has run out of space, and it can cache more
information in memory.
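
If you want a change like that to survive restarts, one way (just a
sketch, assuming Debian-style paths and a standard fs bnode; adjust the
server name and paths for your installation) is to delete and recreate
the fs instance with the new command line, or equivalently edit
BosConfig with the bosserver shut down:

bos create fs1.example.com fs fs \
    -cmd "/usr/lib/openafs/fileserver -L -l 1000 -s 1000 -vc 1000 -cb 200000 -rxpck 800 -udpsize 1048576 -busyat 200 -vattachpar 4" \
    -cmd "/usr/lib/openafs/volserver" \
    -cmd "/usr/lib/openafs/salvager" \
    -localauth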

Our file servers have a virtual size of 1.2GB but an RSS of only about
45MB, which I suspect means we're not taking advantage of the memory on
the system anywhere near as much as we should.
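
If you want to check the same numbers on your own servers, something
like this works on Linux (the server name and the standard fileserver
port 7000 are assumptions):

# virtual size and resident set size of the fileserver process
ps -C fileserver -o vsz,rss,args

# free rx packets and calls waiting for a thread, which hint at
# whether -rxpck and -busyat are set high enough
rxdebug fs1.example.com 7000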

-- 
Russ Allbery (rra@stanford.edu)             <http://www.eyrie.org/~eagle/>