[OpenAFS] calculating memory

Steve Simmons scs@umich.edu
Mon, 31 Jan 2011 12:22:37 -0500


On Jan 28, 2011, at 3:24 PM, Gary Gatling wrote:

> I am in charge of several afs servers in our college. Right now there are 5 afs servers running on 5 SPARC based servers. We are ditching Solaris since it sucks so bad and are going to move to Linux VMs running inside of VMware.
>
> I was asked how much disk and memory we would need for the VMs. I was then criticized for suggesting that each VM needs 4GB just like the real servers. I did not suggest the 4GB in each server. That was decided by a guy who quit a few years ago.
>
> So....
>
> Is there some rule of thumb for how one calculates the amount of RAM an AFS file server should have? The 5 servers serve out about 2 TB of data, so each fileserver has about 500 GB of AFS volumes on it. Also, how would one go about calculating it when asked to justify the RAM?
>
> Thanks for any ideas anyone will have.

YMMV, depending on the activity level of your users, volume and file sizes, etc., etc. For our system, 4GB is barely acceptable once load ramps up. We run 8GB, and as of right now a typical memory footprint is:

afs-a-52-root# more /proc/meminfo
MemTotal:        8236068 kB
MemFree:           39188 kB
Buffers:          547196 kB
Cached:          7218020 kB
SwapCached:          748 kB
Active:          2031164 kB
Inactive:        5900212 kB
Active(anon):     145508 kB
Inactive(anon):    20752 kB
Active(file):    1885656 kB
Inactive(file):  5879460 kB
. . .

But it's lunchtime. Things will ramp up this afternoon.
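Note that most of that footprint is the Linux page cache rather than the fileserver process itself. If you want to see the split on your own boxes, something like the following should do it (this assumes a procps ps and a namei fileserver whose process is named "fileserver"; adjust for your installation):

# resident and virtual size of the fileserver process, in kB
ps -o rss,vsz,comm -C fileserver

# memory the kernel is currently holding as file cache
grep -E '^(Buffers|Cached)' /proc/meminfo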


I also strongly second the comments made about VMs vs. large quantities of IOPs. We're debating virtualization of servers here (umich) and the jury is still out. Thomas Kula has more to say on that topic in his note, but in general I have strong doubts about any file server that runs in anything except a one-server-per-physical-host model. I can envision a few benefits to having a vhost running AFS on top of virtualized storage; for example, one could migrate an AFS server to a different physical host and then do physical maintenance on the vacated box. On the other hand, in a non-virtual environment one could also vos move the volumes to another unit and take it down for physical maintenance. The latter has the side benefit of letting you occasionally reboot the server and afs service.
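For what it's worth, that evacuate-and-maintain dance is just a loop of vos moves followed by shutting the fileserver instance down cleanly. A rough sketch, with made-up volume, server, and partition names:

# move each volume off the server being vacated
vos move -id user.example -fromserver afs-a-52 -frompartition /vicepa \
         -toserver afs-a-53 -topartition /vicepa -localauth

# once empty, stop the fileserver instance before taking the box down
bos shutdown afs-a-52 fs -wait -localauth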

Of course, if you update your afs binaries more than, say, once a year, you've gotta be restarting afs anyway. Most of our machines haven't been updated since June 8, 2010, and I'm getting antsy. But that's another topic.
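And the restart after an upgrade is a one-liner; a sketch with a made-up host name, where -bosserver bounces the bosserver itself along with everything it manages:

bos restart afs-a-52 -bosserver -localauth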