[OpenAFS] calculating memory

Simon Wilkinson sxw@inf.ed.ac.uk
Fri, 28 Jan 2011 23:24:01 +0000


On 28 Jan 2011, at 20:24, Gary Gatling wrote:

> I am in charge of several afs servers in our college. Right now  
> there are 5 afs servers running on 5 SPARC based servers. We are  
> ditching Solaris since it sucks so bad and are going to move to  
> Linux VM's running inside of VMware.

Firstly, I would be cautious about running I/O-intensive services like  
fileservers within a VM. You'll almost certainly get better  
performance from bare metal, especially if you end up sharing the same  
physical hardware between multiple fileservers.

> I was asked how much disk and memory we would need for the VMs. I  
> was then criticized for suggesting that each VM needs 4GB just like  
> the real servers. I did not suggest the 4GB in each server. That was  
> decided by a guy who quit a few years ago.

So, there are two separate considerations here. The first is making  
sure that the processes on the machine have sufficient memory. The key  
thing here is tuning your fileserver to suit its workload, and making  
sure that you have enough callbacks to handle your number of clients.  
You should be able to look at your existing servers, see how much  
memory the fileserver processes on them are consuming, and use that as  
a rough guide. You definitely want to be in a position where there is  
no way your fileservers end up swapping.
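
As a rough sketch of what that check and tuning might look like (the  
fileserver switches -p, -busyat, -cb and -vc are real options, but the  
numbers below are purely illustrative, and <server> is a placeholder  
for your machine name):

```shell
# On an existing server: how much memory are the fileserver processes
# actually using? (RSS and VSZ reported in kB by procps ps.)
ps -o rss=,vsz=,comm= -C fileserver

# Illustrative tuning: after editing the fileserver command line in
# BosConfig to raise the callback table, e.g.
#   /usr/afs/bin/fileserver -p 23 -busyat 600 -cb 1000000 -vc 1200
# restart the fs instance. Each callback structure is small, so a
# large -cb table costs relatively little memory.
bos restart <server> fs -localauth
```

The per-process RSS figures, summed and padded for headroom, give you  
a floor for how much memory each VM needs before any page cache.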

The second consideration is page cache. Linux uses all of the "spare"  
memory on a machine as a backing cache for the disk. Depending on the  
working set of your fileservers, this can have significant performance  
benefits. Without doing any analysis, it's hard to say how much memory  
is necessary here, but in general, the more memory you can have as  
cache, the better.
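
To see how much of a Linux box's memory is currently being used as  
page cache, you can pull the standard fields out of /proc/meminfo  
(numbers will obviously vary per machine; this is just an  
illustration):

```shell
# Print total memory and how much of it Linux is using as
# buffers/page cache, converted from kB to MiB.
awk '/^(MemTotal|Buffers|Cached):/ {printf "%-9s %6d MiB\n", $1, $2/1024}' /proc/meminfo
```

If "Cached" is consistently large and close to your working set, extra  
RAM is paying off; if it stays small, the machine may be sized about  
right already.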

I don't think that 4G of memory for each of your 5 fileservers sounds  
unreasonable.

Hope that helps,

Simon.