[OpenAFS] cache use topping out at around 3GB?
Matthew Miller
mattdm@mattdm.org
Tue, 7 Jun 2005 01:42:56 -0400
I have a system which, for sad, sad reasons I cannot change, is on a 10 Mbit
half-duplex connection, and I want to regularly process about 6GB of data
from AFS through it. (About 4000 files.)
I made an 8GB cache partition, and am using the standard Linux init script
options of "-fakestat -stat 4000 -dcache 4000 -daemons 6 -volumes 256
-files 50000".
Here's the problem: the cache fills up nicely and everything seems fine
until it reaches slightly below 3,000,000K. Then, it bounces around the
2,9xx,xxx range, never exceeding the three million mark. For example:
AFS using 2968603 of the cache's available 8200000 1K byte blocks.
...
AFS using 2969391 of the cache's available 8200000 1K byte blocks.
...
AFS using 2970271 of the cache's available 8200000 1K byte blocks.
...
AFS using 2969728 of the cache's available 8200000 1K byte blocks.
...
AFS using 2970824 of the cache's available 8200000 1K byte blocks.
And since this ceiling is smaller than the dataset, the entire cache gets
flushed through on every run, which makes it useless.
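One arithmetic check that occurred to me (this is a guess, not something I've
confirmed in the source): if each cache file can hold at most one chunk, then
"-files 50000" with the old default 64 KiB disk-cache chunk size would cap
usage at about 3,200,000 1K blocks, suspiciously close to the ceiling I see.
The helper name and the 64 KiB default below are my assumptions, not anything
from afsd itself:

```python
# Sketch: could the ceiling be (-files) x (chunk size)?
# Assumes each AFS cache file (V-file) holds at most one chunk, and a
# 64 KiB chunk size (chunksize 16) -- both assumptions, check your afsd args.

def cache_ceiling_kb(n_cache_files: int, chunk_size_kb: int = 64) -> int:
    """Upper bound on cache usage in 1K blocks under those assumptions."""
    return n_cache_files * chunk_size_kb

print(cache_ceiling_kb(50000))  # 3200000 -- near the observed ~2,970,000
```

If that's the actual limit, bumping -files (or the chunk size) should let the
cache grow past 3GB, but I'd welcome confirmation from someone who knows the
cache manager internals.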
Odds are pretty good I'm missing something that should be obvious here, but
I can't figure out what. Your help is very much appreciated!
This system is Linux 2.6.x with OpenAFS 1.3.84. (I think the issue is there
with earlier 1.3.x too -- now that I look, I see a system with 1.3.80 with a
cache stuck at 2968345 of 8254000.) I have some FTP server systems running
Linux 2.4.x and OpenAFS 1.2.13, and they happily go along with "AFS using
7447189 of the cache's available 8254000 1K byte blocks" or so.
--
Matthew Miller mattdm@mattdm.org <http://www.mattdm.org/>
Boston University Linux ------> <http://linux.bu.edu/>
Current office temperature: 80 degrees Fahrenheit.