[OpenAFS] openafs on Fedora 12?

Chas Williams (CONTRACTOR) chas@cmf.nrl.navy.mil
Fri, 11 Dec 2009 07:56:29 -0500


In message <4B220A1A.8080001@pclella.cern.ch>,Rainer Toebbicke writes:
>With the current "dir" package this means a chunk size of 2MB. Assuming the 
>unit of transfer is still "chunksize" and you do not intentionally fill chunks
>partially you'd give up a valuable tuning parameter.

hmm... well it is a future problem.  i would actually suggest 1MB chunks
for a disk cache anyway.  the directory problem is interesting.  perhaps
afs should be able to handle partial dir chunks.  i would have to look
into this.

>We typically create a 10GiB AFS cache with ~100000 cache files, but a 
>chunksize of 256 kB. What's wrong with that? The cache occupancy is measured 
>in kiB anyway and the cache manager figures out whom to recycle. As bigger 
>chunks have an increased probability of being only partially filled (because, 
>after all, we also have "small" files), this all works out without the user 
>seeing any adverse effect. With your 2 MB chunk size suggest above such a 
>cache would have to be... 200 GB.

you are simply assuming that people wont read multi-gigabyte files.  this
is becoming more and more common.  the free space on the partition needs
to be more than the worst possible case.

>> i am not convinced that the well placed truncate calls have any meaning.
>> the filesystems in question tend to just do what they want.
>
>They do! They free the blocks used up by the cache file, just in case the 
>chunk you're writing is smaller. They also make sure that while re-writing 
>non-block/page-aligned parts data do not have to be read in just to be thrown 
>away on the next write.

i recall a problem with ext3's lazy truncate.  i am not sure this was
every completely resolved.  ergo you need to make sure you have a little
free space besides the maximum possible subscription of the cache filesystem.

>So if you want to put the cache into one big file you'll at least have to 
>think about space allocation and fragmentation. You'd also better ensure 
>page-aligned writes.

there are issues to solve, but i still think a single file would perform
better (and be easier to support) than hundreds of files that you need
to open and close.

as far as writes, they should be atleast cache aligned.  ideally page
aligned, but this issue should apply to any of the existing (or future)
caches.