[OpenAFS] Maximum volume size (again)

Jeffrey Hutzelman jhutz@cmu.edu
Wed, 17 May 2006 13:27:41 -0400


On Wednesday, May 17, 2006 12:05:25 PM -0400 Steve Simmons <scs@umich.edu> 
wrote:

>
> On May 16, 2006, at 2:15 PM, Jeffrey Hutzelman wrote:
>
>> On Tuesday, May 16, 2006 02:06:22 PM -0400 Derrick J Brashear
>> <shadow@dementia.org> wrote:
>>
>>> On Tue, 16 May 2006, John W. Sopko Jr. wrote:
>>>
>>>> .... Is the recommendation still
>>>> 8GB for OpenAFS 1.4.1? Are there any notes on maximum volume
>>>> and file sizes? Thanks for the info.
>>>
>>> You'll be sad if you ever need to move it unless you have fast
>>> stuff all
>>> around.
>>
>> You'll be sad because big volumes take a long time to move.
>> You'll be more sad if the volume consists of a very large number of
>> small files, rather than a few large files.  If it's going to be
>> lots of little files, you should encourage the user to structure
>> his data so that multiple volumes can be used.
>
> Has anyone done any comparisons to determine if the time to move a  single
> large volume is significantly different than the time to move the
> equivalent files in smaller volumes? I'm doing some testing, and may
> give this a shot.
> (And, before anybody jumps in to point out the advantages of being able
> to deal with things that come in smaller chunks: yes, I know that. I
> just want to know if you pay more than an additive penalty for the
> large volume vs the volume set.)

I'm not aware of any actual experimentation that's been done in that area, 
but I'm going to guess that it doesn't make a lot of difference.  The bulk 
of the work in cloning, dumping, and restoring volumes is per-vnode, and 
the bulk of the actual data in a volume dump consists of a per-vnode 
component plus, of course, the actual data dumped.  The amount of constant 
work and data is relatively small.
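The reasoning above can be sketched as a toy cost model. This is purely illustrative: the constants below are made-up assumptions, not measured OpenAFS figures, and `move_time` is a hypothetical helper, not anything from the AFS codebase. It just shows why splitting a volume should cost only the extra per-volume constant overhead, since the per-vnode and data-transfer terms stay the same.

```python
# Toy model of volume-move time: constant overhead + per-vnode work +
# data transfer. All three constants are assumptions for illustration.

CONSTANT_OVERHEAD_S = 2.0    # per-volume setup/teardown (assumed)
PER_VNODE_S = 0.001          # per-file (vnode) processing cost (assumed)
THROUGHPUT_MB_S = 50.0       # effective dump/restore bandwidth (assumed)

def move_time(num_vnodes, data_mb):
    """Estimated seconds to move one volume under this toy model."""
    return (CONSTANT_OVERHEAD_S
            + num_vnodes * PER_VNODE_S
            + data_mb / THROUGHPUT_MB_S)

# One big volume: a million small files, 10 GB of data.
big = move_time(1_000_000, 10_240)

# The same files and data split evenly across 10 volumes.
split = sum(move_time(100_000, 1_024) for _ in range(10))

# The split pays only the 9 extra constant overheads; the per-vnode
# and data terms are identical, so the penalty is additive.
print(f"one volume: {big:.1f}s, ten volumes: {split:.1f}s, "
      f"difference: {split - big:.1f}s")
```

Under these (invented) numbers the per-vnode term dominates for a million small files, which matches the earlier point that many small files hurt more than a few large ones.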

-- Jeff