[OpenAFS] Re: Max number of files in a volume

Tom Keiser <tkeiser@sinenomine.net>
Mon, 26 Apr 2010 16:33:26 -0400


On Mon, Apr 26, 2010 at 10:58 AM, Rich Sudlow <rich@nd.edu> wrote:
> Andrew Deason wrote:
>>
>> On Mon, 26 Apr 2010 10:14:01 -0400
>> Rich Sudlow <rich@nd.edu> wrote:
>>
>>> I'm having problems with a volume going off-line and not
>>> coming back with Salvage - what is the maximum number
>>> of files per volume? I believe the volume in question
>>> has over 20 million.
>
> Looks like there were actually 30 million files.
>

Hi Rich,

On most platforms we build the salvager as a 32-bit binary (excluding
certain 64-bit Linux platforms where the platform maintainers decided
to simplify things by making everything a 64-bit binary).  One
operation the salvager performs is building an in-memory index of
critical details for every vnode in the volume [see SalvageIndex() in
src/vol/vol-salvage.c].  Each entry in this array requires 56 bytes in
a 32-bit process, which comes out to roughly 1602MB of virtual memory
for 30 million file vnodes.  Likewise, we require 56 bytes per
directory vnode; since an AFS directory can hold at most roughly 64K
entries, 30 million files implies a minimum of ~462 directories, and
thus only another ~26KB of heap.  My suspicion is that your salvager
is core dumping because the heap and the stack have grown into each
other.  Depending on the hardware, it may be possible to build a
custom 64-bit salvager to work around this issue.
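
If you want to sanity-check those numbers, the arithmetic is simple
enough to do in a few lines of C (this is just an estimate built from
the 56-byte figure above; the ~64K-entries-per-directory cap is an
assumption I'm using for the directory count, and none of the real
vol-salvage.c structures are reproduced here):

#include <stdio.h>

int main(void)
{
    /* per-entry cost of the salvager's in-memory vnode index on a
     * 32-bit build, per the 56-byte figure above */
    const unsigned long long per_vnode_bytes = 56ULL;
    const unsigned long long n_files = 30000000ULL;      /* file vnodes */
    const unsigned long long entries_per_dir = 65000ULL; /* ~64K cap, assumed */

    /* minimum number of directories needed to hold that many files */
    unsigned long long n_dirs =
        (n_files + entries_per_dir - 1) / entries_per_dir;

    unsigned long long file_bytes = n_files * per_vnode_bytes;
    unsigned long long dir_bytes  = n_dirs  * per_vnode_bytes;

    printf("file vnode index: ~%llu MB\n", file_bytes / (1024 * 1024));
    printf("dir vnode index:  ~%llu KB (%llu directories minimum)\n",
           dir_bytes / 1024, n_dirs);
    return 0;
}

That prints roughly 1602 MB for the file vnode index alone, which is a
very large chunk of the usable address space in a 32-bit process once
the stack, mapped libraries, and the rest of the heap are added in.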

The first step here is to figure out whether your salvager binary is
32-bit or 64-bit; the output of 'file /usr/afs/bin/salvager' should be
sufficient.
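
For reference, the output will look something like this on a Linux
host (the exact wording varies with the platform and file(1) version;
the important token is "32-bit" versus "64-bit"):

  $ file /usr/afs/bin/salvager
  /usr/afs/bin/salvager: ELF 32-bit LSB executable, Intel 80386, ...

whereas a 64-bit build would report something along the lines of:

  /usr/afs/bin/salvager: ELF 64-bit LSB executable, x86-64, ...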

Cheers,

-Tom