[OpenAFS] Performance issue with "many" volumes in a single /vicep?

Tom Keiser tkeiser@sinenomine.net
Thu, 25 Mar 2010 02:03:57 -0400

On Wed, Mar 24, 2010 at 4:44 PM, Steve Simmons <scs@umich.edu> wrote:
> On Mar 24, 2010, at 4:38 PM, Russ Allbery wrote:
>> Steve Simmons <scs@umich.edu> writes:
>>> Our estimate too. But before drilling down, it seemed worth checking if
>>> anyone else has a similar server - ext3 with 14,000 or more volumes in a
>>> single vice partition - and has seen a difference. Note, tho, that it's
>>> not #inodes or total disk usage in the partition. The servers that
>>> exhibited the problem had a large number of mostly empty volumes.
>> That's a *lot* of volumes from our perspective.  The biggest partition
>> we've got has about 7000 volumes on it.  It must be really fun when you
>> have to restart that file server and reattach volumes.
> Nightmare is a better word. Fortunately very recent 1.4 releases have
> gotten a lot faster on that front. It's also another reason why we're
> desperately trying to carve out time so we can test dynamic attach, but
> that's grist for another thread.
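For scale comparisons like the ones above, a rough way to count volumes on a namei vice partition is to count the V<id>.vol header files at the top of the partition. The sketch below is illustrative only: it uses a temporary directory as a stand-in for a real /vicep* partition, and the volume IDs are made up.

```shell
# Each namei volume keeps a V<id>.vol header file at the top of the
# vice partition, so counting those files approximates the volume
# count. A temp dir simulates the partition so this runs anywhere.
part=$(mktemp -d)                      # stand-in for e.g. /vicepa
for id in 536870912 536870915 536870918; do
    touch "$part/V$id.vol"             # fake volume headers
done
count=$(ls "$part"/V*.vol | wc -l)     # on a real server: ls /vicepa/V*.vol | wc -l
echo "$count volumes"
rm -r "$part"
```

On a live server, `vos listvol <server> <partition>` is the authoritative way to enumerate volumes; counting header files is just a quick local approximation.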

If your group (or anyone else on this list, for that matter) can find
the time, please please test DAFS.  Any feedback whatsoever would be
helpful and deeply appreciated.  In the unlikely event that problems
ensue, then by all means open bugs, start a discussion on -devel,
contact me or Deason, etc.  Getting a 1.6 release out the door is
a high priority for all of us, and to some extent that is going to be
predicated on DAFS success stories.

As it stands, we believe the DAFS architecture shipping in 1.5.x will
provide a significant speedup for all moderate-to-large namei
fileserver deployments.  However, the true proof will be in the
pudding, and this is where we need the help of the community.  If
there are unforeseen corner cases where DAFS causes a regression, we
need to know about them ASAP.
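For anyone willing to test, converting a traditional fileserver to DAFS amounts to replacing the fs bnode with the dafs bnode, which runs the demand-attach variants of the server binaries. The sketch below assumes a standard Transarc-style install layout; treat it as an outline and check it against the documentation shipped with your 1.5.x release before touching a production server.

```shell
# Hedged outline: swap the traditional fs bnode for the DAFS bnode.
# <server> is a placeholder; binary paths assume /usr/afs/bin.
bos stop <server> fs -localauth
bos delete <server> fs -localauth
bos create <server> dafs dafs -localauth \
    -cmd /usr/afs/bin/dafileserver \
    -cmd /usr/afs/bin/davolserver \
    -cmd /usr/afs/bin/salvageserver \
    -cmd /usr/afs/bin/dasalvager
```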