[OpenAFS-devel] Large-Cache Initialization (Proposal)

Patrick J. LoPresti patl@curl.com
06 Jul 2001 13:26:26 -0400


Derek Atkins <warlord@MIT.EDU> writes:

> In my experience, _MOST_ file systems aren't up for the task.  Show
> me _ANY_ file system that works well creating 3 million files in one
> directory.

SGI's XFS.  All of its data structures are designed for extreme
scalability.  The directory entries are kept in a B-tree (or
somesuch), making lookups and inserts O(log n) rather than O(n),
which has essentially the same effect on performance as your proposed
user-space solution.

Not that I disagree with you...  Most Linux systems are still ext2,
most future ones are likely to be ext3, and other Unices' file
systems typically have the same O(n) performance for directory
operations, so your user-space optimization makes sense.
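
For the archives, here is roughly how I picture that optimization
working.  This is just a sketch; the D<bucket>/V<index> names and the
256-way split are mine, not necessarily what the proposal specifies:

    /* A minimal sketch of a two-level fan-out for cache files,
     * assuming a layout like <cachedir>/D<bucket>/V<index>.  The
     * names and the 256-way split are illustrative only. */
    #include <stdio.h>

    #define FANOUT 256   /* subdirectories; each holds ~n/FANOUT files */

    /* Map a cache-file index to a path.  With 3 million files, each
     * directory holds ~11719 entries instead of 3 million, so even
     * an O(n) linear directory scan stays tolerable. */
    static void cache_path(char *buf, size_t len, const char *cachedir,
                           unsigned index)
    {
        snprintf(buf, len, "%s/D%u/V%u", cachedir, index % FANOUT, index);
    }

    int main(void)
    {
        char path[256];
        cache_path(path, sizeof(path), "/usr/vice/cache", 3000000);
        printf("%s\n", path);   /* prints /usr/vice/cache/D192/V3000000 */
        return 0;
    }

The nice property is that the fan-out is cheap to compute and needs no
per-directory state, so it works the same on ext2 as on XFS.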

While I have your attention, and almost on-topic :-), would you mind
answering a couple of quick questions for me?  I am considering
rolling out AFS here, but I have a couple of practical issues yet to
determine.

  1) What Linux kernel do you recommend for file servers?  I am
     interested in stability first, performance second.  The options
     are 2.2.x versus 2.4.x, and SMP versus non-SMP.  (I know, perhaps
     Solaris would be a better choice.  But I need to work within our
     existing infrastructure.)

  2) If I want to create a 1G or 2G cache on the clients, is there any
     reasonable way to do it right now (e.g., increasing the chunk
     size or whatever it's called)?  If not, what is the largest
     reasonable cache size for Linux systems?
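
To make question 2 concrete, here is the arithmetic I am doing in my
head.  I am assuming one chunk per cache file and that afsd's
-chunksize argument is a log2 exponent (so 16 means 64 KB chunks);
someone please correct me if either assumption is wrong:

    /* Back-of-envelope for question 2: how many cache files a given
     * cache size implies.  Assumes one chunk per cache file and that
     * -chunksize takes a log2 exponent (16 -> 64 KB). */
    #include <stdio.h>

    int main(void)
    {
        long cache_kb = 2L * 1024 * 1024;    /* 2G cache, in KB */
        int  exps[] = { 16, 18, 20 };        /* 64 KB, 256 KB, 1 MB */
        int  i;

        for (i = 0; i < 3; i++) {
            long chunk_kb = 1L << (exps[i] - 10);
            printf("-chunksize %2d (%4ld KB chunks): ~%ld cache files\n",
                   exps[i], chunk_kb, cache_kb / chunk_kb);
        }
        return 0;
    }

If that arithmetic holds, going from 64 KB to 1 MB chunks takes a 2G
cache from ~32768 cache files down to ~2048, which seems a lot
friendlier to ext2.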

 - Pat