[OpenAFS] vice partition sizes, and large disks
Jeffrey Hutzelman
jhutz@cmu.edu
Wed, 20 Jun 2001 15:09:09 -0400 (EDT)
On Sat, 16 Jun 2001, Bill Zumach wrote:
> > In message <200106161507.PAA32863269@smtp6ve.mailsrvcs.net>, Bill Zumach writes:
> > >That's a good start. Linux 2.2.18 is about 7,000 files, so if 3 users
> > >are un-taring at the same time we're up to about 21,000 files. Where I'm
> > >going with this is that you also want to check what happens when the log fills.
> >
> > Just some more silly tests (I did a umount/mount between these tests, so
> > that should flush the log):
> >
> >   files      real    user     sys   (seconds)
> >    1000       0.2     0.0     0.1
> >    5000       1.8     0.0     0.8
> >   10000       3.8     0.0     1.5
> >   20000       8.3     0.1     3.1
> >   50000      25.8     0.2     8.0
> >  100000      56.5     0.5    17.5
> >
> > If you graph that, it looks pretty linear, so I don't think I am
> > filling up the log. The test partition is 10 GB, which implies (according
> > to the man page) a 10 MB logfile. Cool, eh?
> That looks good. I'd guess they're writing the transactions back frequently
> enough. Once the OS has a chance to properly order the blocks it writes to
> disk, one can get this sort of performance for creates.
>
> BTW, are you creating all these files in the same directory? I would not have
> thought the create times would start to be non-linear just because of the
> time taken to look up file names in the directory.
Of course they should. The lookup may involve a small time constant, but
it's still nonzero, and it's linear in the size of the directory. What's
interesting is that the numbers above barely show it -- going from 1000 to
100000 files, the time per file only creeps up from about 0.2 ms to about
0.57 ms. I can only assume this is because either (1) some optimization is
being done when all those updates are committed at once, or (2) the fixed
per-file time dwarfs the lookup time, even at 100000 files.
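
If anyone wants to reproduce the per-file cost as the directory grows,
something like the sketch below would do it. This is just my own
illustration, not the test quoted above: the scratch path and file counts
are assumptions, and unlike that test it does not umount/mount between
runs to flush the log.

    #!/usr/bin/env python3
    # Sketch: measure per-file create cost as a single directory grows.
    # TARGET is a hypothetical scratch path -- point it at the partition
    # under test. Counts mirror the ones in the quoted results.
    import os
    import shutil
    import time

    TARGET = "/vicepa/create-test"
    COUNTS = [1000, 5000, 10000, 20000, 50000, 100000]

    def time_creates(n):
        """Create n empty files in one directory; return elapsed seconds."""
        os.makedirs(TARGET, exist_ok=True)
        start = time.time()
        for i in range(n):
            # O_EXCL forces a real lookup + insert in the directory each time.
            fd = os.open(os.path.join(TARGET, "f%06d" % i),
                         os.O_CREAT | os.O_EXCL | os.O_WRONLY, 0o644)
            os.close(fd)
        elapsed = time.time() - start
        shutil.rmtree(TARGET)   # start each run with an empty directory
        return elapsed

    for n in COUNTS:
        t = time_creates(n)
        print("%7d files  %6.1f s total  %6.3f ms/file" % (n, t, 1000.0 * t / n))

Plotting the ms/file column against the file count should make any
directory-lookup effect stand out, if there is one to see.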
-- Jeff