[OpenAFS] namei/inode fileserver performance tests
Warren.Yenson@morganstanley.com
Wed, 20 Nov 2002 17:08:48 -0500 (EST)
Note that our tests were inode (on UFS of course) versus namei on VxFS.
It isn't really fair to compare inode fileserver performance on UFS versus
namei fileserver performance on UFS, since the whole point of going to
namei is to use a better underlying filesystem.
Note that raw performance was only one criterion. fsck on logging
filesystems is effectively O(1) (journal replay rather than a full scan
of the partition).
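To make the inode/namei distinction concrete, here is a purely illustrative
Python sketch (not the actual OpenAFS on-disk layout or naming scheme) of why
the namei backend leans so heavily on the underlying filesystem: every vnode
access becomes an ordinary pathname lookup under /vicepX, whereas the inode
backend references inodes directly.

import os

VICEP = "/vicepa"  # example partition mount point

def namei_path(volume_id, vnode, uniq):
    # Hypothetical mapping of a vnode to a regular file; the real namei
    # layout and hashing differ, but the idea is the same: data lives in
    # ordinary files, so the host filesystem's directory lookup and
    # create/delete performance dominate.
    return os.path.join(VICEP, "AFSIDat", "%x" % volume_id,
                        "%x.%x" % (vnode, uniq))

def namei_open(volume_id, vnode, uniq):
    # One pathname lookup per access -- this is the cost that shows up in
    # bonnie++'s create/delete numbers.
    return open(namei_path(volume_id, vnode, uniq), "rb")

A better underlying filesystem (VxFS, a logging filesystem, etc.) makes that
lookup cheaper, which is why comparing both backends on UFS misses the point.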
Regarding the validity of the tests: in our experience, the most common place
we have noticed throughput problems is with vos_dump / vos_restore /
vos_release. That's a by-product of:
1. the volserver not being properly threaded like the fileserver;
2. our aggressively replicated environment (with a large number of cells
   having copies of the same data), which uses those operations
   extensively; and
3. vos operations by their nature operating on larger sets of data than
   individual file operations. Any one vos command can cause gigabytes
   worth of I/O; getting that much I/O out of fileserver operations
   requires either a file that is gigabytes long or a large number of
   operations.
Having a stream of file operations lets the fileserver throttle the work in
bite-size chunks (see the sketch below).
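A rough sketch of that last point (made-up chunk and volume sizes, not the
real fileserver or volserver code): a stream of per-file requests gives the
server natural points at which to interleave other work, while a vos-style
operation covers the whole volume at once.

CHUNK = 64 * 1024        # one bite-size fileserver request (assumed size)
VOLUME = 2 * 1024 ** 3   # a vos dump can easily cover gigabytes

def fileserver_stream(total):
    # Many small requests: after each chunk the server can go service
    # other clients, i.e. throttle the work.
    sent = 0
    while sent < total:
        n = min(CHUNK, total - sent)
        sent += n
        yield n

def vos_dump(total):
    # One logical operation over the whole data set -- nothing to interleave.
    return total

print(sum(1 for _ in fileserver_stream(VOLUME)))  # ~32768 schedulable chunks
print(vos_dump(VOLUME))                           # one 2 GB burst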
- Warren
On Wed, 20 Nov 2002, Neulinger, Nathan wrote:
> Ick. Makes me wonder if it might be worthwhile to try and make a
> fileserver/volserver that could hook into reiserfs/reiser4 directly for
> its file store to get good linux performance. (Although namei probably
> suffers less of a performance hit on linux filesystems than on solaris.)
>
> -- Nathan
>
> ------------------------------------------------------------
> Nathan Neulinger                       EMail: nneul@umr.edu
> University of Missouri - Rolla         Phone: (573) 341-4841
> Computing Services                     Fax: (573) 341-4216
>
>
> > -----Original Message-----
> > From: chas williams [mailto:chas@cmf.nrl.navy.mil]
> > Sent: Wednesday, November 20, 2002 3:38 PM
> > To: openafs-info@openafs.org
> > Subject: [OpenAFS] namei/inode fileserver performance tests
> >
> >
> > since people have questioned my results i have run a more 'definitive'
> > test. the test machine was an ultra 60 with two processors. the cache
> > was configured to 150M. /vicepa (18G) was on a separate single disk
> > (with respect to the cache). the test volume was 500M (although that's
> > probably not very relevant). bonnie++ was run locally (on the
> > fileserver) to reduce variation from network load.
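A rough sketch of driving the run described above, using the bonnie++ flags
quoted below; the target directory is an assumption, and bonnie++ has to be
installed on the fileserver:

import subprocess

# Hypothetical path into the 500M test volume.
TEST_DIR = "/afs/example.com/test"

# -s 256: 256 MB data set (comfortably larger than the 150 MB cache);
# -r 128: tell bonnie++ to assume 128 MB of RAM.
subprocess.run(
    ["bonnie++", "-d", TEST_DIR, "-s", "256", "-r", "128"],
    check=True,
)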
> >
> > testing with bonnie++ (-s 256 -r 128) on an 'inode' fileserver shows:
> >
> > Version 1.02c       ------Sequential Output------ --Sequential Input- --Random-
> >                     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> > Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> > nova           256M  1725  26  1676  13  1397  11  2108  25  2335   7  19.1   4
> >                     ------Sequential Create------ --------Random Create--------
> >                     -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
> >               files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
> >                  16    71  27   745  90    85  15    72  28   522  79   124  15
> >
> > the same test, same machine, substituting a 'namei' fileserver shows:
> >
> > Version 1.02c       ------Sequential Output------ --Sequential Input- --Random-
> >                     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> > Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> > nova           256M  1056  16  1218   9   961   8  1805  21  1951   5  19.0   4
> >                     ------Sequential Create------ --------Random Create--------
> >                     -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
> >               files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
> >                  16    40  16   739  91    36   6    40  15   523  79    27   3
> >
> > so on the first set of tests ("raw" i/o performance) namei achieves
> > (on the average) 74.3% of the inode throughput -- roughly 26% slower.
> > no change on seek performance. for file operations, only create/delete
> > operations seem to suffer -- to the tune of about 50%.
> >
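For reference, the 74.3% figure follows from averaging the namei/inode ratio
across the five throughput columns of the two tables above (seeks excluded):

inode = [1725, 1676, 1397, 2108, 2335]  # per-chr out, block out, rewrite, per-chr in, block in
namei = [1056, 1218,  961, 1805, 1951]

ratios = [n / i for n, i in zip(namei, inode)]
print("namei averages %.1f%% of inode throughput" % (100 * sum(ratios) / len(ratios)))
# -> about 74.4%, i.e. roughly a 26% slowdown on raw i/o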
> > [of course, before you ask, is bonnie++ the right benchmark? if you
> > don't like my answers feel free to run your own.]
> _______________________________________________
> OpenAFS-info mailing list
> OpenAFS-info@openafs.org
> https://lists.openafs.org/mailman/listinfo/openafs-info
>