[OpenAFS] Poor performance on new ZFS-based file server

Brian Sebby <sebby@anl.gov>
Thu, 12 Jul 2007 15:03:04 -0500


The box I'm using has 8GB of RAM.  I ran the command to find the size of
the ARC, and it looks like (at least right now) it's only using about 4.2GB
of memory.  There are no other major processes running on the box besides
our monitoring program, so memory pressure from anything else shouldn't be
a big factor.

# echo "arc::print -d size" | mdb -k
size = 0t4274690048
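
I haven't pinned the ARC yet.  If I'm reading the ZFS Best Practices Guide
right, something like the following would let me check the ARC's current
ceiling and then cap it in /etc/system -- the c_max field and the
zfs:zfs_arc_max tunable are just my reading of the guide, not something
I've tested on this box:

# echo "arc::print -d c_max" | mdb -k

* /etc/system: cap the ARC at, say, 2GB (0x80000000 bytes); takes effect on reboot
set zfs:zfs_arc_max = 0x80000000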

There are only 5 disks in the RAID-Z configuration.  Each is 300GB, for a
total of 1.36T usable.  Right now, it's only using about 500GB of it.
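
(For what it's worth, I think that 1.36T is the raw pool size rather than
what's usable: 5 x 300GB is about 5 x 279 GiB = 1.36 TiB before parity, so
with one disk's worth going to RAID-Z parity it should be closer to 1.1 TiB
actually usable.)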

Oh, and how do I find out whether the fileserver process and its callback
table have been swapped out?
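
My guess is that comparing the fileserver's total size against its resident
set with pmap would show it, but I'm not sure that's the right way to check
-- the command below is just what I'd try on Solaris 10:

# pmap -x `pgrep -x fileserver` | tail -1

If the RSS column on that total line is much smaller than the Kbytes
column, I'd assume chunks of the process (callback table included) have
been paged out.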

This is sort of frustrating.  I want to move forward with ZFS, but I
don't want to get worse performance than we're currently getting on our
E250s.  This is all on SPARC Solaris, by the way.


Thanks for the help,

Brian

On Wed, Jul 11, 2007 at 08:57:02PM -0400, Dale Ghent wrote:
> 
> Also, Brian, how much RAM does your box have?
> 
> To expound on Rob's first point, the spindle count of a RAIDZ (or Z2)
> set is important.  It's generally urged to keep the number of disks
> that comprise a raidz(2) set in the single digits, and no more than 10
> or so.  (Note this is not per pool, but per set.  You can of course
> have multiple sets in a pool, and that would also be better in terms
> of fault tolerance.)
> 
> /dale
> 
> On Jul 11, 2007, at 8:11 PM, Robert Banz wrote:
> 
> >
> >A couple things to check, Brian...
> >
> >1) How large is your RAID-Z2 pool (# of spindles)?  If it's rather  
> >large (say, above 8), you might be running into problems from that.
> >
> >2) Check to see if your fileserver process is fully resident in
> >memory (not swapped out).  ZFS's ARC can get VERY greedy and end up
> >pushing real stuff out to swap.  If you've set a large callback table
> >size on your fileserver, there will be quite a few chunks of memory
> >that it uses which may look like good candidates for swapping out
> >because they don't get accessed much -- but when they do, it'll drag
> >your fileserver to a crawl while it's got to swap them back in.  If
> >this is the case, figure out how much RAM you can dedicate to the
> >ARC, and pin its maximum size.  (See:
> >http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#Memory_and_Dynamic_Reconfiguration_Recommendations )
> >
> >-rob
> >
> >
> >
> >On Jul 11, 2007, at 16:49, Brian Sebby wrote:
> >
> >>Hello,
> >>
> >>I've been getting intermittent reports of slow read performance on a
> >>new AFS file server that I recently set up based on ZFS.  It is using
> >>locally attached disks in a RAID-Z2 (double parity) configuration.  I
> >>was wondering if anyone might be able to provide any ideas for tuning
> >>or investigating the problem.  The slow performance that's been
> >>reported seems to be against a RW volume with no replicas.
> >>
> >>Right now, I am using OpenAFS 1.4.4 with the "no fsync" patch.  The
> >>options I'm using for the fileserver are "-nojumbo" and "-nofsync".
> >>I've also set the ZFS parameters "atime" to "off" and "recordsize"
> >>to "64K" as recommended in Dale Ghent's presentation at the OpenAFS
> >>workshop.
> >>
> >>There are a bunch of fileserver options, and I'm not sure whether
> >>they would help or not.  Any advice would be appreciated, as I'm
> >>looking at ZFS for some new file servers I'm setting up, but my
> >>experience so far has been mostly with the OpenAFS 1.2 inode-based
> >>file server.
> >>
> >>
> >>Brian
> >>
> >>-- 
> >>Brian Sebby  (sebby@anl.gov)  |  Unix and Operation Services
> >>Phone: +1 630.252.9935        |  Computing and Information Systems
> >>Fax:   +1 630.252.4601        |  Argonne National Laboratory
> >
> 
> --
> Dale Ghent
> Specialist, Storage and UNIX Systems
> UMBC - Office of Information Technology
> ECS 201 - x51705
> 
> 
> 

-- 
Brian Sebby  (sebby@anl.gov)  |  Unix and Operation Services
Phone: +1 630.252.9935        |  Computing and Information Systems
Fax:   +1 630.252.4601        |  Argonne National Laboratory