[OpenAFS] vice partition sizes, and large disks

Nathan Neulinger nneul@umr.edu
Fri, 15 Jun 2001 20:49:16 -0500

chas williams wrote:
> >Does it make sense to use the entire array as a single partition(~500GB)
> >or should we break it up.
> i would say break it up.  depending on your config (raid3/5 in
> particular) a single disk failure can make an entire partition
> readonly. of course, mirrors dont have this problem but they
> eat more space.  i have no idea if raid3 or raid5 is better for
> an afs partition.  when i last checked, i seemed to get the same
> general performance.

What raid controller/software are you using where a disk failure makes
the raid read-only? Yuck. That kind of defeats the purpose of raid5. If
you keep a hot-spare in place, the worst that should happen is you run
in degraded (slowed down) mode for a while while it rebuilds parity
onto the spare.

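To make the degraded-mode slowdown concrete, here's a minimal sketch (not
anything AFS-specific, just the raid5 parity math): with one disk down, every
block that lived on the failed disk has to be rebuilt by reading all the
surviving blocks in the stripe and XORing them, so one logical read fans out
into N-1 physical reads. The block sizes and stripe layout are made up for
illustration.

```python
# Sketch of why raid5 degraded reads are slow: a missing block is
# reconstructed by XORing every surviving block in the stripe.
from functools import reduce

def parity(blocks):
    """XOR equal-length byte blocks together, column by column."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Hypothetical 4-data-disk stripe plus its parity block.
data = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
p = parity(data)

# Disk 2 fails; its block comes back from the three surviving data
# blocks plus parity -- four reads where a healthy array needs one.
survivors = [data[0], data[1], data[3], p]
rebuilt = parity(survivors)
assert rebuilt == data[2]
```

The same XOR over the surviving blocks is what the controller grinds through
for the whole rebuild onto a hot-spare, which is where the slowdown comes from.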
As far as the performance goes, rx is the bottleneck... we've got some
raid controllers that can do 20MB/sec sustained (Chaparral), but we
still only get 3MB/sec on vos dumps.
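Rough arithmetic on those two numbers shows why the rx bottleneck matters in
practice. The 8GB volume size below is a hypothetical example, not from the
original message:

```python
# Illustrative only: dump time for a hypothetical 8GB volume at the
# array's sustained rate vs. the rate vos dump actually achieves.
volume_mb = 8 * 1024                 # assumed volume size

disk_minutes = volume_mb / 20 / 60   # ~20MB/sec the raid can sustain
rx_minutes   = volume_mb / 3 / 60    # ~3MB/sec through vos dump

print(f"disk-limited: {disk_minutes:.0f} min, rx-limited: {rx_minutes:.0f} min")
```

So the dump takes roughly 46 minutes instead of the ~7 the disks could do, and
buying a faster array doesn't move that number much.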

We're going to be moving to 3Ware for future boxes though... 

I'd definitely split it up though, if for no other reason than to limit
the damage if a filesystem gets hosed for whatever reason.
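A quick back-of-the-envelope on the split, using the ~500GB array from the
original question. The eight-way split and the /vicepa..viceph naming are
illustrative assumptions, not a recommendation from the thread:

```python
# Illustrative: splitting one big array into several vice partitions
# bounds how much data a single hosed filesystem can take out.
total_gb = 500
partitions = 8                      # e.g. /vicepa .. /viceph (assumed)

per_partition_gb = total_gb / partitions
exposed_fraction = 1 / partitions   # share of data hit by one bad fs

print(f"{per_partition_gb:.1f} GB per partition; "
      f"{exposed_fraction:.0%} of the data at risk per incident")
```

The fsck/salvage after a crash also only has to walk one slice at a time
instead of the whole 500GB.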

> >thanks for any information you can share on this subject.
> i have about 8 16G partitions (raid5) on our 3 fileservers w/o any problems.
> there are a few striped partitions at 30G each, but i dont put anything
> critical on those partitions since a single disk failure will kill
> those partitions (its a good place for readonly's though since they are
> easy enough to regenerate)  the 30G stripes were done with solaris's
> disksuite software (its one of the old fibre channel boxes with a bunch
> of 9G drives).  disksuite's raid5 performance seems awful (well it is in
> software after all, and the e6000 is no speed demon). the 16G partitions
> are 'hardware-based' raid5 (ala clariion).
> btw, i really really really wish ufs logging would work with the fileserver.
> i corrupted a couple src volumes when i tried this one.  saving the fsck
> time on the partitions after a crash would be a big win.

Reiserfs works well, except for the namei issue with 2.4 (it works ok
at 2.2, but I haven't tried it with large servers).

I would think that ufs logging would work ok if you ran the namei
server on Solaris, but I've never tried it.

> just my thoughts -- i could be completely off my rocker.
> _______________________________________________
> OpenAFS-info mailing list
> OpenAFS-info@openafs.org
> https://lists.openafs.org/mailman/listinfo/openafs-info

-- Nathan

Nathan Neulinger                       EMail:  nneul@umr.edu
University of Missouri - Rolla         Phone: (573) 341-4841
CIS - Systems Programming                Fax: (573) 341-4216