[OpenAFS] Which storage technology to use for terabytes of storage with AFS?

Rob Banz rob@nofocus.org
Fri, 30 Nov 2007 11:39:05 -0500

Don't forgo some sort of live data protection -- most likely via  
RAID.  If these volumes are RW, and you have a storage failure, you're  
going to be putting yourself and your users through hell waiting for  
stuff to be restored.

If you're not looking for super performance, go for a RAID5ish  
solution (or something like a JBOD + raidz using ZFS on Solaris), or  
if performance is an issue, hardware or software mirroring is the way  
to go.
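For the curious, the raidz route is only a couple of commands on Solaris 10. A sketch (the pool name "afs" and device names c1t0d0 etc. are placeholders -- substitute your own disks):

```shell
# Create a single-parity raidz pool from four placeholder disks.
zpool create afs raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0

# Carve out a filesystem for an AFS vice partition, mounted at /vicepa.
zfs create -o mountpoint=/vicepa afs/vicepa

# Check pool health.
zpool status afs
```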

Storage is cheap -- your time and your users' time aren't.  There are  
affordable FC-attached options out there, such as the Apple XRAID*,  
and other similar options from smaller vendors.  You can probably find  
similarly priced options that do iSCSI from some of the smaller  
storage vendors out there.

Also -- avoid the Linux game.  Go with Solaris 10 and ZFS.  You've got  
a solid storage architecture, and a top-of-the-line filesystem, and  
you can skip these silly ext3/ext2/reiser/xfs/xvm discussions on the  
list ;)


* Apple basically gives these things away to higher-ed, and they offer  
pretty damn nice pricing on Qlogic's stackable switches.  Talk to your  
Apple rep.  They work great with Solaris too ;)

On Nov 30, 2007, at 10:58, Jason Edgecombe wrote:

> Hi everyone,
> Traditionally, we have used direct-attached SCSI disk packs on Sun  
> Sparc servers running Solaris 9 for OpenAFS. This has given us the  
> most bang for the buck. We forgo RAID because we have the backup  
> capabilities of AFS.
> What types of storage technologies are other AFS sites using for their
> AFS vicep partitions? We need to figure our future direction for the
> next couple of years. Fibre channel seems all the rage, but it's quite
> expensive. I'm open to any and all feedback. What works? What doesn't?
> What offers the best bang for the buck on an OpenAFS server?
> This is for an academic environment that fills both academic and
> research needs. Researchers are asking for lots of AFS space (200GB+).
> Of course this needs to be backed up as well.
> Thanks,
> Jason
> _______________________________________________
> OpenAFS-info mailing list
> OpenAFS-info@openafs.org
> https://lists.openafs.org/mailman/listinfo/openafs-info