[OpenAFS] Overview? Linux filesystem choices

Andy Cobaugh phalenor@gmail.com
Thu, 30 Sep 2010 16:16:11 -0400 (EDT)

On 2010-09-30 at 21:00, Robert Milkowski ( milek@task.gda.pl ) said:
> On 30/09/2010 15:12, Andy Cobaugh wrote:
>>  I don't think anybody has mentioned the block level compression in ZFS
>>  yet. With simple lzjb compression (zfs set compression=on foo), our AFS
>>  home directories see ~1.75x compression. That's an extra 1-2TB of disk
>>  that we don't need to store. Of course that makes balancing vice
>>  partitions interesting when you can only see the compression ratio at the
>>  filesystem level and not the volume level.
>>  Checksums are nice too. There's no longer a question of whether your
>>  storage hardware wrote what you wanted it to write. This can go a long way
>>  toward predicting failures if you run zpool scrub on a regular basis
>>  (otherwise, zfs only detects checksum mismatches on read; a scrub checks
>>  the whole pool).
>>  So, just to add us to the list, we're ext3 on linux for small stuff
>>  (<10TB), and zfs on solaris for everything else. Will probably consider
>>  XFS in the future, however.
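For reference, the commands described above might look roughly like this. This is just a sketch: the dataset name "foo" follows the quoted example, and the exact pool layout is illustrative.

```shell
# Enable lzjb block-level compression on the dataset backing a vice
# partition ("foo" is the dataset name used in the example above).
zfs set compression=on foo

# Check the achieved ratio. Note this is reported per-filesystem, not
# per-AFS-volume, which is what makes balancing vice partitions tricky.
zfs get compressratio foo

# Scrub the whole pool so checksum mismatches are found proactively,
# rather than only when a bad block happens to be read.
zpool scrub foo

# Show scrub progress and any accumulated checksum errors.
zpool status foo
```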
> Why not ZFS on Solaris x86 for "smaller stuff" as well?

That's just the way things have worked out over the years. "smaller stuff" 
tends to be older machines that were here when I started, and a couple of 
those have hardware raid controllers (like, 3ware pata, for example), that 
will be decom'd soon. There are also cases where the machine the storage 
is attached to needs to be used interactively by people (like, a PI wants 
a new machine to run stuff on, but also wants 10TB, which we set up as a 
vice partition so they can access it from any machine).

Solaris is great for storage if that's all you use it for, but 
anything else gets to be a pain when people start asking for really weird 
and complicated stuff to be installed.

If I were doing everything over again, I would eliminate all of the 
storage islands and run all the storage through solaris.