[OpenAFS-devel] AFS + GFS = crazy?

Tom Keiser tkeiser@gmail.com
Wed, 2 Aug 2006 19:01:48 -0400


On 8/2/06, John Hascall <john@iastate.edu> wrote:
>
> > In message <200608021454.JAA06668@malison.ait.iastate.edu>,
> > John Hascall writes:
> > >The idea behind GFS is that these file chunks are
> > >stored on multiple (N, usually 3) chunkservers -- so
> > >you can lose N-1 of them and still be up and have
> > >your data.  And the chunkservers could even be
>
> > and yet people still back up RAID filesystems.  there are certain
> > critical failures that replication alone cannot solve.
>
> And which critical failures would those be?
>

Bit error rates (BERs) on modern storage devices are not decreasing
anywhere near fast enough to counteract the rate of capacity
increase.  Data mirroring gives you error detection, not error
correction (unless you set N=4, do quorum error resolution, and
assume simultaneous multi-disk failures never happen): with only two
copies and no independent checksum, a mismatch tells you something is
wrong, but not which copy to trust.  Given recent trends in storage
device capacity and reliability, simple mirroring is quickly becoming
an unacceptable way of archiving data.  Unless GFS is doing things
like hierarchical checksumming, periodic checksum validation, disk
surface scans, and never overwriting live data (sounds like ZFS,
huh?), you'll eventually wish you had backups.
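
As a concrete illustration of the detection-versus-correction point,
below is a minimal Go sketch of a scrub pass that records a checksum
per replica at write time and repairs any copy that no longer
verifies.  Everything in it is hypothetical (the replica type, the
scrub function, N=3); it is not GFS or AFS code, just the shape of
the technique.

// scrub.go -- a minimal sketch of checksum-based scrubbing across
// replicas.  All names here are hypothetical; this is not GFS or
// AFS code, just an illustration of the argument above.
package main

import (
	"crypto/sha256"
	"fmt"
)

// replica holds one copy of a block plus the checksum recorded at
// write time.
type replica struct {
	data []byte
	sum  [32]byte
}

func newReplica(data []byte) replica {
	copied := append([]byte(nil), data...)
	return replica{data: copied, sum: sha256.Sum256(copied)}
}

// healthy reports whether the stored data still matches its
// write-time checksum.  This is what turns "the mirrors disagree"
// into "replica i is the corrupt one".
func (r replica) healthy() bool {
	return sha256.Sum256(r.data) == r.sum
}

// scrub finds a replica whose checksum still verifies and rewrites
// any replica whose checksum does not.  If no replica verifies, the
// only recourse is a backup -- exactly the failure mode mirroring
// alone cannot save you from.
func scrub(replicas []replica) error {
	good := -1
	for i := range replicas {
		if replicas[i].healthy() {
			good = i
			break
		}
	}
	if good == -1 {
		return fmt.Errorf("all %d replicas failed verification; restore from backup", len(replicas))
	}
	for i := range replicas {
		if !replicas[i].healthy() {
			fmt.Printf("replica %d corrupt; repairing from replica %d\n", i, good)
			replicas[i] = newReplica(replicas[good].data)
		}
	}
	return nil
}

func main() {
	block := []byte("chunk contents")
	replicas := []replica{newReplica(block), newReplica(block), newReplica(block)}

	// Simulate a latent sector error silently flipping bits in one copy.
	replicas[1].data[0] ^= 0xff

	if err := scrub(replicas); err != nil {
		fmt.Println(err)
		return
	}
	for i, r := range replicas {
		fmt.Printf("replica %d healthy=%v\n", i, r.healthy())
	}
}

The point is that the checksums do the work: a bare N-way mirror can
only vote copies against each other, which is why the N=4 caveat
above exists, whereas write-time checksums let even a 2-way mirror
identify the corrupt copy and repair it -- right up until every copy
fails verification, at which point you are back to backups.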

-- 
Tom Keiser
tkeiser@gmail.com