[OpenAFS] Re: Change in volume status during vos dump in OpenAFS 1.6.x
Thu, 14 Mar 2013 09:58:53 -0500
On Thu, 14 Mar 2013 07:41:48 -0400 (EDT)
Andy Malato <email@example.com> wrote:
> So while a volume can grow to more than 2 terabytes in size, the
> various tools may not work correctly with volumes this large?
If you are running 1.6.2 fileservers, or fileservers patched for the 2TB
issue, everything works. The issues we've been talking about are bugs
for specific older versions, not an architectural limitation. I don't
know how to make that any clearer.
Allowing quotas over 2TB is planned to be addressed, but it may take
a while, since some more important things are ahead of it.
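For context, the 2TB quota ceiling is consistent with quotas being counted
in 1 KB blocks in a signed 32-bit field (the field width here is my
inference from the limit, not a claim about the 1.6.x source). A quick
sketch of the arithmetic:

```shell
# Largest value a signed 32-bit counter can hold.
max_blocks=$(( 2**31 - 1 ))                  # 2147483647

# 2 TB expressed in 1 KB blocks.
two_tb_blocks=$(( 2 * 1024 * 1024 * 1024 ))  # 2147483648

# 2 TB in 1 KB blocks already exceeds the 32-bit maximum, so a quota of
# 2 TB or more cannot be represented in such a field.
echo "$two_tb_blocks > $max_blocks"
```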
> As more and more researchers continue to work with big data, requests
> for multi-terabyte volumes are becoming more frequent. We are
> starting to get requests from researchers for volumes in the range of
> 5 to 7TB, and we expect that future requests will probably be even
> larger. From what I read here it appears that AFS may be
> "impractical" for supporting large datasets of this size? Can you (or
> anyone else) confirm that other sites are using AFS to support big data?
A few other sites reached the "multi-terabyte volumes are becoming more
frequent" stage years ago. Some bugs were found and fixed around the
late 1.4 / early 1.6 period, and these days such volumes should work
fine. Usually sites avoid moving larger volumes via the typical "vos
move", but I believe that is the only restriction.
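As a sketch of the usual workaround (server names, the partition, and the
volume name below are purely illustrative placeholders), sites typically
relocate a very large volume with a manual dump/restore cycle instead of
"vos move":

```shell
# Dump the volume from its current site to a file (needs scratch space
# big enough to hold the entire volume).
vos dump -id bigvol -server fs-old -partition /vicepa -file /tmp/bigvol.dump

# Remove the original copy, then restore the dump at the new site under
# the same volume name.
vos remove -server fs-old -partition /vicepa -id bigvol
vos restore -server fs-new -partition /vicepa -name bigvol -file /tmp/bigvol.dump
```

Whether this is practical depends mostly on having scratch space for the
full dump file and tolerating the window during which the volume is
offline between the remove and the restore.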