[OpenAFS-devel] Large File Support?

Jeffrey Hutzelman <jhutz@cmu.edu>
Mon, 7 Jan 2002 19:26:22 -0500 (EST)


On Mon, 7 Jan 2002, R. Lindsay Todd wrote:

> Is anyone presently working on adding large file (>2Gig) support to OpenAFS?
> 
> If not, does anyone have any pointers on what needs to be done?  Is 
> there any previous work to build on?

Not particularly.  To support >2GB files in OpenAFS, you'd have to do the
following at a minimum:

- Add a whole new set of fileserver RPCs that use 64-bit file sizes,
  offsets, and lengths.  This would affect at least FetchData, StoreData,
  FetchStatus, StoreStatus, BulkStatus, InlineBulkStatus, and possibly
  some others.

- Define semantics for large files, particularly in cases where clients
  try to manipulate them using the old RPCs.

- Modify the fileserver backend to support large files.  This may mean
  changing the vnode index format, among other things.

- Modify the cache manager to implement the new RPCs, falling back
  on the old ones as appropriate.

- Extend the volume dump format to support dumping files with >2GB of
  content.

Backward compatibility is very important.  Old clients must be able to
talk to new fileservers and vice versa.  It should be possible to move a
volume containing no large files between new and old fileservers.  It
should be possible to perform a dump of a new volume, even if it contains
large files, using an existing volume dump client.

Remember also that AFS is a wire protocol with multiple implementors.
Things like new RPC numbers and probably new volume dump tags should be
coordinated.  If you're really interested in working on this, I suggest 
coming up with a design proposal and asking for comments both here and on
arla-drinkers@stacken.kth.se.  

-- Jeffrey T. Hutzelman (N3NHS) <jhutz+@cmu.edu>
   Sr. Research Systems Programmer
   School of Computer Science - Research Computing Facility
   Carnegie Mellon University - Pittsburgh, PA