[OpenAFS-devel] Patches for Openafs compression support
Jeffrey Hutzelman
jhutz@cmu.edu
Wed, 05 Jan 2005 13:26:33 -0500
On Wednesday, January 05, 2005 12:39:33 -0500 Mitch Collinsworth
<mitch@ccmr.cornell.edu> wrote:
>
> On Wed, 5 Jan 2005, Jeffrey Hutzelman wrote:
>
>>> Do we want to implement compression as separate calls for everything
>>> that we want to compress, or should we be able to switch it on/off
>>> like encryption?
>>
>> Actually, what I'd like to see is an extension to the dump file format
>> to allow for compressed dumps. In this model, "new" volservers would
>> automatically generate compressed dumps if the feature were enabled, and
>> would always be able to accept either kind of dump. No new RPCs would
>> be added (*), and no changes to any clients would be required. The
>> compressed part of the dump should begin with the vnodes, leaving the
>> dump and volume headers uncompressed for tools that expect to be able
>> to parse these.
>
> Thank you! This is what I'm concerned about. Our tools to back up AFS
> using Amanda parse the dump file format in order to generate an index
> of files. If the volume headers are kept uncompressed, then this should
> not be a problem.
Hm. Stage parses the dump header to get timestamps and the like, but we
haven't gotten around to file indexing yet. Creating a file index would
require parsing the (presumably compressed) directory vnodes.
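To make that concrete: a reader for the proposed format would parse the
dump and volume headers in the clear, then switch to a decompression
stream for everything after them. Here is a minimal sketch, assuming
zlib deflate and a made-up D_COMPRESSED tag (no tag value or compression
algorithm has actually been agreed on):

/*
 * Sketch only, not a committed design.  Assumes the dump and volume
 * headers stay uncompressed, and that a hypothetical D_COMPRESSED tag
 * marks the point where the rest of the dump becomes a single zlib
 * deflate stream containing the vnodes.
 */
#include <stdio.h>
#include <string.h>
#include <zlib.h>

#define D_COMPRESSED 0x7e   /* hypothetical; no real tag value assigned */

/*
 * Inflate the remainder of the file and return the uncompressed byte
 * count.  A real indexer would feed each output chunk to its existing
 * vnode parser instead of just counting bytes.
 */
static long inflate_rest(FILE *fp)
{
    z_stream zs;
    unsigned char in[16384], out[16384];
    long total = 0;
    int ret = Z_OK;

    memset(&zs, 0, sizeof(zs));
    if (inflateInit(&zs) != Z_OK)
        return -1;

    while (ret != Z_STREAM_END) {
        zs.avail_in = fread(in, 1, sizeof(in), fp);
        if (ferror(fp) || zs.avail_in == 0)
            break;              /* truncated stream or read error */
        zs.next_in = in;
        do {
            zs.avail_out = sizeof(out);
            zs.next_out = out;
            ret = inflate(&zs, Z_NO_FLUSH);
            if (ret == Z_NEED_DICT || ret == Z_DATA_ERROR ||
                ret == Z_MEM_ERROR || ret == Z_STREAM_ERROR) {
                inflateEnd(&zs);
                return -1;
            }
            total += (long)(sizeof(out) - zs.avail_out);
        } while (zs.avail_out == 0);
    }

    inflateEnd(&zs);
    return ret == Z_STREAM_END ? total : -1;
}

int main(int argc, char **argv)
{
    FILE *fp;
    int tag;

    if (argc != 2) {
        fprintf(stderr, "usage: %s dumpfile\n", argv[0]);
        return 1;
    }
    if (!(fp = fopen(argv[1], "rb"))) {
        perror(argv[1]);
        return 1;
    }

    /* ... parse the uncompressed dump and volume headers here ... */

    tag = getc(fp);
    if (tag == D_COMPRESSED) {
        printf("vnode section inflates to %ld bytes\n", inflate_rest(fp));
    } else if (tag != EOF) {
        ungetc(tag, fp);        /* old-style dump: vnodes in the clear */
        printf("uncompressed dump\n");
    }
    fclose(fp);
    return 0;
}

An old volserver's output never contains that tag, so the same reader
handles both kinds of dump; that's the "accept either kind" property
above.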
> Of course, the other solution would be an interface that does the dump
> file parsing and presents a consistent API for external tools to use.
> Then we wouldn't need to be on guard against file format changes. :-)
/afs/cs.cmu.edu/project/systems-jhutz/dumpscan
But I really should do a formal release one of these days.
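For what it's worth, the kind of interface Mitch describes might look
something like the sketch below. The names and signatures are
hypothetical (in particular, this is not dumpscan's actual API); the
point is just that tools register callbacks and never touch the wire
format:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical parsing interface, for illustration only. */
struct afsdump_callbacks {
    /* called once, with fields from the uncompressed dump header */
    int (*dump_header)(void *rock, uint32_t volid, uint32_t dump_time);
    /* called once per file, path rebuilt from the directory vnodes */
    int (*file)(void *rock, const char *path, uint32_t vnode,
                uint64_t length);
};

/*
 * The library would implement this against whatever the on-disk format
 * happens to be, compressed or not.  Stubbed out here so the sketch
 * stands alone.
 */
int afsdump_parse(const char *dumpfile,
                  const struct afsdump_callbacks *cb, void *rock)
{
    (void)dumpfile; (void)cb; (void)rock;
    return -1;      /* not implemented in this sketch */
}

/* An Amanda-style file indexer would then reduce to a callback like: */
static int print_path(void *rock, const char *path, uint32_t vnode,
                      uint64_t length)
{
    (void)rock; (void)vnode; (void)length;
    return printf("%s\n", path) < 0;
}

int main(void)
{
    struct afsdump_callbacks cb = { 0 };

    cb.file = print_path;
    return afsdump_parse("vol.dump", &cb, NULL);  /* hypothetical input */
}

A format change like the compressed-dump extension above would then be
invisible to every tool linked against such a library.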
-- Jeff