[OpenAFS] Re: why afs backup is so poorly supported

Marcus Watts mdw@umich.edu
Tue, 10 Oct 2006 17:11:34 -0400

Jeffrey Hutzelman <jhutz@cmu.edu> replied:
> It's an interesting idea, though probably more suitable for discussion on 
> openafs-devel than in this forum.  To handle StoreData, you'd need the 
> ability to update only part of a blob.  Also, how efficiently are large 
> blobs handled even by those databases that support them?

That most likely depends on the database & API.  At a quick glance,
the psql command line tool for postgresql only imports & exports
blobs to and from "a local file" (which has interesting similarities
to the origins of AFS).  The libpq C interface to postgresql supports
read, write, and lseek from/to the server (lo_read, lo_write,
lo_lseek), much like regular file access.  Internally, postgresql
stores each blob as chunks in a special table (pg_largeobject), so
they can be randomly accessed.  Blobs can be up to 2G in size.  There
are also "toasted" objects -- oversized column values compressed and
stored out of line -- which probably aren't as useful here.  I'm
afraid to ask if one can toast blobs.

My recollection is that oracle has some sort of chunk-wise access
to blobs.  I assume other db systems (db2, etc.) have similar
functions, at least if they intend to implement the "l" in blob.

One interesting limitation with blobs may have to do with rollback segment
size.  I know that's an issue with oracle, not sure about postgresql.
The basic problem is that if you update 2G of stuff, you might not be
able to do it as one atomic transaction.  Probably you shouldn't want
to in any case, but it might spoil chas's vision of atomic commits.