[OpenAFS] Re: why afs backup is so poorly supported
Tue, 10 Oct 2006 17:24:26 -0400
Cool as the transactional piece would be, IIRC from our discussion in
2004, putting a PostgreSQL instance behind every fileserver sounds
kind of heavyweight, doesn't it?
Plus, is there a difference between transactional metadata updates and
transactional file data updates?
Marcus Watts wrote:
> Jeffrey Hutzelman <firstname.lastname@example.org> replied:
>> It's an interesting idea, though probably more suitable for discussion on
>> openafs-devel than in this forum. To handle StoreData, you'd need the
>> ability to update only part of a blob. Also, how efficiently are large
>> blobs handled even by those databases that support them?
> That most likely depends on the database and API. At a quick glance,
> the command-line tool for PostgreSQL only imports and exports
> "a local file" (which has interesting similarities to the origins
> of AFS). The libpq C interface to PostgreSQL supports read, write,
> and lseek from/to the server, much like regular file access. The
> internals of PostgreSQL implement blobs as chunks in a special table
> which can be randomly accessed. Blobs can be up to 2G in size. There
> are also "toasted" objects, which probably aren't as useful. I'm
> afraid to ask if one can toast blobs.
> My recollection is that Oracle has some sort of chunk-wise access
> to blobs. I assume other db systems (DB2, etc.) have similar
> functions, at least if they intend to implement the "l" in blob.
> One interesting limitation with blobs may have to do with rollback
> segment size. I know that's an issue with Oracle; I'm not sure about
> PostgreSQL. The basic problem is that if you update 2G of stuff, you
> might not be able to do it as one atomic transaction. Probably you
> shouldn't want to in any case, but it might spoil chas's vision of
> atomic commits.
> OpenAFS-info mailing list
The Linux Box
206 South Fifth Ave. Suite 150
Ann Arbor, MI 48104