[OpenAFS] Offline migration of AFS partition files (i.e. the
contents of `/vicepX`)
Ciprian Dorin Craciun
ciprian.craciun@gmail.com
Sun, 3 Mar 2019 01:36:44 +0200
On Sun, Mar 3, 2019 at 12:29 AM Jeffrey Altman
<jaltman@secure-endpoints.com> wrote:
> On 3/2/2019 3:42 PM, Ciprian Dorin Craciun wrote:
> > (A) When you state `exactly copied` you mean only the following
> > (based on the `struct stat` members):
> > [...]
>
> The vice partition directory hierarchy is used to create a private
> object store. The reason that Harald said "exact copy" is because
> OpenAFS leverages the fact that the "dafs" or "fs" bnode services
> execute as "root" to encode information in the inode's metadata that is
> not guaranteed to be a valid state from the perspective of normal file
> tooling.
I understand that OpenAFS "reuses" the inode metadata for its own
purposes, and that one shouldn't touch it outside the OpenAFS tools.
However, is it enough if, while migrating, I make sure to keep
**only** the following file metadata:
* `st_uid` and `st_gid`;
* `st_mode`;
* `st_atim`, `st_mtim` and `st_ctim`;
Can I assume that no other metadata is required? (Like, for example,
Linux file-system ACLs or extended attributes?) (I would assume not,
however I wanted to make sure.)
Moreover, I am curious whether the timestamps are actually required?
(Especially the access and change timestamps.)
> For many years there was discussion of creating a plug-in interface for
> the vice partition object storage. This would permit separate formats
> depending on the underlying file system capabilities and use of non-file
> system object stores.
Although this is a little bit off-topic, I am quite happy that OpenAFS
decided to just reuse a "proper" file-system and lay out its own
"objects" on top, instead of going with opaque "object stores"...
I understand that from a performance and scalability point of view a
more advanced format would help; however, for small deployments, I
think the plain file-system approach provides more reliability and
reassurance that, in case something happens, one can easily recover
files. (See below for more about this.)
> OpenAFS stores each AFS3 File ID data stream in a single file in
> the current format.
>
> > I.e. formalizing the last one: if one would take any file accessible
> > under `/afs` and would compute its SHA1, then by looking into all
> > `/vicepX` partitions belonging to that cell, one would definitively
> > find a matching file with that SHA1.
>
> This is true for the current format.
Continuing my "reliability" idea of plain file-systems: I, for
example, maintain MD5 checksums for all my AFS-stored files (i.e.
those in `/afs/cell`), which means that in case something goes wrong
with the AFS directories or metadata, I can always just MD5 the actual
`/vicepX` files and pick my data out of there.
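Concretely, the "pick my data out" step could look something like the
sketch below (the helper name is mine, and `/vicepa` is a hypothetical
partition path); it relies on the property confirmed above, that each
AFS3 data stream is stored as a single file with identical contents:

```shell
#!/bin/sh
# find_by_md5 HASH DIR -- print the path(s) of files under DIR whose
# contents have the given MD5. (Vice partition file names contain no
# whitespace, so splitting the md5sum output on whitespace is safe.)
find_by_md5() {
    find "$2" -type f -print0 \
        | xargs -0 -r md5sum \
        | awk -v h="$1" '$1 == h { print $2 }'
}

# e.g. (hypothetical):
# find_by_md5 d41d8cd98f00b204e9800998ecf8427e /vicepa   # MD5 of an empty file
```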
In fact, given that I have deployed OpenAFS for personal use, that
most of my "archived" files are on it, and that I don't have too much
time to invest in it, just knowing that I can always easily get my
data out gives me almost blind trust in OpenAFS.
(This, and the lack of WAN and ACL support, is why I don't use Lustre,
Ceph or other "modern" distributed / parallel file-systems.)
> > My curiosity into all this is because I want to prepare some `rsync`
> > and `cpio` snippets that perhaps could help others in a similar
> > endeavor. Moreover (although I know there are at least two other
> > "official" ways to achieve this) it can serve as an alternative backup
> > mechanism.
>
> The vice partition format should be considered to be private to the
> fileserver processes. It is not portable and should not be used as a
> backup or transfer mechanism.
I understand this; however, I'm thinking more of "disaster recovery"
scenarios, and of those cases when the OpenAFS services are not
capable of running. (As in my case, where OpenAFS is not yet installed
on my "new" server and my "old" server's OS is unusable; I just have
my `/vicepX` partitions... Moreover, I intend to create a `cpio`
archive in `newc` format of my old `/vicepX` partitions and keep it
for a while... The fact that `cpio` has limited metadata support is
why I asked which metadata is required.)
Thanks Jeffrey for the information,
Ciprian.