[OpenAFS] Re: AFS handling of deleted open files

spacefrogg-openafs@spacefrogg.net
Tue, 12 Jan 2021 08:09:40 +0000 (UTC)


My answer is pure speculation and inference from my partial knowledge of AFS and Linux.

On a readonly volume, the cache manager doesn't track individual files. It just keeps a backpointer to the whole volume. So, when the readonly volume is updated, files from it will be refetched the next time they are read. I believe this happens promptly, because I have seen the following issue (which relates exclusively to r/w copies of the same file, opened on two clients): Client A and B open the file. Client A changes the file (e.g. removes content from the middle). Client B updates the content (i.e. the editor detects the file change). Client B sees garbage in the middle of the file, or no change at all. Client B reopens the file, and the file is consistent again with the state left by Client A.

What this example tells me is that the cache manager on Client B immediately fetches the updated chunks of the file while it is still open, so the cached and refetched chunks no longer line up, file offsets point at the wrong data, and the reader sees garbage.
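To illustrate what I mean by "reopen", here is a minimal, untested sketch (the path and buffer handling are made up, and this is plain POSIX, not anything AFS-specific): instead of continuing to read through a descriptor whose cached chunks may have been partially refetched, close it and open the file again, so the cache manager hands back one consistent version.

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

/* Sketch only: read through a freshly opened descriptor instead of
 * one opened before the update. The path below is hypothetical. */
static ssize_t read_fresh(const char *path, char *buf, size_t len)
{
    int fd = open(path, O_RDONLY);   /* fresh open, fresh view of the file */
    if (fd < 0)
        return -1;
    ssize_t n = read(fd, buf, len);
    close(fd);
    return n;
}

int main(void)
{
    char buf[4096];
    ssize_t n = read_fresh("/afs/example.org/project/data.txt", buf, sizeof buf);
    if (n < 0) {
        perror("read_fresh");
        return 1;
    }
    printf("read %zd bytes\n", n);
    return 0;
}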

On Linux copy-on-write (COW) filesystems, this normally does not happen, because the original file is not modified in place.

Having said all that, you must take precautions when using a file that you know may be updated while you read it.
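One precaution I would take (my own suggestion, not an AFS API; the retry limit is arbitrary): stat the file before and after reading, and retry the whole read if the size or mtime changed underneath it.

#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>

static int file_changed(const struct stat *a, const struct stat *b)
{
    return a->st_mtime != b->st_mtime || a->st_size != b->st_size;
}

/* Retry until a stat() before and after the read agree, i.e. no
 * update raced the read. Returns bytes read, or -1 on failure. */
ssize_t read_stable(const char *path, char *buf, size_t len)
{
    for (int attempt = 0; attempt < 5; attempt++) {  /* arbitrary limit */
        struct stat before, after;
        if (stat(path, &before) != 0)
            return -1;
        FILE *f = fopen(path, "rb");
        if (!f)
            return -1;
        size_t n = fread(buf, 1, len, f);
        fclose(f);
        if (stat(path, &after) != 0)
            return -1;
        if (!file_changed(&before, &after))
            return (ssize_t)n;
        /* The file was updated while we were reading; try again. */
    }
    return -1;  /* gave up: the file kept changing under us */
}

This is only best-effort (mtime granularity can fool it, and a file that changes and changes back will slip through), but it catches the common case of an update racing a long read.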