[OpenAFS] Re: AFS handling of deleted open files
J. Bruce Fields
bfields@fieldses.org
Tue, 12 Jan 2021 13:22:01 -0500
On Tue, Jan 12, 2021 at 08:09:40AM +0000, spacefrogg-openafs@spacefrogg.net wrote:
> My answer is pure speculation and inference from my partial knowledge of AFS and Linux.
>
> On a readonly volume, the cache manager doesn't track individual files. It just keeps a backpointer to the whole volume. So, when the readonly volume is updated, files from it will be refetched the next time they are read. I believe this happens promptly, because I have seen the following issue (which relates exclusively to r/w copies of the same file, opened on two clients): Client A and B open a file. Client A changes the file (e.g. removes content from the middle). Client B updates its content (i.e. the editor detects the file change). Client B sees garbage in the middle of the file, or no change at all. Client B reopens the file, and the file is consistent again with the state left by Client A.
>
> What this example tells me is that the cache manager on Client B immediately reads the updated chunks of the file, which results in the file offsets being wrong, which in turn results in garbage.
>
> Under Linux CoW filesystems, this does not normally happen, as the original is not modified in place.
It happens on other filesystems too; this is just standard filesystem
semantics. If you don't want this to happen, you code your applications
to use file locks, or to remove and replace files instead of updating
them in place.
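As an illustration of the remove-and-replace approach, here is a minimal
POSIX sketch (the file name, contents, and error handling are simplified
and hypothetical): write the complete new contents to a temporary file
in the same directory, then rename() it over the original, so a reader
only ever sees either the old file or the new one, never a half-updated
mixture.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>

/* Write "contents" to a temporary file, then atomically rename it over
 * "path".  Error handling is reduced to a single pass/fail result. */
static int replace_file(const char *path, const char *contents)
{
    char tmp[4096];
    snprintf(tmp, sizeof(tmp), "%s.tmp.%ld", path, (long)getpid());

    int fd = open(tmp, O_WRONLY | O_CREAT | O_EXCL, 0644);
    if (fd < 0)
        return -1;

    /* Write the complete new contents and flush them out. */
    int ok = write(fd, contents, strlen(contents)) == (ssize_t)strlen(contents)
          && fsync(fd) == 0;
    ok = (close(fd) == 0) && ok;

    /* rename() is atomic: a concurrent reader sees either the old file
     * or the new one, never a mixture of the two. */
    if (!ok || rename(tmp, path) != 0) {
        unlink(tmp);
        return -1;
    }
    return 0;
}

int main(void)
{
    return replace_file("data.txt", "new contents\n") == 0 ? 0 : 1;
}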
(It's true that there are Linux filesystems like btrfs that use CoW
under the covers, but they're still implementing the same standard
semantics, and a read from one process will still see the results of
writes from other processes.)
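A quick way to convince yourself of that on a local filesystem (btrfs,
ext4, ...): a reader whose file descriptor was opened before the write
still sees an in-place change made by another process.  The sketch below
assumes POSIX and a local filesystem; the file name is hypothetical and
error handling is mostly omitted.  (Over AFS itself, visibility of
remote writes is of course mediated by the cache manager and callbacks.)

#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/wait.h>

int main(void)
{
    const char *path = "shared.txt";   /* hypothetical scratch file */

    /* Create a 10-byte file of 'A's and keep the descriptor open. */
    int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0644);
    if (fd < 0 || write(fd, "AAAAAAAAAA", 10) != 10)
        return 1;

    pid_t pid = fork();
    if (pid == 0) {
        /* Child: overwrite two bytes in the middle, in place. */
        int wfd = open(path, O_WRONLY);
        if (wfd >= 0) {
            pwrite(wfd, "XX", 2, 4);
            close(wfd);
        }
        _exit(0);
    }
    waitpid(pid, NULL, 0);

    /* Parent: read through the descriptor opened before the child's
     * write; the modified bytes are visible.  Prints "AAAAXXAAAA". */
    char buf[11] = { 0 };
    pread(fd, buf, 10, 0);
    printf("%s\n", buf);
    close(fd);
    return 0;
}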
--b.
> Having said all that, you must take precautions when using a file that you know may be updated while you read it.