[OpenAFS] Open files disappear?
Fri, 27 Feb 2009 10:38:36 +0100
Harald Barth wrote:
> So is blogbench running into the directory size limit of AFS and
> ignoring the error code or something like that?
I would rule this out: the test only puts a limited number of files in
each subdirectory, so it's only possible to hit the limits if the
directories are not cleaned between runs. Furthermore, the problem
also occurs very early in the test, when only a couple of hundred files
have been created.
> If I throttle down blogbench a bit it does not bail out:
I tried reducing the number of writer, rewriter, commenter, and reader
threads to 1. Same problem.
From what I learned from the webpage and the code, blogbench operates as
follows:
- The (re)writer threads open() a temporary file, write() some data,
close() the file, and rename() the file to its "real" name.
- The reader threads open() a file (read-only), read() the file in, and
close() the file.
These are all standard C functions, nothing fancy. I've run blogbench on
NFS and CIFS mounts, and the problem doesn't show up there.
I would say that the problem appears when a reader opens a file at the
same moment as a rewriter renames a temporary file to that name. If I
run stat() on the filename directly before and directly after the error
occurs, the filename is linked to different inodes. I've been able to
verify with lsof that the file descriptor is still open at that time, so
it seems that the inode was removed prematurely.
I also noticed another problem that may be related (or maybe not):
> # ls -l
> ls: cannot access pcmcia.h: No such file or directory
> total 0
> ?????????? ? ? ? ? ? pcmcia.h
I have a couple of files like this.
I know very little about openafs, but could this (these?) be related to
a problem in my Volume database?
Robbert Eggermont Information & Communication Theory
R.Eggermont@TUDelft.nl Electr.Eng., Mathematics & Comp.Science
+31 (15) 2783234 Delft University of Technology