[OpenAFS-devel] Number of inodes - are there known limits?

Harald Barth haba@pdc.kth.se
Wed, 30 Jan 2002 18:41:03 +0100 (CET)


Does anyone out there know whether some volumes in the pdc.kth.se
cell may have hit some strange inode barrier at approximately 2 700 000
inodes in a partition and 1 070 000 inodes in a volume? The server is
OpenAFS 2.2.2 on Dux 5.0 1094.

Before the server started to accumulate CPU time and went into
something resembling a meltdown, the following code in viced/physio.c
wrote a message to FileLog:

    code = FDH_READ(fdP, data, PAGESIZE);
    if (code != PAGESIZE) {
        /* A negative return means the underlying read failed, so pass
         * on errno; a short (but non-negative) read becomes EIO. */
        if (code < 0)
            code = errno;
        else
            code = EIO;
        ViceLog (0,
                 ("ReallyRead(): read failed device %X inode %X errno %d\n",
                   file->dirh_handle->ih_dev,
                   PrintInode(NULL, file->dirh_handle->ih_ino), code));
        FDH_REALLYCLOSE(fdP);
        return code;
    }
So I'd like to know what actually happened here. It would have been
easier if the format string had been "inode %s" instead of
"inode %X", which is found in more than one place. So what happened
when FDH_READ returned something that was not PAGESIZE? Oh yes, the
errno I finally saw in FileLog was 5, EIO.

I can add that my users have not been particularly kind to that fileserver:
26 000 000 accesses in the past day, while my other servers typically see
2 500 000 accesses in the same period.

I have now restarted the fileserver with -nojumbo -p 23 -busyat 200
-rxpck 400. These values are guesses on my part; what do you think of
them with respect to preventing another meltdown?
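For reference, this is roughly what the fs bnode in BosConfig looks like
with those flags. The binary paths and the bnode layout here are assumptions
from a standard server installation, not copied from my machine:

```
bnode fs fs 1
parm /usr/afs/bin/fileserver -nojumbo -p 23 -busyat 200 -rxpck 400
parm /usr/afs/bin/volserver
parm /usr/afs/bin/salvager
end
```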

Harald.