[OpenAFS-devel] how does fileserver read from disk?

chas williams - CONTRACTOR chas@thirdoffive.cmf.nrl.navy.mil
Wed, 21 Sep 2005 07:37:08 -0400


In message <65FF232D-00B7-4C5D-8020-6786970D4740@e18.physik.tu-muenchen.de>, Roland Kuhn writes:
>I know already that 16k-reads are non-optimal ;-) What I meant was  
>doing chunksize (1MB in my case) reads. But what I gather from this  
>discussion is that this would really be some work as this read-ahead  
>would have to be managed across several rx jumbograms, wouldn't it?

i wouldn't think it's a step in the right direction to implement caching
in the threads on the fileserver.  clients make larger requests than 16k,
but the fileserver limits them (see the -sendsize option).  of course,
this has an upper bound given by RX_MAXIOVECS * RX_MAX_PACKET_DATA_SIZE
(which is about 22592 bytes).

the cache manager on the client knows if the user wants more data
(well, somewhat; this appears to be tuned by the chunksize and
readahead code on the client).  the fileserver shouldn't be responsible
for attempting to guess the behavior of the cache managers.  further,
i think this is asking too much of the fileserver, which should probably
be kept as simple as possible.

the right solution here would seem to be to fix the problem in rx.
solaris limits RX_MAXIOVECS to 16 (checking the solaris10 source,
there doesn't seem to be a good reason for this, but it's there
nonetheless).  so the rx packet size would have to change.  this might
require some other changes, like the flow control code, since it
counts packets and essentially expects rx packets to be a fixed size.