[OpenAFS-devel] .35 sec rx delay bug?

chas williams - CONTRACTOR chas@cmf.nrl.navy.mil
Wed, 08 Nov 2006 08:20:06 -0500


In message <20061108.113049.26358917.haba@habarber.pdc.kth.se>, Harald Barth writes:
>Derick wrote that things get better when jumbograms are _dis_abled, did he?

yes, things get better when jumbograms are disabled, but only because
some hardware in the middle is misbehaving.

>Ehm, we have to be careful not to confuse RX_MAX_FRAGS with jumbograms.

but RX_MAX_FRAGS is used to determine the maximum size of a jumbogram:
no jumbogram will exceed RX_MAX_FRAGS * ifMTU bytes.
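
to make that concrete, here is a rough sketch of the bound (the
RX_MAX_FRAGS value of 4 is an assumption for illustration, check
rx_packet.h in your tree for the real one):

    #include <stdio.h>

    #define RX_MAX_FRAGS 4          /* assumed value, see rx_packet.h */

    int
    main(void)
    {
        int ifMTU = 1500;           /* typical ethernet interface MTU */

        /* no jumbogram should exceed RX_MAX_FRAGS * ifMTU bytes */
        printf("max jumbogram for ifMTU %d: %d bytes\n",
               ifMTU, RX_MAX_FRAGS * ifMTU);
        return 0;
    }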

>I'd vote for a patch that sets RX-size < min(MTU(sender), MTU(receiver)),
>because I have only seen grief about dropped fragments and I don't see
>any performance gains from RX-size > MTU with today's hardware.

actually, afs claims to know about mismatched sender/receiver MTUs:

            /*
             * As of AFS 3.5, a jumbogram is more than one fixed size
             * packet transmitted in a single UDP datagram. If the remote
             * MTU is smaller than our local MTU then never send a datagram
             * larger than the natural MTU.
             */
            rx_packetread(np,
                          rx_AckDataSize(ap->nAcks) + 3 * sizeof(afs_int32),
                          sizeof(afs_int32), &tSize);
            maxDgramPackets = (afs_uint32) ntohl(tSize);
            maxDgramPackets = MIN(maxDgramPackets, rxi_nDgramPackets);
            maxDgramPackets =
                MIN(maxDgramPackets, (int)(peer->ifDgramPackets));
            maxDgramPackets = MIN(maxDgramPackets, tSize);
            if (maxDgramPackets > 1) {
                peer->maxDgramPackets = maxDgramPackets;
                call->MTU = RX_JUMBOBUFFERSIZE + RX_HEADER_SIZE;
            } else {
                peer->maxDgramPackets = 1;
                call->MTU = peer->natMTU;
            }

this doesn't seem to be doing what it should.  it looks right only if
you assume that ifDgramPackets = 1 for mtu 1500 peers.  unfortunately
the afs mtu/dgram computations are a bit obscure and scattered
throughout the code.
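
for comparison, a direct implementation of harald's
min(MTU(sender), MTU(receiver)) rule might look like the sketch below.
local_mtu, remote_mtu and requested are placeholder names for
illustration, not fields of the real rx peer structure:

    #include <stdio.h>

    /* never use a datagram size larger than the smaller of the two
     * natural MTUs, no matter what size was requested */
    static int
    clamp_dgram_size(int local_mtu, int remote_mtu, int requested)
    {
        int cap = (local_mtu < remote_mtu) ? local_mtu : remote_mtu;
        return (requested < cap) ? requested : cap;
    }

    int
    main(void)
    {
        /* a jumbo-capable sender (mtu 9000) talking to an mtu 1500
         * receiver should never put more than 1500 bytes in one
         * datagram; prints 1500 */
        printf("%d\n", clamp_dgram_size(9000, 1500, 5600));
        return 0;
    }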