[OpenAFS] Windows client network behaviour
Wed, 21 Sep 2011 11:23:06 -0400
On 9/21/2011 7:09 AM, Anders Magnusson wrote:
> In the hunt for oddities regarding the new IFS Windows client I have
> observed a problem causing bad performance, and hopefully someone has
> some idea about what is going on.
> Server: OpenAFS 18.104.22.168, CentOS 5.3
> Client: Windows 7, OpenAFS 1.7.1
> The test case is to write an ISO image (700MB) to afs from local disk.
What size is the cache? Is the ISO larger than the cache?
What is the chunksize?
What is the blocksize?
> If the switch port is set to 100Mbit I will get ~3Mbyte/s, but if it is
> set to 1Gbit then I get ~10Mbyte/s.
> Both these numbers are much lower than they should be, and in
> particular I cannot understand why the speed in the 100Mbit
> configuration is so much lower than with 1Gbit.
More than likely it is because the RPC round trip time is longer and
therefore each store operation incurs more latency.
> Before someone asks; there are no network limits here and both client
> and server are on the same subnet.
> I have run tcpdump on both client and server and seen this traffic
> For 100Mbit:
> - A data packet is sent out periodically at an almost exact rate of
> one 1472-byte packet per 420 microseconds, which gives something
> close to 3Mbyte/s.
> For 1Gbit:
> - The same as for 100Mbit, except that the packet rate is one packet
> per 91 microseconds.
> The ack packet from the file server is sent back 12 microseconds after
> each second data packet.
How long does it take for each StoreData RPC to complete?
> I have uninstalled the QoS module on the Windows interface.
> Any hints, anyone? I think this smells like traffic shaping due to
> the quite exact transmit rate, but since the QoS module is
> uninstalled and the behaviour is seen on the Windows network
> interface, I have no clue where it may be.
> A side note: going via an SMB-AFS gateway on the same network gives
> significantly better performance.
The SMB client behavior is very different. The SMB redirector sends
data in 64K chunks to the SMB server, which are then written to the
file server semi-synchronously. As a result there is much less pressure
on the cache regardless of its size. For the IFS client at present, all
700MB goes into the Windows page cache and swallows the entire AFS
cache at once. Things degrade at that point, waiting for each RPC to
complete in order to make room for new data.
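To see why per-RPC latency dominates once the cache is full, consider a rough model of the store as sequential chunk-sized StoreData RPCs. The 64KB chunk size and the per-RPC times below are illustrative assumptions, not measurements from this thread:

```python
# Rough model: time to store a file as sequential chunk-sized RPCs,
# where each chunk must wait for its StoreData RPC to complete.
# Chunk size and per-RPC times are illustrative assumptions.

FILE_SIZE = 700 * 2**20   # the 700MB ISO image from the test case
CHUNK_SIZE = 64 * 2**10   # assumed 64KB chunk size


def store_time_s(rpc_time_s: float) -> float:
    """Total wall-clock time if chunks are stored strictly one RPC at a time."""
    rpcs = FILE_SIZE // CHUNK_SIZE  # 11,200 RPCs for 700MB at 64KB chunks
    return rpcs * rpc_time_s


# Even a few milliseconds per round trip adds up over 11,200 RPCs:
for rtt_ms in (1, 5, 20):
    t = store_time_s(rtt_ms / 1000)
    print(f"{rtt_ms:2d} ms per RPC -> {t:5.0f} s total, {FILE_SIZE / t / 2**20:5.1f} MB/s")
```

Under this model, going from 1 ms to 20 ms per RPC drops the effective rate from ~62 MB/s to ~3 MB/s, which is why a slower round trip on the 100Mbit port hurts far more than the raw link-speed ratio would suggest.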
If your cache size is large enough and the file servers are responsive,
it is possible to obtain 40MB/sec write speeds on 1Gbit links. I am
aware of where the bottlenecks are, but it is going to take time for me
to address them.
I will refer people to a blog post I wrote back in March 2008.