[OpenAFS] Re: Client Cache Question

Andrew Deason adeason@sinenomine.net
Fri, 28 Jun 2013 18:15:26 -0500


On Fri, 28 Jun 2013 12:29:38 -0700
Timothy Balcer <timothy@telmate.com> wrote:

> On another VM in the same local colo, I am seeing an order of
> magnitude (20 - 35 minutes vs 2-3 minutes) longer transfers of even
> large single files (900M file in this case, on a cache of 500M).
> Strace on a simple 'cat' redirect from a local disk into AFS shows
> blocking on a write:

I'm not sure if I'm following this entirely; you're getting an order of
magnitude slower when cat'ing to /afs compared to... scp?

I thought before you were just trying to see why performance varied so
wildly with different parameters; if you're just asking why the client
performs so slowly compared to non-AFS methods, then from the numbers
above I don't think there's any configuration change that will fix
that.

If you have a consistent RTT of 64ms, the maximum theoretical throughput
you'll get from an AFS transfer with the code you're running is less
than 700 KiB/s if I did my math right (32 * ~1400 bytes / 0.064s).
Additionally, each new file will take at least 128-192ms
(1 RTT for creating the file, 1 RTT for sending data, and possibly 1
more RTT for changing the file status). Note that those are theoretical
limits; that is, they assume the client and server run infinitely fast.
From the numbers above, it looks like your transfers fall within the
expected range.
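
To make that arithmetic concrete, here's a quick back-of-the-envelope
sketch in Python (just for illustration; the 32-packet window and
~1400-byte payload are approximations of what the Rx code you're
running uses):

    # Window-limited Rx throughput and per-file cost at a 64ms RTT.
    # Assumptions: 32-packet send window, ~1400 bytes of payload per
    # packet; real numbers will vary.
    WINDOW_PACKETS = 32
    PAYLOAD_BYTES = 1400
    RTT = 0.064  # seconds

    # At most one window of data can be in flight per round trip.
    throughput = WINDOW_PACKETS * PAYLOAD_BYTES / RTT
    print("max throughput: %.0f KiB/s" % (throughput / 1024.0))  # ~684

    # A new file costs 2-3 RTTs before the data rate even matters: one
    # for the create, one for the data, maybe one more for file status.
    for rtts in (2, 3):
        print("%d RTTs = %.0f ms minimum per file" % (rtts, rtts * RTT * 1000))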

AFS network communication is currently very heavily affected by network
latency. Anything over TCP is going to appear much faster because the
TCP window sizes will be huge, relatively speaking. Any performance
benchmark using raw UDP is also going to appear much faster, because it
either doesn't track data windows at all, or it uses a protocol on top
of UDP with a much larger window than Rx's.
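
As a rough illustration of why window size dominates at this latency
(the 4 MiB TCP window below is an invented but plausible autotuned
size, not something I measured):

    # Window-limited throughput at a 64ms RTT: the Rx window from
    # above vs. a hypothetical 4 MiB TCP window.
    RTT = 0.064  # seconds
    for name, window_bytes in [("Rx, 32 * 1400 bytes", 32 * 1400),
                               ("TCP, 4 MiB window", 4 * 1024 * 1024)]:
        print("%-20s %10.0f KiB/s" % (name, window_bytes / RTT / 1024))

    # prints roughly:
    #   Rx, 32 * 1400 bytes         684 KiB/s
    #   TCP, 4 MiB window         65536 KiB/s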

Does that clear up any confusion you may have had through all of this?
AFS' poor high-latency network performance has been known for a while;
there are / have been a few efforts to improve it, one of which I'm
working on right now. I can talk more about that, but if this stuff
isn't what you were asking about, then never mind :)

-- 
Andrew Deason
adeason@sinenomine.net