[OpenAFS] Re: WAN speed

jukka.tuominen@finndesign.fi
Sat, 24 Mar 2012 02:59:19 +0200 (EET)


> On Fri, 23 Mar 2012 01:02:10 +0200 (EET)
> jukka.tuominen@finndesign.fi wrote:
>
>> > 109M single file:
>> >
>> > SSH to AFS ~525KB/s
>> > AFS to SSH ~800KB/s
> [...]
>> I tried turning off encryption, but it didn't make a notable
>> difference with either file transfer test
>
> Okay, well that's obviously better but still much lower than the
> theoretical. There are a few more switches you can fiddle with:
>
>  - increase the chunksize by passing -chunksize to afsd (you have to
>    restart the client to change this). Try '-chunksize 20' for 1M, or
>    '-chunksize 23' for 8M, or something around there.

Thanks for the help, I got it working now.
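
For reference, this is roughly how I ended up starting the client (a sketch;
on my system afsd's options come from the distribution's init script, so the
exact place to set them may differ):

   # restart the cache manager with an 8 MB chunk size (2^23 bytes)
   afsd -chunksize 23 <my other usual options>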

-chunksize 23 transfer speeds:

20M/2000 files:

SSH to AFS ~150KB/s
AFS to SSH ~285KB/s


109M single file:
SSH to AFS ~533KB/s
AFS to SSH ~840KB/s

The 109M transfer to AFS actually seems to hang at the very end. Nautilus
looks fine until it reaches 100%, but then the progress dialog never goes
away (I have to kill the process). Perhaps it reads the whole file into RAM
and writes it to disk afterwards, or something like that. This isn't
specific to the changed chunksize.
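
Next time it hangs I'll try to gather more data. A guess at checks that
might be useful (assuming the standard client debugging tools are available):

   # does the cache manager think the fileserver is unreachable?
   fs checkservers

   # show cache entries stuck waiting on a lock (cache manager port 7001)
   cmdebug localhost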

>
>  - try memcache instead of disk cache (I assume you're using disk
>    cache). Do this by passing -memcache to afsd, but this will use RAM
>    instead of local disk for the caching stuff, so you might need to
>    lower the cache size in 'cacheinfo'.
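
For the record, here is roughly how I switched to the memory cache (a
sketch; the cacheinfo path and init mechanism may differ per distribution):

   # /etc/openafs/cacheinfo -- format: mountpoint:cachedir:size-in-1K-blocks
   /afs:/var/cache/openafs:100000

   # start the cache manager with a memory cache instead of the disk cache
   afsd -memcache <my other usual options>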

With a 100 000 (1K-block) memcache (chunksize returned to the default):

20M/2000 files:

SSH to AFS ~160KB/s
AFS to SSH ~260KB/s


109M single file:

SSH to AFS ~520KB/s until nautilus hung halfway through. Reboot needed.
AFS to SSH ~600KB/s


>
>  - try 'cache bypass' (with disk cache) which you can turn on with 'fs
>    bypassthresh'. But keep in mind cache bypass is still a new feature
>    and may not be entirely stable.
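
I turned it on with something like the following (the size is, I assume, a
file-size threshold in bytes above which the cache is bypassed; 'fs help
bypassthreshold' should show the exact syntax on your release):

   # bypass the cache for files larger than ~1 MB (assumed units: bytes)
   fs bypassthresh 1048576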

20M/2000 files:

SSH to AFS ~150KB/s
AFS to SSH ~350KB/s


109M single file:

SSH to AFS ~520KB/s until nautilus hung halfway through. Reboot needed.
AFS to SSH ~1.1MB/s until nautilus hung halfway through. Reboot needed.

I believe something got corrupted at the server end after the client
reboots, so I restored the server to the previous snapshot.

The last setup (cache bypass) seemed a bit more responsive, I think.

>
> If it still seems slow no matter what you try, what may be helpful is if
> you can provide the output of:
>
> rxdebug <client> 7001 -rxstat -noconn
> rxdebug <server> 7000 -rxstat -noconn
>
> So we could see the rate of resends and such for Rx. Provide that output
> both immediately before and immediately after you try a transfer.

I hope this helps:

Before:

~$ rxdebug localhost 7001 -rxstat -noconn
Trying 127.0.0.1 (port 7001):
Free packets: 492/499, packet reclaims: 95, calls: 5, used FDs: 64
not waiting for packets.
0 calls waiting for a thread
1 threads are idle
0 calls have waited for a thread
rx stats: free packets 492, allocs 40820, alloc-failures(rcv 0/5,send
0/0,ack 0)
   greedy 0, bogusReads 0 (last from host 0), noPackets 0, noBuffers 10,
selects 0, sendSelects 0
   packets read: data 23070 ack 446 busy 0 abort 4 ackall 0 challenge 1
response 0 debug 2 params 0 unused 0 unused 0 unused 0 version 0
   other read counters: data 21835, ack 443, dup 815 spurious 1238 dally 1
   packets sent: data 4714 ack 15953 busy 0 abort 0 ackall 0 challenge 0
response 1 debug 0 params 0 unused 0 unused 0 unused 0 version 0
   other send counters: ack 15953, data 4671 (not resends), resends 43,
pushed 0, acked&ignored 1069
           (these should be small) sendFailed 0, fatalErrors 0
   Average rtt is 0.006, with 632 samples
   Minimum rtt is 0.002, maximum is 0.028
   1 server connections, 6 client connections, 3 peer structs, 8 call
structs, 6 free call structs

$ rxdebug <server> 7000 -rxstat -noconn
Trying ... (port 7000):
Free packets: 505/1357, packet reclaims: 12, calls: 8434, used FDs: 53
not waiting for packets.
0 calls waiting for a thread
25 threads are idle
0 calls have waited for a thread
rx stats: free packets 505, allocs 97164, alloc-failures(rcv 0/0,send
0/0,ack 0)
   greedy 0, bogusReads 0 (last from host 0), noPackets 0, noBuffers 0,
selects 0, sendSelects 0
   packets read: data 56152 ack 32031 busy 0 abort 2 ackall 0 challenge 6
response 4 debug 2 params 0 unused 0 unused 0 unused 0 version 0
   other read counters: data 56148, ack 31513, dup 3119 spurious 518 dally 0
   packets sent: data 48038 ack 30243 busy 4 abort 30 ackall 0 challenge
54 response 6 debug 0 params 0 unused 0 unused 0 unused 0 version 0
   other send counters: ack 30243, data 42312 (not resends), resends 5726,
pushed 0, acked&ignored 33479
           (these should be small) sendFailed 0, fatalErrors 0
   Average rtt is 0.004, with 33057 samples
   Minimum rtt is 0.000, maximum is 0.077
   3 server connections, 8 client connections, 4 peer structs, 8 call
structs, 7 free call structs


Afterwards:


$ rxdebug localhost 7001 -rxstat -noconn
Trying 127.0.0.1 (port 7001):
Free packets: 493/499, packet reclaims: 95, calls: 5, used FDs: 64
not waiting for packets.
0 calls waiting for a thread
1 threads are idle
0 calls have waited for a thread
rx stats: free packets 493, allocs 65586, alloc-failures(rcv 0/5,send
0/0,ack 0)
   greedy 0, bogusReads 0 (last from host 0), noPackets 0, noBuffers 10,
selects 0, sendSelects 0
   packets read: data 28251 ack 7092 busy 0 abort 6 ackall 0 challenge 1
response 0 debug 4 params 0 unused 0 unused 0 unused 0 version 0
   other read counters: data 27016, ack 7076, dup 815 spurious 1242 dally 10
   packets sent: data 19819 ack 21165 busy 0 abort 0 ackall 0 challenge 0
response 1 debug 0 params 0 unused 0 unused 0 unused 0 version 0
   other send counters: ack 21165, data 19044 (not resends), resends 775,
pushed 0, acked&ignored 14718
           (these should be small) sendFailed 0, fatalErrors 0
   Average rtt is 0.005, with 9719 samples
   Minimum rtt is 0.002, maximum is 0.028
   1 server connections, 6 client connections, 3 peer structs, 8 call
structs, 5 free call structs


$ rxdebug <server> 7000 -rxstat -noconn
Trying ... (port 7000):
Free packets: 509/1357, packet reclaims: 12, calls: 13304, used FDs: 64
not waiting for packets.
0 calls waiting for a thread
25 threads are idle
0 calls have waited for a thread
rx stats: free packets 509, allocs 116600, alloc-failures(rcv 0/0,send
0/0,ack 0)
   greedy 0, bogusReads 0 (last from host 0), noPackets 0, noBuffers 0,
selects 0, sendSelects 0
   packets read: data 71026 ack 37160 busy 0 abort 2 ackall 0 challenge 6
response 4 debug 4 params 0 unused 0 unused 0 unused 0 version 0
   other read counters: data 71022, ack 36642, dup 3684 spurious 518 dally 0
   packets sent: data 53161 ack 36889 busy 4 abort 32 ackall 0 challenge
54 response 6 debug 0 params 0 unused 0 unused 0 unused 0 version 0
   other send counters: ack 36889, data 47435 (not resends), resends 5726,
pushed 0, acked&ignored 33734
           (these should be small) sendFailed 0, fatalErrors 0
   Average rtt is 0.004, with 33312 samples
   Minimum rtt is 0.000, maximum is 0.077
   2 server connections, 8 client connections, 4 peer structs, 8 call
structs, 7 free call structs


rxperf I didn't try yet.

br, jukka

>
> You can also try a benchmark using 'rxperf' to see how fast rx itself
> goes, which could help rule out what the bottleneck is during the
> transfer. If you want to try that but are not sure how to use it, let us
> know.
>
> --
> Andrew Deason
> adeason@sinenomine.net
>
> _______________________________________________
> OpenAFS-info mailing list
> OpenAFS-info@openafs.org
> https://lists.openafs.org/mailman/listinfo/openafs-info
>