[OpenAFS] Re: Client Cache Question
Timothy Balcer
timothy@telmate.com
Fri, 28 Jun 2013 12:29:38 -0700
Ok, I have tested the UDP bandwidth between the sites, and it is clear all the
way up to the 100 Mbit/s maximum, bidirectionally. I used iperf to test this,
with a 224k buffer. TCP is clear as well, if you wanted to know :)
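For reference, the iperf runs were along these lines (hostname is a
placeholder; -u selects UDP, -b sets the offered rate, -w the 224k buffer):

    # on the remote site
    iperf -s -u -w 224k
    # from the local site
    iperf -c remote.example.com -u -b 100M -w 224k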
On another VM in the same local colo, I am seeing transfers take an order of
magnitude longer (20-35 minutes vs. 2-3 minutes), even for large single files
(a 900 MB file in this case, on a 500 MB cache). An strace of a simple 'cat'
redirect from local disk into AFS shows it blocking on a write:
    read(3, "\251\334%]\212|\366\243\tk\21\221\332\35\335\366J\303\324\323g\336\345\4\221\267\372GA}\210("..., 32768) = 32768
    write(1, "\251\334%]\212|\366\243\tk\21\221\332\35\335\366J\303\324\323g\336\345\4\221\267\372GA}\210("..., 32768
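The command under trace was roughly the following (paths are placeholders;
-T makes strace print the time spent in each syscall, which is handy for
spotting the stall):

    strace -T cat /local/disk/bigfile > /afs/example.com/path/bigfile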
Info on that VM: a 500 MB memcache (no behavior change from a disk cache;
disk cache and memcache both block on the write). Note the -chunksize,
-rxpck, and -memcache options:

    /sbin/afsd -stat 1500000 -daemons 6 -volumes 256 -chunksize 13 -rxpck 6000 -memcache -afsdb -dynroot -fakestat
rsync on my original client is also blocking on a write. I'd copy/paste it
here, but it would look the same as above :)
Looking at FileLog on the fileserver, I see nothing of note: no references
to the volume in question whatsoever, and it is showing online.
With a straight scp, the file I am trying to transfer travels at about
5 MB/s to the fileserver: roughly 3 minutes of transfer time (900 MB at
5 MB/s is 180 seconds).
Using cat or rsync into AFS, it takes 25 to 35 minutes.
If I use the same method on the local fileserver, i.e. "cat" the file into
AFS via a redirect, it takes 6 SECONDS. Yes, that's seconds, not minutes.
Well, that fileserver has a 2 GB cache and different client parameters.
Let's make the remote client look the same:
    /sbin/afsd -stat 1500000 -daemons 6 -volumes 256 -chunksize 19 -rxpck 3000 -afsdb -dynroot -fakestat
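Per the afsd documentation, -chunksize is the log2 of the chunk size in
bytes, so the two configurations differ substantially in transfer
granularity:

    # -chunksize 13  ->  2^13 bytes =   8 KB chunks
    # -chunksize 19  ->  2^19 bytes = 512 KB chunks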
The cat completes almost immediately, of course (I put pv in the pipe ahead
of the redirect to watch transfer speed and completion); however, it blocks
at the very end and takes 25 minutes to complete. Also, I notice that while
the client's cache is flushing out to the fileserver, the volume takes many
seconds to 'ls', even for a specific file. The file is growing very slowly
(watched via 'watch -n 10 ls -l /path/to/afs/file') but consistently, as
opposed to before, when the cache was smaller than the file, in which case
it grew in "fits and starts".
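A rough way to put a number on that write-back rate, if anyone wants it
(the path is a placeholder):

    # print the AFS file's growth in KB/s, sampled every 10 seconds
    f=/path/to/afs/file
    prev=$(stat -c %s "$f")
    while sleep 10; do
        cur=$(stat -c %s "$f")
        echo "$(( (cur - prev) / 10 / 1024 )) KB/s"
        prev=$cur
    done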
I have monitoring on all my AFS fileservers to watch for irregularities;
they are all showing clear. rxdebug shows 0 blocked connections.
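For the record, that check was simply (hostname is a placeholder; 7000 is
the standard fileserver port):

    rxdebug fs1.example.com 7000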
So, I am left with the conclusion that there is "something" in how my AFS
client is talking to my remote fileserver that is seriously mucking things
up. Ping time is about 64 ms, by the way, and the link is a private MPLS
circuit, which is stable and not heavily used. As I said before, a simple
rsync of this amount of content to a local (non-AFS) volume on the same
fileserver takes a couple of minutes. I even tested UDP-based file
transfers; those were even faster than rsync.
FYI, I do not have encryption turned on.
Please let me know what other information you might need to help. I am very
stumped as to what could be causing this.
PS: I am going to strace the cat mentioned above, and also run a few tests
against other colos and fileservers, including one that is local to the
client in question, to see what changes.
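To be concrete, I'll be doing something like this (paths are placeholders;
-c prints a per-syscall time summary, which should show where the time goes):

    strace -c cat /local/disk/bigfile > /afs/example.com/path/bigfile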
On Tue, Jun 25, 2013 at 2:03 PM, Timothy Balcer <timothy@telmate.com> wrote:
> I'm aware of UDP protocol problems; I've tested network throughput and
> responsiveness for UDP/TCP several different ways, and it's dandy.
>
> Ok, I'll do a few straced rsyncs and we'll see what happens :)
>
>
> On Tue, Jun 25, 2013 at 1:59 PM, Andrew Deason <adeason@sinenomine.net> wrote:
>
>> On Tue, 25 Jun 2013 13:49:06 -0700
>> Timothy Balcer <timothy@telmate.com> wrote:
>>
>> > So, this means that there is no network or I/O reason on the
>> > destination or source that could be the particular culprit.
>>
>> That's not necessarily true; even discarding the issues with Rx (the
>> transport protocol AFS uses), some networking issues can be UDP-only,
>> which would affect AFS but not straight rsync. You may find that very
>> unlikely, and I certainly try not to jump straight to that conclusion
>> right away, but it has happened before.
>>
>> > I'm running an fstrace right now, but as you said, it's very cryptic.
>>
>> Yeah, I meant that for an fstrace you'd need to give it to an openafs
>> developer (give it to me, or to the list, or whatever). That would
>> contain filenames and volume names, but not file data.
>>
>> > Are you suggesting I run an strace on the rsync process? Or that I
>> > attach a strace to all the afs background daemons?
>>
>> Just strace the rsync process. Just to see if we're slow on, say,
>> open/write/close, or something you may not expect, like lstat or
>> getdents or something. That would also tell you if a specific file is
>> slower than all of the others. Before we can look at what it is within
>> AFS that's making things so slow, it would help to know what operation
>> is actually being slow.
>>
>> Obviously if you share the raw strace output with someone, that would
>> contain path names and a little bit of the file data. I was intending
>> for you to look at that and see if anything is unusual, or just report
>> what seems "slow".
>>
>> --
>> Andrew Deason
>> adeason@sinenomine.net
>>
>> _______________________________________________
>> OpenAFS-info mailing list
>> OpenAFS-info@openafs.org
>> https://lists.openafs.org/mailman/listinfo/openafs-info
>>
>
>
>
> --
> Timothy Balcer / IT Services
> Telmate / San Francisco, CA
> Direct / (415) 300-4313
> Customer Service / (800) 205-5510
>
--
Timothy Balcer / IT Services
Telmate / San Francisco, CA
Direct / (415) 300-4313
Customer Service / (800) 205-5510