[OpenAFS] UDP slowness remedies?
Dan Pritts
danno@internet2.edu
Fri, 22 Oct 2010 14:44:46 -0400
Others will have to chime in on tuning the server for vos move. I have
one thought that might or might not help - do something like this from a
host at the source site. I am far from sure of the syntax, but you get
the idea:

vos dump srv part vol | ssh host-at-target-site vos restore srv part vol
hmm, probably simplest to have the target-site host be your fileserver,
or otherwise have a copy of the server key, and use -localauth.
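
A more concrete sketch, with made-up hostnames, volume name, and
partition (double-check the flags against the vos man pages before
trusting this) - run as root on the source fileserver so -localauth
works on both ends:

    # full dump to stdout (no -file), piped to a restore reading stdin
    vos dump -id myvol -time 0 -localauth | \
        ssh root@target-fs.example.edu \
            "vos restore -server target-fs.example.edu -partition /vicepa \
             -name myvol -localauth"

With no -file argument, vos dump writes to stdout and vos restore reads
from stdin, so nothing has to land on disk in between.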
If the volume is actively being written to, this will probably not do
what you want, and you'll run into issues with ssh limiting your
throughput, too, but it's a possible option.
re ssh transfer slowness, see PSC's HPN-SSH patches to openssh.
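
If patching ssh isn't practical, a common stopgap (worth weighing, since
it weakens the crypto) is telling ssh to use a cheaper cipher so
encryption isn't the bottleneck - i.e., the same pipeline as above with
one flag added:

    # arcfour is fast but cryptographically weak; judge for yourself
    ssh -c arcfour root@target-fs.example.edu "vos restore ..."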
also, "never underestimate the bandwidth of a station wagon full of =
magnetic tape." Which I wouldn't want to do, either, but I like saying =
it :)
On Oct 22, 2010, at 1:44 PM, Eric Chris Garrison wrote:
> On 10/22/10 1:29 PM, Dan Pritts wrote:
>> I'm guessing you are going between Bloomington and Indianapolis so
>> latency shouldn't be too high, but even 10ms surely will add up if the
>> conversation goes back and forth a million times.
>
> That's correct.
>
>> I'm pretty sure you can run multiple vos moves in parallel, which would
>> help dramatically.
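
A minimal sketch of running them in parallel, with hypothetical volume
and server names - keep the parallelism modest so you don't overload the
fileservers:

    # run four moves at a time; xargs substitutes one volume name per {}
    printf '%s\n' vol.one vol.two vol.three vol.four |
        xargs -P4 -I{} \
            vos move -id {} \
                -fromserver src-fs.example.edu -frompartition /vicepa \
                -toserver dst-fs.example.edu -topartition /vicepa \
                -localauth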
>
> We have some half-TB volumes, though; those are still going to take days
> at this rate.
>
>> as far as your iperf results, my experience is that tuning UDP buffers
>> generally is not necessary; the defaults are usually sufficient to get
>> hundreds of megabits.
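
For reference, on Linux the relevant knobs look like this (illustrative
values, not a recommendation):

    # current socket buffer ceilings, in bytes
    sysctl net.core.rmem_max net.core.wmem_max
    # raise them temporarily to experiment
    sysctl -w net.core.rmem_max=8388608 net.core.wmem_max=8388608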
>
> Is there a way to make vos moves do this? I'm using the mvto script,
> modified to use -localauth since the moves far exceed ticket lifetime.
> I could modify it further if vos has any options for these big volumes.
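
An alternative to -localauth, if your KDC allows it, is asking for a
longer ticket up front (principal name is hypothetical; the KDC's
maxlife caps what you actually get):

    kinit -l 7d admin@EXAMPLE.EDU   # request a week-long TGT
    aklog                           # convert it to an AFS token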
>
>> in UDP mode, iperf does not attempt to scale the bandwidth; it tries to
>> send at whatever bandwidth you specify on the command line. The
>> default is 1Mbit/sec...is it possible that's where your 1mbit result
>> came from?
>
> Yes, I used the default for the initial 1Mbit/sec result. In my email
> to this list, I reported incorrectly that I changed the buffer size for
> the second test; it was actually bandwidth. I used the -b 1024M option
> on iperf for the second try, like I mentioned, and it was nearly line
> rate, which is why I asked here what I could do to speed up these AFS
> transfers, since faster should be possible.
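
For the archives, the kind of UDP test that shows this (hostname made
up; -b sets the offered load, and the default really is 1 Mbit/sec):

    iperf -s -u                                        # receiver
    iperf -c target-fs.example.edu -u -b 1024M -t 30   # sender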
>
> Thanks,
>=20
> Chris