[OpenAFS] Re: Performance issues
Jaap Winius
jwinius@umrk.nl
Wed, 24 Aug 2011 01:10:04 +0200
Quoting Andrew Deason <adeason@sinenomine.net>:
> What approximate throughput are you seeing in the two cases? ...
I've been monitoring bandwidth usage with Cacti, which shows that even
internal throughput does not normally exceed 3 Mbps. The graphs for
the main Internet connections are pretty much the same and the
available bandwidth is almost never maxed out. It's probably safe to
assume, however, that there are lots of spikes that don't show up in
any of these graphs.
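
Since the Cacti graphs are averages, a couple of quick spot checks from
the command line might tell us more about what the links actually look
like. Roughly what I have in mind (fs1.example.com just stands in for
our real fileserver; 7000 is the standard fileserver port):

  # round-trip latency from a remote client to the fileserver
  ping -c 20 fs1.example.com

  # Rx's own view from the fileserver: per-peer info (including
  # round-trip estimates) and overall Rx statistics
  rxdebug fs1.example.com 7000 -peers -long
  rxdebug fs1.example.com 7000 -rxstats

If the remote clients show high round-trip times but little packet
loss, that would fit the latency theory.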
> ... I would guess that this has more to do with the bandwidth and
> latency, but unless there are more dropped or reordered packets or
> something, I'm not sure either way what we would see from the OpenAFS
> fileserver point of view besides worse bandwidth/latency.
After both the fs and db servers were moved from the virtual hosts
(and from behind the firewall and NAT) to the bare metal, we saw a
marked improvement in local performance. For example, certain
applications, particularly the Iceweasel (Firefox) browser, only
became usable after the fs server was moved. But as I said, after the
changes we've seen no performance improvement when the user
volumes are accessed remotely.
(Interestingly, while dealing with poor file server performance we
found that Chrome and Epiphany worked far better than the other two
browsers we tested, Iceweasel and Opera.)
I'm now leaning towards the idea that latency is the main factor
slowing down our remote performance. Latency matters more as a
protocol becomes chattier, and I've not been surprised to see AFS
described that way. Running GUI desktops (even a lightweight one like
Xfce) on top of AFS home directories instead of simple shell
environments doesn't help either, since they generate a steady stream
of small file and metadata operations that each pay the round-trip
penalty. Originally I was hoping the AFS client cache would do more
to compensate, but I guess that was a bit too much to expect from it.
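
Still, before writing the cache off it might be worth checking how it
is configured on the clients. Just a rough sketch of what I mean,
assuming a disk cache and the Debian paths (the size below is only a
placeholder for whatever suits our machines):

  # how much of the client cache is currently in use
  fs getcacheparms

  # cache partition and size, in 1K blocks
  cat /etc/openafs/cacheinfo

  # enlarge the cache on a running client (again in 1K blocks)
  fs setcachesize 4000000

Raising afsd's -chunksize (the value is the log2 of the chunk size in
bytes, so 20 means 1 MB chunks) is also sometimes suggested for
high-latency links.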
Cheers,
Jaap