[OpenAFS] Afs User volume servers in VM's
Wed, 26 Oct 2011 12:02:18 -0400
On 10/26/2011 10:34 AM, Booker Bense wrote:
> I am sure I'm far from the first person to think of this and there are
> some threads on the list about it. But has anyone
> gone to the logical conclusion for user volumes and done
> one VM , one server per user home volume ?
This is not a practical use of resources. There is a limit of ~255 file
servers in a cell, which will not cover all of the users in most cells.
Fixing this requires an upgrade to the volume location database and
updates to the associated RPCs.
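To illustrate why the ceiling exists, here is a hypothetical sketch (not
OpenAFS code): if the location database refers to servers by a small
fixed-width index, the index space itself caps the number of file servers
a cell can register, regardless of available hardware.

```python
# Toy model of a fixed-width server index in a location database.
# The names (ServerTable, register) are illustrative, not OpenAFS APIs.
MAX_SERVERS = 255  # an 8-bit index, with one value typically reserved


class ServerTable:
    def __init__(self):
        self.servers = []

    def register(self, addr):
        """Assign the next free index to a file server address."""
        if len(self.servers) >= MAX_SERVERS:
            # No index values left: the cell cannot add another server
            # without changing the database format and the RPCs.
            raise OverflowError("server index space exhausted")
        self.servers.append(addr)
        return len(self.servers) - 1  # index stored in each volume entry
```

Raising the limit means widening that index everywhere it appears on the
wire and on disk, which is why it is a database-and-RPC change rather
than a configuration knob.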
> A batch system of any reasonable size is pretty much a built in
> denial of service attack for the current OpenAFS implementation.
Before deciding on a solution, what needs to be understood is which
resource or resources become the bottleneck. Only then can a proper
solution be implemented. We know there are a variety of bottlenecks in
the AFS file server:
1. rx related
2. file server host management
3. callback processing
4. vol package
5. dir package
6. disk i/o channel
7. SMP scalability, or the lack thereof
which are affected by the configuration of the file server and the
system as a whole.
We also know from Andrei Maslennikov's HEPiX Storage WG studies with CMS
and ATLAS that the configuration of the afs clients must be tuned to the
job being processed. A mismatch of the client configuration to the job
can result in significant additional work being performed by the file
server for no gain to the application issuing the file system requests.
> We work around this by user education and having a "jail server"
> where we move user volumes that are getting hammered. But this
> requires a lot of monitoring and admin shuffling, and badly affects the
> user perception of AFS as a service.
> Ideally, you'd like one mini-server per user volume and at least the
> user would only shoot himself in the foot. I don't think this is
> particularly practical even with current VM's, but how far can you push
> it? And in particular when one VM goes south how does that affect the
> rest of the VM's on the machine?
One file server per volume is impractical. Scaling up the number of file
servers hurts client performance, because each client must maintain the
up/down status of every file server it comes into contact with. I am not
aware of any clients that have been tested in environments with more
than a thousand file servers across all of the cells the client is in
contact with.
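The bookkeeping burden can be sketched as follows. This is a hypothetical
model (the class and method names are illustrative, not the OpenAFS cache
manager's actual data structures): the client keeps per-server state and
must periodically re-probe every server it has ever talked to, so probe
traffic grows linearly with the server count.

```python
# Toy model of per-server liveness tracking in an AFS-like client.
class ServerStatus:
    def __init__(self, probe_interval=600):
        self.probe_interval = probe_interval
        self.last_probe = {}  # server -> timestamp of last probe
        self.is_up = {}       # server -> last known up/down status

    def note_contact(self, server):
        """Start tracking a server the first time we talk to it."""
        self.is_up.setdefault(server, True)
        self.last_probe.setdefault(server, 0.0)

    def servers_due_for_probe(self, now):
        # Every tracked server must eventually be probed again, so the
        # probe workload scales with the number of servers contacted.
        return [s for s, t in self.last_probe.items()
                if now - t >= self.probe_interval]
```

With one server per user volume, a client touching a few thousand home
directories would carry a few thousand entries in this table, each
generating recurring probe traffic.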
Hosting multiple file server VMs on one machine is a practical way of
improving utilization on systems with large numbers of CPU cores. There
is no benefit in deploying currently shipping OpenAFS file servers on
machines with more than four cores.