[OpenAFS] AFS user volume servers in VMs

Jeffrey Altman jaltman@your-file-system.com
Wed, 26 Oct 2011 12:02:18 -0400


On 10/26/2011 10:34 AM, Booker Bense wrote:
>
> I am sure I'm far from the first person to think of this and there are
> some threads on the list about it. But has anyone
> gone to the logical conclusion for user volumes and done
> one VM, one server per user home volume?

This is not a practical use of resources.  There is a limit of ~255 file
servers in a cell, which falls well short of the number of users in most
cells.

Fixing this requires a database upgrade and RPC updates for the
management tools.
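
To make the limit concrete, here is a minimal sketch of the kind of
on-disk record that imposes it, assuming volume sites are stored as
single-byte indexes into a per-cell server table.  The names and layout
below are illustrative only, not the actual OpenAFS vlserver
definitions.

    /* Illustrative sketch, hypothetical names; not OpenAFS source. */
    #include <stdint.h>

    #define MAX_SERVER_ID 254   /* 8-bit index; one id reserved */
    #define MAX_SITES      13   /* sites carried per volume entry */

    struct vldb_entry_sketch {
        uint8_t  site_server[MAX_SITES];    /* index into cell server table */
        uint8_t  site_partition[MAX_SITES]; /* vice partition at that site  */
        uint32_t volume_id[3];              /* RW, RO, and BK volume ids    */
    };

    /* Widening site_server past 8 bits changes the database record
     * layout and every RPC that carries it, which is why lifting the
     * limit means a database upgrade plus RPC updates for the
     * management tools. */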

> A batch system of any reasonable size is pretty much a built in
> denial of service attack for the current OpenAFS implementation.

Before deciding on a solution, the first thing to understand is which
resource or resources become the bottleneck.  Only then can a proper
solution be implemented.  We know there are a variety of bottlenecks in
the AFS file server:

1. rx related

2. file server host management

3. callback processing

4. vol package

5. dir package

6. disk i/o channel

7. mp scalability or lack thereof

8. other

all of which are affected by the configuration of the file server and
the system as a whole.

We also know from Andrei Maslennikov's HEPiX Storage WG studies with CMS
and ATLAS that the configuration of the AFS clients must be tuned to the
job being processed.  A mismatch between the client configuration and the job
can result in significant additional work being performed by the file
server for no gain to the application issuing the file system requests.
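
As a hedged illustration of that effect (the numbers here are made up,
not measurements from those studies): if a job reads small records at
random offsets while the client fetches a full chunk on every cache
miss, the file server moves far more data than the application ever
consumes.

    #include <stdio.h>

    int main(void)
    {
        long app_read   = 64 * 1024;    /* bytes the job wants per read  */
        long chunk_size = 1024 * 1024;  /* bytes fetched per cache miss  */

        /* amplification: server work per byte the application uses */
        printf("server fetches %ldx the data the job reads\n",
               chunk_size / app_read);
        return 0;
    }

Tuning the client the other way, small chunks for a streaming job,
wastes effort as extra fetch RPCs instead of extra bytes.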

> We work around this by user education and having a "jail server"
> where we move user volumes that are getting hammered. But this
> requires a lot of monitoring and admin shuffling, and badly affects
> the user perception of AFS as a service.
>
> Ideally, you'd like one mini-server per user volume and at least the
> user would only shoot himself in the foot. I don't think this is
> particularly practical even with current VMs, but how far can you push
> it? And in particular when one VM goes south how does that affect the
> rest of the VMs on the machine?

One file server per volume is impractical.  Scaling up the number of
file servers has a negative impact on client performance, since each
client must maintain the up/down status of every file server it comes
into contact with.  I am not aware of any clients that have been tested
in environments with more than a thousand file servers across all of
the cells the client talks to.
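
The reason is easy to see in outline.  What follows is a hedged sketch
of the bookkeeping a cache manager has to do, not the actual OpenAFS
client code: each client keeps per-server state and periodically probes
every file server it has spoken to, so the background cost grows with
the number of distinct servers.

    #include <stdint.h>
    #include <time.h>

    struct server_state {
        uint32_t addr;          /* file server address              */
        int      is_down;       /* last known up/down status        */
        time_t   last_probe;    /* when this server was last pinged */
    };

    #define PROBE_INTERVAL 300  /* illustrative probe period, seconds */

    /* With one file server per user volume, nservers approaches the
     * number of users, and this loop (plus the network probes behind
     * it) runs on every client that has touched one of those volumes. */
    void probe_known_servers(struct server_state *srv, int nservers,
                             time_t now)
    {
        for (int i = 0; i < nservers; i++) {
            if (now - srv[i].last_probe >= PROBE_INTERVAL) {
                /* send a GetTime-style ping, update is_down (omitted) */
                srv[i].last_probe = now;
            }
        }
    }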

Hosting multiple file server VMs on a single machine is a practical way
of improving the utilization of systems with large numbers of CPU cores.
There is no benefit in deploying the currently shipping OpenAFS file
server on machines with more than four cores.

Jeffrey Altman


