[OpenAFS] Re: Afs User volume servers in VM's

Andrew Deason adeason@sinenomine.net
Wed, 26 Oct 2011 10:57:07 -0500


On Wed, 26 Oct 2011 17:26:55 +0200
Stephan Wiesand <stephan.wiesand@desy.de> wrote:

> Running multiple fileservers on different ports on the same system
> would be even more efficient. Is this possible, or could it be
> implemented (in theory)?

In theory, of course. You just need a fileserver option to specify a
port, vldb modifications to store a port for each fileserver, and
protocol modifications to communicate that port to the vldb and to
clients. You know, "just" all that :) The prerequisites for the vldb
modifications have already been discussed a bit on the standardization
list.
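
To make that concrete, here's a rough sketch of the data-model side.
This is purely hypothetical; the real VLDB stores only server
addresses, and every client today assumes the well-known fileserver
port, 7000/udp. All names here are made up for illustration:

#include <stdint.h>

#define AFS_FILESERVER_PORT 7000  /* the fixed port all clients assume today */

/* A VLDB "site" extended with an explicit port. The 'port' field is
 * the hypothetical addition; zero means "default", so entries written
 * by older vlservers would keep working unchanged. */
struct vl_site_with_port {
    uint32_t addr;    /* fileserver IP address, as stored now */
    uint16_t port;    /* NEW: per-site UDP port; 0 => 7000 */
    uint16_t flags;   /* RW/RO/backup site flags, as stored now */
};

static uint16_t
vl_site_port(const struct vl_site_with_port *site)
{
    return site->port ? site->port : AFS_FILESERVER_PORT;
}

The protocol work would then be getting that field into the
registration path (something like VL_RegisterAddrs) and into the
volume-location replies clients fetch, which is the hard part.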

That would of course limit the number of clients that could actually
access such a fileserver, since no client today supports any of that.

> What would be a great feature to have is a way to keep the server from
> using more than, say, half of the available threads for a single
> volume. Would this be feasible to implement at all?

Sure. Well, sort of. The server can obviously keep track of which calls
are associated with which volume (and already does, post-1.6, for the
-offline-timeout functionality), so if the number of calls for a volume
exceeds some threshold "X", we can just decline to service the request.

However, the way to do that is to return a VBUSY error to the client
("busy; try again later"), which causes clients to sleep and retry
after some number of seconds (and makes them log those "busy waiting
for volume" messages). And we only do that after we've obtained some
kind of reference to the relevant volume, which is after a considerable
amount of processing has already been done. Maybe we could check a bit
earlier, but the point is we'd still have to receive the call and
return an error if we're over the call quota for that volume, which
takes some extra processing; we can't just direct calls for a given
volume to a certain subset of threads. But I guess that's probably not
a problem; it could make things worse in some situations but better in
others.
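
For what it's worth, the check itself could sit at the very top of each
handler, before any vnode work. Again just a sketch building on the one
above: VBUSY is the existing "busy; try again later" volume error, but
the handler name and shape here are invented:

#define VBUSY 110   /* existing AFS volume error code */

/* Hypothetical handler shape: reject before doing any real work if
 * the volume is over its call quota. The client reacts exactly as it
 * does to a busy volume today: it sleeps, logs "busy waiting for
 * volume", and retries. */
static int
handle_fetch_data(uint64_t volid)
{
    if (vol_call_begin(volid))   /* from the sketch above */
        return VBUSY;
    /* ... normal processing: get volume/vnode refs, serve data ... */
    vol_call_end(volid);
    return 0;
}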

-- 
Andrew Deason
adeason@sinenomine.net