[OpenAFS] cache performance
Phil.Moore@morganstanley.com
Fri, 25 Oct 2002 11:42:41 -0400
>>>>> "Todd" == Todd M Lewis <utoddl@email.unc.edu> writes:
Todd> Seems like capturing the data at the server would be a lot more
Todd> efficient. (And let's not even mention the glaring assumption
Todd> that you actually know about all your clients...)
Oh, I'm not arguing that point at all. Server-side data collection
will *ALWAYS* be more efficient than client-side, and we know we don't
get complete client coverage, even in a draconian, fascist
control-phreaque environment like ours.
Todd> Just curious: could you point out in what ways specifically this
Todd> has been useful? Perhaps adding appropriate logging on the
Todd> server would be worth it for some of the rest of us.
We have a HUGE environment here, and almost all (>95%) of our
production software is run from read-only AFS volumes. When we want to
decommission old releases of software, and reclaim the space, we have
a huge headache on our hands.
We need to know *who* is using something, so we can get them to
upgrade to newer releases of the given product. We provide the
following information to our developers to help them manage this
problem.
First of all, we perform server-side analysis of AFS volume access to
determine the most recent last access timestamp on each AFS volume, in
each AFS cell (any given software product is distributed across
numerous AFS cells). We can roll this up and provide a single last
access time for each release (since we know which AFS volumes comprise
any given software release).
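
Roughly, the rollup is just "take the max of the per-volume last-access
times".  Here is a sketch of the idea in Python -- purely illustrative:
the release_volumes() mapping is a hypothetical stand-in for whatever
ties a release to its volumes, and it assumes your vos examine prints a
"Last Access" line, which depends on the OpenAFS version you have.

    #!/usr/bin/env python
    # Sketch: roll per-volume "Last Access" times up into one timestamp
    # per release.  release_volumes() is hypothetical; the "Last Access"
    # line is only printed by some versions of vos examine.

    import re
    import subprocess
    from datetime import datetime

    def release_volumes(release):
        # Hypothetical mapping: release name -> (cell, volume) pairs.
        return [("cell-a.example.com", "sw.foo.1.0"),
                ("cell-b.example.com", "sw.foo.1.0")]

    def volume_last_access(cell, volume):
        out = subprocess.run(["vos", "examine", volume, "-cell", cell],
                             capture_output=True, text=True).stdout
        m = re.search(r"Last Access\s+(.+)", out)
        if not m:
            return None
        # vos prints dates like "Wed Jun 14 01:23:45 2006"
        return datetime.strptime(m.group(1).strip(), "%a %b %d %H:%M:%S %Y")

    def release_last_access(release):
        times = []
        for cell, volume in release_volumes(release):
            t = volume_last_access(cell, volume)
            if t is not None:
                times.append(t)
        return max(times) if times else None

    if __name__ == "__main__":
        print(release_last_access("foo-1.0"))

The only real point is that the per-release number is simply the newest
of the per-volume timestamps, across all the cells the release lives in.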
But that only tells me *when* software was accessed, not by *whom*.
This is where the cache audits have proven immensely useful. Now, I
can at least provide a list of machines that have accessed the
release, so they know where to start looking for the dependencies.
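
For illustration, the host report boils down to something like the
following sketch (Python again; the audit layout here -- one file per
client host, named after the host, listing one cached AFS volume name
per line -- is an assumption for the example, not our actual format):

    # Sketch: turn client cache audit dumps into a "who has touched
    # this release" report.  Assumes one audit file per client host
    # under audit_dir, containing one cached volume name per line.

    import os

    def hosts_using_release(audit_dir, release_volumes):
        wanted = set(release_volumes)
        hits = set()
        for host in os.listdir(audit_dir):
            with open(os.path.join(audit_dir, host)) as audit:
                cached = set(line.strip() for line in audit)
            if cached & wanted:
                hits.add(host)
        return hits

    if __name__ == "__main__":
        # Hypothetical volume names for one release.
        print(sorted(hosts_using_release("/var/audit/afs-cache",
                                         ["sw.foo.1.0", "sw.foo.1.0.doc"])))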
It's not easy, of course -- you have to grope the process table, look
at what software packages are configured to run on these hosts, etc.
It's not a perfect solution, but in practice we've frequently been
able to track down production dependencies on software we wanted to
wipe out, work proactively with the owners of the dependent software
to get them upgraded, and avoid the inevitable outages that happen
when we remove something that is still in use.
So...
I absolutely want this data to be available on the server side, and
strategically, this is an area we (Morgan Stanley) will eventually
focus on.
However, tactically, I am looking for a way to get better data out of
the clients, since I have the infrastructure in place to (at least
attempt to) audit them. Long term, server-side is clearly the way to
go, of course.