[OpenAFS] the future
Tue, 2 Oct 2012 12:32:50 -0400
On Mon, Oct 1, 2012 at 10:09 PM, Jeffrey Altman
> On 10/1/2012 12:48 AM, Troy Benjegerdes wrote:
>> On Sun, Sep 30, 2012 at 11:38:10PM +0200, Lars Schimmer wrote:
>>> On 30.09.2012 21:10, Troy Benjegerdes wrote:
>>>> One-time deals (on linux) that require interaction will blow up all kinds
>>>> of automated tools and leave the rank and file admins your enemy.
>>> Easy: users call the admins angry and stupid, and the admins switch
>>> OpenAFS for NFS/SMB or anything else that is free and easy to deploy.
>>> Nearly everything is free, functional and already included.
>>> Why hassle with more work, incompatible licenses and all the user support?
>> Having migrated from NFSv3 to AFS (and then OpenAFS), I'd have to say that
>> NFS may be free, but it doesn't really fall into the 'functional' category.
>> But this was several years ago, so there might have been some magic that
>> happened with NFS I haven't seen yet.
>> Can anyone who has experience migrating to/from OpenAFS from/to anything
>> else in the last 2-3 years please comment? If there's really something
>> free, functional, and already included then I'd like to know what the
>> heck it is.
> I will remind the community of OpenEFS <http://www.openefs.org/> which
> was developed specifically to permit a large financial institution to
> use NFSv3 for global software distribution via a firm-wide name space.
> While it is true that AFS3 provides a large amount of administrator
> functionality in the box that is not present in competing products, that
> doesn't prevent organizations from spending money to replicate that
> functionality at a higher layer.
Thanks for the shout out, Jeff....
Now that my software's name has been invoked in this conversation, I
think it's finally time I piped up and offered some of my perspective.
First, for those of you who are not familiar with the work I've done,
just read this short document.
I think I am uniquely qualified to talk about "migrating to/from
OpenAFS" given my history with distributed filesystems in general.
That document's a bit out of date, since I've had a couple of jobs
since I wrote it. I think the first thing to point out is that it is
foolish, and in fact categorically false, to make a context-free claim
that "Filesystem FOO is better than filesystem BAR".
I can give you plenty of real world use case scenarios where OpenAFS
performance is completely horrid, and the other filesystems beat the
crap out of it. The opposite is also true, of course. The reality
is that there is no obvious winner in this debate, because you have to
compare each of these technologies in the context of the specific use
case to which you are subjecting it.
Troy, you have repeatedly asserted that "everything not-AFS just plain
sucks" (sorry for the paraphrase), and all that tells me is that for
YOUR specific use cases, that is obviously true. However, you cannot
make that same assertion for the general case.
What large enterprises have either understood, or are slowly figuring
out, is that there is no single distributed filesystem product that
stands out above the others. You have to make some painful trade
offs to determine which one works for the specific use cases you have,
and the challenge is to pick the best one for the problem you're
trying to solve. Trade offs suck, especially when they are very
hard to pin down with quantifiable metrics.
Also, I want to make sure people understand what OpenEFS is, and how
it relates to OpenAFS (other than the confusing single-character
difference in the product names). EFS is really an open source
implementation of the software development lifecycle functionality
that was core to VMS, and it currently only supports an NFSv3 backend.
As Steve Jenkins has pointed out, there is a branch with a WIP
implementation of support for OpenAFS, and that branch is also where
I've been laying the groundwork for NFSv4 support. I expect to
make the OpenAFS support part of the master branch, fully working,
early next year.
However, and this is CRITICAL: EFS does NOT manage the backend NFS
servers. Why? A complete and total lack of standards for how to
manage NFS!! Have you ever taken a close look at how something as
simple as "exportfs" varies WILDLY from OS to OS? This is one area
where OpenAFS *does* kick the crap out of NFS: centralized management,
and the fact that there's one implementation, making automation a lot
more feasible (cf. the AFS::Command Perl module suite, which has been
used to implement things like VMS, and lots of other management
software for AFS). Nothing remotely similar exists for NFS, since
each and every NAS vendor has taken advantage of the LACK of
standardization of their management tools, making a generic management
infrastructure all but impossible. Others have disagreed, but I'm
still waiting for the code or the product to appear....
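To make the exportfs point concrete, here is a minimal sketch of what even
the simplest cross-platform "export this path" helper has to contend with.
The command syntaxes below are from memory and purely illustrative; check
each platform's man pages before trusting any of them:

```python
def export_command(os_name, path, client):
    """Return the shell command that exports `path` read-write to
    `client` on the named platform.  Syntaxes are approximate and
    illustrative -- consult each OS's man pages before relying on them."""
    commands = {
        "linux":   f"exportfs -o rw {client}:{path}",       # nfs-utils exportfs(8)
        "solaris": f"share -F nfs -o rw={client} {path}",   # share_nfs(1M)
        "aix":     f"mknfsexp -d {path} -t rw -c {client}", # mknfsexp
    }
    if os_name not in commands:
        raise ValueError(f"no export backend for {os_name!r}")
    return commands[os_name]

# Same intent, entirely different verb and argument order per OS:
print(export_command("linux", "/vol/data", "client1"))
print(export_command("solaris", "/vol/data", "client1"))
```

And that is just one verb; quotas, snapshots, and replication diverge even
more, which is exactly why EFS leaves the backend servers alone.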
Example: in order to manage the NetApp filers, I had to develop a
rather complex suite of perl modules to make it possible to write
scalable code to manage them: that code is available on CPAN, too. That
code was, by necessity, entirely proprietary to NetApp, since their
management interface for NFS is entirely proprietary. In order to
generically manage NFS servers, you would need to develop an abstract
API that supports common functionality, and design it to take plug-ins
for a variety of APIs to proprietary management interfaces, since
EVERY SINGLE NAS VENDOR, without exception, has developed (out of
necessity, to fill the void) their own management interfaces.
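For what it's worth, the plug-in shape I have in mind would look something
like the sketch below (all names are hypothetical; nothing like this exists
in EFS today): a small abstract API of common verbs, plus one registered
implementation per vendor that translates those verbs into that vendor's
proprietary management interface.

```python
from abc import ABC, abstractmethod

class NasBackend(ABC):
    """The abstract API: the handful of verbs a generic management
    tool needs, independent of whose hardware sits underneath."""

    @abstractmethod
    def create_export(self, path, clients):
        """Export `path` to the given list of client hosts."""

    @abstractmethod
    def list_exports(self):
        """Return a mapping of exported path -> client list."""

# Plug-in registry: one entry per vendor-specific implementation.
BACKENDS = {}

def register(vendor):
    def wrap(cls):
        BACKENDS[vendor] = cls
        return cls
    return wrap

@register("fake")
class FakeBackend(NasBackend):
    """In-memory stand-in.  A real plug-in (a hypothetical
    NetAppBackend, say) would translate these same calls into the
    vendor's proprietary management interface instead."""

    def __init__(self):
        self._exports = {}

    def create_export(self, path, clients):
        self._exports[path] = list(clients)

    def list_exports(self):
        return dict(self._exports)

# Generic management code only ever sees the abstract verbs:
backend = BACKENDS["fake"]()
backend.create_export("/vol/data", ["client1", "client2"])
print(backend.list_exports())
```

The hard part is not the registry, of course; it is agreeing on an abstract
verb set rich enough to be useful but small enough that every vendor's
interface can actually implement it.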
Finally, there's a HUGE barrier to adoption for EFS: it makes a lot of
assumptions about your global environment that are inherited from the
original Morgan Stanley Aurora architecture, and that probably do not
hold true for most enterprises. For example, it assumes you have a
single, global authentication domain (i.e. Kerberos realm) and that
ALL of your "cells" are part of that environment. It also assumes,
for NFSv3, that your UIDs and GIDs are sanely managed and effectively
form one single global database as well. It also assumes that you are
willing to let all of the EFS administrative servers trust each other
(distribution is rsync over ssh, for example, and you have to allow
password-free ssh as root between those core machines).
I will be the first to admit that EFS is trying to provide a general
purpose solution to a problem that probably will not fit very well
with most big enterprises. And that's really a key point: each and
every large enterprise is to some degree unique, since each and every
business is to some degree unique. Standardization can only get you
so far, and then when you encounter problems of scale for which you
can't buy a COTS solution, you solve the problem yourself, and now
you're unique. I've been working in the core engineering departments
of major financial firms for over 20 years now, and had a hands-on,
up-close-and-personal look at how these companies have deployed core
services, and I would probably guarantee that my career ends
immediately if I were to share some of what I know (maybe I'll write a
book, if I ever retire -- unlikely -- my expensive hobbies will
probably force me to stay employed until I drop dead :-).