[OpenAFS-devel] Re: Lockdown for VL and VOL RPC interfaces for non-authenticated user

Gergely Risko gergely@risko.hu
Mon, 17 Mar 2014 15:13:46 +0100


On Sat, 15 Mar 2014 23:01:15 -0400, Jeffrey Altman <jaltman@your-file-system.com> writes:

> Gergely,
>
> I'm going to prune the majority of the content because I would like to
> focus on the threats you wish to protect against.

Thank you for the very detailed response; I'll try to address the issues
you raised.

> You have proposed a mechanism for locking down some of the RPCs on the
> VOL and VL services based upon:
>   system:anyuser (the current behavior)
>   system:authuser
>   system:administrator
>
> I believe that such broad controls on the RPCs that are not used by the
> cache managers are reasonable.  Doing so will not violate the agreement
> with IBM on the use of the AFS protocol.  However, I'm not sure that
> doing so will address your specific threats.
>
> I also believe there needs to be an additional level to permit
> system:authuser + authenticated foreign users.

Forgive my unfamiliarity with foreign users in AFS, but is there already
some mechanism for "friendly cells"?  Just allowing anyone with an AFS
ticket from any cell doesn't seem fruitful, since it's easy to set up a
fake cell for yourself.

Also, I agree with the comments in this thread about reusing the existing
AFS terminology, so I will call my options:
  - anyuser (default)
  - authuser
  - administrator
  - ??? (what should we call your authuser + foreign user class?)

What should the sysadmin interface for this feature be?  A vlserver
command-line option that can be added in /etc/openafs/BosConfig?  Or a
new file in /etc/openafs/server?  Or a dynamic setting that can be
changed and queried through a vos RPC?
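
Just to make the BosConfig variant concrete, it could look something like
the sketch below (the -restrict-anon-access flag name is purely made up
by me for illustration, nothing like it exists today; paths are the
Debian ones):

  bnode simple vlserver 1
  parm /usr/lib/openafs/vlserver -restrict-anon-access authuser
  end

A file in /etc/openafs/server or a vos-settable runtime option would
carry the same information, just with different persistence and reload
semantics.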

> There are a variety of methods by which spammers do this today:
>
> 1. They scan the contents of the "home", "usr", "user", etc. tree in the
>    cell's file system name space.   The list of mount points is more
>    often than not system:anyuser "l" or at best system:authuser "l"
>    in order to permit users to see each other's home directories and
>    because machines they log into must be able to access the
>    home directories before the user's authentication tokens have been
>    obtained.

In my setting I don't plan to give system:anyuser access to the user
store.  If users want to publish data in AFS, we will have separate
volumes for that, which will contain their username neither in the
volume name nor in the path where the volume is mounted.

But yes, in existing installations where public space is provided at
locations like /afs/elte.hu/user/e/errge/public, my fix is somewhat
pointless, because the path already makes it obvious that there is an
errge user.
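
(To make the intended setup concrete: in such a cell the user tree would
simply carry no lookup rights for anonymous users, e.g. something along
the lines of

  fs setacl /afs/example.org/user system:anyuser none

with example.org standing in for the real cell name, so the namespace
itself can't be walked to enumerate home directories.)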

> 2. "vos listvldb" can be used to obtain the list of all volumes.  The
>    user names can often be extracted from the volume names.

Yes, I want to fix this one.
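
Today this enumeration needs nothing more than an unauthenticated

  vos listvldb -cell elte.hu -noauth

from anywhere on the Internet, and with the usual user.<name> volume
naming convention the output is effectively a username list.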

> 3. "vos listaddr" to obtain the list of all file servers combined with
>    "vos listvol" can be used to obtain a list of all volume names.

And this one too.
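
Same story here, the whole chain works without any credentials, roughly:

  vos listaddrs -cell elte.hu -noauth
  vos listvol <fileserver from the previous output> -cell elte.hu -noauth

(the second command repeated for every file server address returned by
the first).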

> There is little benefit to locking down the vlserver and the volserver
> if the file system can be searched.

But in a lot of cases it can't be searched.  Also, I don't really want
to defend against system:authuser, because that is very hard to do:
authenticated users can use the Unix "last" or "w" commands on shell
servers to mine email addresses in a standard university setting.  On
the other hand, by doing so they risk being punished for such actions,
whereas you can't punish random Chinese IP addresses sending vos listvol
RPCs to your servers.

>>   - spammers can confirm based on the stats the list of users that are
>>     actually active on a computer system,
>
> The cache manager debug interface (cmdebug) is implemented by all
> existing AFS cache managers.  This interface can be used to obtain the
> list of FIDs in the cache including the active set of callbacks.  The
> FIDs indicate the cell and the volume by ID.  The ID can be converted to
> a volume name using VL_GetEntryByName*() RPCs that must be open to
> permit cache managers to lookup the file server/partitions on which a
> volume is located.
>
> The "vos examine" reported statistics are not necessary.   There is no
> authentication on the cache manager debugging interface because there is
> no mechanism for keying the service.   The "volume stats" also are not
> collected for a specific "computer or device" but for the cell as a whole.

This cache manager information leak is interesting, thanks for pointing
it out.  Is this true only for local users, or also for remote hosts
talking to the cache manager?  In other words, is the debug interface
open to remote connections?

I plan to use AFS on client laptops where every laptop has one user, and
I don't plan to give shell access to big shared shell servers.  This is
why the question is relevant for me.
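
If I understand the quoted paragraph correctly and the interface is
indeed reachable remotely, then presumably something as simple as

  cmdebug laptop.example.org -long

run from anywhere against the client's callback port would already dump
the cached FIDs of such a laptop (laptop.example.org being a placeholder
host name, of course), which is exactly the kind of activity tracking I'm
worried about.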

>>   - from the vol stats people can monitor and figure out if someone is
>>     at the computer using AFS, which can be part of bigger social
>>     attack or harassment scenarios.
>
> The volume statistics can indicate which volumes are more actively used.

Yes, that was exactly my point too; that's why I'd like to remove the
public availability of those RPCs when they are not needed by cache
managers.
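
(For completeness: the statistic in question is visible today with a
plain

  vos examine user.errge -cell elte.hu -noauth

assuming the conventional user.<name> volume naming; the output includes
the "accesses in the past day" counter, all without any credentials.)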

I'd like to elaborate a bit more on this sentence from your email:
> There is little benefit to locking down the vlserver and the volserver
> if the file system can be searched.

This is true when we're designing a new security system, and of course I
hold myself to this principle when I design new systems in my everyday
work.  This case, on the other hand, is a bit different.  If we don't
start taking care of these issues at least where it's easy, then we will
always be adding new (or leaving open old) holes with the reasoning seen
here:

  http://lists.openafs.org/pipermail/openafs-info/2012-July/038333.html

  "I think it is fine to skip access control checks on this call
  entirely.  As you point out, the information available via this RPC is
  also available to unauthenticated clients via the volserver."

Security is not black and white; if we fix one leak, then we're already a
little bit better off, I think.  Of course it's not optimal, but we have
to start somewhere.

If you don't think that I'm going in a completely wrong direction with
this proposal, then I'd appreciate your help in designing and
implementing what is reasonable now, and maybe fixing more later.

Thanks,
Gergely