[OpenAFS-devel] Re: openafs - proposed cache security improvement

Sean O'Malley omalleys@msu.edu
Wed, 28 Mar 2007 23:27:17 -0400 (EDT)


On Tue, 27 Mar 2007, Marcus Watts wrote:

> > > In this case, spoofed servers.  Regular kerberos works because
> > > it's not protecting a shared resource.  In this case, there's
> > > a shared resource involved, so there needs to be something extra.
> > > I hope you have your kerberos servers & file servers straight
> > > in your head.
> >
> > I have that straight I didnt have your proposal straight, because I
> > was trying to twist it so it could with clients detached from the network
> > at midnight which is always a fuzzy time. :) I am also very loosely using
> > terminology which is confusing especially to programmers. =)
> >
> > I was just kind of wondering is if you could use the shared key, to
> > encrypt a file which stores a "master key", that could be used to "verify"
> > credentials locally for the local user, which would probably be encrypted
> > with a combination of the master key and the shared key. IF they have been
> > previously authenicated which they have to do in order to create a "cache"
> > of their actual files they wish to take with them. Their "cache"  could be
> > accessed using a combination of the host shared key, and their password
> > which would decrypt their "filesystem" (more like a loopback mounted
> > filesystem.). Upon reconnection to the network they would have to
> > authenicate once using the fake stored creds to verify their
> > creds were actually legit, and once using their real creds to the actual
> > server to get a regular connection, and to sync their "cache" with the
> > fileservers.
> >
> > I was also thinking that you could hack kaserver to store client keys,
> > and transport encryption keys. It could store the client public user key
> > to match it with the host key and an encryption key. (and of course put a
> > TTL on those keys so they can be cleaned up periodically, and for
> > security.) Which does require another server, but kaserver would
> > just need to be modified. (well okay, it probably needs to be completely
> > overhauled, but not for a prototype.)
> >
> > Thus I have offered more confusion. :)
>
> Ok.  For the encrypted cache -
>
> In your case -- the thing to follow is the chain of whatever you do working
> backwards from the encrypted data & key, to whatever resources you have on
> the machine sitting there "cold". ...

I did. I couldn't think of a better way to implement it than what is
implemented for the connected network, sans the DES key.
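To make the chain concrete, here is a rough sketch of the offline-verification
half of what I was describing. All the names are made up for illustration, and
it uses HMAC/PBKDF2 rather than anything DES-based:

```python
import hashlib
import hmac
import os

def derive_cache_key(host_key: bytes, password: str, salt: bytes) -> bytes:
    # Mix the host's shared key with the user's password, then run a
    # slow KDF so a cold, stolen laptop at least costs the attacker a
    # brute-force search over passwords.
    mixed = hmac.new(host_key, password.encode(), hashlib.sha256).digest()
    return hashlib.pbkdf2_hmac("sha256", mixed, salt, 200_000)

def local_verifier(cache_key: bytes, principal: str) -> bytes:
    # The "fake stored creds": a tag the detached client can check
    # offline to see whether an entered password reproduces the key
    # protecting the user's encrypted cache.
    return hmac.new(cache_key, principal.encode(), hashlib.sha256).digest()

# Enrollment, done while still online and genuinely authenticated:
host_key = os.urandom(32)          # machine shared key (hypothetical)
salt = os.urandom(16)
key = derive_cache_key(host_key, "hunter2", salt)
tag = local_verifier(key, "omalleys@MSU.EDU")

# Detached login: rederive from the typed password and compare.
attempt = derive_cache_key(host_key, "hunter2", salt)
ok = hmac.compare_digest(local_verifier(attempt, "omalleys@MSU.EDU"), tag)
```

Upon reconnection you would still verify against the real servers, of
course; the tag only gates access to the local encrypted cache.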

> You mentioned kaserver.  kaserver is des-only today.

That needs to go away, as does the dependency on DES, period, in other parts
of the code. I think that is where you were headed with the SSL stuff: snag a
new libcrypto and abstract the calls, or just abandon that code.

The DOCUMENTATION needs to be changed so people don't USE kaserver and then
have to go through the ugly, relatively undocumented conversion process.

And along the same lines, our wiki needs to be updated with new information
to reflect the things we -have- done and are doing. Right now we look like a
DEAD project.

> Storing things in kaserver or any remote server creates
> its own problems, especially potentially large amounts of relatively
> volatile data.

Volatile, yes; large depends on your definition. =)

> I'm very interested in avoiding central administrative
> overhead, both human & machine.

me too!

> This is a slight elaboration of the local password scenario -- now the
> attacker has to trick kaserver (or whatever) into surrendering
> possession of the key. The mechanism here whatever it is is likely to
> be very specific to your laptop environment.
> An unattended server in a closet should probably not be waiting for
> the user to enter a password, and a shared multi-user machine in a
> pool of similar such machines should certainly not be doing so.
>
Not quite what I was thinking. s/kaserver/highly volatile database/ in
pretty much everything I have said, then use two or three types of keys. I
didn't quite get it all drawn out in my head, going all the way back to a
cold boot, which is impossible even with the best of salts. :)
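By a highly volatile database I mean nothing fancier than this kind of record
store, with the TTLs doing the periodic cleanup. The names and fields here are
hypothetical, just to show the shape of it:

```python
import time
from dataclasses import dataclass

@dataclass
class KeyRecord:
    client_pubkey: bytes   # the user's public key
    host_key_id: str       # which machine shared key it is paired with
    session_key: bytes     # transport encryption key
    expires: float         # absolute expiry time (the TTL)

class VolatileKeyDB:
    def __init__(self):
        self._records = {}

    def put(self, principal: str, rec: KeyRecord) -> None:
        self._records[principal] = rec

    def get(self, principal: str):
        # Expired entries are treated as gone; a periodic sweeper
        # could also walk the table and delete them for real.
        rec = self._records.get(principal)
        if rec is None or rec.expires < time.time():
            self._records.pop(principal, None)
            return None
        return rec

db = VolatileKeyDB()
db.put("omalleys", KeyRecord(b"pub", "host1", b"sess", time.time() + 3600))
live = db.get("omalleys")
db.put("stale", KeyRecord(b"pub", "host1", b"sess", time.time() - 1))
gone = db.get("stale")
```

The point of the TTLs is exactly the cleanup-and-security angle: nothing in
the table is supposed to outlive its usefulness.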

I was up a level when I said password entry: that was the actual user key,
not the machine key. There is no way in heck I would propose handling the
client key like that. I would propose that the user's own "portable" dataset
be encrypted that way. But you are correct, it starts to get very specific
to the portable computing environment. I was thinking the encryption
algorithms and the token format could be shared.

But I do think you touched on the important points. :)
One is that an attacker has to hack two machines: first the client machine,
then a highly protected server that can be used to log and track the
potentially tainted data and to notify users. Ideally it is written so it
can't be spoofed very easily. The important part is that you become aware of
a potential breach in the security of your end users' data.

You make your point: there are trade-offs involved. I won't argue that. What
I would argue is that in the trade-off between security and usability, we
have to slide towards the usability side, unless there is a "better way". I
think a few things need to be done to get a larger group of people
interested in the project, and detached clients is one of them that could
make a big splash. It just takes years to get things implemented, especially
when dealing with legacy issues, so if you can put the gears in place before
you need them, you have fewer legacy issues to deal with and your
development isn't waiting on the migration cycle.


--------------------------------------
  Sean O'Malley, Information Technologist
  Michigan State University
-------------------------------------