[OpenAFS] Re: Monitoring bad ACLs of webpages: best practices?
Fri, 7 May 2010 13:38:45 -0500
On Fri, 07 May 2010 14:14:36 -0400
Kevin Walsh <firstname.lastname@example.org> wrote:
> One solution we're considering is regularly scanning our webspace for
> excessively naive ACLs, but this is quite time-consuming. Is there a
> faster way to search for specific ACLs than various incantations of
> gfind piped to fs listacl, perhaps something that dumps all the ACLs
> of a volume, assuming they are kept in one spot?
You can examine volume dumps offline, as Thomas Kula mentioned, but it's
still probably not all that fast. ACLs aren't stored all in one spot, so
you pretty much need to go through the whole dump.
You can also use a 'find' variant from an AFS client, as you said; I'm
pretty sure there are places that do this. But yes, it's slow and adds
to fileserver load, though you can run it against the .backup or
.readonly (if it exists) of the volumes in question to alleviate that
somewhat.
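For what it's worth, here's a rough sketch of what that client-side scan
could look like. It shells out to "fs listacl" and flags entries that
look too permissive; the parsing assumes the usual "who rights" output
lines, and the "bad ACL" policy (system:anyuser or system:authuser with
anything beyond read/lookup) is just an example you'd want to adjust:

```python
#!/usr/bin/env python
# Sketch: flag overly-permissive ACL entries seen by "fs listacl".
# The badness policy below is only an example -- adjust to taste.
import re
import subprocess

RISKY_GROUPS = {"system:anyuser", "system:authuser"}
SAFE_RIGHTS = set("rl")  # read + lookup only

def parse_listacl(output):
    """Parse "fs listacl" output into (who, rights) tuples.

    Entry lines are indented, e.g. "  system:anyuser rl"; the header
    and "Normal rights:" lines are skipped by the regex.
    """
    entries = []
    for line in output.splitlines():
        m = re.match(r"\s+(\S+)\s+([rlidwka]+)$", line)
        if m:
            entries.append((m.group(1), m.group(2)))
    return entries

def bad_entries(entries):
    """Return entries granting risky groups more than read/lookup."""
    return [(who, rights) for who, rights in entries
            if who in RISKY_GROUPS and not set(rights) <= SAFE_RIGHTS]

def scan(path):
    """Run "fs listacl" on one directory and return its bad entries."""
    out = subprocess.check_output(["fs", "listacl", path], text=True)
    return bad_entries(parse_listacl(out))
```

You'd drive scan() from a directory walk (or from gfind output), and
point it at the .backup mount to keep load off the live volume.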
Another option is to use the fileserver audit log to detect when ACLs
are changed; you can then immediately see what they were changed to and
do something if they're "bad". (Or record changed ACLs and check them
once a day or so.) But you have to do that on each fileserver. Also, in
1.4 the audit log can only go to a file (or pipe). 1.5 adds the ability
to send the audit log to a SysV message queue, which is probably more
convenient, and that should not be difficult to backport to 1.4.
There have also been proposals for solving this type of problem
proactively, by preventing users from setting "bad" ACLs in the first
place. You can read about them here, if you're interested:
That doesn't really help you out right now, though, as it's still just
a proposal in text; it still needs protocol standardization and an
implementation.