[OpenAFS-devel] Some thoughts on test framework requirements

Sam Hartman hartmans@mekinok.com
Sun, 22 Jul 2001 20:25:27 -0400 (EDT)


I've been thinking about how to set up AFS tests and have tried to
come up with some requirements for the basic testing structure.  I
think I'm coming at this from a different standpoint than Mark.  While
I agree that tests that can be run within the source tree framework as
part of make check would be good, I'm focusing initially on
regression tests that can assume more about an environment than seems
reasonable from within make check.  Hardware is getting cheap and I
think that the AFS community as a whole can dedicate a few machines to
running constant wash/test cycles of AFS.  This does not imply that
tests within the source tree are any less important--they are critical
if for no other reason than to make sure that people contributing
features also contribute known-good tests for their features.  I'm
simply approaching the test-design problem from the release-level
side; at the end I'll need to come back and make sure that
source-level tests fit into the framework somehow.

So, what do I want out of a framework for tests?

1) Not Dejagnu.  I've dealt with the Kerberos tests, and having to
   deal with Dejagnu was sufficiently annoying that I tended to avoid
   running make check or looking at test problems.  My natural
   tendency would be to write or find a framework in Perl, but that's
   just a personal preference.

2)  Tests should support recursion.  Consider a test for the recent
readlink stat cache bug.  To test this problem you want to:
* Get admin tokens
* Create and mount a volume
* Create a symlink
* Drop admin tokens
* Try reading the symlink

Many of those steps could be simpler tests in their own right.  For
example, a good set of AFS tests would likely include a simple
create-a-volume-and-mount-it test, and that test is likely to do
fairly good error checking on the create volume operation.  The
implementation of the readlink test should take advantage of this
code.  One way to handle this might be to implement tests as
procedures in some programming language and simply allow them to call
each other.  This is fine, although I suspect you may want a bit more
infrastructure to know which sub-test is failing; I guess a stack
trace provides enough info, but more polish would be desirable.
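
To make that concrete, here is a rough Perl sketch of what I have in
mind.  It is illustrative only: the procedure names and the exact
vos/fs invocations are made up, and it assumes the harness already
holds admin tokens before the test starts.

    #!/usr/bin/perl -w
    use strict;

    # Run an external command, echoing it to the log; true on success.
    sub run_cmd {
        my (@cmd) = @_;
        print "+ @cmd\n";
        return system(@cmd) == 0;
    }

    # Sub-test: create a volume and mount it.  This would also stand
    # on its own as a basic test, with its own error checking.
    sub test_create_and_mount {
        my (%p) = @_;    # server, partition, volume, mountpoint
        run_cmd('vos', 'create', $p{server}, $p{partition}, $p{volume})
            or return 0;
        return run_cmd('fs', 'mkmount', $p{mountpoint}, $p{volume});
    }

    # The readlink test reuses the sub-test instead of duplicating
    # the volume-creation error checking.
    sub test_readlink_stat_cache {
        my (%p) = @_;
        test_create_and_mount(%p) or return 0;
        my $link = "$p{mountpoint}/testlink";
        symlink('targetfile', $link) or return 0;   # create a symlink
        run_cmd('unlog')             or return 0;   # drop admin tokens
        return defined(readlink($link));            # try reading it back
    }

A stack trace on failure would at least tell you whether the failure
came from the volume-creation sub-test or from the readlink-specific
steps.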

3)  Tests should be parameterized.  In many cases you want to run
    multiple instances of a test at the same time, and you want to
    reuse sub-tests such as volume creation at multiple levels.  So
    you want to be able to pass in things like the name of a volume
    to create, the name of a mount point to use, etc.  You also want
    inherited parameters.  For example, assume I have a large battery
    of tests I want to run multiple instances of.  I probably want to
    give that battery of tests a root for the volume names it should
    create/manipulate as well as a directory path under which all its
    mount points should live.  Any sub-tests it recurses into should
    inherit these values.  If it wants to run multiple sub-tests in
    parallel, it might append to these parameters and go on from
    there.
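
    A hash of parameters that sub-tests inherit and extend might be
    all that is needed here; a rough sketch (the key names and the
    cell path are made up):

    #!/usr/bin/perl -w
    use strict;

    # Derive a child parameter set from a parent: inherit everything,
    # then extend the volume-name root and mount directory so that
    # parallel instances cannot collide.
    sub derive_params {
        my ($parent, $instance) = @_;
        my %child = %$parent;
        $child{volroot}  = "$parent->{volroot}.$instance";
        $child{mountdir} = "$parent->{mountdir}/$instance";
        return \%child;
    }

    my %battery = (
        volroot  => 'test.readlink',
        mountdir => '/afs/example.com/test',
    );
    for my $i (1 .. 3) {
        my $sub = derive_params(\%battery, $i);
        # instance N would create volumes under test.readlink.N and
        # mount them under /afs/example.com/test/N
        print "instance $i: $sub->{volroot} at $sub->{mountdir}\n";
    }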

4)  Long term we want distributed execution.  This will be important
    so we can execute parallel sets of tests in different PAGs on a
    single machine as well as execute tests that confirm cache
    consistency between machines.  We probably also want to be able
    to start up a master wash and have it run tests on all
    architectures in a coordinated manner.  I don't know whether we
    want to do the distributed support now, but I think it influences
    the design in several key ways.  We want to minimize the
    complexity of the interface between tests so that, for example,
    that interface can easily be implemented over ssh or some other
    pipe.  Ideally we might have something as simple as tests taking
    a set of key-value pairs as input for parameterization and
    generating a success status along with logging output.
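
    The test side of that interface could be as simple as reading
    key=value lines on stdin, writing its log to stdout, and
    reporting success through its exit status, which works unchanged
    whether the test runs locally or across ssh.  A hypothetical
    sketch:

    #!/usr/bin/perl -w
    use strict;

    # Read key=value parameters from stdin until EOF or a blank line.
    my %param;
    while (defined(my $line = <STDIN>)) {
        chomp $line;
        last if $line eq '';
        my ($key, $value) = split /=/, $line, 2;
        $param{$key} = $value;
    }

    # Log to stdout; the driver captures it however it likes.
    print 'running test with volume=', ($param{volume} || 'unset'), "\n";

    my $ok = 1;    # ... real test body would set this ...
    exit($ok ? 0 : 1);

    A master driver could then do something like
    "ssh somehost afs-run-test < params" and just collect stdout and
    the exit status.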

5) Tests should be able to log large quantities of output.  At least
    one test I want to run is building AFS, and I want the build
    logs, especially in case of failure.
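
    For the build test in particular, I imagine each test just being
    handed a place to put its output and reporting back where the log
    ended up; roughly (the paths and directory names are made up):

    #!/usr/bin/perl -w
    use strict;

    # Capture the entire build output to a per-run log file and keep
    # the path around so it can be reported on failure.
    my $logdir  = '/var/tmp/afs-tests';
    my $logfile = "$logdir/build." . time() . '.log';
    mkdir $logdir, 0755 unless -d $logdir;

    my $status = system("cd openafs && make > $logfile 2>&1");
    if ($status != 0) {
        print "build failed; full log in $logfile\n";
        exit 1;
    }
    exit 0;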

More to follow; comments welcome.