[OpenAFS-announce] Request for help: Dedicated OpenAFS testers

Andrew Deason openafs-info@openafs.org
Wed, 16 Apr 2014 17:37:04 -0500


Executive summary: If you are able to run any part of an OpenAFS
prerelease in any part of your environment, and want to help test
OpenAFS in a slightly more structured manner, please send me an email.


Hi everyone,

This year at EAKC, the question came up yet again of what kind of
testing the OpenAFS releases and prereleases go through. In short, the
answer tends to be very little beyond "smoke" tests from developers,
plus whatever testing people in the community happen to do when
prereleases are announced. That varies quite a bit from release to
release, so no particular amount of testing is "guaranteed" for any
given release.

One way of improving this is to have more comprehensive automated test
suites, and there are a couple of efforts underway to fix this from
that end, in parallel with what I'm talking about in this email. Even
with such an automated system in place, though, it cannot fully
replace the testing done by administrators at various sites. This is
because there are many factors at sites that we either do not know
about (strange aspects of an environment that even the administrators
may not know about or understand) or cannot easily replicate (such as
interaction with expensive commercial software, or internal software).

So, we would really like it if more sites tried out prereleases and
reported successes or failures. Currently that doesn't happen very
often, though Rich Sudlow just sent a report today for 1.6.8pre1;
thanks, Rich!

At the same time, we already know that many sites have a kind of test
environment, or a "burn-in" procedure that is run against OpenAFS
releases before they deploy new versions to a production environment.
Maybe you have a "test" cell that you install to in order to see if
anything breaks, or maybe you install a single fileserver with
less-important volumes. But sites often only do that for actual
releases (not the prereleases), so any issues found by such testing
are only discovered after the actual release has gone out.

In order to get more people to do this regularly for prereleases, I am
trying to assemble a group of people we can depend on to provide
testing results from their sites. That way, if I don't hear from
someone, I can bug them about it and find out what's going on, and
people don't have to "remember" to test new prereleases. Thanks to
Arne Wiebalck of CERN for originally raising this idea.


If you would like to be one of those people, please send me an email.
Without trying very hard, I have already found a few people who have
agreed to do this, but hopefully we can get many more. (Those people
were either at EAKC for this discussion, or found out about it via
back channels.)

There are not really any requirements for participation, beyond just
running (Unix) components of OpenAFS. For sites that have a "burn-in"
procedure for deploying regular OpenAFS releases, just take an OpenAFS
prerelease and do whatever you do with OpenAFS releases before deploying
them to production. If you do not have anything like that, maybe install
the prerelease on a single fileserver (storing your home directory), or
a single client (your desktop).

If you are not comfortable with installing a prerelease fileserver in
your cell, that's okay! Installing and using any component is useful;
just testing the client is fine, too, and helps get more coverage.

Even if you have a very small setup and don't generate much load,
that's okay, too! While bigger environments are better for discovering
some hard-to-find issues, just getting the code running at all in
different environments is valuable. We would like the prerelease code
to see as heavy use as possible so we can be more certain it doesn't
have bugs, but we recognize that people aren't going to trust
"prerelease" versions as much as a stable release, and that's fine.

And if you are worried about the stability of prereleases, and whether
participating in this testing will break your cell, keep the following
in mind: right now, prereleases are not much different from stable
releases in this respect. Very few bugs get "caught" in prereleases
these days. Even when pre2, pre3, etc. releases are made, it doesn't
usually mean that someone ran pre1 and found an issue that was fixed
in pre2. Often it just means that someone encountered an important
issue in a prior stable release, and we happened to find out about it
during the prerelease cycle. So, right now, if there is a critical bug
lurking in 1.6.8pre1 and you avoid installing 1.6.8pre1 to avoid
downtime from that issue, chances are you're just going to encounter
that same issue whenever you install 1.6.8 final. (Ticket 131852 is an
example of this that will likely yield a 1.6.8pre2.)

If your site does not let you publicly acknowledge that you run AFS,
or if you are concerned that participating in such testing creates
some kind of official/legal endorsement, that does not necessarily
prevent you from participating. I may need to anonymize some of the
participants or otherwise provide a kind of "buffer" anyway, so that
is certainly an option.


So, if you have any interest at all in participating in this, please
send me an email. If you want to go through a support vendor instead of
contacting me directly, that's fine too; just contact them and tell them
to contact me. If your support vendor is SNA, just email me directly;
you don't need to make a ticket or anything unless you want to. If you
do not want to talk directly to me for any reason, you can also send
something to openafs-info@openafs.org (public, obviously) or
release-team@openafs.org (private). And if you don't want to
communicate with me over an SNA-"branded" email address for any
reason, you can also reach me at adeason@gmail.com.

I am currently considering this a Unix-only endeavor, since I'm trying
to organize it around the 1.6 release process. Maybe something like
this would be useful for the Windows client as well at some point, but
the Unix and Windows release processes are completely separate worlds
at this point, so any community testing effort would probably be
separate as well. This also is not intended to replace ad-hoc reports
of success sent to openafs-info; those will always be welcome.

I am not 100% certain how some details of this will work; partly that
depends on how many people respond and what kinds of sites are
available for testing which components. For now I'm just handling it
manually, but maybe this will grow to involve a mailing list, maybe
we'll put the participants in the "credits" on the website somewhere,
etc.

And if you think you've already told me that you're interested in this
and that you're already "on the list", email me anyway, just to avoid
any ambiguity.

If I haven't said it enough yet: email me if you are interested in
participating in this. Right now! You don't have to provide me with any
information yet and you don't need to figure out how it will work for
you; I just want a point of contact so I can continue the conversation.

-- 
Andrew Deason
adeason@sinenomine.net