[OpenAFS] Problems on AFS Unix clients after AFS fileserver moves

Todd DeSantis atd@us.ibm.com
Tue, 9 Aug 2005 18:26:10 -0400


Hi Rich -

I am glad that

      fs checkvolumes

was able to help you get rid of this problem.

Hopefully this was not a coincidence, i.e. a "vos release" of the
bogus root.cell.readonly did not also happen to run around the
same time.

To help understand why your clients were in this state
I would like to ask some questions:

 - a kdump snapshot would have been able to give us some
   information on the state of the client and could have helped
   us determine whether any volume and/or vcache entry was still
   pointing at this old fileserver.

   Did you just not build kdump for the client, or does OpenAFS
   not build kdump by default?

 - when was this fileserver taken out of commission, was it
   within the last 2 hours?

   The normal callback timeout on a volume is 2 hours.  There is
   a daemon on the client that runs every 2 hours and clears the
   "volume status" flag on volumes in the volume cache whose
   expiration time has elapsed.  I believe readonly volumes had a
   maximum 2-hour timeout.

   This process also causes the vcache structures to have their
   CStatd bit cleared.  That tells the client to run a
   FetchStatus call to determine whether its cached version of
   the file/dir is still the correct one.

   This is the way the IBM Transarc clients work.  It is possible
   that the OpenAFS code has changed the callback timing a bit;
   I am not sure.

   But the above procedures will cause the following to happen
   the next time you try to access a file or directory whose
   volume status flag has been cleared (see the sketch after
   this list):

      - the client contacts the vlserver and gets location
        information for the volume.  If the client still thought
        that this file lived on the bad fileserver, and the VLDB
        information is correct, it now gets the new server
        location info.

      - it then contacts the fileserver with a FetchStatus call
        to determine whether its cache is current or whether it
        needs to do a FetchData call to the fileserver for your
        directories and files.

      - at this point it has located the directory/file you are
        looking for.
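
   You can walk the same steps by hand to see what the client
   should find.  A rough sketch follows; the volume name and the
   path are placeholders for your own:

      # step 1: what the VLDB says about the volume's location
      vos listvldb root.cell

      # step 2: where this client currently thinks the path lives
      fs whereis /afs/yourcell/some/dir

      # step 3: force the client to redo the volume lookup, then
      # stat the path again so it issues a fresh FetchStatus
      fs checkvolumes
      ls -l /afs/yourcell/some/dir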

Other ways that the volume location information can get cleared
are

      - fs checkvolumes, as Kim and I suggested to Rich
      - vos move
      - vos release
      - bringing more volumes into the cache than the -volumes
        option to afsd allows.  This causes some volumes to cycle
        out of the cache, which can clear the status flag for the
        volume
      - and possibly other vos transactions on the volume
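
For example, running the first of these on an affected client is
quick and harmless.  The output line below is what I recall the
client printing; the exact wording may differ in your release:

      # fs checkvolumes
      All volumeID/name mappings checked.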

Also, as Derrick mentioned in the first email, once the client
knows about a fileserver, it will remember it until the client is
rebooted.  Every once in a while the CheckServersDaemon runs and
sees that it gets no answer from this fileserver.  From then on,
every 5 minutes or so, the client sends a GetTime request to the
fileserver IP to determine whether the fileserver is back up.
That GetTime call could have been the tcpdump traffic you saw
going to this old fileserver IP.
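
If you want to confirm that, a capture filter along these lines
on the client should show the probes (7000 is the fileserver's
Rx port; substitute the old server's address for the
placeholder):

      # tcpdump -n host <old-fileserver-ip> and port 7000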

Sorry for chiming in on this one, but I wanted to add some
information to this issue, since "fs checkv" seems to have gotten
us out of this problem.

A kdump snapshot would have really helped.

And one more thing to check is whether OpenAFS has changed any of
the callback timing for volumes.

Thanks

Todd DeSantis
AFS Support
IBM Pittsburgh Lab



Rich Sudlow <rich@nd.edu>
Sent by: openafs-info-admin@openafs.org
08/09/2005 05:21 PM

To: dhk@ccre.com
cc: "'openafs'" <openafs-info@openafs.org>
Subject: Re: [OpenAFS] Problems on AFS Unix clients after AFS fileserver moves

Dexter 'Kim' Kimball wrote:
> fs checkv will cause the client to discard what it remembers
> about volumes.
> Did you try that?

No - That worked!

Thanks

Rich

>
> Kim
>
>
>      -----Original Message-----
>      From: openafs-info-admin@openafs.org
>      [mailto:openafs-info-admin@openafs.org] On Behalf Of Rich Sudlow
>      Sent: Tuesday, August 09, 2005 9:58 AM
>      To: openafs
>      Subject: [OpenAFS] Problems on AFS Unix clients after AFS
>      fileserver moves
>
>
>      We've been having problems with our cell for the last couple
>      years with AFS clients after fileservers are taken out of service.
>      Before that things seemed to work ok when doing fileserver
>      moves and
>      rebuilding. All data was moved off the fileserver but the clients
>      still seem to have some need to talk to it.  In the past the AFS
>      admins have left the fileservers up and empty for a number of
>      days to try to resolve this issue - but it doesn't resolve the
>      issue.
>
>      For example, a recent case:
>
>      The fileserver reno.helios.nd.edu was shut down after all data
>      was moved off of it.  However, the client still can't get to
>      a number of AFS files.
>
>      [root@xeon109 root]# fs checkservers
>      These servers unavailable due to network or server problems:
>      reno.helios.nd.edu.
>      [root@xeon109 root]# cmdebug reno.helios.nd.edu -long
>      cmdebug: error checking locks: server or network not responding
>      cmdebug: failed to get cache entry 0 (server or network
>      not responding)
>      [root@xeon109 root]# cmdebug reno.helios.nd.edu
>      cmdebug: error checking locks: server or network not responding
>      cmdebug: failed to get cache entry 0 (server or network
>      not responding)
>      [root@xeon109 root]#
>
>      [root@xeon109 root]#  vos listvldb -server reno.helios.nd.edu
>      VLDB entries for server reno.helios.nd.edu
>
>      Total entries: 0
>      [root@xeon109 root]#
>
>      on the client:
>      rxdebug localhost 7001 -version
>      Trying 127.0.0.1 (port 7001):
>      AFS version:  OpenAFS 1.2.11 built  2004-01-11
>
>
>      This is a Linux 2.4 client and I don't have kdump - I have
>      also had these problems on sun4x_58 clients.
>
>      I should mention that we've seen some correlation to this
>      happening on machines with "busy" AFS caches - which makes
>      it even more frustrating, as it seems to affect the
>      machines that depend on AFS the most.  We've tried lots of
>      fs flush* * - so far we've ended up rebooting, which does
>      fix the problem.
>
>      Does anyone have any clues what the problem is or what a
>      workaround might be?
>
>      Thanks
>
>      Rich
>
>      --
>      Rich Sudlow
>      University of Notre Dame
>      Office of Information Technologies
>      321 Information Technologies Center
>      PO Box 539
>      Notre Dame, IN 46556-0539
>
>      (574) 631-7258 office phone
>      (574) 631-9283 office fax
>


--
Rich Sudlow
University of Notre Dame
Office of Information Technologies
321 Information Technologies Center
PO Box 539
Notre Dame, IN 46556-0539

(574) 631-7258 office phone
(574) 631-9283 office fax

_______________________________________________
OpenAFS-info mailing list
OpenAFS-info@openafs.org
https://lists.openafs.org/mailman/listinfo/openafs-info
