[OpenAFS] Need volume state / fileserver / salvage knowledge
Patricia O'Reilly
oreilly@qualcomm.com
Fri, 28 Jan 2011 13:04:44 -0800
Was any type of maintenance being done on the system? ZFS takes a snapshot of the filesystem before the maintenance and then rolls back to that snapshot once the maintenance is complete. If your fileserver was running when that snapshot was taken, the SALVAGE.fs file was present in your /usr/afs/local directory. So regardless of how cleanly you shut the server down, that salvage sentinel will be back after the maintenance, when ZFS reverts to the original snapshot.
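If that is what happened, a quick check before restarting the fileserver can confirm it. A minimal sketch, assuming the standard /usr/afs/local layout (AFS_LOCAL is my variable name, not part of OpenAFS; override it if your installation differs):

```shell
#!/bin/sh
# Sketch (assumed layout): report whether a stale SALVAGE.fs sentinel
# is present. The fileserver creates this file at startup and removes
# it on a clean shutdown, so a leftover copy forces a salvage.
check_salvage_sentinel() {
    dir="$1"
    if [ -e "$dir/SALVAGE.fs" ]; then
        echo "stale SALVAGE.fs found; a salvage will be forced on startup"
    else
        echo "no SALVAGE.fs sentinel; expecting a clean attach"
    fi
}

# AFS_LOCAL is an assumption about your installation; adjust as needed.
check_salvage_sentinel "${AFS_LOCAL:-/usr/afs/local}"
```

Removing a stale sentinel by hand before starting the fileserver would avoid the forced salvage, but only if you are certain the vice partitions themselves were untouched by the rollback.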
Jeff Blaine wrote:
> OpenAFS 1.4.11 on Solaris 10 SPARC servers with *ZFS* vice
> partitions
>
> The last time we brought our fileservers down (cleanly, according
> to the "shutdown" info via bos status), it struck me as odd that
> salvages were needed once they came back up. I sort of brushed
> it off.
>
> We've done it again, and the same situation is presenting itself,
> and I'm really confused about how that happens and what is going
> wrong. One of the three cleanly shut down fileservers came up
> with hundreds of unattachable volumes, and we are now salvaging
> it by hand.
>
> If anyone has any ideas, please share! I don't see anything in
> the 1.4.12 or 1.4.14 release notes that would explain this
> behavior in 1.4.11 (the first release we've used on our upgraded
> Solaris 10 + ZFS fileservers). This has cost us hours of downtime
> for these particular volumes.
>
> In the meantime, I am going to start scouring openafs.org and
> the wiki for as much information as I can about how the entire
> fileserver/clean/dirty/salvage process works (finally).
>
> Below you can (if you care to) see that the ZFS properties for
> the two fileservers match (no salvage needed vs. salvage needed).
>
> ===================================================
> Fileserver with NO Salvage Needed on Clean Shutdown
> ===================================================
>
> Showing 1 partition; all others are confirmed to be configured
> the same as this one.
>
> BosConfig Info
>
> bnode fs fs 1
> parm /usr/afs/bin/fileserver
> parm /usr/afs/bin/volserver
> parm /usr/afs/bin/salvager -tmpdir /usr/tmp -parallel all4 -DontSalvage
> end
>
> ZFS Info
>
> NAME PROPERTY VALUE SOURCE
> pool-vice/vicepa type filesystem -
> pool-vice/vicepa creation Wed Jul 15 11:23 2009 -
> pool-vice/vicepa used 30.0G -
> pool-vice/vicepa available 146G -
> pool-vice/vicepa referenced 30.0G -
> pool-vice/vicepa compressratio 1.00x -
> pool-vice/vicepa mounted yes -
> pool-vice/vicepa quota 176G local
> pool-vice/vicepa reservation none default
> pool-vice/vicepa recordsize 32K local
> pool-vice/vicepa mountpoint /vicepa local
> pool-vice/vicepa sharenfs off local
> pool-vice/vicepa checksum on default
> pool-vice/vicepa compression off local
> pool-vice/vicepa atime off local
> pool-vice/vicepa devices on default
> pool-vice/vicepa exec on local
> pool-vice/vicepa setuid on local
> pool-vice/vicepa readonly off default
> pool-vice/vicepa zoned off default
> pool-vice/vicepa snapdir hidden default
> pool-vice/vicepa aclmode groupmask default
> pool-vice/vicepa aclinherit restricted default
> pool-vice/vicepa canmount on default
> pool-vice/vicepa shareiscsi off default
> pool-vice/vicepa xattr on local
> pool-vice/vicepa copies 1 default
> pool-vice/vicepa version 3 -
> pool-vice/vicepa utf8only off -
> pool-vice/vicepa normalization none -
> pool-vice/vicepa casesensitivity sensitive -
> pool-vice/vicepa vscan off default
> pool-vice/vicepa nbmand off default
> pool-vice/vicepa sharesmb off default
> pool-vice/vicepa refquota none default
> pool-vice/vicepa refreservation none default
> pool-vice/vicepa primarycache all default
> pool-vice/vicepa secondarycache all default
> pool-vice/vicepa usedbysnapshots 0 -
> pool-vice/vicepa usedbydataset 0 -
> pool-vice/vicepa usedbychildren 0 -
> pool-vice/vicepa usedbyrefreservation 0 -
> pool-vice/vicepa logbias latency default
>
> ================================================
> Fileserver with Salvage Needed on Clean Shutdown
> ================================================
>
> Showing 1 partition (one that did have volumes on it needing
> salvage); all others are confirmed to be configured the same
> as this one.
>
> BosConfig Info
>
> bnode fs fs 1
> parm /usr/afs/bin/fileserver
> parm /usr/afs/bin/volserver
> parm /usr/afs/bin/salvager -tmpdir /usr/tmp -parallel all4 -DontSalvage
> end
>
> ZFS Info
>
> NAME PROPERTY VALUE SOURCE
> pool-vice/vicepa type filesystem -
> pool-vice/vicepa creation Mon Aug 17 9:58 2009 -
> pool-vice/vicepa used 26.6G -
> pool-vice/vicepa available 83.6G -
> pool-vice/vicepa referenced 26.6G -
> pool-vice/vicepa compressratio 1.00x -
> pool-vice/vicepa mounted yes -
> pool-vice/vicepa quota 110G local
> pool-vice/vicepa reservation none default
> pool-vice/vicepa recordsize 32K local
> pool-vice/vicepa mountpoint /vicepa local
> pool-vice/vicepa sharenfs off local
> pool-vice/vicepa checksum on default
> pool-vice/vicepa compression off local
> pool-vice/vicepa atime off local
> pool-vice/vicepa devices on default
> pool-vice/vicepa exec on default
> pool-vice/vicepa setuid on default
> pool-vice/vicepa readonly off default
> pool-vice/vicepa zoned off default
> pool-vice/vicepa snapdir hidden default
> pool-vice/vicepa aclmode groupmask default
> pool-vice/vicepa aclinherit restricted default
> pool-vice/vicepa canmount on default
> pool-vice/vicepa shareiscsi off default
> pool-vice/vicepa xattr on default
> pool-vice/vicepa copies 1 default
> pool-vice/vicepa version 3 -
> pool-vice/vicepa utf8only off -
> pool-vice/vicepa normalization none -
> pool-vice/vicepa casesensitivity sensitive -
> pool-vice/vicepa vscan off default
> pool-vice/vicepa nbmand off default
> pool-vice/vicepa sharesmb off default
> pool-vice/vicepa refquota none default
> pool-vice/vicepa refreservation none default
> pool-vice/vicepa primarycache all default
> pool-vice/vicepa secondarycache all default
> pool-vice/vicepa usedbysnapshots 0 -
> pool-vice/vicepa usedbydataset 0 -
> pool-vice/vicepa usedbychildren 0 -
> pool-vice/vicepa usedbyrefreservation 0 -
> pool-vice/vicepa logbias latency default
> _______________________________________________
> OpenAFS-info mailing list
> OpenAFS-info@openafs.org
> https://lists.openafs.org/mailman/listinfo/openafs-info
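One way to spot-check dumps like the two above is to diff them with the SOURCE column stripped, since local-vs-default differences are cosmetic when the values agree. A minimal sketch, assuming each `zfs get all` output has been saved to a file (the function name and filenames are mine):

```shell
#!/bin/sh
# Sketch: diff two saved `zfs get all` dumps, comparing only the
# PROPERTY and VALUE columns so SOURCE differences are ignored.
# Caveat: multi-word values (e.g. the creation date) will still
# show up as differences with this simple field-based parse.
compare_zfs_dumps() {
    a=$(mktemp); b=$(mktemp)
    awk '{print $2, $3}' "$1" > "$a"
    awk '{print $2, $3}' "$2" > "$b"
    diff "$a" "$b"
    status=$?
    rm -f "$a" "$b"
    return $status
}

# Usage (hypothetical filenames):
#   zfs get all pool-vice/vicepa > serverA.txt   # run on each server
#   compare_zfs_dumps serverA.txt serverB.txt
```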