[OpenAFS] fileserver (sunx86_510) doesn't attach volumes

Derrick Brashear <shadow@gmail.com>
Fri, 21 Mar 2008 09:53:22 -0400


If you have a recent enough patch level that something changed in the
UFS code (I don't know that it did, but it's possible), then it may
simply no longer be possible to run inode fileservers on Solaris 10.

I know how to replace inode with something that will work almost
everywhere; maybe I should just find a few minutes to write the code.
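
Roughly, the idea is to stop depending on raw-inode tricks entirely and
keep each vnode's data in an ordinary file under the vice partition,
fanned out over a couple of hash directories so no single directory gets
enormous.  A completely untested sketch of that idea (the names, layout,
and hashing here are made up on the spot, just to show the shape of it):

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

/* Map a (volume, vnode, uniquifier) triple to a path under the partition.
 * Two levels of hash directories keep any one directory from getting huge. */
static void
vnode_path(char *buf, size_t len, const char *part,
           unsigned int volid, unsigned int vnode, unsigned int unique)
{
    snprintf(buf, len, "%s/FileData/%02x/%02x/V%u.%u.%u",
             part, volid & 0xff, vnode & 0xff, volid, vnode, unique);
}

/* Open (creating if needed) the plain file backing a vnode,
 * making the hash directories along the way. */
static int
vnode_open(const char *part, unsigned int volid,
           unsigned int vnode, unsigned int unique, int flags)
{
    char path[1024], dir[1024];
    char *slash;

    vnode_path(path, sizeof(path), part, volid, vnode, unique);

    strncpy(dir, path, sizeof(dir));
    dir[sizeof(dir) - 1] = '\0';
    for (slash = strchr(dir + strlen(part) + 1, '/'); slash != NULL;
         slash = strchr(slash + 1, '/')) {
        *slash = '\0';
        if (mkdir(dir, 0700) < 0 && errno != EEXIST)
            return -1;
        *slash = '/';
    }
    return open(path, flags | O_CREAT, 0600);
}

int
main(void)
{
    /* e.g. vnode 2 of volume 536872911 on /vicepa */
    int fd = vnode_open("/vicepa", 536872911, 2, 1, O_RDWR);

    if (fd < 0) {
        perror("vnode_open");
        return 1;
    }
    /* from here it's nothing but ordinary read()/write() */
    close(fd);
    return 0;
}

Once the backing store is just plain files, everything else is ordinary
open()/read()/write(), which is why something along these lines ought to
work on pretty much any filesystem.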



On Fri, Mar 21, 2008 at 9:33 AM, John Tang Boyland <boyland@cs.uwm.edu> wrote:
> We just had routine maintenance done to our servers
>  (Solaris 10 on Intel).  They are running 1.4.6, but when
>  one of the servers came back up it couldn't attach any volumes.
>  This is an inode server.
>
>  The shutdown seemed to go fine (see end of FileLog.old at end of
>  message), but when it came back up there were many, many problems
>  (see start of FileLog at end of message).
>
>  I'm a little reluctant to go right ahead and salvage the volumes
>  because the last time I did so it deleted all recently created files.
>  (See my previous openafs-info report of salvage troubles.)
>  But I'll probably be forced to again.  Is there anything I can do to
>  avoid these troubles in the future?  (Maybe inode is a bad idea?)
>
>  Any help or ideas appreciated,
>  John
>
>  ...
>  Fri Mar 21 07:09:07 2008 Shutting down file server at Fri Mar 21 07:09:07 2008
>  Fri Mar 21 07:09:07 2008 Vice was last started at Fri Mar 21 06:50:20 2008
>
>  Fri Mar 21 07:09:07 2008 Large vnode cache, 400 entries, 0 allocs, 16 gets (9 reads), 0 writes
>  Fri Mar 21 07:09:07 2008 Small vnode cache,400 entries, 0 allocs, 14 gets (7 reads), 0 writes
>  Fri Mar 21 07:09:07 2008 Volume header cache, 400 entries, 16 gets, 0 replacements
>  Fri Mar 21 07:09:07 2008 Partition /vicepa: 362217784 available 1K blocks (minfree=40246420), Fri Mar 21 07:09:07 2008 359691642 free blocks
>  Fri Mar 21 07:09:07 2008 With 90 directory buffers; 0 reads resulted in 0 read I/Os
>  Fri Mar 21 07:09:07 2008 Total Client entries = 42, blocks = 1; Host entries = 13, blocks = 1
>  Fri Mar 21 07:09:07 2008 There are 42 connections, process size 138059
>  Fri Mar 21 07:09:07 2008 There are 13 workstations, 3 are active (req in < 15 mins), 0 marked "down"
>  Fri Mar 21 07:09:07 2008 VShutdown:  shutting down on-line volumes...
>  Fri Mar 21 07:09:07 2008 VShutdown:  complete.
>  Fri Mar 21 07:09:07 2008 File server has terminated normally at Fri Mar 21 07:09:07 2008
>
>  ms 0 unused 0 unused 0 unused 0 version 0
>    other send counters: ack 143, data 322 (not resends), resends 15, pushed 0, acked&ignored 0
>         (these should be small) sendFailed 0, fatalErrors 0
>    22 server connections, 42 client connections, 21 peer structs, 28 call structs, 23 free call structs
>  0 add CB, 0 break CB, 0 del CB, 0 del FE, 0 CB's timed out, 0 space reclaim, 4 del host
>  7 CBs, 7 FEs, (14 of total of 60000 16-byte blocks)
>
>  ...
>
>  Fri Mar 21 07:13:44 2008 File server starting
>  Fri Mar 21 07:13:44 2008 afs_krb_get_lrealm failed, using cs.uwm.edu.
>  Fri Mar 21 07:13:44 2008 Set thread id 5 for FSYNC_sync
>  Fri Mar 21 07:13:44 2008 FSYNC_sync: bind failed with (125), removed bogus /usr/afs/local/fssync.sock
>  Fri Mar 21 07:13:44 2008 Partition /vicepa: attaching volumes
>  Fri Mar 21 07:13:44 2008 VAttachVolume: Error reading diskDataHandle vol header /vicepa/V0536872911.vol; error=101
>  Fri Mar 21 07:13:44 2008 VAttachVolume: Error attaching volume /vicepa/V0536872911.vol; volume needs salvage; error=101
>  Fri Mar 21 07:13:44 2008 VAttachVolume: Error reading smallVnode vol header /vicepa/V0536872913.vol; error=101
>  Fri Mar 21 07:13:44 2008 VAttachVolume: Error attaching volume /vicepa/V0536872913.vol; volume needs salvage; error=101
>  ... over and over for each volume
>
>  /etc/vfstab has
>
>  ...
>  /dev/dsk/c0t600C0FF000000000098C96204C3F4A00d0s2        /dev/rdsk/c0t600C0FF000000000098C96204C3F4A00d0s2       /vicepa afs     3       yes     nologging
>  ...
>
>
>  John
>