[OpenAFS] vfsck taking a long time

Neulinger, Nathan nneul@umr.edu
Thu, 14 Aug 2003 11:12:09 -0500


If you compile AFS with the --enable-fast-restart option, you can start
without salvaging.
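
For example, a minimal sketch of the build (assuming a standard 1.2.x
source tree; run configure with whatever other options you normally
use):

    ./configure --enable-fast-restart
    make

With fast-restart, the fileserver starts without salvaging first;
volumes that were not cleanly detached are simply left offline until
you salvage them individually.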

I don't remember whether the -DontSalvage option to the salvager is
supported on a server compiled without that option, but you can try it
too. It tells the salvager to exit immediately. (Put it in BosConfig.)
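
A sketch of the fs bnode in /usr/afs/local/BosConfig with the flag
added (paths assume the standard server layout; adjust for your
installation):

    bnode fs fs 1
    parm /usr/afs/bin/fileserver
    parm /usr/afs/bin/volserver
    parm /usr/afs/bin/salvager -DontSalvage
    end

Edit the file while bosserver is down (it rewrites BosConfig on
shutdown), then start bosserver again.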

-- Nathan

------------------------------------------------------------
Nathan Neulinger                       EMail:  nneul@umr.edu
University of Missouri - Rolla         Phone: (573) 341-4841
UMR Information Technology             Fax: (573) 341-4216


> -----Original Message-----
> From: James E. Dobson [mailto:James.E.Dobson@Dartmouth.EDU]
> Sent: Thursday, August 14, 2003 11:01 AM
> To: openafs-info@openafs.org
> Subject: [OpenAFS] vfsck taking a long time
>
> Hi All,
>
> I have an OpenAFS 1.2.8 server running on Solaris 9. After a power
> outage, one of my partitions requires a vfsck, which has now been
> running for over 28 hours:
>
> 18934 root       87M   86M run     16    0  28:48:42  11% vfsck/1
>
> This is a 135G partition, nearly full (.....was 98%). I was able to
> mount it read-only earlier, but couldn't get fs to run on it since
> the salvager wanted to "clean" it. I have not had the best of luck
> with salvaging my volumes, so this worries me. Is there any way to
> use a tool such as fsdb to mark this partition clean and mount it
> read-write to recover the data, or perhaps to debug the AFS fsck?
> This is a native inode fileserver; I have been running namei on all
> my other servers.
>
> Here are the other 3 partitions on the server:
>
> [jed@bacchus] ~ > df -k /vicep{a,c,d}
> Filesystem            kbytes    used   avail capacity  Mounted on
> /dev/vx/dsk/afsdg/vicepa
>                       141089011 83058986 56619135    60%    /vicepa
> /dev/vx/dsk/afsdg/vicepc
>                       141089011 108288754 31389367    78%    /vicepc
> /dev/vx/dsk/afsdg/vicepd
>                       141089011 68779070 70899051    50%    /vicepd
>
> For those that use VxVM, my volume looks fine from the volume
> management tools:
>
> [jed@bacchus] ~ > vxprint -ht vicepb
> Disk group: afsdg
>
> V  NAME         RVG          KSTATE   STATE    LENGTH    READPOL   PREFPLEX  UTYPE
> PL NAME         VOLUME       KSTATE   STATE    LENGTH    LAYOUT    NCOL/WID  MODE
> SD NAME         PLEX         DISK     DISKOFFS LENGTH    [COL/]OFF DEVICE    MODE
> SV NAME         PLEX         VOLNAME  NVOLLAYR LENGTH    [COL/]OFF AM/NM     MODE
> DC NAME         PARENTVOL    LOGVOL
> SP NAME         SNAPVOL      DCO
>
> v  vicepb       -            ENABLED  ACTIVE   286657920 RAID      -         raid5
> pl vicepb-01    vicepb       ENABLED  ACTIVE   286657920 RAID      5/160     RW
> sd disk01-02    vicepb-01    disk01   71664480 71664480  0/0       c1t1d0    ENA
> sd disk02-02    vicepb-01    disk02   71664480 71664480  1/0       c1t2d0    ENA
> sd disk03-02    vicepb-01    disk03   71664480 71664480  2/0       c1t3d0    ENA
> sd disk04-02    vicepb-01    disk04   71664480 71664480  3/0       c1t4d0    ENA
> sd disk05-02    vicepb-01    disk05   71664480 71664480  4/0       c1t5d0    ENA
>
> Any clues?
>
> Thanks,
>
> -jed
>
> --
>
> // Jed Dobson
> // Department of Psychological & Brain Sciences
> // Dartmouth College
> // James.E.Dobson@Dartmouth.EDU
> // (603) 646-9324
>
> _______________________________________________
> OpenAFS-info mailing list
> OpenAFS-info@openafs.org
> https://lists.openafs.org/mailman/listinfo/openafs-info
>