[OpenAFS] AFS Problem with Win2K
David Claessens
dac@tinc.be
Thu, 27 Jan 2005 10:52:27 +0100
Nope, they're identical; both look like this:
>intra.tinc.be #
10.2.1.145 # kirk.intra.tinc.be
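
For reference, a CellServDB stanza is just a line for the cell starting with ">"
(anything after a "#" there is a free-form description), followed by one line per
database server with its IP address and the hostname as a "#" comment. A
one-server cell like mine would look roughly like the sketch below; the
description text after the cell name is arbitrary:

>intra.tinc.be          #description of the cell
10.2.1.145              #kirk.intra.tinc.be
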
Jeffrey Altman wrote:
> Are the CellServDB entries different on the two machines?
>
> Jeffrey Altman
>
>
> David Claessens wrote:
>
> > I have tested the commands listed below on Linux and Windows:
> >
> > ===========================
> > Linux
> > ===========================
> > $ fs examine /afs/intra.tinc.be/home
> > Volume status for vid = 536870915 named root.cell
> > Current disk quota is 5000
> > Current blocks used are 7
> > The partition has 78756824 blocks available out of 78757480
> >
> > $ vos examine 536870915 -cell intra.tinc.be
> > root.cell 536870915 RW 7 K On-line
> > kirk.intra.tinc.be /vicepa
> > RWrite 536870915 ROnly 0 Backup 0
> > MaxQuota 5000 K
> > Creation Thu Jan 20 14:18:09 2005
> > Last Update Mon Jan 24 17:30:25 2005
> > 19 accesses in the past day (i.e., vnode references)
> >
> > RWrite: 536870915
> > number of sites -> 1
> > server kirk.intra.tinc.be partition /vicepa RW Site
> > $ fs listacl /afs/intra.tinc.be/home
> > Access list for /afs/intra.tinc.be/home is
> > Normal rights:
> > system:administrators rlidwka
> > system:anyuser rl
> > ===========================
> > Windows
> > ===========================
> > $ fs examine \\afs\intra.tinc.be\home
> >
> > fs:'\\afs\intra.tinc.be\home': code 0x19
> >
> > $ vos examine 536870915 -cell intra.tinc.be
> >
> > Could not fetch the entry for volume number 536870915 from VLDB
> >
> > $ fs listacl \\afs\intra.tinc.be\home
> >
> > fs:'\\afs\intra.tinc.be\home': code 0x0
> >
> >
> > PS: I noticed a little later that I had mistakenly not mailed the list, and I
> > have resubmitted my first mail to the OpenAFS mailing list. It is pending
> > review and approval by a list administrator there because, with the log files
> > attached, the mail is larger than the allowed 40KB.
> >
> > Jeffrey Altman wrote:
> >
> >> David Claessens wrote:
> >> > \WINDOWS\TEMP\afsd_init.log
> >> > \WINDOWS\TEMP\afsd.log
> >> >
> >> > See attachments. I've filtered afsd_init.log by date/time, but I couldn't
> >> > do this with afsd.log, so I've included it in full from the past few days,
> >> > although it never seems to grow beyond 5000 lines.
> >>
> >> The afsd.log file is a circular trace log. You can control it using
> >>
> >> fs trace [-on] [-off] [-reset] [-dump] [-help]
> >>
> >> within each session.
> >>
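> >> As a rough sketch of a typical capture (assuming you reproduce the failure
> >> in the same session), you would reset and enable tracing, trigger the
> >> failing access, and then dump the buffer into the afsd.log file:
> >>
> >>     fs trace -reset
> >>     fs trace -on
> >>     dir \\afs\intra.tinc.be\home
> >>     fs trace -dump
> >>     fs trace -off
> >>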
> >> The log files do not show any misbehavior.
> >>
> >> > $ net view \\afs
> >> > Shared resources at \\afs
> >> >
> >> > AFS
> >> >
> >> > Share name     Type   Used as   Comment
> >> > -------------------------------------------------------------------------------
> >> > .openafs.org   Disk
> >> > all            Disk   (UNC)
> >> > auto1          Disk   (UNC)
> >> > auto2          Disk   (UNC)
> >> > Home           Disk   (UNC)
> >> > intra.tinc.b   Disk
> >> > openafs.org    Disk
> >>
> >> This indicates that the AFS SMB server is successfully working.
> >>
> >> > $ tokens
> >> >
> >> > Tokens held by the Cache Manager:
> >> >
> >> > User dac's tokens for afs@intra.tinc.be [Expires Jan 26 20:12]
> >> > --End of list --
> >>
> >> And that you have a token. (is it the right one?)
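> >>
> >> If in doubt, a quick way to verify is to discard the tokens and obtain them
> >> again explicitly for the cell; a sketch, assuming the klog.exe and unlog.exe
> >> shipped with OpenAFS for Windows and password authentication rather than
> >> integrated logon:
> >>
> >>     unlog
> >>     klog -principal dac -cell intra.tinc.be
> >>     tokens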
> >>
> >> > $ dir \\afs\all
> >> > Volume in drive \\afs\all is AFS
> >> > Volume Serial Number is 0000-04D2
> >> >
> >> > Directory of \\afs\all
> >> >
> >> > 21/01/2005 15:19 <DIR> .
> >> > 21/01/2005 15:19 <DIR> ..
> >> > 21/01/2005 15:19 <DIR> openafs.org
> >> > 21/01/2005 15:19 <DIR> intra.tinc.be
> >> > 0 File(s) 4.150 bytes
> >> > 4 Dir(s) 1.099.511.626.752 bytes free
> >>
> >> You installed OAFW using the default cell of openafs.org. It therefore
> >> constructed a default set of mount points for .openafs.org and
> >> openafs.org. The intra.tinc.be mount point was generated when you
> >> first accessed \\AFS\intra.tinc.be or when you added a mount point
> >> with "fs mkmount".
> >>
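> >> To illustrate, an existing mount point can be inspected, or an extra one
> >> created by hand, roughly like this (the "test" directory name below is
> >> purely a hypothetical example):
> >>
> >>     fs lsmount -dir \\afs\all\intra.tinc.be
> >>     fs mkmount -dir \\afs\all\test -vol root.cell -cell intra.tinc.be
> >>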
> >> > After this command I went ahead and tried some more dirs. This command
> >> > returned immediately.
> >> >
> >> > $ dir \\afs\intra.tinc.be\
> >> > Volume in drive \\afs\intra.tinc.be is AFS
> >> > Volume Serial Number is 0000-04D2
> >> >
> >> > Directory of \\afs\intra.tinc.be
> >> >
> >> > 20/01/2005 14:33 <DIR> .
> >> > 20/01/2005 14:33 <DIR> ..
> >> > 24/01/2005 17:30 <DIR> home
> >> > 0 File(s) 6.144 bytes
> >> > 3 Dir(s) 1.099.511.626.752 bytes free
> >> >
> >> > The following command, however, took about a minute to complete, and this
> >> > should be 3 directories for the users dac, drl and gko. These
> >> > shouldn't be 0-byte files.
> >> >
> >> > $ dir \\afs\intra.tinc.be\home
> >> > Volume in drive \\afs\intra.tinc.be is AFS
> >> > Volume Serial Number is 0000-04D2
> >> >
> >> > Directory of \\afs\intra.tinc.be\home
> >> >
> >> > 24/01/2005 17:30 <DIR> .
> >> > 20/01/2005 14:33 <DIR> ..
> >> > 01/01/1970 04:59 0 dac
> >> > 01/01/1970 04:59 0 drl
> >> > 01/01/1970 04:59 0 gko
> >> > 3 File(s) 4.096 bytes
> >> > 2 Dir(s) 1.099.511.626.752 bytes free
> >>
> >> What is the status of this volume?
> >>
> >> fs examine \\afs\intra.tinc.be\home
> >>
> >> will give you the volume status as seen by the cache manager.
> >>
> >> vos examine <vol-id> -cell <cell-name>
> >>
> >> will give you the volume's VLDB and file server information.
> >>
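> >> (vos examine also accepts the volume name instead of the numeric id, so
> >> the equivalent query for your cell would be:)
> >>
> >>     vos examine root.cell -cell intra.tinc.be
> >>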
> >> The fact that you are seeing files instead of directories would imply
> >> that these entries are really symlinks which cannot be evaluated either
> >> because you do not have appropriate credentials or because the
> >> destination is not a reachable path.
> >>
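> >> Since the Linux client resolves this path correctly, one quick sketch is to
> >> look at the entries from that side, which will show whether they are
> >> symlinks (and where they point) or mount points:
> >>
> >>     ls -l /afs/intra.tinc.be/home
> >>     fs lsmount -dir /afs/intra.tinc.be/home/dac
> >>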
> >> Check the ACLs with
> >>
> >> fs listacl \\afs\intra.tinc.be\home
> >>
> >> > And another thing I've noticed: it reports a TB of free disk space although
> >> > my test server is only sharing an 80 GB drive. Is this 'normal'
> >> > behaviour??
> >>
> >> possibly.
> >>
> >> > PS: Sorry for the Dutch in the command output; my 2 Windows test clients are
> >> > unfortunately the only 2 Dutch-installed machines in the network.
> >>
> >> FYI, in general I prefer that discussions remain on the mailing list so
> >> that others can benefit from your experience.
> >>
> >> Jeffrey Altman
> >