[OpenAFS] new RW root.cell

Susan Litzinger susan@psc.edu
Thu, 7 Mar 2019 13:43:03 -0500


I got that to work for root.cell, and now I find that root.afs is in the
same situation.  Does anyone see any harm in doing the same steps for
root.afs?  Its current status is:

[root@afs-vmc 2019-March]# vos volinfo -id root.afs -localauth
root.afs                          536901418 RW        290 K  On-line
    daphne.psc.edu /vicepdc
    RWrite  536901418 ROnly          0 Backup  536903011
    MaxQuota       1000 K
    Creation    Thu Dec 23 09:37:33 1999
    Copy        Sun Jun 10 11:24:25 2012
    Backup      Sat Mar  2 02:32:35 2019
    Last Access Tue Feb 12 13:45:30 2013
    Last Update Tue Jan  9 00:24:10 2007
    0 accesses in the past day (i.e., vnode references)

    RWrite: 536901418     ROnly: 536903611     Backup: 536903011
    number of sites -> 5
       server daphne.psc.edu partition /vicepdc RW Site
       server velma.psc.edu partition /vicepcb RO Site
       server fred.psc.edu partition /vicepbd RO Site
       server daphne.psc.edu partition /vicepdd RO Site
       server afs-vmc.psc.edu partition /vicepgb RO Site  -- Not released

So I will remove the daphne /vicepdd RO volume using 'vos remove', run 'vos
release' on root.afs, create a new RO on daphne vicepc using 'vos addsite',
and then move the volume to the afs-vmc partition /vicepgb using 'vos move'.
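
Spelled out, with the server and partition names taken from the volinfo
output above, the sequence would be roughly:

  # drop the extra RO copy on daphne's second partition
  vos remove -server daphne.psc.edu -partition vicepdd -id root.afs.readonly

  # bring the remaining RO sites back in sync
  vos release -id root.afs

  # register the new RO site (assuming 'vos addsite' is what's wanted here;
  # partition name as written above)
  vos addsite -server daphne.psc.edu -partition vicepc -id root.afs

  # relocate the volume to afs-vmc /vicepgb
  vos move -id root.afs -fromserver daphne.psc.edu -frompartition vicepdc \
      -toserver afs-vmc.psc.edu -topartition vicepgb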

BUT, should the root.afs and root.cell volumes be on different servers, as
they were in our initial implementation?  I couldn't find anything in the
documentation that requires this, but I wanted to make sure before going
ahead.
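
Just for reference, the current site lists for the two volumes can be
compared with something like:

  vos listvldb -name root.afs
  vos listvldb -name root.cell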

Thanks very much.  -susan


On Thu, Mar 7, 2019 at 12:29 PM Jeffrey Altman <jaltman@auristor.com> wrote:

> On 3/7/2019 11:59 AM, Susan Litzinger wrote:
> > Hmm.. I removed the incorrect RO and created a new RO on velma,
> > then tried to 'release' the new one prior to moving it to a different
> > server and that doesn't work.  I'm hesitant to go ahead and move it if
> > it's not in a good state.
>
> "vos remsite" only modifies the location database.  It does not remove
> volumes from vice partitions.  You needed to execute "vos remove" not
> "vos remsite".  You are still receiving the EXDEV error from velma
> because there are still two vice partitions attached to velma, each of
> which has a volume from the same volume group.
>
> The fact that you were able to get into this situation is due to bugs in
> OpenAFS which were fixed long ago in AuriStorFS.  To clean up:
>
>   vos remove -server velma.psc.edu -partition vicepcb -id 537176385
>
> and then
>
>   vos release -id root.cell
>
> If you are still seeing errors, examine the VolserLog on velma.psc.edu
> and use
>
>   vos listvol -server velma.psc.edu -fast | grep 537176385
>
> to see if there are stranded readonly volumes left somewhere.
>
> Jeffrey Altman
>
>
