[OpenAFS] new RW root.cell

Susan Litzinger susan@psc.edu
Thu, 7 Mar 2019 14:50:34 -0500


I went ahead with what I thought would work, and it does!  Thanks for the
help from everyone! We appreciate it very much.  Now we can move off the
old servers and into the 21st century. :-)

[root@afs-vmc 2019-March]# vos volinfo root.afs
vsu_ClientInit: Could not get afs tokens, running unauthenticated.
root.afs                          536901418 RW        290 K  On-line
    afs-vma.psc.edu /vicepea
    RWrite  536901418 ROnly          0 Backup          0
    MaxQuota       1000 K
    Creation    Thu Dec 23 09:37:33 1999
    Copy        Thu Mar  7 14:46:40 2019
    Backup      Sat Mar  2 02:32:35 2019
    Last Access Tue Feb 12 13:45:30 2013
    Last Update Tue Jan  9 00:24:10 2007
    0 accesses in the past day (i.e., vnode references)

    RWrite: 536901418     ROnly: 536903611
    number of sites -> 5
       server afs-vma.psc.edu partition /vicepea RW Site
       server velma.psc.edu partition /vicepcb RO Site
       server fred.psc.edu partition /vicepbd RO Site
       server daphne.psc.edu partition /vicepdc RO Site
       server afs-vma.psc.edu partition /vicepea RO Site

-Susan


On Thu, Mar 7, 2019 at 1:43 PM Susan Litzinger <susan@psc.edu> wrote:

> I got that to work for root.cell and now I find that root.afs is in the
> same situation.  Does anyone see any harm in doing the same steps for
> root.afs?  Its current status is:
>
> [root@afs-vmc 2019-March]# vos volinfo -id root.afs -localauth
> root.afs                          536901418 RW        290 K  On-line
>     daphne.psc.edu /vicepdc
>     RWrite  536901418 ROnly          0 Backup  536903011
>     MaxQuota       1000 K
>     Creation    Thu Dec 23 09:37:33 1999
>     Copy        Sun Jun 10 11:24:25 2012
>     Backup      Sat Mar  2 02:32:35 2019
>     Last Access Tue Feb 12 13:45:30 2013
>     Last Update Tue Jan  9 00:24:10 2007
>     0 accesses in the past day (i.e., vnode references)
>
>     RWrite: 536901418     ROnly: 536903611     Backup: 536903011
>     number of sites -> 5
>        server daphne.psc.edu partition /vicepdc RW Site
>        server velma.psc.edu partition /vicepcb RO Site
>        server fred.psc.edu partition /vicepbd RO Site
>        server daphne.psc.edu partition /vicepdd RO Site
>        server afs-vmc.psc.edu partition /vicepgb RO Site  -- Not released
>
> So I will remove the daphne vicepdd RO volume using 'vos remove', 'vos
> release' root.afs, create a RO on daphne vicepdc using 'vos addsite', then
> move the volume to afs-vmc partition vicepgb using 'vos move'.
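>
> Spelled out, that's roughly (a sketch; I'll double-check the IDs before
> running anything):
>
>   vos remove -server daphne.psc.edu -partition vicepdd -id root.afs.readonly
>   vos release -id root.afs
>   vos addsite -server daphne.psc.edu -partition vicepdc -id root.afs
>   vos move -id root.afs -fromserver daphne.psc.edu -frompartition vicepdc \
>       -toserver afs-vmc.psc.edu -topartition vicepgb
>   vos release -id root.afs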
>
> BUT, should the root.afs and root.cell volumes be on different servers as
> they were in our initial implementation?  I couldn't find anything in the
> doc that states this but wanted to make sure before I went ahead and did
> it.
>
> Thanks very much.  -susan
>
>
> On Thu, Mar 7, 2019 at 12:29 PM Jeffrey Altman <jaltman@auristor.com>
> wrote:
>
>> On 3/7/2019 11:59 AM, Susan Litzinger wrote:
>> > Hmm.. I removed the incorrect RO and created a new RO on velma,
>> > then tried to 'release' the new one prior to moving it to a different
>> > server and that doesn't work.  I'm hesitant to go ahead and move it if
>> > it's not in a good state.
>>
>> "vos remsite" only modifies the location database.  It does not remove
>> volumes from vice partitions.  You needed to execute "vos remove" not
>> "vos remsite".  You are still receiving the EXDEV error from velma
>> because there are still two vice partitions attached to velma, each of
>> which has a volume from the same volume group.
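>>
>> For instance, against the same site the two commands differ like this
>> (a sketch; 537176385 is the stray clone's ID):
>>
>>   vos remsite -server velma.psc.edu -partition vicepcb -id root.cell
>>   vos remove -server velma.psc.edu -partition vicepcb -id 537176385
>>
>> The first deletes only the RO site entry from the VLDB; the second also
>> deletes the volume data from the vice partition.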
>>
>> The fact that you were able to get into this situation is due to bugs in
>> OpenAFS which were fixed long ago in AuriStorFS.  To clean up:
>>
>>   vos remove -server velma.psc.edu -partition vicepcb -id 537176385
>>
>> and then
>>
>>   vos release -id root.cell
>>
>> If you are still seeing errors, examine the VolserLog on velma.psc.edu
>> and use
>>
>>   vos listvol -server velma.psc.edu -fast | grep 537176385
>>
>> to see if there are stranded readonly volumes left somewhere.
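>>
>> If a stranded copy does exist on disk but no longer has a VLDB entry,
>> "vos zap" can delete it in place (a sketch, reusing the same clone ID):
>>
>>   vos zap -server velma.psc.edu -partition vicepcb -id 537176385
>>
>> Unlike "vos remove", "vos zap" does not touch the VLDB.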
>>
>> Jeffrey Altman
>>
>>
