[OpenAFS-devel] Read-only file system
Erik Osterman
e@osterman.com
Sun, 11 Mar 2007 13:40:56 -0700
Heh, this is what happens when you project what you really want over the
reality of what you really have. I've read this fact in several places,
but I just couldn't accept that it REALLY did work this way. Thanks
everyone!
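For anyone finding this thread in the archives: the fix Martin describes amounts to editing through the read-write path and then releasing the volume to its read-only replicas. A minimal sketch, assuming the cell name osterman.com and the volume name home taken from this thread (requires an AFS client, and admin tokens for vos release):

```shell
# Write through the read-write path (/afs/.<cellname>), not the
# read-only path (/afs/<cellname>), which serves the RO replicas.
cp ~/notes.txt /afs/.osterman.com/home/jwalker/

# Push the RW volume's current contents out to its RO replica sites.
vos release home

# Readers on the read-only path now see the change.
ls /afs/osterman.com/home/jwalker/
```

The leading dot in /afs/.osterman.com is the conventional mount point for the RW volume tree; writes anywhere under the undotted path will fail with "Read-only file system" once RO replicas exist.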
Best Regards,
Erik Osterman
Martin MOKREJŠ wrote:
> write to /afs/.osterman.com/some/path and not /afs/osterman.com/some/path.
> It is a habit to mount under /afs/.osterman.com the RW volume. To see the
> changes in /afs/osterman.com/some/path you have to do 'vos release osterman.com'.
> m
>
> Erik Osterman wrote:
>> I suspect it's something very fundamental that I don't understand.
>>
>> *Rule 2:* Follow the Read-only Path When Possible
>>
>> If a mount point resides in a read-only volume and the volume that it
>> references is replicated, the Cache Manager attempts to access a
>> read-only copy of the volume; if the referenced volume is not
>> replicated, the Cache Manager accesses the read/write copy. The Cache
>> Manager is thus said to prefer a read-only path through the filespace,
>> accessing read-only volumes when they are available.
>>
>>
>> Does this mean that since I now have RO volumes, when I go into my home
>> directory (the home volume mounted to /afs/ourcell/home), that it finds
>> the RO replicated copy on our other server (donjulio) and since that's
>> where I'm accessing it, that if I try to write to it I will get
>> "Read-Only filesystem"? Surely, this can't be!
>>
>> Best,
>>
>> Erik Osterman
>>
>>
>>
>> Erik Osterman wrote:
>>> Spent the better part of the day yesterday configuring replication. I
>>> felt like I overcame the learning curve; OpenAFS was beginning to make
>>> sense. Last night, everything was running just fine, by that I mean, I
>>> had several RW volumes replicated to 2 different hosts. I could read
>>> and write just fine on everything. I woke up today, and now it's
>>> complaining "Read-only file system" every time I try to make any
>>> modifications. I figured something must have flipped the RW partition
>>> into RO mode.
>>>
>>> This is happening to all my volumes, but I'll just refer to my "home"
>>> volume. We have just a handful of users, so all users are just in home.
>>>
>>> vos listvldb jwalker
>>> home
>>> RWrite: 536870918 ROnly: 536870919
>>> number of sites -> 2
>>> server jwalker partition /vicepa RW Site
>>> server donjulio partition /vicepa RO Site
>>>
>>> vos listvol jwalker
>>> Total number of volumes on server jwalker partition /vicepa: 8
>>> home 536870918 RW 5039076 K On-line
>>>
>>> vos examine home
>>> home 536870918 RW 5039076 K On-line
>>> jwalker /vicepa
>>> RWrite 536870918 ROnly 536870919 Backup 0
>>> MaxQuota 0 K
>>> Creation Sat Mar 10 15:37:36 2007
>>> Copy Sat Mar 10 15:37:36 2007
>>> Backup Never
>>> Last Update Sun Mar 11 04:00:33 2007
>>> 47061 accesses in the past day (i.e., vnode references)
>>>
>>> RWrite: 536870918 ROnly: 536870919
>>> number of sites -> 2
>>> server jwalker partition /vicepa RW Site
>>> server donjulio partition /vicepa RO Site
>>>
>>>
>>> What would cause this unexpected behavior?
>>>
>>> Dead in the water,
>>>
>>> Erik Osterman
>>> _______________________________________________
>>> OpenAFS-devel mailing list
>>> OpenAFS-devel@openafs.org
>>> https://lists.openafs.org/mailman/listinfo/openafs-devel
>