[OpenAFS] a noobs question and problems on a new cell

Kim Kimball dhk@ccre.com
Wed, 16 May 2007 11:33:18 -0600


To play devil's advocate, unless the network is really unstable (or 
pitifully slow) the worst that will happen is temporary inconvenience, 
and mostly to administrators.  Users will be unable to modify PTS 
protection groups and will not be able to change their passwords, unless 
the network isolates them from all other (up) DB servers, in which case 
we have other concerns as well.
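
If you want to see where quorum sits at any given moment, udebug will 
tell you.  A quick sketch -- the host name is made up; 7002 and 7003 
are the standard ptserver and vlserver ports:

   udebug db1.example.com 7003   # ask the vlserver who the sync site is
   udebug db1.example.com 7002   # same question for the ptserver

The output reports whether that host believes it is the sync site and 
how recently it heard from its peers.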

And of course there's no requirement to put the DB servers where the 
file servers are.

I've used both approaches -- all DB servers in one location, and three 
(and five) DB servers each in a different location.

I've also moved DB server functionality off of machines that were on 
unstable (or pitifully slow) networks.

YMMV

If the goals are as Anne describes, then ACLs and protection groups are 
the way to go.  The use of RW mount points does not help discriminate 
access within a directory.  ACLs do.
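
For the goal Anne describes -- headquarters writes, district sites 
read -- a minimal sketch (group, user, and path names are made up; 
creating a prefix-less group requires system:administrators):

   pts creategroup writers                # protection group for writers
   pts adduser alice writers              # add each headquarters user
   fs setacl /afs/domain/software writers write        # "write" = rlidwk
   fs setacl /afs/domain/software system:anyuser read  # "read" = rl

Keep in mind that AFS ACLs apply per directory, not per file, so you 
set them on each directory down the tree as needed.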

Have fun!

Kim


Anne.Salemme@Dartmouth.EDU wrote:
> i have been following this thread, and i haven't seen any description 
> of what you are actually trying to accomplish. are you trying to make 
> it so that only the users at "headquarters" can write to certain 
> files, and that the users at the "district sites" will only be able to 
> read the files, not make changes?
>
> it sounds like you are heading for trouble trying to use replicated 
> volumes and possibly funky mounting schemes to accomplish something 
> you can do using acls.
> why won't acls do what you want?
>
> also, people have mentioned the issue of server quorum in a cell where 
> the database servers are geographically remote from each other. before 
> you design your cell, please consider the underlying network. with 
> AFS, unless you have a really stable network underneath, you will find 
> that putting fileservers or database servers out at remote sites is 
> not necessarily a good thing. example: one cell, three database 
> servers, one in sweden, one in italy, one in brazil, users in all 
> three places, administrators in sweden; say quorum drifts to brazil, 
> then the network between brazil and sweden goes down; the admins can't 
> make any normal changes until the network comes back. how inconvenient!
>
> as with any construction project, it's worth putting the time into 
> planning. good luck!
>
> anne
>
>
>
>
> Quoting Klaas Hagemann <kerberos@northsailor.de>:
>
>> <snip>
>>> I tried it this way and didn't get it to work:
>>> a volume called software (~1 GB);
>>> in our headquarters, the RW volume on the AFS server;
>>> in a district, the (nightly) RO snapshot of that volume;
>>> mounted into AFS like:
>>> /afs/domain/.software (-rw)
>>> /afs/domain/software (ro)
>>> so if I understand that right, I should now be able to access the
>>> data under /afs/domain/.software at both sites.
>>>
>> That is right, but you will always get the RW instance in your
>> headquarters.
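>>
>> For reference, the RO snapshot side of a setup like yours is
>> typically defined once with vos addsite and then refreshed nightly --
>> server and partition names here are made up:
>>
>>   vos addsite district-fs.example.com /vicepa software  # define the RO site once
>>   vos release software                                  # run nightly, e.g. from cron
>>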
>>> in the headquarters it should always use the RW instance, and in
>>> the district it should use the RW instance (over VPN) on a write,
>>> and on a read it should prefer the local RO instance. but that
>>> doesn't work for me.
>>> every time I accessed some software in the district, it was
>>> transferred completely over the VPN from our headquarters.
>>> did I misunderstand something, or have I done something wrong!?
>>>
>> If you choose the RW path (the "dotted" path) /afs/domain/.software,
>> you will always get the RW instance. OpenAFS does not consider the
>> location of the volume at this point.
>>
>> If you use the "normal" path /afs/domain/software, you will
>> preferably be directed to an RO instance of that volume. In your
>> case, users in the headquarters would use the volume in one of your
>> district sites.
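>>
>> The two kinds of mount point are created roughly like this, assuming
>> the usual convention where /afs/.domain is the RW path of root.cell:
>>
>>   fs mkmount /afs/.domain/.software software -rw  # explicit RW mount point
>>   fs mkmount /afs/.domain/software software       # regular mount point
>>   vos release root.cell                           # publish under /afs/domain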
>>
>> The decision whether to use an RO or an RW instance of a volume is
>> not made based on the location of the volume. The decision is based
>> on:
>> - is it an explicit RW mount point (.software)?
>> - are RO instances available?
>>
>> If you do not make an RW mount point, the AFS client will contact RO
>> volumes as long as it can reach one. Only if no RO volume is
>> available is the RW instance used.
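>>
>> You can check what a given path actually resolves to:
>>
>>   fs lsmount /afs/domain/software  # '#software' = regular, '%software' = RW
>>   fs examine /afs/domain/software  # names software.readonly on an RO instance
>>   fs whereis /afs/domain/software  # shows which file server is serving it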
>>
>> Then there is another point to be aware of:
>> "Once RW, always RW"
>> If there is even one RW volume along your AFS path, all the mount
>> points below it will be resolved as RW too. So if your root.cell
>> volume (which is the mount point for /afs/domain) is only available
>> as an RW version, you will never be able to access RO volumes.
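>>
>> So make sure root.afs and root.cell have RO replicas.  Roughly, with
>> a made-up server and partition:
>>
>>   vos addsite fs1.example.com /vicepa root.cell
>>   vos release root.cell
>>   fs examine /afs/domain   # should now report root.cell.readonly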
>>
>>> the idea of this behaviour (take the local RO if available and
>>> fetch just what you still need over the VPN) was the coolest
>>> feature of AFS, I thought, and is the main reason why I was looking
>>> at the whole AFS thing, and not at something like NFS.
>>>
>> That is basically still true, but the decision is not made when a
>> file is accessed. The decision is made by choosing the right mount
>> point for a volume.
>>
>> Which volume you have access to is a matter of mount points and
>> ACLs, NOT of the location of the volume. In an ideal world, a user
>> does not need to know on which server his data is stored.
>>
>>
>> Klaas
>>