[OpenAFS] a noobs question and problems on a new cell
Wed, 9 May 2007 09:15:16 -0400
i have been following this thread, and i haven't seen any description of what you are actually trying to accomplish. are you trying to make it so that only the users at "headquarters" can write to certain files, and that the users at the "district sites" will only be able to read the files, not make changes?

it sounds like you are heading for trouble trying to use replicated volumes and possibly funky mounting schemes to accomplish something you can do using acls.

why won't acls do what you want?
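as a sketch of the acl approach: a directory writable only from headquarters can be set up with the standard pts and fs commands (the group and user names here are hypothetical):

```shell
# create a group for the users who may write (name is hypothetical)
pts creategroup hq:writers

# add the headquarters users to it
pts adduser alice hq:writers
pts adduser bob hq:writers

# headquarters writers get write access on the directory,
# everyone else (including district users) gets read/lookup only
fs setacl -dir /afs/domain/software -acl hq:writers write
fs setacl -dir /afs/domain/software -acl system:anyuser read
```

with this in place there is no need for mount-point tricks to keep district users from changing the files.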
also, people have mentioned the issue of server quorum in a cell where the database servers are geographically remote from each other. before you design your cell, please consider the underlying network. with AFS, unless you have a really stable network underneath, you will find that putting fileservers or database servers out at remote sites is not necessarily a good thing. example: one cell, three database servers, one in sweden, one in italy, one in brazil, users in all three places, administrators in sweden; say quorum drifts to brazil, then the network between brazil and sweden goes down; the admins can't make any normal changes until the network comes back. how inconvenient!

as with any construction project, it's worth putting the time into planning. good luck!
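you can check which database server currently holds quorum (the "sync site") with udebug; the hostname below is a placeholder for one of your db servers:

```shell
# query the vlserver (port 7003) on one database server;
# the election output shows which server is the current sync site
udebug db1.example.com 7003
```

running this against each db server before and during a network outage makes the quorum behaviour described above easy to observe.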
Quoting Klaas Hagemann <email@example.com>:
>> I tried it this way and didn't get it to work:
>> a volume called software (~1 GB)
>> in our headquarters, the rw-volume on the afs server.
>> in a district, the (nightly) ro-snapshot of that volume.
>> mounted into afs like:
>> /afs/domain/.software (-rw)
>> /afs/domain/software (ro)
>> so if I understand that right, i should now be able to access the data under
>> /afs/domain/.software on both sites.
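for reference, the nightly ro-snapshot in such a setup is normally produced by releasing the rw volume to its replication sites, e.g. from cron (the server and partition names are hypothetical):

```shell
# one-time setup: add a read-only site on the district fileserver
vos addsite district-fs.example.com /vicepa software

# nightly: push the current rw contents out to all ro sites
vos release software
```

until `vos release` is run, the district ro instance keeps serving the old contents; writes to the rw instance are never visible through the ro path on their own.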
> That is right, but you will always get the rw-instance in your headquarters.
>> in the headquarters it should always use the rw-instance, and in the district
>> it should use the rw-instance (over vpn) on a write,
>> and on a read it should prefer the local ro-instance. but that doesn't work
>> for me.
>> every time I accessed some software in the district, it was transferred
>> completely over the vpn from our headquarters.
>> did I misunderstand something, or have I done something wrong!?
> If you choose the rw-path (the "dotted" path) /afs/domain/.software,
> you will always get the rw-instance. OpenAFS does not consider the
> location of the volume at this point.
> If you use the "normal" path /afs/domain/software, you will preferably
> be directed to an ro-instance of that volume. In your case, users in
> the headquarters would use a volume in one of your districts.
> The decision whether to use an ro or an rw instance of a volume is
> not made by the location of the volume. the decision is based on:
> - is it an explicit rw-mountpoint (.software)?
> - are ro instances available?
> If you do not make an rw-mountpoint, the afs client will contact the
> ro-volumes as long as it can access one. Only if no ro volume is
> available is the rw instance used.
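you can verify this decision from the client side with the standard inspection commands; the ro-preferring path should report the .readonly volume, the dotted path the plain rw volume:

```shell
# show which volume instance each path actually resolves to
fs examine /afs/domain/software     # expect: software.readonly
fs examine /afs/domain/.software    # expect: software (rw)

# list the fileservers holding the rw instance and all ro sites
vos examine software
```

comparing the server names in the `vos examine` output with your sites shows which copy the district clients are really reading.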
> Then there is another point to be aware of:
> "Once RW, always RW"
> So if you have only one rw-volume in your afs path, all the underlying
> mount-points will be rw too. So if your root.cell volume (which is the
> mount-point for /afs/domain) is only available as an rw-version, you
> will never be able to access ro-volumes.
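the usual fix for this is the standard "dotted cell" convention: replicate root.cell and give the cell both a normal and an explicit rw mount-point (fileserver and partition names below are hypothetical, /afs/domain stands in for your cell path):

```shell
# normal mount-point: clients below it prefer ro replicas
fs mkmount -dir /afs/domain -vol root.cell

# explicit rw mount-point: everything below it stays rw
fs mkmount -dir /afs/.domain -vol root.cell -rw

# replicate root.cell so the normal path actually has an ro to use
vos addsite fs1.example.com /vicepa root.cell
vos release root.cell
```

with root.cell replicated, /afs/domain/software can reach the ro chain, while /afs/.domain/software gives admins a guaranteed rw path.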
>> the idea of this behaviour (take the local ro if available and only get what
>> you still need over vpn) was the coolest feature of afs - i thought. and that
>> is the main reason why I was looking at the whole afs thing - and not
>> something like nfs.
> that is basically still true, but the decision is not made when accessing
> a file. the decision is made by choosing the right mount-point for a volume.
> Which volume you have access to is a matter of mount-points and ACLs,
> NOT of the location of the volume. In an ideal world, a user does not need
> to know on which server his data is stored.
> OpenAFS-info mailing list