[OpenAFS] a noobs question and problems on a new cell

Adnoh adnoh@users.sourceforge.net
Wed, 9 May 2007 01:55:11 -0700 (PDT)


Hi Christopher,
first - thanks for your responses and your patience.
second - sorry for the long time I needed to respond; I'm out of the office
this week.
I have written my answers directly into your text below.


Christopher D. Clausen wrote:
> 
>> I don't want to have all the volumes in our headquarters, so every
>> time a user opens his Word doc or similar it would be completely
>> transferred over our VPN - and I can hear the people crying "our
>> fileservers are too slow!" So separate fileservers in every district
>> would be a good choice, I think - wouldn't they?
> That is an option.  There are of course problems with doing either. 
> Remember that the AFS clients themselves cache read-only data.  So if 
> most of your data is only being read and not written back that often, it 
> might make sense to have only centrally located AFS servers.
> 
That's right - but my problem at the moment is that we only have
Windows workstations, and I haven't figured out how to customize the
MSI installation so that I don't need to
travel to all our districts and configure each client by hand.
So I would like one AFS "client" per district - the fileserver that is
already there (a Linux Gentoo machine) - some kind of AFS->Samba gateway.
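
Just to make the idea concrete, a minimal sketch of what such a gateway share
might look like in smb.conf on the district box, assuming the Gentoo machine
runs the OpenAFS client with the cell mounted under /afs (share name and paths
are only placeholders):

    # /etc/samba/smb.conf on the district fileserver (hypothetical share)
    # re-exports a path from the locally mounted AFS cell to Windows clients
    [software]
        path = /afs/domain/software
        read only = yes
        guest ok = no

I realize that with such a gateway all Samba users would access AFS with
whatever token (if any) the smbd process holds, so per-user AFS ACLs would be
lost on those shares.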



> By default, the AFS client prefers to use readonly volumes, so if you 
> create a replica of a volume, the data will immediately become readonly. 
> You can however manually force the mount point to be RW (-rw option to fs 
> mkm) and this way you can have an RW volume in each local district and 
> still be able to clone the data to other servers using vos release.  All 
> volume writes must go directly to the RW volume.  The AFS client does 
> not detect when you want to make a write and find the proper RW volume. 
> You can modify the code to make it behave that way, but there are 
> reasons for not doing that.
> 
I tried it that way and didn't get it to work:
a volume called software (~1 GB);
in our headquarters, the RW volume on the AFS server;
in a district, the (nightly) RO snapshot of that volume;
mounted into AFS like:
/afs/domain/.software (-rw)
/afs/domain/software (ro)
If I understood that right, I should now be able to access the data under
/afs/domain/.software on both sides:
in the headquarters it should always use the RW instance, in the district
it should use the RW instance (over VPN) on a write,
and on a read it should prefer the local RO instance. But that doesn't work
for me:
every time I accessed some software in the district, it was transferred
completely over the VPN from our headquarters.
Did I misunderstand something, or have I done something wrong?
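
For reference, a sketch of roughly how I set it up (from memory; server and
partition names here are only placeholders, not our real ones):

    # create the RW volume on the headquarters server
    vos create fs-hq.example.com /vicepa software
    # mount it twice: once forced to RW, once as a normal mount point
    fs mkmount /afs/domain/.software software -rw
    fs mkmount /afs/domain/software software
    # add a replication site on the district server and push the data
    vos addsite fs-district.example.com /vicepa software
    vos release software

After the release, vos examine software does list the RO site in the district.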

That idea (take the local RO if available and only fetch what you still need
over the VPN) was, I thought, the coolest feature of AFS, and it is the main
reason I was looking at AFS at all rather than something like NFS.



> However, you might simply be better off using a more common network 
> filesystem like NFS or samba and using something like rsync to backup 
> the data nightly.  You mentioned a VPN.  Since the network link is 
> already encrypted, you don't require filesystem encryption?  Or do you?
> 
I'm not sure about the encryption thing. The VPN is a line from a large
provider in Germany, so I think the line is secure, but I'm a little
bit paranoid ;-)
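
If I end up wanting encryption on the AFS side as well, my understanding is
that it can simply be switched on per client (I have not tried this yet):

    # enable encryption of AFS traffic on this client (fcrypt; weak, but
    # better than nothing on an untrusted link)
    fs setcrypt -crypt on
    # show the current setting
    fs getcrypt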



> It seems as though you are trying to use AFS like NFS or samba, creating a 
> single large share point and allowing everyone to write in it.  This is not 
> the best way to use AFS, although it mostly works.  Replicating single large 
> volumes can take a long time, especially over slow links.
> 
Yes and no. We have our Samba fileservers in every district, completely
separated from each other.
So if user A from district A wants to give a file to user B from district B
to work on, he uses email. When
user B has finished his work on that file, he uses the same way to get the file
back to user A - and if someone in district
A has altered the file in the meantime, they have a problem...
So yes, I would like one big namespace - something like
/afs/domain/data/it
               /controlling
               /bookkeeping
and so on - so every user in an organisational unit can access his data from
whichever district he is in at the moment and easily share it with someone
else who may not be in the same district.
I thought this is something AFS is meant to give me.
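
As a sketch, the way I imagine building that namespace (volume names are only
examples):

    # one volume per organisational unit, created in the headquarters
    vos create fs-hq.example.com /vicepa data
    vos create fs-hq.example.com /vicepa data.it
    vos create fs-hq.example.com /vicepa data.controlling
    vos create fs-hq.example.com /vicepa data.bookkeeping
    # mount them so they all appear under one path
    fs mkmount /afs/domain/data data
    fs mkmount /afs/domain/data/it data.it
    fs mkmount /afs/domain/data/controlling data.controlling
    fs mkmount /afs/domain/data/bookkeeping data.bookkeeping

Smaller per-unit volumes should also keep the individual vos release runs
short, which was one of your points above.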

Can you describe a "district office" in more detail?  How many users? 
-> This differs - let's say 10 districts: 5 with ~100 users, 60 GB of data,
and a "data change" of 100 MB/day,
and the other 5 with half of that.

Is there technical staff there to diagnose problems with an AFS server, if
they occur?  Are the offices always connected to the network?  What type of
connection do they have?  Bandwidth?  Latency?  
-> No - the only technical staff is in our headquarters. We have a VPN from a
large provider with a total downtime of maybe 10 minutes per year, so
it is very good. The bandwidth differs, from 512 kbit/s to 2 Mbit/s. They are
connected 24 hours a day.

Do you use Kerberos 5 currently within your organization?  A single realm? 
Or a realm per district?  
-> We use a Windows 2003 Active Directory for authentication of the Windows
workstations and the Samba servers.
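
From what I have read, that AD domain could also act as the Kerberos realm for
the AFS cell - roughly like this (untested on my side; account, cell and realm
names are only placeholders):

    :: on a domain controller: map the AFS service principal to an AD account "afs"
    ktpass -princ afs/domain@EXAMPLE.COM -mapuser afs ^
           -crypto DES-CBC-CRC -ptype KRB5_NT_PRINCIPAL -pass * -out afs.keytab

    # on the AFS database server: load that key into the cell's KeyFile,
    # using the key version number reported by ktpass
    asetkey add <kvno> afs.keytab afs/domain@EXAMPLE.COM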

Do you have any off-site backup or disaster recovery requirements?
-> I would like to have a backup on a local USB HDD in each district and a
centralized backup in our headquarters, with one full backup per week and a
differential backup per day.
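
As a sketch, that could probably be done with the plain vos tools (volume name
and date are only examples):

    # weekly full dump of a volume to a file
    vos dump -id data.it -time 0 -file data.it.full.dump
    # daily incremental dump of everything changed since the given time
    vos dump -id data.it -time "05/07/2007 00:00" -file data.it.incr.dump
    # restore later with: vos restore <server> <partition> data.it -file <dumpfile>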

Any specific features that the project MUST do?  Any features that the
project SHOULD do?  Anything else that would be nice to do?
-> Yes - what I have mentioned above ;-) - the "global" namespace would be
nice. Maybe it is
interesting for you to know that we want to migrate the workstations to Linux
in the next 2-3 years.

How much data are we talking about here?  Total and at each district?  What
is the "change rate" of your data?  How much
data is modified per day or per week as a percentage of the total data? 
-> Mentioned above - all together, maybe ~500 GB at the moment - but I
don't know how much duplicate data is lying around - you know the pattern: "I
need my files in every district, on my local HDD and ideally on my USB stick
as well" ;-)