[OpenAFS-port-darwin] Resource / Data Forks

David Botsch dwb7@ccmr.cornell.edu
Mon, 19 May 2003 19:40:09 -0400


Hmm... I haven't tried mounting /Users in AFS space...

Don't see why it wouldn't work, though.

What we do is keep our user home directories under /afs (so for our cell,
msc, user home dirs live somewhere like /afs/msc/home). Of course, the
NetInfo db is updated to contain the right home directory locations.
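
For example (just a sketch; jdoe is a hypothetical user, and the exact
property names can vary with how your NetInfo domain is set up):

   niutil -createprop . /users/jdoe home /afs/msc/home/jdoe

That points jdoe's home directory at the AFS path.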

There are two gotchas with OS X:
1. the user's UNIX uid should match the AFS id in the pts db (a quick
check is sketched below)
2. the GUI Finder doesn't understand AFS permissions; it looks only at
the UNIX user permission bits, and it does not refresh itself properly
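
For gotcha #1, something like this compares the two ids (jdoe is again
hypothetical; substitute your own cell name for msc):

   pts examine jdoe -cell msc
   niutil -readprop . /users/jdoe uid

The id that pts reports should match the uid that NetInfo reports.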

More details on these two gotchas are available in the archives for this list.

But there are patches for AFS which should work around these two issues.
I haven't personally tried them out yet (bad me... too many things on my
plate), but I will be doing so soon.

On Mon, May 19, 2003 at 07:35:14PM -0400, lists@southernohio.net wrote:
> Perhaps I should ask another question...  Can /Users be mounted safely 
> as AFS?  Or is it better practice to have Users' subdirectories mounted 
> as AFS?  i.e., /Users/me/Documents is AFS.
> 
> Thanks!  I'm starting to get excited!  :D  This will be a network 
> administrator's dream come true.
> 
> On Monday, May 19, 2003, at 07:21  PM, David Botsch wrote:
> 
> > In my experience, it handles resource forks just fine, though not in
> > separate directories.
> >
> > Instead, the resource fork for filename ends up in a file called 
> > ._filename
> >
> > I don't know if this is an AFS oddity or an OS X oddity.
> >
> > While UNIX users won't see these ._ files, Windows users will (and can
> > potentially delete them). Depending on the file, this may or may not
> > be a problem.
> >
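These ._ files are Apple's AppleDouble format, which OS X uses on any
filesystem without native fork support, so it's an OS X behavior rather
than an AFS one. To illustrate with a hypothetical report.doc saved from
the Mac side: a plain ls hides the companion file (which is why UNIX
users generally don't notice it), while ls -a shows it:

   % ls /afs/msc/home/jdoe/docs
   report.doc
   % ls -a /afs/msc/home/jdoe/docs
   .  ..  ._report.doc  report.doc
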
> > Yes, a Mac user can save something to AFS space, and then an MS or
> > UNIX person can open it (barring the usual inherent problems, like MS
> > Office not being able to read its own files, etc.).
> >
> > On Mon, May 19, 2003 at 06:36:59PM -0400, lists@southernohio.net wrote:
> >> I have been searching for an ideal solution to synchronize all user
> >> files on my network.  It consists of Mac OS X (10.2.6), Windows XP, and
> >> Linux (possibly soon to be FreeBSD) machines.  In part I want to do
> >> this to create redundancy as a frequent backup of user files.  This
> >> also needs to accommodate laptops.
> >>
> >> I had heard of OpenAFS long ago, but it recently came back to the
> >> forefront when I came upon an article that basically said it blows
> >> all of the other options out of the water.
> >>
> >> My question is: how does it handle the HFS+ oddities (compared to
> >> other UNIX filesystems and NTFS)?  I would like to know how this works
> >> before I start implementing.  Will a user on Mac OS X be able to go to
> >> a Windows machine and see all of their Word documents?  (I assume that
> >> will work flawlessly.)  But what happens when a Mac user has
> >> resource forks on his or her files or applications?  Does OpenAFS
> >> translate that properly so that it shows up as two directories to the
> >> other OSes or does it destroy the fork and thus render this unusable
> >> for Mac users?
> >>
> >> If this does not work yet, what would it take to make it work so that
> >> it is essentially seamless?  It would be wonderful to have a unified
> >> redundant distributed file system!  And this would solve so many
> >> problems that seem to be all over the forums about synching between
> >> Windows, Mac, etc.
> >>
> >> Thanks for any input!
> >>

-- 
********************************
David William Botsch
Consultant/Advisor II
CCMR Computing Facility
dwb7@ccmr.cornell.edu
********************************