[OpenAFS] File too large
Mon, 13 Mar 2006 22:55:42 +0100
On Mar 13, 2006, at 2:53 PM, Juha Jäykkä wrote:
>>> kernel) I cannot write a file larger than approximately 2GB in size
>>> to my AFS volumes, even from the fileserver itself. The release
>> Build it from source and use --enable-largefile-fileserver
> This is odd, I have 1.3.81 and I'm quite able to write >2 GiB files
> on the AFS volume. I do not seem to be able to read them, though. Any
> process trying to access the over-2GB parts of the files hangs for
> ever. It can't even be killed (SIGKILL). Which one is at fault here,
> server or client?
> (Everything runs on linux/XFS, except the client cache, which is on
They could both be the cause of your problem.
The large file support has to be in your client as well as your
fileserver, if you want to handle large files. ;-)
You should be able to mix clients and servers for files < 2GB.
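For the record, building a fileserver with that option looks roughly like this. This is only a sketch: the install prefix and the bos restart line are assumptions on my part, so check your own tree's install notes and server layout.

```shell
# Sketch: build the fileserver from source with large-file support.
# --enable-largefile-fileserver is the flag named earlier in this
# thread; the prefix is an assumption, adjust for your layout.
./configure --enable-largefile-fileserver --prefix=/usr/afs
make
make install
# ...then restart the fileserver processes, e.g. via bos:
# bos restart <server> fs -localauth
```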
> I also have one 1.4.0 server. What happens if I put the large file on
> 1.4.0 and try to access it from 1.3.81 clients? What if I replicate
> the volume to 1.3.81 fileservers? Should I force all fileservers to
> be of the same version?
I don't remember if 1.4.x has large file support enabled by default,
since I don't use packages, but if it doesn't, you only get into
trouble when you mix in the 'wrong direction'. Which means, you had
better not handle large files with a server or client which doesn't
support them. (Kinda obvious, isn't it?)
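If you want to check a particular client/server pair, a crude probe (my own sketch, not an official test) is to write a sparse file just past the 2 GiB boundary and try to read the tail back. TARGET is an assumption; point it at a path in the AFS volume under test (it defaults to a local temp file so the sketch runs anywhere):

```shell
# Crude large-file probe. TARGET is an assumption -- set it to a path
# in the AFS volume you want to test; defaults to a local temp file.
TARGET="${TARGET:-$(mktemp)}"

# Sparse write: 1 MiB starting at the 2 GiB offset, so the file ends
# up 2049 MiB, i.e. just over 2^31 bytes.
dd if=/dev/zero of="$TARGET" bs=1M seek=2048 count=1 2>/dev/null

# The read past 2 GiB is where a non-large-file client/server combo
# ran into trouble in the report above.
dd if="$TARGET" of=/dev/null bs=1M skip=2048 count=1 2>/dev/null \
  && echo "read past 2 GiB OK"
```

On a working pair both dd calls succeed; on the local temp-file default this just exercises the arithmetic.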
For the replication part, you actually shouldn't be able to release
volumes with large files on a server which doesn't support that.
On my machines the 'vos release' fails, but I'm not sure there aren't
any cases where it appears to be working.
I wouldn't trust it anyway... :-)
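One way to check before you release, rather than trusting the failure: list anything under the volume's path that is already past 2 GiB. This assumes GNU find; VOLROOT is an assumption, point it at the volume's real /afs path (the demo file below exists only so the sketch runs standalone):

```shell
# List files larger than 2 GiB under the volume's mount point before
# a 'vos release' to mixed-version fileservers. VOLROOT is an
# assumption; set it to the real /afs path of the volume.
VOLROOT="${VOLROOT:-$(mktemp -d)}"
[ "$(ls -A "$VOLROOT")" ] || truncate -s 3G "$VOLROOT/demo-big"  # demo only

# GNU find: with the G suffix, -size +2G matches files whose size,
# rounded up to 1 GiB units, exceeds 2 -- i.e. files past 2 GiB.
find "$VOLROOT" -type f -size +2G -printf '%s bytes\t%p\n'
```

An empty result means the volume should be safe to replicate to a non-large-file server.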
You don't have to 'force' the fileservers to be the same version; you
just should think about what you're doing when you move or replicate
volumes. A mixed environment requires some extra care, but isn't it
always like that? :-)