[OpenAFS-devel] client freeze while writing large files
Ulrich Schwickerath
ulrich.schwickerath@iwr.fzk.de
Mon, 28 Jun 2004 13:40:49 +0200
Hello,
I'm experimenting with AFS on a Linux box running a kernel.org 2.4.26 kernel.
I have compiled OpenAFS 1.3.64 with --enable-large-file-server support turned
on.
Doing something like
cat testfile testfile > testfile1.dat
on my AFS volume (where testfile is about 2GB in size), testfile1.dat is
created with a size > 2GB, as it should be. However, if I do
dd if=/dev/zero of=test.dat bs=1024M count=3
then the client freezes as soon as the file reaches 2GB. After that, all
attempts to access the AFS cell hang on that client node. A restart of AFS
fails because the kernel module cannot be unloaded (it seems to be kept busy
by the locked processes). On the server side everything looks normal: the
file stops growing at 2GB, and other clients can still access it.
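To narrow down the trigger, it may be worth checking whether merely crossing
the 2GB (2^31 byte) boundary causes the freeze, independent of dd's huge 1GB
blocks. A sketch of what I have in mind (test.dat on the AFS volume is just a
placeholder path):

# write up to just below 2 GiB in modest blocks -- expected to succeed
dd if=/dev/zero of=test.dat bs=1M count=2047
# append two more blocks so the file crosses the 2^31 byte mark
dd if=/dev/zero of=test.dat bs=1M seek=2047 count=2 conv=notrunc

If the second command also hangs, the problem would seem to be tied to the
boundary itself rather than to the block size.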
Is this a known issue? Is there a workaround (other than turning off large
file support) or a patch available for this?
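If it helps with triage, I can also try to capture where the writer blocks on
the frozen client, along these lines (assuming strace is installed and magic
SysRq is enabled; these are standard tools, nothing AFS-specific):

# attach to the hung dd process to see which syscall it is blocked in
strace -f -p $(pidof dd)
# dump the kernel stacks of all tasks to the syslog
echo t > /proc/sysrq-trigger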
Thanks in advance,
Ulrich
--
__________________________________________
Dr. Ulrich Schwickerath
Forschungszentrum Karlsruhe
GRID-Computing and e-Science
Institute for Scientific Computing (IWR)
P.O. Box 36 40
76021 Karlsruhe, Germany
Tel: +49(7247)82-8607
Fax: +49(7247)82-4972
e-mail: ulrich.schwickerath@iwr.fzk.de
WWW: http://www.fzk.de
__________________________________________