[OpenAFS-devel] client freeze while writing large files

Ulrich Schwickerath ulrich.schwickerath@iwr.fzk.de
Mon, 5 Jul 2004 12:00:52 +0200


Hi, 

thanks for the reply. I tried with tcpdump, but there does not seem to be any 
traffic after the freeze has occurred. As an additional piece of information, 
I found out that the block size passed to dd apparently plays a role: 
dd if=/dev/zero of=testfile.dat bs=1024k count=4096
works fine, while 
dd if=/dev/zero of=testfile.dat bs=1024M count=4
does not. Could this be related to the cache size? I'm using a memory cache. 
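
For reference, this is roughly what I am running to see whether the hang 
coincides with the memory cache filling up (run from inside the AFS scratch 
directory; the one-second watch interval is just an example):

  # single large write that triggers the freeze for me
  dd if=/dev/zero of=testfile.dat bs=1024M count=4

  # in a second terminal: cache blocks in use vs. available
  watch -n 1 fs getcacheparms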

Cheers,
Ulrich 

On Saturday 03 July 2004 21:34, Derrick J Brashear wrote:
> On Tue, 29 Jun 2004, Ulrich Schwickerath wrote:
> > Hi,
> >
> >> Fileserver on same box?
> >
> > yes, but this does not seem to matter. A client-only machine also freezes.
>
> Ok.
>
> > With ls -l /afs/fzk.de/scratch2 being stuck, I get:
> > [ulrich@hikiba1 ~]$ cmdebug -long -servers hikiba1
>
> (nothing exciting)
>
> 2 thoughts: fstrace, and tcpdump. (is the cache manager doing something or
> just looping, and is there network activity)
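
I will also give the fstrace suggestion a try; in case someone wants to 
reproduce it, this is roughly the sequence I have in mind (as root on the 
client; syntax from memory, so please check the fstrace and tcpdump man pages):

  # enable and clear the cache-manager trace set
  fstrace setset -set cm -active
  fstrace clear -set cm

  # start the large write that triggers the freeze
  dd if=/dev/zero of=testfile.dat bs=1024M count=4 &

  # once the client is stuck, dump the trace log and check for AFS traffic
  # (7000/udp fileserver, 7001/udp cache-manager callbacks)
  fstrace dump -set cm -file /tmp/fstrace-cm.log
  tcpdump -n udp port 7000 or udp port 7001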

-- 
__________________________________________
Dr. Ulrich Schwickerath
Forschungszentrum Karlsruhe
GRID-Computing and e-Science
Institute for Scientific Computing (IWR)
P.O. Box 36 40
76021 Karlsruhe, Germany

Tel: +49(7247)82-8607
Fax: +49(7247)82-4972 

e-mail: ulrich.schwickerath@iwr.fzk.de
WWW: http://www.fzk.de
__________________________________________