[OpenAFS] Linux + 32bit files + butc writing to file = obnoxiousness
Robin Yamaguchi
rhy@physics.ucsb.edu
Fri, 9 May 2003 13:37:53 -0700 (PDT)
Thanks for the quick response!
All of the afs processes still seem to be running, so there is no core dump:
708 ? S 0:00 /usr/afs/bin/bosserver
719 ? S 0:00 \_ /usr/afs/bin/kaserver
720 ? S 0:00 \_ /usr/afs/bin/buserver
721 ? S 0:00 \_ /usr/afs/bin/ptserver
722 ? S 0:00 \_ /usr/afs/bin/vlserver
724 ? S 1:18 \_ /usr/afs/bin/volserver
723 ? S< 0:01 \_ /usr/afs/bin/fileserver
714 ? SW 0:00 [afs_rxlistener]
716 ? SW 0:00 [afs_callback]
718 ? SW 0:00 [afs_rxevent]
738 ? SW 0:00 [afsd]
742 ? SW 0:00 [afs_checkserver]
744 ? SW 0:00 [afs_background]
746 ? SW 0:00 [afs_background]
748 ? SW 0:00 [afs_background]
751 ? SW 0:00 [afs_cachetrim]
Yet rxdebug, even run on the server itself, still can't connect to port 7000:
I've had a test afs server running for the past few months on redhat
7.3, kernel 2.4.18-3, openafs 1.2.8. I backgrounded butc and got it running
at 99% cpu, but thus far, an hour later, the afs server has stayed up. Will
continue this test...
thanks,
Robin
On Fri, 9 May 2003, Derrick J Brashear wrote:
> On Fri, 9 May 2003, Robin Yamaguchi wrote:
>
> > My backups write to file, and regardless of what I put in tapeconfig, butc
> > can't write more than 2GB. I am running 2.4.20-8, ext3, afs 1.2.9 for
> > redhat 9, so I should be able to surpass this file-size limit. Is this
> > limit hardcoded in butc?
>
> You need the right flags at compile-time. I think we have a patch for this
> but if so it would be "didn't make it into 1.2.9"
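[Editorial note: the compile-time flags referred to here are the usual Linux
large-file-support macros (`_FILE_OFFSET_BITS=64`, or opening with
`O_LARGEFILE`); a 32-bit binary built without them gets EFBIG at the 2 GB
signed-offset boundary. A quick sketch to confirm the kernel/filesystem side
is fine, so the limit really is in the butc build (the path and function name
here are arbitrary, not OpenAFS code):]

```python
import os

TWO_GIB = 1 << 31  # the 2 GB boundary where a 32-bit signed off_t overflows

def probe_large_file(path):
    """Write one byte just past 2 GiB and return the resulting file size.

    Python is built with large-file support, so this succeeds on any ext3
    setup like the one described above; a program without that support
    (e.g. a butc built without _FILE_OFFSET_BITS=64) fails at this offset.
    """
    with open(path, "wb") as f:
        f.seek(TWO_GIB)   # sparse seek past the 32-bit limit; uses no disk
        f.write(b"x")
    size = os.path.getsize(path)
    os.unlink(path)       # clean up the probe file
    return size

print(probe_large_file("/tmp/lfs_probe.bin"))  # 2147483649
```

If this probe succeeds but butc still stops at 2 GB, the limit is in the
binary, consistent with the compile-time-flags explanation above.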
>
> > about 3 gigs worth of local files on the afs server, none of which was afs
> > data, onto a mounted nfs share. This makes me wonder if afs is crashing in
> > relation to high CPU usage, not just to butc.
>
> Did you get a core?
> _______________________________________________
> OpenAFS-info mailing list
> OpenAFS-info@openafs.org
> https://lists.openafs.org/mailman/listinfo/openafs-info
>