[OpenAFS] Linux + 32bit files + butc writing to file = obnoxiousness
Robin Yamaguchi
rhy@physics.ucsb.edu
Tue, 17 Jun 2003 10:32:01 -0700 (PDT)
Hello again,
In regard to the problem I experienced re: this thread, I can only
reproduce it using Red Hat 9 kernels: kernel-2.4.20-18.9,
kernel-2.4.20-8, and kernel-2.4.20-13.9 from RPMs. A kernel compiled
from source, even with Red Hat's own .config, does not exhibit the AFS
server crash under high CPU and I/O usage. Strange...
In regard to butc writing to a file with a 2 GB limit, Derek, you
responded with the following:
> You need the right flags at compile-time. I think we have a patch for
> this but if so it would be "didn't make it into 1.2.9"
Could you please give me some specifics?
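My guess, and the reason I'm asking, is that it's the usual glibc
large-file macros. Here is a minimal sketch, assuming the standard LFS
interface; I don't know what the butc patch itself touches:

  /* Build the same source both ways and compare:
   *   gcc off_check.c
   *       -> off_t is 4 bytes on IA-32, hence the 2 GB cap
   *   gcc -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE off_check.c
   *       -> off_t is 8 bytes; open()/lseek() use the 64-bit calls
   */
  #include <stdio.h>
  #include <sys/types.h>

  int main(void)
  {
      printf("sizeof(off_t) = %d bytes\n", (int) sizeof(off_t));
      return 0;
  }

If that is the mechanism, I assume the patch just adds those defines
to the flags butc is compiled with, but specifics would help.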
Thank you! I am most grateful for your advice.
-Robin
--------------------------
Robin H. Yamaguchi
Physics Computing Services
UC Santa Barbara
805-893-8366
--------------------------
On Fri, 9 May 2003, Robin Yamaguchi wrote:
> Hello All:
>
> My backups write to file, and regardless of what I put in tapeconfig, butc
> can't write more than 2 GB. I am running 2.4.20-8, ext3, and AFS 1.2.9 for
> Red Hat 9, so I should be able to surpass this file size limit. Is this
> limit hardcoded in butc?
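>
> For what it's worth, here is a minimal sketch of the test I would use
> to rule out the kernel and filesystem, assuming a glibc with large-file
> support; the path is an arbitrary placeholder:
>
>   /* Build: gcc -D_FILE_OFFSET_BITS=64 -o bigfile bigfile.c */
>   #include <fcntl.h>
>   #include <stdio.h>
>   #include <sys/types.h>
>   #include <unistd.h>
>
>   int main(void)
>   {
>       int fd = open("/tmp/bigfile.test", O_CREAT | O_WRONLY, 0600);
>       if (fd < 0) { perror("open"); return 1; }
>       /* seek just past the signed-32-bit boundary and write */
>       if (lseek(fd, (off_t) 2147483647 + 10, SEEK_SET) < 0) {
>           perror("lseek"); return 1;
>       }
>       if (write(fd, "ok", 2) != 2) {
>           perror("write"); /* EFBIG here would mean no LFS support */
>           return 1;
>       }
>       close(fd);
>       puts("wrote past the 2 GB boundary");
>       return 0;
>   }
>
> ext3 itself handles files larger than 2 GB, so if this succeeds the
> limit must be in how butc was built.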
>
> Another problem I'm having involves high CPU load and AFS crashing. I've
> been running butc manually, backgrounding the process, and killing the
> console window. When a backup dump tries to run and the dump file exists,
> butc prompts for another tape. However, since I don't have console access,
> butc gets stuck and spins at 99% CPU usage. Letting this process run for
> about an hour leads to my AFS server crashing. AFS clients report
> "connection timed out" and rxdebug fails with code -1.
>
> strace -p on the butc process shows:
> futex(0x80e7454, FUTEX_WAIT, 627, NULL) = 0
> gettimeofday({1052510584, 572087}, NULL) = 0
> time(NULL) = 1052510584
> gettimeofday({1052510584, 572435}, NULL) = 0
> sendmsg(5, {msg_name(16)={sa_family=AF_INET, sin_port=htons(32776),
> sin_addr=inet_addr("128.111.123.3")},
> msg_iov(2)=[{"\244\240aZ\372\377\347$\0\0\0\2\0\0\0\1\0\0\0\2\1\4\0\0"...,
> 28}, {"\0\0\0D\0\0\0u\0\0\0m\0\0\0p\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 536}],
> msg_controllen=0, msg_flags=0}, 0) = 564
>
> Interestingly, AFS crashed with similar symptoms when I attempted to
> tar/gzip about 3 GB worth of local files on the AFS server, none of which
> were AFS data, onto a mounted NFS share. This makes me wonder whether AFS
> is crashing due to high CPU usage in general, not just butc.
>
> Any ideas, pointers, or suggestions would be highly appreciated.
>
> thanks,
> Robin
>
> --------------------------
> Robin H. Yamaguchi
> Physics Computing Services
> UC Santa Barbara
> 805-893-8366
> --------------------------
>
> On Tue, 30 Jul 2002, J Maynard Gelinas wrote:
>
> > Hi,
> >
> > I've configured butc to write to a file using a secondary port offset;
> > this is well documented and easy enough to do. However, because Linux has
> > a file size limit of 2 GB (signed 32 bits) on IA-32, every time the backup
> > file hits 2 GB, butc prompts to replace the tape. I suppose I could write
> > an expect script to move the backup file aside and then let butc know it's
> > time to continue writing the backup dump, but that is a most obnoxious
> > solution. It seems to me the right way to go about this is to build large
> > file support into the kernel and libc, and then recompile butc. Has anyone
> > actually done this? And with what filesystem? XFS is known to support
> > large files well, but it's unclear to me how the XFS patches would affect
> > building and running the AFS kernel modules.
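> >
> > glibc also exposes an explicit large-file interface (the *64 calls),
> > so a patched butc would not necessarily need a rebuilt libc. A sketch
> > of that route, with a placeholder filename; I have not tried this
> > against the OpenAFS tree:
> >
> >   /* Explicit LFS interface: open64()/lseek64() and off64_t.
> >    * Build: gcc lfs64.c (no special flags needed) */
> >   #define _LARGEFILE64_SOURCE
> >   #include <fcntl.h>
> >   #include <stdio.h>
> >   #include <sys/types.h>
> >   #include <unistd.h>
> >
> >   int main(void)
> >   {
> >       int fd;
> >       off64_t pos;
> >
> >       /* "dumpfile" is a placeholder path */
> >       fd = open64("dumpfile", O_CREAT | O_WRONLY, 0600);
> >       if (fd < 0) { perror("open64"); return 1; }
> >       /* off64_t is 64 bits regardless of _FILE_OFFSET_BITS */
> >       pos = lseek64(fd, (off64_t) 3 << 30, SEEK_SET); /* 3 GB */
> >       if (pos == (off64_t) -1) { perror("lseek64"); return 1; }
> >       printf("positioned at %lld\n", (long long) pos);
> >       close(fd);
> >       return 0;
> >   }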
> >
> > I'm already writing out to tape. I just want secondary dumps written
> > out to disk for quick restores. Suggestions?
> >
> > Cheers,
> > --Maynard
> >
> >
> _______________________________________________
> OpenAFS-info mailing list
> OpenAFS-info@openafs.org
> https://lists.openafs.org/mailman/listinfo/openafs-info
>