From neilb+afs@inf.ed.ac.uk Fri Nov 2 12:55:42 2018 From: neilb+afs@inf.ed.ac.uk (Neil Brown) Date: Fri, 2 Nov 2018 11:55:42 +0000 (GMT) Subject: [OpenAFS] Return of the getcwd bug? In-Reply-To: References: Message-ID: On Thu, 11 Oct 2018, Neil Brown wrote: > Today I've been bitten twice by what appears to be the getcwd bug. I've not > had this problem in a long time. I thought that had been resolved? Is anyone > else seeing this? > > This was on both a 1.8.2 and a 1.8.0 client. My home volume is on a 1.6.23 > server. A follow up to my original post. My home volume is now on a 1.8.2 to match the client, I didn't expect it to make a difference, it hasn't. This morning (having rebooted to clear the last occurance on Monday), getcwd problems have returned. Same issues as before, but I've just noticed this. neilb@jingz(~)> pwd /afs/inf.ed.ac.uk/user/n/neilb neilb@jingz(~)> ls -l /proc/self/cwd lrwxrwxrwx 1 neilb people 0 Nov 2 11:50 /proc/self/cwd -> /afs/inf.ed.ac.uk/user/n/neilb (deleted) (note the "deleted) but if I cd down one level in my home dir: neilb@jingz(tmp)> ls -l /proc/self/cwd lrwxrwxrwx 1 neilb people 0 Nov 2 11:49 /proc/self/cwd -> /afs/inf.ed.ac.uk/user/n/neilb/tmp/ If it really is just us that's seeing this. I wonder if it may be related to our use of automount (autofs). We use automount for various mappings, but one is to map /autofs/nethome/USERNAME -> to the corresponding /afs/ path and /home/ is a symlink to /autofs/nethome/. When the bug strikes then I can't access /home/neilb or /autofs/nethome/neilb neilb@jingz(~)> ls /autofs/nethome/neilb ls: cannot access /autofs/nethome/neilb: No such file or directory I have references to /home/neilb in various dot files. I'm going to remove those and see if things improve. Neil -- Neil Brown - Computing Officer - Appleton Tower 7.12a | Neil.Brown @ ed. ac.uk School of Informatics, University of Edinburgh | Tel: +44 131 6504422 The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336. From jsbillin@umich.edu Fri Nov 2 13:22:18 2018 From: jsbillin@umich.edu (Jonathan Billings) Date: Fri, 2 Nov 2018 08:22:18 -0400 Subject: [OpenAFS] Return of the getcwd bug? In-Reply-To: References: Message-ID: --000000000000d1b7990579ad9645 Content-Type: text/plain; charset="UTF-8" On Fri, Nov 2, 2018 at 7:57 AM Neil Brown wrote: > If it really is just us that's seeing this. I wonder if it may be related > to our use of automount (autofs). We use automount for various mappings, > but one is to map /autofs/nethome/USERNAME -> to the corresponding /afs/ > path and /home/ is a symlink to /autofs/nethome/. When the bug strikes > then I can't access /home/neilb or /autofs/nethome/neilb > I've been using autofs to create bind mounts into AFS (because we use an even more complicated path in AFS that can't be templated), and I've been seeing a similar bug. The mount() syscall fails despite being able to see the AFS home directory via ls and other tools. -- Jonathan Billings College of Engineering - CAEN - Unix and Linux Support --000000000000d1b7990579ad9645 Content-Type: text/html; charset="UTF-8" Content-Transfer-Encoding: quoted-printable
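As an illustration of the kind of autofs setup being described, here is a minimal
sketch of a wildcard map that bind-mounts AFS home directories under /home. The
cell name and path layout are invented for the example; real cells (like the ones
above) often add an extra hashing level such as /user/n/neilb, which is exactly
what makes simple templating hard:

    # /etc/auto.master -- hypothetical master map entry
    /home    /etc/auto.afshome    --timeout=300

    # /etc/auto.afshome -- hypothetical wildcard map:
    # "*" matches the username, "&" substitutes it into the bind source
    *    -fstype=bind    :/afs/example.edu/user/&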
--000000000000d1b7990579ad9645-- From kaduk@mit.edu Sat Nov 3 15:44:10 2018 From: kaduk@mit.edu (Benjamin Kaduk) Date: Sat, 3 Nov 2018 09:44:10 -0500 Subject: [OpenAFS] Return of the getcwd bug? In-Reply-To: References: Message-ID: <20181103144410.GE54966@kduck.kaduk.org> On Fri, Nov 02, 2018 at 11:55:42AM +0000, Neil Brown wrote: > If it really is just us that's seeing this. I wonder if it may be related > to our use of automount (autofs). We use automount for various mappings, > but one is to map /autofs/nethome/USERNAME -> to the corresponding /afs/ > path and /home/ is a symlink to /autofs/nethome/. When the bug strikes > then I can't access /home/neilb or /autofs/nethome/neilb Automount seems likely to engage the annoying interactions required by the linux kernel VFS's insistence on a single canonical path for a given dentry, yes. -Ben From jsbillin@umich.edu Mon Nov 5 13:24:54 2018 From: jsbillin@umich.edu (Jonathan Billings) Date: Mon, 5 Nov 2018 08:24:54 -0500 Subject: [OpenAFS] Return of the getcwd bug? In-Reply-To: <20181103144410.GE54966@kduck.kaduk.org> References: <20181103144410.GE54966@kduck.kaduk.org> Message-ID: --000000000000321c270579ead047 Content-Type: text/plain; charset="UTF-8" On Sat, Nov 3, 2018 at 11:10 AM Benjamin Kaduk wrote: > Automount seems likely to engage the annoying interactions required by the > linux kernel VFS's insistence on a single canonical path for a given > dentry, yes. > Any suggestions for providing a uniform path for AFS homedirs that can be used with software like sssd's homedir_template for users like ours, who have complicated paths for their AFS home? Unfortunately, I have little control over how volumes are mounted in the cell I use. The only other thing I can think of is to instead create a symlink in /home, but that might cause issues. -- Jonathan Billings College of Engineering - CAEN - Unix and Linux Support --000000000000321c270579ead047 Content-Type: text/html; charset="UTF-8" Content-Transfer-Encoding: quoted-printable
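On the sssd side, a hedged sketch of what a templated AFS home directory can look
like, assuming sssd's override_homedir option and its %l (first letter of login
name) and %u (login name) specifiers; the cell path is an invented example, and
this only helps when the home path can actually be expressed as a template, which
is the limitation noted above:

    # /etc/sssd/sssd.conf (fragment) -- hypothetical domain section
    [domain/example.edu]
    override_homedir = /afs/example.edu/user/%l/%u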

--000000000000321c270579ead047-- From pkd@umd.edu Thu Nov 8 02:41:06 2018 From: pkd@umd.edu (Prasad K. Dharmasena) Date: Wed, 7 Nov 2018 21:41:06 -0500 Subject: [OpenAFS] Building 1.8.2 with transarc-paths Message-ID: --000000000000310999057a1e2c6e Content-Type: text/plain; charset="UTF-8" I've been building 1.6.x on Ubuntu 16.04 with the following options and it has worked well for me. --enable-transarc-paths --prefix=/usr/afsws --enable-supergroups Building 1.8.x on the same OS with the same option has a problem that appears to be an rpath issue. ldd /usr/vice/etc/afsd | grep not libafshcrypto.so.2 => not found librokenafs.so.2 => not found Those libraries are installed in /usr/afsws/lib, so I can get the client to run if I set the LD_LIBRARY_PATH. Any hints to what I need to tweak in 'configure' to make it build properly? Thanks. --000000000000310999057a1e2c6e Content-Type: text/html; charset="UTF-8" Content-Transfer-Encoding: quoted-printable
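One untested possibility, offered only as a sketch: embed the Transarc-style
library directory as an rpath at configure time, so the binaries can find
libafshcrypto and librokenafs without LD_LIBRARY_PATH. The flags below assume the
libraries really do end up in /usr/afsws/lib:

    ./configure --enable-transarc-paths \
                --prefix=/usr/afsws \
                --enable-supergroups \
                LDFLAGS='-Wl,-rpath,/usr/afsws/lib'
    make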
--000000000000310999057a1e2c6e-- From sopko@cs.unc.edu Thu Nov 8 17:22:49 2018 From: sopko@cs.unc.edu (John Sopko) Date: Thu, 8 Nov 2018 12:22:49 -0500 Subject: [OpenAFS] accessing /afs processes go into device wait Message-ID: I have been running two legacy Redhat 6.x web servers for several years. The apache httpd processes started to go into device wait state the last few days on one of the servers, the other server is fine, both are configured pretty much the same. I tracked this down to the web server trying to stat /afs/.htaccess. If I try to do an ls in /afs or cat /afs/.htaccess which does not exist, the commands take a long time to complete and first go into device wait state, it can take several minutes or they may hang indefinitely. The afs file system seems to be working fine, just accessing under /afs is the problem. On other Redhat 6.x systems accessing /afs is fast and have no problems. I am running afsd with: /usr/vice/etc/afsd -dynroot -fakestat-all -afsdb Note I tried fakestat-all to see if that would help, I have been running just -fakesat, our db servers have afsdb records. I removed all cells accept for our cell in CellServDB so only have this: % pwd /afs % ls -l total 4 lrwxr-xr-x 1 root root 10 Dec 31 1969 cs -> cs.unc.edu/ drwxr-xr-x 8 root root 2048 Mar 6 2015 cs.unc.edu/ lrwxr-xr-x 1 root root 10 Dec 31 1969 unc -> cs.unc.edu/ I re-formatted the /usr/vice/cache partition and that did not help. I cannot find any hardware problems, no clues in the syslog or on the console, the system disk including the cache is on a raid1/mirror disk. This is a Dell server and I run Dell OpenMange which is really good at reporting system and especially disk errors. I am running the same afsd verison on our remaining rhel 6.x servers: % fs version openafs 1.6.22.2 Distributor ID: RedHatEnterpriseWorkstation Release: 6.10 The problem is intermittent but goes into device wait most of the time, for example the first time ran fine, the second time it took 14.96 seconds. % time ls -l total 4 lrwxr-xr-x 1 root root 10 Dec 31 1969 cs -> cs.unc.edu drwxr-xr-x 8 root root 2048 Mar 6 2015 cs.unc.edu lrwxr-xr-x 1 root root 10 Dec 31 1969 unc -> cs.unc.edu 0.000u 0.000s 0:00.00 0.0% 0+0k 0+0io 0pf+0w % time ls -l total 4 lrwxr-xr-x 1 root root 10 Dec 31 1969 cs -> cs.unc.edu drwxr-xr-x 8 root root 2048 Mar 6 2015 cs.unc.edu lrwxr-xr-x 1 root root 10 Dec 31 1969 unc -> cs.unc.edu 0.000u 0.000s 0:14.96 0.0% 0+0k 0+0io 0pf+0w Thanks for any help or ideas to try. -- John W. Sopko Jr. University of North Carolina Computer Science Dept CB 3175 Chapel Hill, NC 27599-3175 Fred Brooks Building; Room 140 Computer Services Systems Specialist email: sopko AT cs.unc.edu phone: 919-590-6144 From stephan.wiesand@desy.de Thu Nov 8 17:52:54 2018 From: stephan.wiesand@desy.de (Stephan Wiesand) Date: Thu, 8 Nov 2018 18:52:54 +0100 Subject: [OpenAFS] accessing /afs processes go into device wait In-Reply-To: References: Message-ID: > On 8. Nov 2018, at 18:22, John Sopko wrote: > > I have been running two legacy Redhat 6.x web servers for several > years. The apache httpd processes started to go into device wait state > the last few days on one of the servers, the other server is fine, > both are configured pretty much the same. I tracked this down to the > web server trying to stat /afs/.htaccess. If I try to do an ls in /afs > or cat /afs/.htaccess which does not exist, the commands take a long > time to complete and first go into device wait state, it can take > several minutes or they may hang indefinitely. 
The afs file system > seems to be working fine, just accessing under /afs is the problem. On > other Redhat 6.x systems accessing /afs is fast and have no problems. Are the nsswitch and DNS resolver configurations the same on all systems? Any differences in network restrictions? Does it help to run afsd without -afsdb? Just a wild guess, Stephan > > I am running afsd with: > > /usr/vice/etc/afsd -dynroot -fakestat-all -afsdb > > Note I tried fakestat-all to see if that would help, I have been > running just -fakesat, our db servers have afsdb records. > > I removed all cells accept for our cell in CellServDB so only have this: > > % pwd > /afs > > % ls -l > total 4 > lrwxr-xr-x 1 root root 10 Dec 31 1969 cs -> cs.unc.edu/ > drwxr-xr-x 8 root root 2048 Mar 6 2015 cs.unc.edu/ > lrwxr-xr-x 1 root root 10 Dec 31 1969 unc -> cs.unc.edu/ > > I re-formatted the /usr/vice/cache partition and that did not help. > > I cannot find any hardware problems, no clues in the syslog or on the > console, the system disk including the cache is on a raid1/mirror > disk. This is a Dell server and I run Dell OpenMange which is really > good at reporting system and especially disk errors. > > I am running the same afsd verison on our remaining rhel 6.x servers: > > % fs version > openafs 1.6.22.2 > > Distributor ID: RedHatEnterpriseWorkstation > Release: 6.10 > > The problem is intermittent but goes into device wait most of the > time, for example the first time ran fine, the second time it took > 14.96 seconds. > > % time ls -l > total 4 > lrwxr-xr-x 1 root root 10 Dec 31 1969 cs -> cs.unc.edu > drwxr-xr-x 8 root root 2048 Mar 6 2015 cs.unc.edu > lrwxr-xr-x 1 root root 10 Dec 31 1969 unc -> cs.unc.edu > 0.000u 0.000s 0:00.00 0.0% 0+0k 0+0io 0pf+0w > > % time ls -l > total 4 > lrwxr-xr-x 1 root root 10 Dec 31 1969 cs -> cs.unc.edu > drwxr-xr-x 8 root root 2048 Mar 6 2015 cs.unc.edu > lrwxr-xr-x 1 root root 10 Dec 31 1969 unc -> cs.unc.edu > 0.000u 0.000s 0:14.96 0.0% 0+0k 0+0io 0pf+0w > > Thanks for any help or ideas to try. -- Stephan Wiesand DESY -DV- Platanenallee 6 15738 Zeuthen, Germany From sopko@cs.unc.edu Thu Nov 8 18:48:08 2018 From: sopko@cs.unc.edu (John Sopko) Date: Thu, 8 Nov 2018 13:48:08 -0500 Subject: [OpenAFS] accessing /afs processes go into device wait In-Reply-To: References: Message-ID: nsswitch and DNS the same, the AFSDB records resolve fine, the /afs/cs.unc.edu cell works fine, just not /afs. On Thu, Nov 8, 2018 at 12:52 PM Stephan Wiesand wrote: > > > > On 8. Nov 2018, at 18:22, John Sopko wrote: > > > > I have been running two legacy Redhat 6.x web servers for several > > years. The apache httpd processes started to go into device wait state > > the last few days on one of the servers, the other server is fine, > > both are configured pretty much the same. I tracked this down to the > > web server trying to stat /afs/.htaccess. If I try to do an ls in /afs > > or cat /afs/.htaccess which does not exist, the commands take a long > > time to complete and first go into device wait state, it can take > > several minutes or they may hang indefinitely. The afs file system > > seems to be working fine, just accessing under /afs is the problem. On > > other Redhat 6.x systems accessing /afs is fast and have no problems. > > Are the nsswitch and DNS resolver configurations the same on all systems? > Any differences in network restrictions? > Does it help to run afsd without -afsdb? 
> > Just a wild guess, > Stephan > > > > > I am running afsd with: > > > > /usr/vice/etc/afsd -dynroot -fakestat-all -afsdb > > > > Note I tried fakestat-all to see if that would help, I have been > > running just -fakesat, our db servers have afsdb records. > > > > I removed all cells accept for our cell in CellServDB so only have this: > > > > % pwd > > /afs > > > > % ls -l > > total 4 > > lrwxr-xr-x 1 root root 10 Dec 31 1969 cs -> cs.unc.edu/ > > drwxr-xr-x 8 root root 2048 Mar 6 2015 cs.unc.edu/ > > lrwxr-xr-x 1 root root 10 Dec 31 1969 unc -> cs.unc.edu/ > > > > I re-formatted the /usr/vice/cache partition and that did not help. > > > > I cannot find any hardware problems, no clues in the syslog or on the > > console, the system disk including the cache is on a raid1/mirror > > disk. This is a Dell server and I run Dell OpenMange which is really > > good at reporting system and especially disk errors. > > > > I am running the same afsd verison on our remaining rhel 6.x servers: > > > > % fs version > > openafs 1.6.22.2 > > > > Distributor ID: RedHatEnterpriseWorkstation > > Release: 6.10 > > > > The problem is intermittent but goes into device wait most of the > > time, for example the first time ran fine, the second time it took > > 14.96 seconds. > > > > % time ls -l > > total 4 > > lrwxr-xr-x 1 root root 10 Dec 31 1969 cs -> cs.unc.edu > > drwxr-xr-x 8 root root 2048 Mar 6 2015 cs.unc.edu > > lrwxr-xr-x 1 root root 10 Dec 31 1969 unc -> cs.unc.edu > > 0.000u 0.000s 0:00.00 0.0% 0+0k 0+0io 0pf+0w > > > > % time ls -l > > total 4 > > lrwxr-xr-x 1 root root 10 Dec 31 1969 cs -> cs.unc.edu > > drwxr-xr-x 8 root root 2048 Mar 6 2015 cs.unc.edu > > lrwxr-xr-x 1 root root 10 Dec 31 1969 unc -> cs.unc.edu > > 0.000u 0.000s 0:14.96 0.0% 0+0k 0+0io 0pf+0w > > > > Thanks for any help or ideas to try. > > -- > Stephan Wiesand > DESY -DV- > Platanenallee 6 > 15738 Zeuthen, Germany > > > -- John W. Sopko Jr. University of North Carolina Computer Science Dept CB 3175 Chapel Hill, NC 27599-3175 Fred Brooks Building; Room 140 Computer Services Systems Specialist email: sopko AT cs.unc.edu phone: 919-590-6144 From stephan.wiesand@desy.de Thu Nov 8 18:59:18 2018 From: stephan.wiesand@desy.de (Stephan Wiesand) Date: Thu, 8 Nov 2018 19:59:18 +0100 Subject: [OpenAFS] accessing /afs processes go into device wait In-Reply-To: References: Message-ID: Have you tried w/o -afsdb? > On 08 Nov 2018, at 19:48, John Sopko wrote: >=20 > nsswitch and DNS the same, the AFSDB records resolve fine, the > /afs/cs.unc.edu cell works fine, just not /afs. >=20 >=20 > On Thu, Nov 8, 2018 at 12:52 PM Stephan Wiesand = wrote: >>=20 >>=20 >>> On 8. Nov 2018, at 18:22, John Sopko wrote: >>>=20 >>> I have been running two legacy Redhat 6.x web servers for several >>> years. The apache httpd processes started to go into device wait = state >>> the last few days on one of the servers, the other server is fine, >>> both are configured pretty much the same. I tracked this down to the >>> web server trying to stat /afs/.htaccess. If I try to do an ls in = /afs >>> or cat /afs/.htaccess which does not exist, the commands take a long >>> time to complete and first go into device wait state, it can take >>> several minutes or they may hang indefinitely. The afs file system >>> seems to be working fine, just accessing under /afs is the problem. = On >>> other Redhat 6.x systems accessing /afs is fast and have no = problems. >>=20 >> Are the nsswitch and DNS resolver configurations the same on all = systems? 
>> Any differences in network restrictions? >> Does it help to run afsd without -afsdb? >>=20 >> Just a wild guess, >> Stephan >>=20 >>>=20 >>> I am running afsd with: >>>=20 >>> /usr/vice/etc/afsd -dynroot -fakestat-all -afsdb >>>=20 >>> Note I tried fakestat-all to see if that would help, I have been >>> running just -fakesat, our db servers have afsdb records. >>>=20 >>> I removed all cells accept for our cell in CellServDB so only have = this: >>>=20 >>> % pwd >>> /afs >>>=20 >>> % ls -l >>> total 4 >>> lrwxr-xr-x 1 root root 10 Dec 31 1969 cs -> cs.unc.edu/ >>> drwxr-xr-x 8 root root 2048 Mar 6 2015 cs.unc.edu/ >>> lrwxr-xr-x 1 root root 10 Dec 31 1969 unc -> cs.unc.edu/ >>>=20 >>> I re-formatted the /usr/vice/cache partition and that did not help. >>>=20 >>> I cannot find any hardware problems, no clues in the syslog or on = the >>> console, the system disk including the cache is on a raid1/mirror >>> disk. This is a Dell server and I run Dell OpenMange which is really >>> good at reporting system and especially disk errors. >>>=20 >>> I am running the same afsd verison on our remaining rhel 6.x = servers: >>>=20 >>> % fs version >>> openafs 1.6.22.2 >>>=20 >>> Distributor ID: RedHatEnterpriseWorkstation >>> Release: 6.10 >>>=20 >>> The problem is intermittent but goes into device wait most of the >>> time, for example the first time ran fine, the second time it took >>> 14.96 seconds. >>>=20 >>> % time ls -l >>> total 4 >>> lrwxr-xr-x 1 root root 10 Dec 31 1969 cs -> cs.unc.edu >>> drwxr-xr-x 8 root root 2048 Mar 6 2015 cs.unc.edu >>> lrwxr-xr-x 1 root root 10 Dec 31 1969 unc -> cs.unc.edu >>> 0.000u 0.000s 0:00.00 0.0% 0+0k 0+0io 0pf+0w >>>=20 >>> % time ls -l >>> total 4 >>> lrwxr-xr-x 1 root root 10 Dec 31 1969 cs -> cs.unc.edu >>> drwxr-xr-x 8 root root 2048 Mar 6 2015 cs.unc.edu >>> lrwxr-xr-x 1 root root 10 Dec 31 1969 unc -> cs.unc.edu >>> 0.000u 0.000s 0:14.96 0.0% 0+0k 0+0io 0pf+0w >>>=20 >>> Thanks for any help or ideas to try. From sopko@cs.unc.edu Thu Nov 8 19:41:07 2018 From: sopko@cs.unc.edu (John Sopko) Date: Thu, 8 Nov 2018 14:41:07 -0500 Subject: [OpenAFS] accessing /afs processes go into device wait In-Reply-To: References: Message-ID: Wow! Removing -afsdb and adding our db servers in the CellServDB seems to have fixed the problem. Does not make any sense, this machine and others running many years with -afsdb. And fs listcells works when -afsdb is used: % fs listcells Cell dynroot on hosts. Cell cs.unc.edu on hosts toucan.cs.unc.edu quail.cs.unc.edu kiwi.cs.unc.edu. % host -t AFSDB cs.unc.edu cs.unc.edu has AFSDB record 1 kiwi.cs.unc.edu. cs.unc.edu has AFSDB record 1 quail.cs.unc.edu. cs.unc.edu has AFSDB record 1 toucan.cs.unc.edu. Thanks for the help. Is this a known issue? On Thu, Nov 8, 2018 at 1:59 PM Stephan Wiesand wrote: > > Have you tried w/o -afsdb? > > > On 08 Nov 2018, at 19:48, John Sopko wrote: > > > > nsswitch and DNS the same, the AFSDB records resolve fine, the > > /afs/cs.unc.edu cell works fine, just not /afs. > > > > > > On Thu, Nov 8, 2018 at 12:52 PM Stephan Wiesand wrote: > >> > >> > >>> On 8. Nov 2018, at 18:22, John Sopko wrote: > >>> > >>> I have been running two legacy Redhat 6.x web servers for several > >>> years. The apache httpd processes started to go into device wait state > >>> the last few days on one of the servers, the other server is fine, > >>> both are configured pretty much the same. I tracked this down to the > >>> web server trying to stat /afs/.htaccess. 
If I try to do an ls in /afs > >>> or cat /afs/.htaccess which does not exist, the commands take a long > >>> time to complete and first go into device wait state, it can take > >>> several minutes or they may hang indefinitely. The afs file system > >>> seems to be working fine, just accessing under /afs is the problem. On > >>> other Redhat 6.x systems accessing /afs is fast and have no problems. > >> > >> Are the nsswitch and DNS resolver configurations the same on all systems? > >> Any differences in network restrictions? > >> Does it help to run afsd without -afsdb? > >> > >> Just a wild guess, > >> Stephan > >> > >>> > >>> I am running afsd with: > >>> > >>> /usr/vice/etc/afsd -dynroot -fakestat-all -afsdb > >>> > >>> Note I tried fakestat-all to see if that would help, I have been > >>> running just -fakesat, our db servers have afsdb records. > >>> > >>> I removed all cells accept for our cell in CellServDB so only have this: > >>> > >>> % pwd > >>> /afs > >>> > >>> % ls -l > >>> total 4 > >>> lrwxr-xr-x 1 root root 10 Dec 31 1969 cs -> cs.unc.edu/ > >>> drwxr-xr-x 8 root root 2048 Mar 6 2015 cs.unc.edu/ > >>> lrwxr-xr-x 1 root root 10 Dec 31 1969 unc -> cs.unc.edu/ > >>> > >>> I re-formatted the /usr/vice/cache partition and that did not help. > >>> > >>> I cannot find any hardware problems, no clues in the syslog or on the > >>> console, the system disk including the cache is on a raid1/mirror > >>> disk. This is a Dell server and I run Dell OpenMange which is really > >>> good at reporting system and especially disk errors. > >>> > >>> I am running the same afsd verison on our remaining rhel 6.x servers: > >>> > >>> % fs version > >>> openafs 1.6.22.2 > >>> > >>> Distributor ID: RedHatEnterpriseWorkstation > >>> Release: 6.10 > >>> > >>> The problem is intermittent but goes into device wait most of the > >>> time, for example the first time ran fine, the second time it took > >>> 14.96 seconds. > >>> > >>> % time ls -l > >>> total 4 > >>> lrwxr-xr-x 1 root root 10 Dec 31 1969 cs -> cs.unc.edu > >>> drwxr-xr-x 8 root root 2048 Mar 6 2015 cs.unc.edu > >>> lrwxr-xr-x 1 root root 10 Dec 31 1969 unc -> cs.unc.edu > >>> 0.000u 0.000s 0:00.00 0.0% 0+0k 0+0io 0pf+0w > >>> > >>> % time ls -l > >>> total 4 > >>> lrwxr-xr-x 1 root root 10 Dec 31 1969 cs -> cs.unc.edu > >>> drwxr-xr-x 8 root root 2048 Mar 6 2015 cs.unc.edu > >>> lrwxr-xr-x 1 root root 10 Dec 31 1969 unc -> cs.unc.edu > >>> 0.000u 0.000s 0:14.96 0.0% 0+0k 0+0io 0pf+0w > >>> > >>> Thanks for any help or ideas to try. > -- John W. Sopko Jr. University of North Carolina Computer Science Dept CB 3175 Chapel Hill, NC 27599-3175 Fred Brooks Building; Room 140 Computer Services Systems Specialist email: sopko AT cs.unc.edu phone: 919-590-6144 From stephan.wiesand@desy.de Thu Nov 8 19:53:54 2018 From: stephan.wiesand@desy.de (Stephan Wiesand) Date: Thu, 8 Nov 2018 20:53:54 +0100 Subject: [OpenAFS] accessing /afs processes go into device wait In-Reply-To: References: Message-ID: <2E1F6BCA-55F9-4CDA-A73D-CEDDFFAE3235@desy.de> My guess is that attempting to retrieve SRV and then AFSDB DNS records for an "htaccess" top level domain is very slow to fail on the problematic system for some reason. I think it's kind of a known issue which has crept up in the past for things like ".trash" as well. You could probably find out where things get stuck by comparing tcpdump outputs. - Stephan > On 08 Nov 2018, at 20:41, John Sopko wrote: >=20 > Wow! Removing -afsdb and adding our db servers in the CellServDB seems > to have fixed the problem. 
Does not make any sense, this machine and > others running many years with -afsdb. And fs listcells works when > -afsdb is used: >=20 > % fs listcells > Cell dynroot on hosts. > Cell cs.unc.edu on hosts toucan.cs.unc.edu quail.cs.unc.edu = kiwi.cs.unc.edu. >=20 > % host -t AFSDB cs.unc.edu > cs.unc.edu has AFSDB record 1 kiwi.cs.unc.edu. > cs.unc.edu has AFSDB record 1 quail.cs.unc.edu. > cs.unc.edu has AFSDB record 1 toucan.cs.unc.edu. >=20 > Thanks for the help. Is this a known issue? >=20 >=20 > On Thu, Nov 8, 2018 at 1:59 PM Stephan Wiesand = wrote: >>=20 >> Have you tried w/o -afsdb? >>=20 >>> On 08 Nov 2018, at 19:48, John Sopko wrote: >>>=20 >>> nsswitch and DNS the same, the AFSDB records resolve fine, the >>> /afs/cs.unc.edu cell works fine, just not /afs. >>>=20 >>>=20 >>> On Thu, Nov 8, 2018 at 12:52 PM Stephan Wiesand = wrote: >>>>=20 >>>>=20 >>>>> On 8. Nov 2018, at 18:22, John Sopko wrote: >>>>>=20 >>>>> I have been running two legacy Redhat 6.x web servers for several >>>>> years. The apache httpd processes started to go into device wait = state >>>>> the last few days on one of the servers, the other server is fine, >>>>> both are configured pretty much the same. I tracked this down to = the >>>>> web server trying to stat /afs/.htaccess. If I try to do an ls in = /afs >>>>> or cat /afs/.htaccess which does not exist, the commands take a = long >>>>> time to complete and first go into device wait state, it can take >>>>> several minutes or they may hang indefinitely. The afs file system >>>>> seems to be working fine, just accessing under /afs is the = problem. On >>>>> other Redhat 6.x systems accessing /afs is fast and have no = problems. >>>>=20 >>>> Are the nsswitch and DNS resolver configurations the same on all = systems? >>>> Any differences in network restrictions? >>>> Does it help to run afsd without -afsdb? >>>>=20 >>>> Just a wild guess, >>>> Stephan >>>>=20 >>>>>=20 >>>>> I am running afsd with: >>>>>=20 >>>>> /usr/vice/etc/afsd -dynroot -fakestat-all -afsdb >>>>>=20 >>>>> Note I tried fakestat-all to see if that would help, I have been >>>>> running just -fakesat, our db servers have afsdb records. >>>>>=20 >>>>> I removed all cells accept for our cell in CellServDB so only have = this: >>>>>=20 >>>>> % pwd >>>>> /afs >>>>>=20 >>>>> % ls -l >>>>> total 4 >>>>> lrwxr-xr-x 1 root root 10 Dec 31 1969 cs -> cs.unc.edu/ >>>>> drwxr-xr-x 8 root root 2048 Mar 6 2015 cs.unc.edu/ >>>>> lrwxr-xr-x 1 root root 10 Dec 31 1969 unc -> cs.unc.edu/ >>>>>=20 >>>>> I re-formatted the /usr/vice/cache partition and that did not = help. >>>>>=20 >>>>> I cannot find any hardware problems, no clues in the syslog or on = the >>>>> console, the system disk including the cache is on a raid1/mirror >>>>> disk. This is a Dell server and I run Dell OpenMange which is = really >>>>> good at reporting system and especially disk errors. >>>>>=20 >>>>> I am running the same afsd verison on our remaining rhel 6.x = servers: >>>>>=20 >>>>> % fs version >>>>> openafs 1.6.22.2 >>>>>=20 >>>>> Distributor ID: RedHatEnterpriseWorkstation >>>>> Release: 6.10 >>>>>=20 >>>>> The problem is intermittent but goes into device wait most of the >>>>> time, for example the first time ran fine, the second time it took >>>>> 14.96 seconds. 
>>>>>=20 >>>>> % time ls -l >>>>> total 4 >>>>> lrwxr-xr-x 1 root root 10 Dec 31 1969 cs -> cs.unc.edu >>>>> drwxr-xr-x 8 root root 2048 Mar 6 2015 cs.unc.edu >>>>> lrwxr-xr-x 1 root root 10 Dec 31 1969 unc -> cs.unc.edu >>>>> 0.000u 0.000s 0:00.00 0.0% 0+0k 0+0io 0pf+0w >>>>>=20 >>>>> % time ls -l >>>>> total 4 >>>>> lrwxr-xr-x 1 root root 10 Dec 31 1969 cs -> cs.unc.edu >>>>> drwxr-xr-x 8 root root 2048 Mar 6 2015 cs.unc.edu >>>>> lrwxr-xr-x 1 root root 10 Dec 31 1969 unc -> cs.unc.edu >>>>> 0.000u 0.000s 0:14.96 0.0% 0+0k 0+0io 0pf+0w >>>>>=20 >>>>> Thanks for any help or ideas to try. From jaltman@auristor.com Thu Nov 8 20:42:01 2018 From: jaltman@auristor.com (Jeffrey Altman) Date: Thu, 8 Nov 2018 15:42:01 -0500 Subject: [OpenAFS] accessing /afs processes go into device wait In-Reply-To: References: Message-ID: <068ea177-89ca-66ee-e9ec-4f2d958c48d0@auristor.com> This is a cryptographically signed message in MIME format. --------------ms060802080100070600000804 Content-Type: multipart/mixed; boundary="------------CF6E15CA2F4B9BAAF5137DC1" Content-Language: en-US This is a multi-part message in MIME format. --------------CF6E15CA2F4B9BAAF5137DC1 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: quoted-printable On 11/8/2018 12:22 PM, John Sopko wrote:> > I am running afsd with: > > /usr/vice/etc/afsd -dynroot -fakestat-all -afsdb -dynroot do not mount a root.afs volume. instead populate the /afs directory with the results of cell lookups -afsdb if the requested name does not match a cell found in the CellServDB file, query DNS first for SRV records and if no match, then AFSDB records Note that default RHEL6 configuration for the DNS resolver does not cache negative DNS results. An attempt to open /afs/.htaccess therefore results in DNS queries for "htaccess" plus whatever domains are in the search list. If the search list is cs.unc.edu and unc.edu then for each access there will be the following DNS queries SRV _afs3-vlserver._udp.htaccess.cs.unc.edu SRV _afs3-vlserver._udp.unc.edu AFSDB htaccess.cs.unc.edu AFSDB htaccess.unc.edu You can add a dummy htaccess.cs.unc.edu entry to CellServDB. You can add a blacklist for that name. You can stop using -afsdb or you can stop using -dynroot and rely upon a locally managed root.afs volume. Jeffrey Altman --------------CF6E15CA2F4B9BAAF5137DC1 Content-Type: text/x-vcard; charset=utf-8; name="jaltman.vcf" Content-Transfer-Encoding: quoted-printable Content-Disposition: attachment; filename="jaltman.vcf" begin:vcard fn:Jeffrey Altman n:Altman;Jeffrey org:AuriStor, Inc. 
From mmeffie@sinenomine.net Thu Nov  8 21:27:09 2018
From: mmeffie@sinenomine.net (Michael Meffie)
Date: Thu, 8 Nov 2018 16:27:09 -0500
Subject: [OpenAFS] Building 1.8.2 with transarc-paths
In-Reply-To: 
References: 
Message-ID: <20181108162709.97d732f54c444cefc9094cff@sinenomine.net>

On Wed, 7 Nov 2018 21:41:06 -0500
"Prasad K. Dharmasena" wrote:

> I've been building 1.6.x on Ubuntu 16.04 with the following options and it
> has worked well for me.
>
>     --enable-transarc-paths
>     --prefix=/usr/afsws
>     --enable-supergroups
>
> Building 1.8.x on the same OS with the same option has a problem that
> appears to be an rpath issue.
>
>     ldd /usr/vice/etc/afsd | grep not
>         libafshcrypto.so.2 => not found
>         librokenafs.so.2 => not found
>
> Those libraries are installed in /usr/afsws/lib, so I can get the client to
> run if I set the LD_LIBRARY_PATH.  Any hints to what I need to tweak in
> 'configure' to make it build properly?
>
> Thanks.
Hello Prasad, OpenAFS 1.8.x introduced those two shared object libraries. When not installing from packages you'll need to run ldconfig or set the LD_LIBRARY_PATH. Since you've copied the files to /usr/afsws/lib, you can create a ldconfig configuration file to let it know where to find them. For example, $ cat /etc/ld.so.conf.d/openafs.conf /usr/afsws/lib or perphaps better, install them to a standard location recognized by ldconfig. Best regards, Mike -- Michael Meffie From andreas.ladanyi@kit.edu Fri Nov 9 14:48:23 2018 From: andreas.ladanyi@kit.edu (Andreas Ladanyi) Date: Fri, 9 Nov 2018 15:48:23 +0100 Subject: [OpenAFS] automatic replication of ro volumes Message-ID: Hi, it is common an openafs admin has to sync an ro volume after something is added to rw volume. This is done by the vos release command. I think its the only way. Are there automatic sync functions in the vol / fs server. Andreas From jaltman@auristor.com Fri Nov 9 15:38:57 2018 From: jaltman@auristor.com (Jeffrey Altman) Date: Fri, 9 Nov 2018 10:38:57 -0500 Subject: [OpenAFS] automatic replication of ro volumes In-Reply-To: References: Message-ID: <49ce294c-83a7-ee9f-14d4-804a33f19d25@auristor.com> This is a cryptographically signed message in MIME format. --------------ms080500020704010702050205 Content-Type: multipart/mixed; boundary="------------FAF77ACAE13E4176ABB13DA1" Content-Language: en-US This is a multi-part message in MIME format. --------------FAF77ACAE13E4176ABB13DA1 Content-Type: text/plain; charset=utf-8 Content-Transfer-Encoding: quoted-printable On 11/9/2018 9:48 AM, Andreas Ladanyi wrote: > Hi, >=20 > it is common an openafs admin has to sync an ro volume after something > is added to rw volume. This is done by the vos release command. I think= > its the only way. Are there automatic sync functions in the vol / fs se= rver. The risk of automated volume releases is that the automated system does not know when the volume contents are in a consistent and quiescent state= =2E Sites often use remctl to grant end users the ability to release their own volumes. Automated releases of RO volumes are a poor substitute for replicated RW volumes. RW replication is a feature which was never completed for OpenA= FS. Jeffrey Altman --------------FAF77ACAE13E4176ABB13DA1 Content-Type: text/x-vcard; charset=utf-8; name="jaltman.vcf" Content-Transfer-Encoding: quoted-printable Content-Disposition: attachment; filename="jaltman.vcf" begin:vcard fn:Jeffrey Altman n:Altman;Jeffrey org:AuriStor, Inc. 
From sopko@cs.unc.edu Fri Nov  9 15:45:39 2018
From: sopko@cs.unc.edu (John Sopko)
Date: Fri, 9 Nov 2018 10:45:39 -0500
Subject: [OpenAFS] accessing /afs processes go into device wait
In-Reply-To: <068ea177-89ca-66ee-e9ec-4f2d958c48d0@auristor.com>
References: <068ea177-89ca-66ee-e9ec-4f2d958c48d0@auristor.com>
Message-ID: 

Thanks for the explanation. I never had this issue in years of running these
servers; my guess is that we have more .htaccess files being created and
accessed in AFS. From what I found, when a .htaccess file is encountered, the
server traverses up the file system looking for .htaccess files in all parent
directories. By default, Apache configures / with "AllowOverride None", which
tells the server that .htaccess is not allowed there and that it should not
traverse further. I added /afs and our cell as shown below, so there is no need
to look for .htaccess in these top-level directories.
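A sketch of the kind of httpd.conf <Directory> stanzas meant here; only /afs and
the cs.unc.edu cell are named in the message, so the exact containers below are
illustrative rather than the poster's actual configuration:

    <Directory />
        Options FollowSymLinks
        AllowOverride None
    </Directory>

    <Directory /afs>
        AllowOverride None
    </Directory>

    <Directory /afs/cs.unc.edu>
        AllowOverride None
    </Directory>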
# Each directory to which Apache has access can be configured with respect # to which services and features are allowed and/or disabled in that # directory (and its subdirectories). # # First, we configure the "default" to be a very restrictive set of # features. # Options FollowSymLinks AllowOverride None AllowOverride None AllowOverride None AllowOverride None On Thu, Nov 8, 2018 at 3:42 PM Jeffrey Altman wrote: > > On 11/8/2018 12:22 PM, John Sopko wrote:> > > I am running afsd with: > > > > /usr/vice/etc/afsd -dynroot -fakestat-all -afsdb > > -dynroot > > do not mount a root.afs volume. instead populate the /afs directory > with the results of cell lookups > > -afsdb > > if the requested name does not match a cell found in the CellServDB > file, query DNS first for SRV records and if no match, then AFSDB > records > > Note that default RHEL6 configuration for the DNS resolver does not > cache negative DNS results. > > An attempt to open /afs/.htaccess therefore results in DNS queries for > "htaccess" plus whatever domains are in the search list. If the search > list is cs.unc.edu and unc.edu then for each access there will be the > following DNS queries > > SRV _afs3-vlserver._udp.htaccess.cs.unc.edu > SRV _afs3-vlserver._udp.unc.edu > AFSDB htaccess.cs.unc.edu > AFSDB htaccess.unc.edu > > You can add a dummy htaccess.cs.unc.edu entry to CellServDB. You can > add a blacklist for that name. You can stop using -afsdb or you can > stop using -dynroot and rely upon a locally managed root.afs volume. > > Jeffrey Altman > > > > -- John W. Sopko Jr. University of North Carolina Computer Science Dept CB 3175 Chapel Hill, NC 27599-3175 Fred Brooks Building; Room 140 Computer Services Systems Specialist email: sopko AT cs.unc.edu phone: 919-590-6144 From kaduk@mit.edu Sat Nov 10 14:25:01 2018 From: kaduk@mit.edu (Benjamin Kaduk) Date: Sat, 10 Nov 2018 08:25:01 -0600 Subject: [OpenAFS] Building 1.8.2 with transarc-paths In-Reply-To: <20181108162709.97d732f54c444cefc9094cff@sinenomine.net> References: <20181108162709.97d732f54c444cefc9094cff@sinenomine.net> Message-ID: <20181110142501.GX65098@kduck.kaduk.org> On Thu, Nov 08, 2018 at 04:27:09PM -0500, Michael Meffie wrote: > On Wed, 7 Nov 2018 21:41:06 -0500 > "Prasad K. Dharmasena" wrote: > > > I've been building 1.6.x on Ubuntu 16.04 with the following options and it > > has worked well for me. > > > > --enable-transarc-paths > > --prefix=/usr/afsws > > --enable-supergroups > > > > Building 1.8.x on the same OS with the same option has a problem that > > appears to be an rpath issue. > > > > ldd /usr/vice/etc/afsd | grep not > > libafshcrypto.so.2 => not found > > librokenafs.so.2 => not found > > > > Those libraries are installed in /usr/afsws/lib, so I can get the client to > > run if I set the LD_LIBRARY_PATH. Any hints to what I need to tweak in > > 'configure' to make it build properly? > > > > Thanks. > > Hello Prasad, > > OpenAFS 1.8.x introduced those two shared object libraries. When not installing > from packages you'll need to run ldconfig or set the LD_LIBRARY_PATH. Since > you've copied the files to /usr/afsws/lib, you can create a ldconfig configuration > file to let it know where to find them. For example, > > $ cat /etc/ld.so.conf.d/openafs.conf > /usr/afsws/lib > > or perphaps better, install them to a standard location recognized by ldconfig. I might also ask why you are using the transarc paths at all -- wouldn't it be easier to conform to the de facto standard filesystem hierarchy with the default openafs configuration? 
(There's also the option of passing --enable-static --disable-shared to configure, though I don't remember exactly what that ends up doing.) Thanks, Ben From andreas.ladanyi@kit.edu Mon Nov 12 14:21:36 2018 From: andreas.ladanyi@kit.edu (Andreas Ladanyi) Date: Mon, 12 Nov 2018 15:21:36 +0100 Subject: [OpenAFS] automatic replication of ro volumes In-Reply-To: <49ce294c-83a7-ee9f-14d4-804a33f19d25@auristor.com> References: <49ce294c-83a7-ee9f-14d4-804a33f19d25@auristor.com> Message-ID: <7f948ae4-f9b8-f5ee-7017-7ea2a3f028e7@kit.edu> Hi Jeffrey, >> it is common an openafs admin has to sync an ro volume after something >> is added to rw volume. This is done by the vos release command. I think >> its the only way. Are there automatic sync functions in the vol / fs server. > The risk of automated volume releases is that the automated system does > not know when the volume contents are in a consistent and quiescent state. ok, but vos release "knows" them ? Is there something against a crontab script as root with vos lock and vos release to all volumes (with an ro site)  ? > > Sites often use remctl to grant end users the ability to release their > own volumes. > > Automated releases of RO volumes are a poor substitute for replicated RW > volumes. RW replication is a feature which was never completed for OpenAFS. > > Jeffrey Altman > Andreas From scs@umich.edu Mon Nov 12 19:03:12 2018 From: scs@umich.edu (Steve Simmons) Date: Mon, 12 Nov 2018 14:03:12 -0500 Subject: [OpenAFS] automatic replication of ro volumes In-Reply-To: <7f948ae4-f9b8-f5ee-7017-7ea2a3f028e7@kit.edu> References: <49ce294c-83a7-ee9f-14d4-804a33f19d25@auristor.com> <7f948ae4-f9b8-f5ee-7017-7ea2a3f028e7@kit.edu> Message-ID: --000000000000711b0c057a7c5a3d Content-Type: text/plain; charset="UTF-8" Cron has no more knowledge about when the r/w volume is in a consistent state than does AFS. Only the person(s) who make the changes to the r/w volume know when it's ready to release. Steve Simmons ITS Unix Support/SCS Admins On Mon, Nov 12, 2018 at 9:22 AM Andreas Ladanyi wrote: > Hi Jeffrey, > >> it is common an openafs admin has to sync an ro volume after something > >> is added to rw volume. This is done by the vos release command. I think > >> its the only way. Are there automatic sync functions in the vol / fs > server. > > The risk of automated volume releases is that the automated system does > > not know when the volume contents are in a consistent and quiescent > state. > > ok, but vos release "knows" them ? > > Is there something against a crontab script as root with vos lock and > vos release to all volumes (with an ro site) ? > > > > > Sites often use remctl to grant end users the ability to release their > > own volumes. > > > > Automated releases of RO volumes are a poor substitute for replicated RW > > volumes. RW replication is a feature which was never completed for > OpenAFS. > > > > Jeffrey Altman > > > Andreas > _______________________________________________ > OpenAFS-info mailing list > OpenAFS-info@openafs.org > https://lists.openafs.org/mailman/listinfo/openafs-info > --000000000000711b0c057a7c5a3d Content-Type: text/html; charset="UTF-8" Content-Transfer-Encoding: quoted-printable
Cm9wZW5hZnMta3JiNS94ZW5pYWwsbm93IDEuOC4yLTBwcGEyfnVidW50dTE2LjA0LjEgYW1k NjQgW2luc3RhbGxlZF0Kb3BlbmFmcy1tb2R1bGVzLWRrbXMveGVuaWFsLHhlbmlhbCxub3cg MS44LjItMHBwYTJ+dWJ1bnR1MTYuMDQuMSBhbGwKW2luc3RhbGxlZF0KCmFuZCBvbiB0aGUg c2VydmVycywgdGhlIGZvbGxvd2luZyB2ZXJzaW9uczoKCm9wZW5hZnMtY2xpZW50L3hlbmlh bCxub3cgMS42LjE1LTF1YnVudHUxIGFtZDY0IFtpbnN0YWxsZWRdCm9wZW5hZnMtZGJzZXJ2 ZXIveGVuaWFsLG5vdyAxLjYuMTUtMXVidW50dTEgYW1kNjQgW2luc3RhbGxlZF0Kb3BlbmFm cy1maWxlc2VydmVyL3hlbmlhbCxub3cgMS42LjE1LTF1YnVudHUxIGFtZDY0IFtpbnN0YWxs ZWRdCm9wZW5hZnMta3JiNS94ZW5pYWwsbm93IDEuNi4xNS0xdWJ1bnR1MSBhbWQ2NCBbaW5z dGFsbGVkXQpvcGVuYWZzLW1vZHVsZXMtZGttcy94ZW5pYWwseGVuaWFsLG5vdyAxLjYuMTUt MXVidW50dTEgYWxsIFtpbnN0YWxsZWRdCgpXaGF0IGNvdWxkIGJlIHRoZSBwcm9ibGVtPyBJ cyB0aGVyZSBzb21ldGhpbmcgSSBtaXNzZWQ/CgoKVGhhbmtzLAoKVGhlbyBPdXpoaW5za2kK Cg== From touzhinski@gmail.com Wed Nov 14 01:46:28 2018 From: touzhinski@gmail.com (Theo Ouzhinski) Date: Tue, 13 Nov 2018 20:46:28 -0500 Subject: [OpenAFS] Unexpected no space left on device error Message-ID: Hi all, Sorry for my previous incorrectly formatted email. Recently, I've seen an uptick in "no space left on device" errors for some of the home directories I administer. For example, matsumoto # touch a touch: cannot touch 'a': No space left on device We are not even close to filling up the cache (located at /var/cache/openafs) on this client machine. matsumoto ~ # fs getcacheparms AFS using 10314 of the cache's available 10000000 1K byte blocks. matsumoto ~ # df -h Filesystem Size Used Avail Use% Mounted on .... /dev/mapper/vgwrkstn-root 456G 17G 417G 4% / .... AFS 2.0T 0 2.0T 0% /afs Nor is this home directory or any other problematic home directory close to their quota. matsumoto # fs lq Volume Name Quota Used %Used Partition 4194304 194403 5% 37% According to previous posts on this list, many issues can be attributed to high inode usage. However, this is not the case on our machines. Here is sample output from one of our OpenAFS servers, which is similar to all of the four other ones. openafs1 ~ # df -i Filesystem Inodes IUsed IFree IUse% Mounted on udev 1903816 413 1903403 1% /dev tmpfs 1911210 551 1910659 1% /run /dev/vda1 1905008 154821 1750187 9% / tmpfs 1911210 1 1911209 1% /dev/shm tmpfs 1911210 5 1911205 1% /run/lock tmpfs 1911210 17 1911193 1% /sys/fs/cgroup /dev/vdb 19660800 3461203 16199597 18% /vicepa /dev/vdc 19660800 1505958 18154842 8% /vicepb tmpfs 1911210 4 1911206 1% /run/user/0 AFS 2147483647 0 2147483647 0% /afs We are running the latest HWE kernel (4.15.0-38-generic) for Ubuntu 16.04 (which is the OS for both server and client machines). We are running on the clients, the following versions: openafs-client/xenial,now 1.8.2-0ppa2~ubuntu16.04.1 amd64 [installed] openafs-krb5/xenial,now 1.8.2-0ppa2~ubuntu16.04.1 amd64 [installed] openafs-modules-dkms/xenial,xenial,now 1.8.2-0ppa2~ubuntu16.04.1 all [installed] and on the servers, the following versions: openafs-client/xenial,now 1.6.15-1ubuntu1 amd64 [installed] openafs-dbserver/xenial,now 1.6.15-1ubuntu1 amd64 [installed] openafs-fileserver/xenial,now 1.6.15-1ubuntu1 amd64 [installed] openafs-krb5/xenial,now 1.6.15-1ubuntu1 amd64 [installed] openafs-modules-dkms/xenial,xenial,now 1.6.15-1ubuntu1 all [installed] What could be the problem? Is there something I missed? 
Thanks, Theo Ouzhinski From kaduk@mit.edu Wed Nov 14 03:36:54 2018 From: kaduk@mit.edu (Benjamin Kaduk) Date: Tue, 13 Nov 2018 21:36:54 -0600 Subject: [OpenAFS] Unexpected no space left on device error In-Reply-To: References: Message-ID: <20181114033653.GS99562@kduck.kaduk.org> On Tue, Nov 13, 2018 at 08:46:28PM -0500, Theo Ouzhinski wrote: > Hi all, > > Sorry for my previous incorrectly formatted email. > Recently, I've seen an uptick in "no space left on device" errors for > some of the home directories I administer. > > For example, > > matsumoto # touch a > touch: cannot touch 'a': No space left on device > > We are not even close to filling up the cache (located at > /var/cache/openafs) on this client machine. > > matsumoto ~ # fs getcacheparms > AFS using 10314 of the cache's available 10000000 1K byte blocks. > matsumoto ~ # df -h > Filesystem Size Used Avail Use% Mounted on > .... > /dev/mapper/vgwrkstn-root 456G 17G 417G 4% / > .... > AFS 2.0T 0 2.0T 0% /afs > > > Nor is this home directory or any other problematic home directory close > to their quota. > > matsumoto # fs lq > Volume Name Quota Used %Used Partition > 4194304 194403 5% 37% > > According to previous posts on this list, many issues can be attributed > to high inode usage. However, this is not the case on our machines. > > Here is sample output from one of our OpenAFS servers, which is similar > to all of the four other ones. > > openafs1 ~ # df -i > Filesystem Inodes IUsed IFree IUse% Mounted on > udev 1903816 413 1903403 1% /dev > tmpfs 1911210 551 1910659 1% /run > /dev/vda1 1905008 154821 1750187 9% / > tmpfs 1911210 1 1911209 1% /dev/shm > tmpfs 1911210 5 1911205 1% /run/lock > tmpfs 1911210 17 1911193 1% /sys/fs/cgroup > /dev/vdb 19660800 3461203 16199597 18% /vicepa > /dev/vdc 19660800 1505958 18154842 8% /vicepb > tmpfs 1911210 4 1911206 1% /run/user/0 > AFS 2147483647 0 2147483647 0% /afs > > > We are running the latest HWE kernel (4.15.0-38-generic) for Ubuntu > 16.04 (which is the OS for both server and client machines). We are > running on the clients, the following versions: > > openafs-client/xenial,now 1.8.2-0ppa2~ubuntu16.04.1 amd64 [installed] > openafs-krb5/xenial,now 1.8.2-0ppa2~ubuntu16.04.1 amd64 [installed] > openafs-modules-dkms/xenial,xenial,now 1.8.2-0ppa2~ubuntu16.04.1 all > [installed] > > and on the servers, the following versions: > > openafs-client/xenial,now 1.6.15-1ubuntu1 amd64 [installed] > openafs-dbserver/xenial,now 1.6.15-1ubuntu1 amd64 [installed] > openafs-fileserver/xenial,now 1.6.15-1ubuntu1 amd64 [installed] > openafs-krb5/xenial,now 1.6.15-1ubuntu1 amd64 [installed] > openafs-modules-dkms/xenial,xenial,now 1.6.15-1ubuntu1 all [installed] (Off-topic, but that looks to be missing some security fixes.) > What could be the problem? Is there something I missed? It's not really ringing a bell off the top of my head, no. That said, there's a number of potential ways to get ENOSPC, so it would be good to get more data, like an strace of the failing touch, and maybe a packet capture (port 7000) during the touch, both from a clean cache and potentially a second attempt. 
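A data-collection session along these lines would produce both artifacts;
the interface name and output paths are only illustrative, and the capture
has to run on the client while the touch is retried:

    # syscall trace of the failing operation, run in the affected directory
    strace -f -o /tmp/touch.strace touch a

    # fileserver traffic (Rx port 7000) captured during the same attempt;
    # replace eth0 with the client's real network interface
    tcpdump -i eth0 -w /tmp/touch-port7000.pcap port 7000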
-Ben

From jaltman@auristor.com Wed Nov 14 04:23:24 2018
From: jaltman@auristor.com (Jeffrey Altman)
Date: Wed, 14 Nov 2018 05:23:24 +0100
Subject: [OpenAFS] Unexpected no space left on device error
In-Reply-To: <20181114033653.GS99562@kduck.kaduk.org>
References: <20181114033653.GS99562@kduck.kaduk.org>
Message-ID: <58330D04-4884-47EE-AED4-DD5F6FFD3C2D@auristor.com>

I'm placing a beer on the directory being full. For extra credit I will
guess that the directory is full as a result of abandoned silly rename
files. You should try salvaging the volume with the rebuild directories
option.

Jeffrey Altman
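If the directory really is full, the kind of salvage being suggested can be
driven through the bosserver; a rough sketch, in which the server, partition
and volume names are placeholders:

    # -salvagedirs asks the salvager to rebuild the directory objects
    bos salvage -server fs1.example.edu -partition /vicepa \
        -volume user.example -salvagedirs -showlog -localauth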
> On Nov 14, 2018, at 4:36 AM, Benjamin Kaduk wrote:
>
>> On Tue, Nov 13, 2018 at 08:46:28PM -0500, Theo Ouzhinski wrote:
>> Hi all,
>>
>> Sorry for my previous incorrectly formatted email.
>> Recently, I've seen an uptick in "no space left on device" errors for
>> some of the home directories I administer.
>>
>> For example,
>>
>> matsumoto # touch a
>> touch: cannot touch 'a': No space left on device
>>
>> We are not even close to filling up the cache (located at
>> /var/cache/openafs) on this client machine.
>>
>> matsumoto ~ # fs getcacheparms
>> AFS using 10314 of the cache's available 10000000 1K byte blocks.
>> matsumoto ~ # df -h
>> Filesystem                 Size  Used Avail Use% Mounted on
>> ....
>> /dev/mapper/vgwrkstn-root  456G   17G  417G   4% /
>> ....
>> AFS                        2.0T     0  2.0T   0% /afs
>>
>> Nor is this home directory or any other problematic home directory close
>> to their quota.
>>
>> matsumoto # fs lq
>> Volume Name                 Quota       Used  %Used   Partition
>>                           4194304     194403     5%         37%
>>
>> According to previous posts on this list, many issues can be attributed
>> to high inode usage.  However, this is not the case on our machines.
>>
>> Here is sample output from one of our OpenAFS servers, which is similar
>> to all of the four other ones.
>>
>> openafs1 ~ # df -i
>> Filesystem       Inodes   IUsed      IFree IUse% Mounted on
>> udev            1903816     413    1903403    1% /dev
>> tmpfs           1911210     551    1910659    1% /run
>> /dev/vda1       1905008  154821    1750187    9% /
>> tmpfs           1911210       1    1911209    1% /dev/shm
>> tmpfs           1911210       5    1911205    1% /run/lock
>> tmpfs           1911210      17    1911193    1% /sys/fs/cgroup
>> /dev/vdb       19660800 3461203   16199597   18% /vicepa
>> /dev/vdc       19660800 1505958   18154842    8% /vicepb
>> tmpfs           1911210       4    1911206    1% /run/user/0
>> AFS          2147483647       0 2147483647    0% /afs
>>
>> We are running the latest HWE kernel (4.15.0-38-generic) for Ubuntu
>> 16.04 (which is the OS for both server and client machines). We are
>> running on the clients, the following versions:
>>
>> openafs-client/xenial,now 1.8.2-0ppa2~ubuntu16.04.1 amd64 [installed]
>> openafs-krb5/xenial,now 1.8.2-0ppa2~ubuntu16.04.1 amd64 [installed]
>> openafs-modules-dkms/xenial,xenial,now 1.8.2-0ppa2~ubuntu16.04.1 all
>> [installed]
>>
>> and on the servers, the following versions:
>>
>> openafs-client/xenial,now 1.6.15-1ubuntu1 amd64 [installed]
>> openafs-dbserver/xenial,now 1.6.15-1ubuntu1 amd64 [installed]
>> openafs-fileserver/xenial,now 1.6.15-1ubuntu1 amd64 [installed]
>> openafs-krb5/xenial,now 1.6.15-1ubuntu1 amd64 [installed]
>> openafs-modules-dkms/xenial,xenial,now 1.6.15-1ubuntu1 all [installed]
>
> (Off-topic, but that looks to be missing some security fixes.)
>
>> What could be the problem? Is there something I missed?
>
> It's not really ringing a bell off the top of my head, no.
>
> That said, there's a number of potential ways to get ENOSPC, so it would be
> good to get more data, like an strace of the failing touch, and maybe a
> packet capture (port 7000) during the touch, both from a clean cache and
> potentially a second attempt.
>
> -Ben
> _______________________________________________
> OpenAFS-info mailing list
> OpenAFS-info@openafs.org
> https://lists.openafs.org/mailman/listinfo/openafs-info

From hanzer@riseup.net Thu Nov 15 20:32:03 2018
From: hanzer@riseup.net (Adam Jensen)
Date: Thu, 15 Nov 2018 15:32:03 -0500
Subject: [OpenAFS] Installation and set up guide for Scientific Linux-7.5
Message-ID: 

Hi,

I would like to explore OpenAFS. Does anyone know of an up-to-date
installation and set up guide for SL-7.5? The software seems to be
available:

[hanzer@moria ~]$ yum list openafs\*
Available Packages
openafs-1.6-sl.x86_64                  1.6.23-289.sl7   sl-security
openafs-1.6-sl-authlibs.x86_64         1.6.23-289.sl7   sl-security
openafs-1.6-sl-authlibs-devel.x86_64   1.6.23-289.sl7   sl-security
openafs-1.6-sl-client.x86_64           1.6.23-289.sl7   sl-security
openafs-1.6-sl-compat.x86_64           1.6.23-289.sl7   sl-security
openafs-1.6-sl-devel.x86_64            1.6.23-289.sl7   sl-security
openafs-1.6-sl-kernel-source.x86_64    1.6.23-289.sl7   sl-security
openafs-1.6-sl-kpasswd.x86_64          1.6.23-289.sl7   sl-security
openafs-1.6-sl-krb5.x86_64             1.6.23-289.sl7   sl-security
openafs-1.6-sl-module-tools.x86_64     1.6.23-289.sl7   sl-security
openafs-1.6-sl-plumbing-tools.x86_64   1.6.23-289.sl7   sl-security
openafs-1.6-sl-server.x86_64           1.6.23-289.sl7   sl-security

But I haven't been able to find an accessible set of instructions that
don't require an extensive investment in research, study and translation
to a modern environment. Perhaps if those of you with some experience of
the technology could guide me through an installation and basic
configuration then my notes could be shaped into a guide.

I have an SL-7.5 server with LVM partitions /vicep{a..f} that are
currently 100GB each but there is plenty of storage space available to
expand the capacity of these partitions. I have an SL-7.5 laptop that
can act as a client. I also have two Ubuntu-18.04 machines that could be
clients, and I have a FreeBSD machine but OpenAFS doesn't seem to be
available in its ports system.

Given this basis, it might be possible to experiment with several
desirable scenarios and record a straightforward set of installation,
configuration, and administration instructions for each that would
enable people to assess the technology in a tractable, cost-effective way.

If this seems reasonable, I would love to get started. Thanks!

PS - I've posted this to both the -docs and -info lists. Those familiar
with the community/culture might need to lead the conversation to the
best list by managing their "reply-to" addresses...

From kaduk@mit.edu Thu Nov 15 20:41:02 2018
From: kaduk@mit.edu (Benjamin Kaduk)
Date: Thu, 15 Nov 2018 14:41:02 -0600
Subject: [OpenAFS] Installation and set up guide for Scientific Linux-7.5
In-Reply-To: 
References: 
Message-ID: <20181115204102.GJ70453@kduck.kaduk.org>

On Thu, Nov 15, 2018 at 03:32:03PM -0500, Adam Jensen wrote:
> Hi,
>
> I would like to explore OpenAFS. Does anyone know of an up-to-date
> installation and set up guide for SL-7.5?
The software seems to be > available: > > [hanzer@moria ~]$ yum list openafs\* > Available Packages > openafs-1.6-sl.x86_64 1.6.23-289.sl7 sl-security > openafs-1.6-sl-authlibs.x86_64 1.6.23-289.sl7 sl-security > openafs-1.6-sl-authlibs-devel.x86_64 1.6.23-289.sl7 sl-security > openafs-1.6-sl-client.x86_64 1.6.23-289.sl7 sl-security > openafs-1.6-sl-compat.x86_64 1.6.23-289.sl7 sl-security > openafs-1.6-sl-devel.x86_64 1.6.23-289.sl7 sl-security > openafs-1.6-sl-kernel-source.x86_64 1.6.23-289.sl7 sl-security > openafs-1.6-sl-kpasswd.x86_64 1.6.23-289.sl7 sl-security > openafs-1.6-sl-krb5.x86_64 1.6.23-289.sl7 sl-security > openafs-1.6-sl-module-tools.x86_64 1.6.23-289.sl7 sl-security > openafs-1.6-sl-plumbing-tools.x86_64 1.6.23-289.sl7 sl-security > openafs-1.6-sl-server.x86_64 1.6.23-289.sl7 sl-security > > But I haven't been able to find an accessible set of instructions that > don't require an extensive investment in research, study and translation > to a modern environment. Perhaps if those of you with some experience of > the technology could guide me through an installation and basic > configuration then my notes could be shaped into a guide. I think that http://docs.openafs.org/QuickStartUnix/index.html is the most current "official" documentation; the source is in git at (e.g.) http://git.openafs.org/?p=openafs.git;a=tree;f=doc/xml/QuickStartUnix;h=9e4fbd3f23b81696d98b1fcb68519364fe365d3f;hb=HEAD if you were interested in supplying patches. (Contributions in other forms, including what you describe below would also be welcome, of course!) > I have an SL-7.5 server with LVM partitions /vicep{a..f} that are > currently 100GB each but there is plenty of storage space available to > expand the capacity of these partitions. I have an SL-7.5 laptop that > can act as a client. I also have two Ubuntu-18.04 machines that could be > clients, and I have a FreeBSD machine but OpenAFS doesn't seem to be > available in its ports system. net/openafs exists, but is on a somewhat older version of openafs that doesn't build on the most current versions of FreeBSD. 1.8.2 should build okay from source, though, IIRC. > Given this basis, it might be possible to experiment with several > desirable scenarios and record a straightforward set of installation, > configuration, and administration instructions for each that would > enable people to assess the technology in a tractable, cost-effective way. > > If this seems reasonable, I would love to get started. That seems reasonable to me. It might be best to work on the initial versions in the wiki (e.g., under https://wiki.openafs.org/admin/index/). Thanks for the offer! -Ben From hanzer@riseup.net Thu Nov 15 22:19:12 2018 From: hanzer@riseup.net (Adam Jensen) Date: Thu, 15 Nov 2018 17:19:12 -0500 Subject: [OpenAFS] Installation and set up guide for Scientific Linux-7.5 In-Reply-To: <20181115204102.GJ70453@kduck.kaduk.org> References: <20181115204102.GJ70453@kduck.kaduk.org> Message-ID: <20797a2b-9d24-3f5d-7ac0-3be01dbe9318@riseup.net> On 11/15/2018 03:41 PM, Benjamin Kaduk wrote: > On Thu, Nov 15, 2018 at 03:32:03PM -0500, Adam Jensen wrote: >> But I haven't been able to find an accessible set of instructions that >> don't require an extensive investment in research, study and translation >> to a modern environment. Perhaps if those of you with some experience of >> the technology could guide me through an installation and basic >> configuration then my notes could be shaped into a guide. 
> > I think that http://docs.openafs.org/QuickStartUnix/index.html is the most > current "official" documentation; the source is in git at (e.g.) > http://git.openafs.org/?p=openafs.git;a=tree;f=doc/xml/QuickStartUnix;h=9e4fbd3f23b81696d98b1fcb68519364fe365d3f;hb=HEAD > if you were interested in supplying patches. (Contributions in other > forms, including what you describe below would also be welcome, of course!) > I guess this is the document to start with: https://wiki.openafs.org/admin/InstallingOpenAFSonRHEL/ The RHEL 6 to 7 changes are: - systemctl is preferred over service/chkconfig - firewalld is preferred over iptables - the SELinux policy problem might have been fixed I have no idea which parts of the AFS information needs to revised. Would it be a big deal for an experienced user to spin up an SL-7.5 instance in a virtual machine and have a look at the situation? From pkd@umd.edu Thu Nov 15 23:21:39 2018 From: pkd@umd.edu (Prasad K. Dharmasena) Date: Thu, 15 Nov 2018 18:21:39 -0500 Subject: [OpenAFS] Building 1.8.2 with transarc-paths In-Reply-To: <20181110142501.GX65098@kduck.kaduk.org> References: <20181108162709.97d732f54c444cefc9094cff@sinenomine.net> <20181110142501.GX65098@kduck.kaduk.org> Message-ID: --000000000000a55a22057abc5160 Content-Type: text/plain; charset="UTF-8" Thanks, Mike and Ben, for the tips. I decided to try building it with the '--enable-static --disable-shared' options first, and that works. We have been using AFS since (pre-IBM) Transarc days, so a lot of our deployment/upgrade scripts rely on those paths. I was just trying to find the quickest way to upgrade to the 1.8.x series w/o having to make too many changes. Yes, I think, it is time to abandon the transarc paths. On Sat, Nov 10, 2018 at 9:25 AM Benjamin Kaduk wrote: > On Thu, Nov 08, 2018 at 04:27:09PM -0500, Michael Meffie wrote: > > On Wed, 7 Nov 2018 21:41:06 -0500 > > "Prasad K. Dharmasena" wrote: > > > > > I've been building 1.6.x on Ubuntu 16.04 with the following options > and it > > > has worked well for me. > > > > > > --enable-transarc-paths > > > --prefix=/usr/afsws > > > --enable-supergroups > > > > > > Building 1.8.x on the same OS with the same option has a problem that > > > appears to be an rpath issue. > > > > > > ldd /usr/vice/etc/afsd | grep not > > > libafshcrypto.so.2 => not found > > > librokenafs.so.2 => not found > > > > > > Those libraries are installed in /usr/afsws/lib, so I can get the > client to > > > run if I set the LD_LIBRARY_PATH. Any hints to what I need to tweak in > > > 'configure' to make it build properly? > > > > > > Thanks. > > > > Hello Prasad, > > > > OpenAFS 1.8.x introduced those two shared object libraries. When not > installing > > from packages you'll need to run ldconfig or set the LD_LIBRARY_PATH. > Since > > you've copied the files to /usr/afsws/lib, you can create a ldconfig > configuration > > file to let it know where to find them. For example, > > > > $ cat /etc/ld.so.conf.d/openafs.conf > > /usr/afsws/lib > > > > or perphaps better, install them to a standard location recognized by > ldconfig. > > I might also ask why you are using the transarc paths at all -- wouldn't it > be easier to conform to the de facto standard filesystem hierarchy with the > default openafs configuration? > > (There's also the option of passing --enable-static --disable-shared to > configure, though I don't remember exactly what that ends up doing.) 
>
> Thanks,
> Ben
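For anyone following along, the build being described is roughly the
following; this is an untested sketch that simply combines the options
mentioned in this thread:

    ./configure --enable-transarc-paths --prefix=/usr/afsws \
                --enable-supergroups --enable-static --disable-shared
    make && make dest

With --enable-transarc-paths the install tree lands under dest/, which is
then typically copied into place by site-specific deployment scripts.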
From gsgatlin@ncsu.edu Tue Nov 20 15:57:10 2018
From: gsgatlin@ncsu.edu (Gary Gatling)
Date: Tue, 20 Nov 2018 10:57:10 -0500
Subject: [OpenAFS] in tree kernel module kafs fedora 29
Message-ID: 

Is the out of tree kernel module for openafs still required in fedora 29
with kernel 4.19.2-300.fc29.x86_64?  Or would it be possible that someone
could build openafs to take advantage of the (now) built-in kernel module
(kafs)?

I can "modprobe kafs" on this kernel ok.

https://www.phoronix.com/forums/forum/software/general-linux-open-source/988963-afs-file-system-driver-overhauled-for-linux-4-15
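A quick way to confirm that a given Fedora kernel ships the in-tree client
is to look for its config symbol; the paths below are the usual Fedora
locations and may differ on other distributions:

    grep CONFIG_AFS_FS /boot/config-$(uname -r)
    modinfo kafs | head -n 3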
From jsbillin@umich.edu Tue Nov 20 16:25:26 2018
From: jsbillin@umich.edu (Jonathan Billings)
Date: Tue, 20 Nov 2018 11:25:26 -0500
Subject: [OpenAFS] in tree kernel module kafs fedora 29
In-Reply-To: 
References: 
Message-ID: 

On Tue, Nov 20, 2018 at 10:57 Gary Gatling wrote:

> Is the out of tree kernel module for openafs still required in fedora 29
> with kernel 4.19.2-300.fc29.x86_64 ?  Or would it be possible that someone
> could build openafs to take advantage of the (now) built in kernel module?
> (kafs)
>
> I can "modprobe kafs" on this kernel ok.
>
> https://www.phoronix.com/forums/forum/software/general-linux-open-source/988963-afs-file-system-driver-overhauled-for-linux-4-15

You can't use the OpenAFS tools with the kAFS module, but there are some
tools that will work with it. Others are working on getting the kafs-utils
and kafs-client packages in Fedora 29 (and later).

I've been trying to build packages that work with it here:
https://copr.fedorainfracloud.org/coprs/jsbillings/kafs/packages/

Ignore the kernel packages and kafs-aklog packages, they aren't necessary.

-- 
Jonathan Billings
College of Engineering - CAEN - Unix and Linux Support
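On a Fedora 29 host, pulling in that Copr repository would look roughly like
this; the package names are the ones given in the message, and this should
be treated as a sketch rather than a supported install path:

    dnf copr enable jsbillings/kafs
    dnf install kafs-client kafs-utils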
From gsgatlin@ncsu.edu Tue Nov 20 19:29:59 2018
From: gsgatlin@ncsu.edu (Gary Gatling)
Date: Tue, 20 Nov 2018 14:29:59 -0500
Subject: [OpenAFS] in tree kernel module kafs fedora 29
In-Reply-To: 
References: 
Message-ID: 

Cool. Thanks for the information about that. I am surprised openafs
doesn't work with the kernel module. But it is what it is. :)

On Tue, Nov 20, 2018 at 11:25 AM Jonathan Billings wrote:

> On Tue, Nov 20, 2018 at 10:57 Gary Gatling wrote:
>
>> Is the out of tree kernel module for openafs still required in fedora 29
>> with kernel 4.19.2-300.fc29.x86_64 ?  Or would it be possible that someone
>> could build openafs to take advantage of the (now) built in kernel module?
>> (kafs)
>>
>> I can "modprobe kafs" on this kernel ok.
>>
>> https://www.phoronix.com/forums/forum/software/general-linux-open-source/988963-afs-file-system-driver-overhauled-for-linux-4-15
>
> You can't use the OpenAFS tools with the kAFS module, but there are some
> tools that will work with it. Others are working on getting the kafs-utils
> and kafs-client packages in Fedora 29 (and later).
>
> I've been trying to build packages that work with it here:
> https://copr.fedorainfracloud.org/coprs/jsbillings/kafs/packages/
>
> Ignore the kernel packages and kafs-aklog packages, they aren't necessary.
>
> --
> Jonathan Billings
> College of Engineering - CAEN - Unix and Linux Support
From jaltman@auristor.com Tue Nov 20 19:54:17 2018
From: jaltman@auristor.com (Jeffrey Altman)
Date: Tue, 20 Nov 2018 14:54:17 -0500
Subject: [OpenAFS] in tree kernel module kafs fedora 29
In-Reply-To: 
References: 
Message-ID: 

On 11/20/2018 2:29 PM, Gary Gatling wrote:
> Cool. Thanks for the information about that. I am surprised openafs
> doesn't work with the kernel module. But it is what it is. :)

kafs and af_rxrpc are clean room implementations.  They are not derived
from any IBM Public License 1.0 source code.  That is why they can be
part of the Linux kernel as in-tree networking stack and file system
components.

OpenAFS does not work with the kafs kernel module because the kafs file
system is an alternative client compatible with IBM AFS 3.6, OpenAFS and
AuriStorFS services.

Jeffrey Altman
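For anyone who wants to try the in-tree client alongside OpenAFS, the mount
syntax described in the kernel's afs documentation is roughly as follows;
this is an unverified sketch, and "example.org" is a placeholder cell name:

    modprobe kafs
    mkdir -p /afs/example.org
    mount -t afs "#example.org:root.cell." /afs/example.org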
From andreas.ladanyi@kit.edu Mon Nov 26 16:02:10 2018
From: andreas.ladanyi@kit.edu (Andreas Ladanyi)
Date: Mon, 26 Nov 2018 17:02:10 +0100
Subject: [OpenAFS] cache manager timeout
Message-ID: 

Hi,

is it possible to adjust the timeout of the cache manager when asking
the next CellServDB or afsdb entry when a server listed in CellServDB /
afsdb is offline so for example the users dont get a long waiting for
ssh login ?

regards,
Andreas

From susan@psc.edu Tue Nov 27 17:38:16 2018
From: susan@psc.edu (Susan Litzinger)
Date: Tue, 27 Nov 2018 12:38:16 -0500
Subject: [OpenAFS] Upgrading to newer OpenAFS procedure docs
Message-ID: 

We are in the process of updating from an older version of OpenAFS,
1.4.14, to a more recent version, 1.6.16.

The new 1.6.16 servers have been added to our current cell and we are
moving the volumes from the older servers to the new ones.  We know that we
have to move the various servers, including the VL Server, and root.cell
volume from the old servers to the new ones before being able to shut off
the old servers. I'm trying to find documentation that describes how to do
that.

Has anyone done this recently?  Is there any documentation that describes
the proper sequence?

TIA!
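For the volume-move part, the per-volume operation usually looks like the
following; the volume, server and partition names here are placeholders:

    vos move -id user.example -fromserver oldfs1 -frompartition vicepa \
             -toserver newfs1 -topartition vicepa -verbose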
From cg2v@andrew.cmu.edu Tue Nov 27 19:21:21 2018
From: cg2v@andrew.cmu.edu (Chaskiel Grundman)
Date: Tue, 27 Nov 2018 14:21:21 -0500
Subject: [OpenAFS] Current "balance" practice?
In-Reply-To: <3cfeede0-c3d1-8abe-0cc0-aef2bf45dd63@auristor.com>
References: <20181019150424.jzyfxjfr3jc7mf7t@csail.mit.edu>
 <3cfeede0-c3d1-8abe-0cc0-aef2bf45dd63@auristor.com>
Message-ID: 

> ubik_VL_SetLock()
> ubik_VL_ReleaseLock()
> ubik_Call() is no longer used internally by OpenAFS but it is still
> exported.  ubik_Call() relies upon varargs that are unlikely to
> interpret parameter types properly on systems with 64-bit pointers
> and/or size_t.

I'll probably be looking at upgrading balance for our own internal use in
the next month or two, but another ftp.andrew.cmu.edu release is probably
not in the cards (especially since something else I'm doing in the next few
months is retiring ftp.andrew.cmu.edu). Perhaps I'll put it on github.

There is another problem beyond 64-bit safety. It appears that some of
the openafs devs didn't learn from the project's own experience with the
linux developers, as

extern afs_int32 vsu_ClientInit(int noAuthFlag, const char *confDir,
                                char *cellName, afs_int32 sauth,
                                struct ubik_client **uclientp,
                                int (*secproc)(struct rx_securityClass *,
                                               afs_int32));

in <= 1.6 has become

extern afs_int32 vsu_ClientInit(const char *confDir, char *cellName,
                                int secFlags,
                                int (*secproc)(struct rx_securityClass *,
                                               afs_int32),
                                struct ubik_client **uclientp);

in 1.8. and I can't even use #ifdef VS2SC_NEVER to detect the change --
it's an enum.

From jaltman@auristor.com Tue Nov 27 21:51:54 2018
From: jaltman@auristor.com (Jeffrey Altman)
Date: Tue, 27 Nov 2018 16:51:54 -0500
Subject: [OpenAFS] Current "balance" practice?
In-Reply-To: 
References: <20181019150424.jzyfxjfr3jc7mf7t@csail.mit.edu>
 <3cfeede0-c3d1-8abe-0cc0-aef2bf45dd63@auristor.com>
Message-ID: 

On 11/27/2018 2:21 PM, Chaskiel Grundman wrote:
> There is another problem beyond 64-bit safety. It appears that some of
> the openafs devs didn't learn from the project's own experience with the
> linux developers, as
>
> extern afs_int32 vsu_ClientInit(int noAuthFlag, const char *confDir,
>                                 char *cellName, afs_int32 sauth,
>                                 struct ubik_client **uclientp,
>                                 int (*secproc)(struct rx_securityClass *,
>                                                afs_int32));
>
> in <= 1.6 has become
>
> extern afs_int32 vsu_ClientInit(const char *confDir, char *cellName,
>                                 int secFlags,
>                                 int (*secproc)(struct rx_securityClass *,
>                                                afs_int32),
>                                 struct ubik_client **uclientp);
>
> in 1.8. and I can't even use #ifdef VS2SC_NEVER to detect the change --
> it's an enum.

That would be AuriStor's fault.  The change in question was

commit 3720f6b646857cca523659519f6fd4441e41dc7a
Author: Simon Wilkinson
Date:   Sun Oct 23 16:21:52 2011 +0100

    Rework the ugen_* interface

The vsu_ClientInit() signature change was a side-effect of the refactoring
of ugen_ClientInit().  No one remembered the possible out of tree usage of
vsu_ClientInit().

vsu_ClientInit() is not an exported function.  As such its status as public
is murky at best.
I suggest using the existence of one of these CPP macros as a test.  They
were added shortly after the vsu_ClientInit() signature change was merged.

/* Values for the UV_ReleaseVolume flags parameters */
#define REL_COMPLETE    0x000001   /* force a complete release */
#define REL_FULLDUMPS   0x000002   /* force full dumps */
#define REL_STAYUP      0x000004   /* dump to clones to avoid offline time */

The introduction of enum vol_s2s_crypt came much later.

If you would prefer AuriStor can submit a change to restore the prior
signature.

Jeffrey Altman
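One way to act on that at build time is to probe an installed header for
REL_STAYUP and set a feature macro for the newer signature; the header path
below is only a guess and would need adjusting for a given installation:

    if grep -q REL_STAYUP /usr/include/afs/volser.h 2>/dev/null; then
        CPPFLAGS="$CPPFLAGS -DHAVE_VSU_CLIENTINIT_18"
    fi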
From kaduk@mit.edu Thu Nov 29 03:05:34 2018
From: kaduk@mit.edu (Benjamin Kaduk)
Date: Wed, 28 Nov 2018 21:05:34 -0600
Subject: [OpenAFS] cache manager timeout
In-Reply-To: 
References: 
Message-ID: <20181129030534.GP10033@kduck.kaduk.org>

On Mon, Nov 26, 2018 at 05:02:10PM +0100, Andreas Ladanyi wrote:
> Hi,
>
> is it possible to adjust the timeout of the cache manager when asking
> the next CellServDB or afsdb entry when a server listed in CellServDB /
> afsdb is offline so for example the users dont get a long waiting for
> ssh login ?

It's just using the system resolver (res_search()) via a userspace helper
thread, so you could adjust the timeout in /etc/resolv.conf, if I
understand correctly.

-Ben

From mvanderw@nd.edu Thu Nov 29 17:01:47 2018
From: mvanderw@nd.edu (Matt Vander Werf)
Date: Thu, 29 Nov 2018 12:01:47 -0500
Subject: [OpenAFS] https://lists.openafs.org Web Cert Expired
Message-ID: 

It looks like the https://lists.openafs.org/mailman/listinfo web
certificate has expired as of earlier this morning.

The cert error shows up in the iframe when clicking on the lists links on
openafs.org too.

https://openafs.org by itself seems to be working fine.

Just an FYI to the necessary people to get it fixed.

Thanks.

-- 
Matt Vander Werf
HPC System Administrator
University of Notre Dame
Center for Research Computing - Union Station
506 W. South Street
South Bend, IN 46601
Phone: (574) 631-0692
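A quick way to confirm what the browser is complaining about, from any
machine with OpenSSL installed:

    openssl s_client -connect lists.openafs.org:443 -servername lists.openafs.org \
        </dev/null 2>/dev/null | openssl x509 -noout -dates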
--0000000000009a333b057bd0a592-- From kaduk@mit.edu Fri Nov 30 15:02:39 2018 From: kaduk@mit.edu (Benjamin Kaduk) Date: Fri, 30 Nov 2018 09:02:39 -0600 Subject: [OpenAFS] Upgrading to newer OpenAFS procedure docs In-Reply-To: References: Message-ID: <20181130150238.GA87441@kduck.kaduk.org> On Tue, Nov 27, 2018 at 12:38:16PM -0500, Susan Litzinger wrote: > We are in the process of updating from an older version of OpenAFS, > 1.4.14, to a more recent version, 1.6.16. > > The new 1.6.16 servers have been added to our current cell and we are > moving the volumes from the older servers to the new ones. We know that we > have to move the various servers, including the VL Server, and root.cell > volume from the old servers to the new ones before being able to shut off > the old servers. I'm trying to find documentation that describes how to do > that. The main obstacle is usually just the running database servers, including both VL server and PT server (and also potentially a few others if in use at your site, for the buserver, updateserver, etc.) AFAIK root.cell does not need to be hosted on a fileserver colocated with a dbserver (but is just conventionally done so during normal operation for maximum resiliency). A key question is whether the dbservers will be getting new IP addresses as part of this upgrade -- that requires a somewhat more complicated procedure, whereas if the same IP addresses are used it's pretty straightforward to just cycle machines in/out of active service. Other factors that come into play is whether all clients are known and/or under the control of central administrators, so that whether they are using AFSDB or SRV records or a static CellServDB to locate dbservers can be known. > Has anyone done this recently? Is there any documentation that describes > the proper sequence? The situation tends to be fairly customized for each site (and is not a terribly common operation), so I don't know of any formal documentation. The thread starting at https://lists.openafs.org/pipermail/openafs-info/2017-January/042007.html is probably the most recent discussion (n.b. the TLS certificate for that site recently expired so you'll have to click through a cert warning at the moment; other mail-archive sites may have the content as well, for the thread subject "procedure for changing database server IP address"). -Ben
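In the same-IP case, the per-machine mechanics are usually along these
lines; the hostnames are placeholders, every server machine's CellServDB
needs the same change, and clients follow whichever CellServDB, AFSDB or
SRV source they are configured to use:

    # run against each server machine in the cell
    bos addhost -server fs1.example.edu -host newdb1.example.edu -localauth
    bos removehost -server fs1.example.edu -host olddb1.example.edu -localauth

    # restart the database processes so they notice the membership change
    bos restart -server newdb1.example.edu -all -localauth

    # confirm that the vlserver ubik quorum has re-formed (port 7003)
    udebug newdb1.example.edu 7003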