[OpenAFS] Problems on AFS Unix clients after AFS fileserver moves
Rich Sudlow
rich@nd.edu
Tue, 09 Aug 2005 13:00:08 -0500
Derek Atkins wrote:
> Hmm... Did you "vos addsite" without "release" to the priddy server?
Actually I didn't even see that - I'll forward this to our AFS admins to check.
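
For the admins' benefit, the usual check for an addsite done without a
release, using only the stock vos commands, is something like this (server
and cell names taken from the output quoted below):

# Does the VLDB list an RO site on priddy with no matching volume on disk?
vos examine root.cell -cell nd.edu
vos listvol priddy.helios.nd.edu /vicepa -cell nd.edu

# If so, releasing the RW volume should (re)create the RO clone at every listed site.
vos release root.cell -cell nd.edu -verbose
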
Rich
>
> -derek
>
> Quoting Rich Sudlow <rich@nd.edu>:
>
>> Rich Sudlow wrote:
>>
>>> Derrick J Brashear wrote:
>>>
>>>> On Tue, 9 Aug 2005, Rich Sudlow wrote:
>>>>
>>>>>>> [root@xeon028 root]# ls -l /afs/nd.edu/common/custom
>>>>>>> ls: /afs/nd.edu/common/custom/hpcc: Connection timed out
>>
>>
>> Now I'm even getting:
>>
>> xeon028{root}14: cd /afs/.nd.edu
>> /afs/.nd.edu: Connection timed out.
>>
>> I should note that over the last couple of years we've also had
>> problems getting our root.cell to release consistently, though without
>> connection timeout problems.
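>>
>> A quick check for an unreleased root.cell, using nothing beyond vos
>> examine, is to compare the RW volume's "Last Update" time with the
>> "Creation" time of the RO clones; if the RW side is newer, the last
>> release never propagated:
>>
>> vos examine root.cell -cell nd.edu | grep "Last Update"
>> vos examine root.cell.readonly -cell nd.edu | grep Creation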
>>
>> But everything looks OK now:
>>
>>
>> [root@xeon028 custom]# vos ex root.cell -v
>> Fetching VLDB entry for 536889194 .. done
>> Getting volume listing from the server bubba.helios.nd.edu .. done
>> root.cell 536889194 RW 37345 K On-line
>> bubba.helios.nd.edu /vicepa
>> RWrite 536889194 ROnly 536889195 Backup 536889196
>> MaxQuota 50000 K
>> Creation Tue Jul 2 08:46:36 1991
>> Last Update Tue Aug 9 08:43:05 2005
>> 438949 accesses in the past day (i.e., vnode references)
>>
>> RWrite: 536889194 ROnly: 536889195 Backup: 536889196
>> number of sites -> 8
>> server bubba.helios.nd.edu partition /vicepa RW Site
>> server priddy.helios.nd.edu partition /vicepa RO Site
>> server bubba.helios.nd.edu partition /vicepa RO Site
>> server nevada.helios.nd.edu partition /vicepb RO Site
>> server emdall.helios.nd.edu partition /vicepb RO Site
>> server widmark.helios.nd.edu partition /vicepb RO Site
>> server yaya.helios.nd.edu partition /vicepa RO Site
>> server planet10.helios.nd.edu partition /vicepa RO Site
>> [root@xeon028 custom]# vos ex root.cell.readonly
>> Could not fetch the information about volume 536889195 from the server
>> : No such device
>> Volume does not exist on server priddy.helios.nd.edu as indicated by
>> the VLDB
>>
>> root.cell.readonly 536889195 RO 37345 K On-line
>> bubba.helios.nd.edu /vicepa
>> RWrite 536889194 ROnly 536889195 Backup 536889196
>> MaxQuota 50000 K
>> Creation Tue Aug 9 08:59:11 2005
>> Last Update Tue Aug 9 08:59:11 2005
>> 71583 accesses in the past day (i.e., vnode references)
>>
>> root.cell.readonly 536889195 RO 37345 K On-line
>> nevada.helios.nd.edu /vicepb
>> RWrite 536889194 ROnly 0 Backup 0
>> MaxQuota 50000 K
>> Creation Tue Aug 9 08:59:11 2005
>> Last Update Tue Aug 9 08:59:11 2005
>> 5236 accesses in the past day (i.e., vnode references)
>>
>> root.cell.readonly 536889195 RO 37345 K On-line
>> emdall.helios.nd.edu /vicepb
>> RWrite 536889194 ROnly 0 Backup 0
>> MaxQuota 50000 K
>> Creation Tue Aug 9 08:59:11 2005
>> Last Update Tue Aug 9 08:59:11 2005
>> 621076 accesses in the past day (i.e., vnode references)
>>
>> root.cell.readonly 536889195 RO 37345 K On-line
>> widmark.helios.nd.edu /vicepb
>> RWrite 536889194 ROnly 0 Backup 0
>> MaxQuota 50000 K
>> Creation Tue Aug 9 08:59:11 2005
>> Last Update Tue Aug 9 08:59:11 2005
>> 47125 accesses in the past day (i.e., vnode references)
>>
>> root.cell.readonly 536889195 RO 37345 K On-line
>> yaya.helios.nd.edu /vicepa
>> RWrite 536889194 ROnly 0 Backup 0
>> MaxQuota 50000 K
>> Creation Tue Aug 9 08:59:11 2005
>> Last Update Tue Aug 9 08:59:11 2005
>> 171534 accesses in the past day (i.e., vnode references)
>>
>> root.cell.readonly 536889195 RO 37345 K On-line
>> planet10.helios.nd.edu /vicepa
>> RWrite 536889194 ROnly 0 Backup 0
>> MaxQuota 50000 K
>> Creation Tue Aug 9 08:59:11 2005
>> Last Update Tue Aug 9 08:59:11 2005
>> 53916 accesses in the past day (i.e., vnode references)
>>
>> RWrite: 536889194 ROnly: 536889195 Backup: 536889196
>> number of sites -> 8
>> server bubba.helios.nd.edu partition /vicepa RW Site
>> server priddy.helios.nd.edu partition /vicepa RO Site
>> server bubba.helios.nd.edu partition /vicepa RO Site
>> server nevada.helios.nd.edu partition /vicepb RO Site
>> server emdall.helios.nd.edu partition /vicepb RO Site
>> server widmark.helios.nd.edu partition /vicepb RO Site
>> server yaya.helios.nd.edu partition /vicepa RO Site
>> server planet10.helios.nd.edu partition /vicepa RO Site
>> [root@xeon028 custom]#
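>>
>> If the VLDB and the servers really are out of sync here, that is, the
>> VLDB lists an RO site on priddy but the server answers "No such device",
>> the usual reconciliation, assuming only the stock vos tools, is roughly:
>>
>> # rebuild the VLDB entry from the volume headers on the suspect server
>> vos syncvldb -server priddy.helios.nd.edu -partition /vicepa -cell nd.edu -verbose
>> # check the server's headers against the VLDB in the other direction
>> vos syncserv -server priddy.helios.nd.edu -partition /vicepa -cell nd.edu
>> # then force a full release so every RO site gets a fresh clone
>> vos release root.cell -cell nd.edu -force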
>>
>> Rich
>>
>>
>>>>>>
>>>>>> I assume if you fs flushmount hpcc it gets better?
>>>>>
>>>>> I wish ;-)
>>>>
>>>> OK, so what happens network-wise after you say fs flushv in common,
>>>> then ls again? (tcpdump -vv -s 1500 port 7000 and watch the network
>>>> traffic)
>>>
>>> fs flushvolume /afs/nd.edu /afs/nd.edu/common
>>> /afs/nd.edu/common/custom /afs/nd.edu/common/custom/hpcc
>>>
>>> fs flushmount /afs/nd.edu /afs/nd.edu/common
>>> /afs/nd.edu/common/custom /afs/nd.edu/common/custom/hpcc
>>>
>>> while using tcpdump -vv -s 1500 -i eth1 port 7000
>>>
>>> 12:27:09.030372 xeon028.afs3-callback >
>>> emdall.helios.nd.edu.afs3-fileserver: [udp sum ok] rx data cid
>>> ceccc7b4 call# 42 seq 1 ser 173 <client-init>,<last-pckt> fs call
>>> fetch-status fid 536870919/2/22297 (44) (DF) (ttl 64, id 57995, len 72)
>>> 12:27:09.031116 emdall.helios.nd.edu.afs3-fileserver >
>>> xeon028.afs3-callback: [udp sum ok] rx data cid ceccc7b4 call# 42
>>> seq 1 ser 236 <last-pckt> fs reply fetch-status (148) (DF) (ttl 252,
>>> id 21104, len 176)
>>> 12:27:09.031615 xeon028.afs3-callback >
>>> emdall.helios.nd.edu.afs3-fileserver: [udp sum ok] rx ack cid
>>> ceccc7b4 call# 42 seq 0 ser 174 <client-init>,<slow-start> fir 2 0n
>>> (65) (DF) (ttl 64, id 57996, len 93)
>>> 12:27:09.031788 xeon028.afs3-callback >
>>> emdall.helios.nd.edu.afs3-fileserver: [udp sum ok] rx data cid
>>> ceccc7b4 call# 43 seq 1 ser 175 <client-init>,<last-pckt> fs call
>>> fetch-status fid 536889195/4/3 (44) (DF) (ttl 64, id 57997, len 72)
>>> 12:27:09.032313 emdall.helios.nd.edu.afs3-fileserver >
>>> xeon028.afs3-callback: [udp sum ok] rx data cid ceccc7b4 call# 43
>>> seq 1 ser 237 <last-pckt> fs reply fetch-status (148) (DF) (ttl 252,
>>> id 21105, len 176)
>>> 12:27:09.032510 xeon028.afs3-callback >
>>> emdall.helios.nd.edu.afs3-fileserver: [udp sum ok] rx ack cid
>>> ceccc7b4 call# 43 seq 0 ser 176 <client-init>,<slow-start> fir 2 0n
>>> (65) (DF) (ttl 64, id 57998, len 93)
>>> 12:27:09.032749 xeon028.afs3-callback >
>>> priddy.helios.nd.edu.afs3-fileserver: [udp sum ok] rx data cid
>>> ceccc80c call# 30 seq 1 ser 62 <client-init>,<last-pckt> fs call
>>> fetch-status fid 536872982/34/31616 (44) (DF) (ttl 64, id 57999, len 72)
>>> 12:27:09.033861 priddy.helios.nd.edu.afs3-fileserver >
>>> xeon028.afs3-callback: [udp sum ok] rx data cid ceccc80c call# 30
>>> seq 1 ser 35 <last-pckt> fs reply fetch-status (148) (DF) (ttl 252,
>>> id 58352, len 176)
>>> 12:27:09.034041 xeon028.afs3-callback >
>>> priddy.helios.nd.edu.afs3-fileserver: [udp sum ok] rx ack cid
>>> ceccc80c call# 30 seq 0 ser 63 <client-init>,<slow-start> fir 2 0n
>>> (65) (DF) (ttl 64, id 58000, len 93)
>>> 12:27:12.370168 xeon028.afs3-callback >
>>> emdall.helios.nd.edu.afs3-fileserver: [udp sum ok] rx data cid
>>> ceccc7b4 call# 44 seq 1 ser 177 <client-init>,<last-pckt> fs call
>>> fetch-status fid 536870919/2/22297 (44) (DF) (ttl 64, id 58001, len 72)
>>> 12:27:12.370715 emdall.helios.nd.edu.afs3-fileserver >
>>> xeon028.afs3-callback: [udp sum ok] rx data cid ceccc7b4 call# 44
>>> seq 1 ser 238 <last-pckt> fs reply fetch-status (148) (DF) (ttl 252,
>>> id 21106, len 176)
>>> 12:27:12.370877 xeon028.afs3-callback >
>>> emdall.helios.nd.edu.afs3-fileserver: [udp sum ok] rx ack cid
>>> ceccc7b4 call# 44 seq 0 ser 178 <client-init>,<slow-start> fir 2 0n
>>> (65) (DF) (ttl 64, id 58002, len 93)
>>> 12:27:12.371005 xeon028.afs3-callback >
>>> emdall.helios.nd.edu.afs3-fileserver: [udp sum ok] rx data cid
>>> ceccc7b4 call# 45 seq 1 ser 179 <client-init>,<last-pckt> fs call
>>> fetch-status fid 536889195/4/3 (44) (DF) (ttl 64, id 58003, len 72)
>>> 12:27:12.371649 emdall.helios.nd.edu.afs3-fileserver >
>>> xeon028.afs3-callback: [udp sum ok] rx data cid ceccc7b4 call# 45
>>> seq 1 ser 239 <last-pckt> fs reply fetch-status (148) (DF) (ttl 252,
>>> id 21107, len 176)
>>> 12:27:12.371746 xeon028.afs3-callback >
>>> emdall.helios.nd.edu.afs3-fileserver: [udp sum ok] rx ack cid
>>> ceccc7b4 call# 45 seq 0 ser 180 <client-init>,<slow-start> fir 2 0n
>>> (65) (DF) (ttl 64, id 58004, len 93)
>>> 12:27:12.371873 xeon028.afs3-callback >
>>> priddy.helios.nd.edu.afs3-fileserver: [udp sum ok] rx data cid
>>> ceccc80c call# 31 seq 1 ser 64 <client-init>,<last-pckt> fs call
>>> fetch-status fid 536872982/34/31616 (44) (DF) (ttl 64, id 58005, len 72)
>>> 12:27:12.372945 priddy.helios.nd.edu.afs3-fileserver >
>>> xeon028.afs3-callback: [udp sum ok] rx data cid ceccc80c call# 31
>>> seq 1 ser 36 <last-pckt> fs reply fetch-status (148) (DF) (ttl 252,
>>> id 58353, len 176)
>>> 12:27:12.373028 xeon028.afs3-callback >
>>> priddy.helios.nd.edu.afs3-fileserver: [udp sum ok] rx ack cid
>>> ceccc80c call# 31 seq 0 ser 65 <client-init>,<slow-start> fir 2 0n
>>> (65) (DF) (ttl 64, id 58006, len 93)
>>> 12:27:12.373152 xeon028.afs3-callback >
>>> bert.helios.nd.edu.afs3-fileserver: [udp sum ok] rx data cid
>>> ceccc7b0 call# 89 seq 1 ser 219 <client-init>,<last-pckt> fs call
>>> fetch-status fid 1918991272/966/11826 (44) (DF) (ttl 64, id 58007,
>>> len 72)
>>> 12:27:12.374128 bert.helios.nd.edu.afs3-fileserver >
>>> xeon028.afs3-callback: [udp sum ok] rx ack cid ceccc7b0 call# 89 seq
>>> 0 ser 131 <req-ack>,<slow-start> fir 1 0p* (66) (DF) (ttl 252, id
>>> 3509, len 94)
>>> 12:27:12.374180 xeon028.afs3-callback >
>>> bert.helios.nd.edu.afs3-fileserver: [udp sum ok] rx ack cid ceccc7b0
>>> call# 89 seq 0 ser 220 <client-init>,<slow-start> fir 1 131r (65)
>>> (DF) (ttl 64, id 58008, len 93)
>>> 12:27:12.374732 bert.helios.nd.edu.afs3-fileserver >
>>> xeon028.afs3-callback: [udp sum ok] rx data cid ceccc7b0 call# 89
>>> seq 1 ser 132 <last-pckt> fs reply fetch-status (148) (DF) (ttl 252,
>>> id 3510, len 176)
>>> 12:27:12.374846 xeon028.afs3-callback >
>>> bert.helios.nd.edu.afs3-fileserver: [udp sum ok] rx ack cid ceccc7b0
>>> call# 89 seq 0 ser 221 <client-init>,<slow-start> fir 2 0n (65) (DF)
>>> (ttl 64, id 58009, len 93)
>>> 12:27:12.374927 xeon028.afs3-callback >
>>> bert.helios.nd.edu.afs3-fileserver: [udp sum ok] rx data cid
>>> ceccc7b0 call# 90 seq 1 ser 222 <client-init>,<last-pckt> fs call
>>> fetch-data fid 1918991272/966/11826 offset 0 length 17 (52) (DF) (ttl
>>> 64, id 58010, len 80)
>>> 12:27:12.375727 bert.helios.nd.edu.afs3-fileserver >
>>> xeon028.afs3-callback: [udp sum ok] rx data cid ceccc7b0 call# 90
>>> seq 1 ser 133 <last-pckt> fs reply fetch-data (169) (DF) (ttl 252, id
>>> 3511, len 197)
>>> 12:27:12.375830 xeon028.afs3-callback >
>>> bert.helios.nd.edu.afs3-fileserver: [udp sum ok] rx ack cid ceccc7b0
>>> call# 90 seq 0 ser 223 <client-init>,<slow-start> fir 2 0n (65) (DF)
>>> (ttl 64, id 58011, len 93)
>>>
>>> 26 packets received by filter
>>>
>>> I also saw some attempts to contact reno while running tcpdump.
>>>
>>> 12:22:00.717951 xeon028.afs3-callback >
>>> reno.helios.nd.edu.afs3-fileserver: [udp sum ok] rx data cid
>>> ceccc87c call# 1 seq 1 ser 1 <client-init>,<last-pckt> fs call
>>> get-time (32) (DF) (ttl 64, id 52214, len 60)
>>> 12:22:01.507927 xeon028.afs3-callback >
>>> reno.helios.nd.edu.afs3-fileserver: [udp sum ok] rx data cid
>>> ceccc87c call# 1 seq 1 ser 2 <client-init>,<req-ack>,<last-pckt> fs
>>> call get-time (32) (DF) (ttl 64, id 52215, len 60)
>>> 12:22:02.527991 xeon028.afs3-callback >
>>> reno.helios.nd.edu.afs3-fileserver: [udp sum ok] rx data cid
>>> ceccc87c call# 1 seq 1 ser 3 <client-init>,<req-ack>,<last-pckt> fs
>>> call get-time (32) (DF) (ttl 64, id 52216, len 60)
>>> 12:22:04.058099 xeon028.afs3-callback >
>>> reno.helios.nd.edu.afs3-fileserver: [udp sum ok] rx data cid
>>> ceccc87c call# 1 seq 1 ser 4 <client-init>,<req-ack>,<last-pckt> fs
>>> call get-time (32) (DF) (ttl 64, id 52217, len 60)
>>> 12:22:06.098225 xeon028.afs3-callback >
>>> reno.helios.nd.edu.afs3-fileserver: [udp sum ok] rx ack cid ceccc87c
>>> call# 1 seq 0 ser 5 <client-init>,<req-ack>,<slow-start> fir 1 0p
>>> (65) (DF) (ttl 64, id 52218, len 93)
>>> 12:22:06.608256 xeon028.afs3-callback >
>>> reno.helios.nd.edu.afs3-fileserver: [udp sum ok] rx data cid
>>> ceccc87c call# 1 seq 1 ser 6 <client-init>,<req-ack>,<last-pckt> fs
>>> call get-time (32) (DF) (ttl 64, id 52219, len 60)
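>>>
>>> Those unanswered get-time calls look like the cache manager probing a
>>> server it still believes it needs. Two client-side checks that may
>>> help, assuming a stock OpenAFS cache manager:
>>>
>>> # list any fileservers the cache manager currently marks as down
>>> fs checkservers -cell nd.edu
>>> # force the cache manager to refetch volume location info from the VLDB
>>> fs checkvolumes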
>>>
>>>
>>> Rich
>>>
>>>
--
Rich Sudlow
University of Notre Dame
Office of Information Technologies
321 Information Technologies Center
PO Box 539
Notre Dame, IN 46556-0539
(574) 631-7258 office phone
(574) 631-9283 office fax