From me@janh.de Thu Aug 3 14:04:06 2023
From: me@janh.de (Jan Henrik Sylvester)
Date: Thu, 3 Aug 2023 15:04:06 +0200
Subject: [OpenAFS] 1.8.10 in ppa:openafs/stable for Ubuntu 22.04 (kernel 6.2)?
Message-ID: <6a652668-8b0d-d843-810d-fa6dc341331d@janh.de>

Hello,

today, Ubuntu 22.04 replaced the default desktop kernel 5.19 (originally
from 22.10) with 6.2 (originally from 23.04). Since the desktop edition
defaults to the HWE kernel, this kernel is now installed automatically on
new desktop installations.

1.8.9 from ppa:openafs/stable fails to build for kernel 6.2, which is not
surprising, since the 1.8.9 release notes state that mainline kernels only
up to 6.0 are supported. Only in 1.8.10 is kernel support extended up to
6.4.

Please, could ppa:openafs/stable be updated to 1.8.10 as soon as possible,
since there are now Ubuntu LTS systems without AFS?

Thanks a lot,
Jan Henrik

From jaltman@auristor.com Thu Aug 3 16:02:40 2023
From: jaltman@auristor.com (Jeffrey E Altman)
Date: Thu, 3 Aug 2023 11:02:40 -0400
Subject: [OpenAFS] 1.8.10 in ppa:openafs/stable for Ubuntu 22.04 (kernel 6.2)?
In-Reply-To: <6a652668-8b0d-d843-810d-fa6dc341331d@janh.de>
References: <6a652668-8b0d-d843-810d-fa6dc341331d@janh.de>
Message-ID:

On 8/3/2023 9:04 AM, Jan Henrik Sylvester wrote:
> ... there are now Ubuntu LTS systems without AFS.

Jan,

As a reminder, Ubuntu 22.04 LTS systems include the Linux kernel afs file
system (kafs). As kafs is built as part of the kernel, it is always up to
date.

To use kafs:

  1. apt-get install kafs-client
  2. systemctl start afs.mount
  3. acquire tokens using aklog-kafs
     - or install kafs-compat to rename aklog-kafs to aklog
  4. To enable afs.mount at boot, systemctl enable afs.mount
  5. Read "man kafs"

Even if you prefer OpenAFS, kafs is available to access /afs until updated
OpenAFS packages are available.

Jeffrey Altman
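Spelled out as a shell session, the sequence above might look roughly like
the following (a sketch only: the cell and realm names are placeholders,
sudo is assumed, and the exact aklog-kafs invocation may vary between
kafs-client versions):

  # install the kernel-AFS userland and mount /afs now and at every boot
  sudo apt-get install kafs-client
  sudo systemctl enable --now afs.mount

  # obtain a Kerberos ticket, then derive an AFS token for the cell
  kinit someuser@EXAMPLE.ORG
  aklog-kafs example.org

  # sanity check: the cell should now be reachable under /afs
  ls /afs/example.org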
From me@janh.de Thu Aug 3 17:00:49 2023
From: me@janh.de (Jan Henrik Sylvester)
Date: Thu, 3 Aug 2023 18:00:49 +0200
Subject: [OpenAFS] 1.8.10 in ppa:openafs/stable for Ubuntu 22.04 (kernel 6.2)?
In-Reply-To:
References: <6a652668-8b0d-d843-810d-fa6dc341331d@janh.de>
Message-ID: <92dbb878-6fdc-60d8-fdd0-93b376d77c25@janh.de>

On 8/3/23 17:02, Jeffrey E Altman wrote:
> Even if you prefer OpenAFS, kafs is available to access /afs until
> updated OpenAFS packages are available.

Thanks for the reminder. In the meantime, I have noticed that I had missed
the packages in jammy-updates. They are only 1.8.8, but with patches for
kernel 6.2 (which are not in the plain jammy packages).
Until 1.8.10 packages are backported (which is not too hard, since Debian
trixie already has 1.8.10), Ubuntu 22.04 can use the packages from
jammy-updates:

  apt-mark hold openafs-client openafs-modules-dkms openafs-krb5
  apt-get install -t jammy-updates openafs-client=1.8.8.1-3ubuntu2~22.04.2 openafs-krb5=1.8.8.1-3ubuntu2~22.04.2 openafs-modules-dkms=1.8.8.1-3ubuntu2~22.04.2

Best,
Jan Henrik

From me@janh.de Fri Aug 4 00:20:23 2023
From: me@janh.de (Jan Henrik Sylvester)
Date: Fri, 4 Aug 2023 01:20:23 +0200
Subject: [OpenAFS] 1.8.10 in ppa:openafs/stable for Ubuntu 22.04 (kernel 6.2)?
In-Reply-To: <6a652668-8b0d-d843-810d-fa6dc341331d@janh.de>
References: <6a652668-8b0d-d843-810d-fa6dc341331d@janh.de>
Message-ID:

On 8/3/23 15:04, Jan Henrik Sylvester wrote:
> Please, could ppa:openafs/stable be updated to 1.8.10 as soon as
> possible, since there are now Ubuntu LTS systems without AFS.

I see that the PPA has 1.8.10 now. Anders, thanks for the quick fix!

Best wishes,
Jan Henrik

From Collin.Gros@ricoh-usa.com Mon Aug 14 23:29:57 2023
From: Collin.Gros@ricoh-usa.com (Collin Gros)
Date: Mon, 14 Aug 2023 22:29:57 +0000
Subject: [OpenAFS] Advice regarding OpenAFS performance?
Message-ID:

Dear OpenAFS community,

We are administrators for an OpenAFS environment of (what will be) about
400 users and are running into some performance issues, for which we hope
you might have some advice...

1. Do you have any sources we can look at that might help us in adjusting
configuration to improve performance? We read the man page for
`dafileserver` and experimented a lot with our arguments to `dafileserver`
(increasing them past the values set for -L, or Large)... though we
haven't noticed much of an improvement in performance through our testing.
See below for the configuration we currently have set for `dafileserver`
on all of our OpenAFS file servers.

2. Do you know what kind of read/write speed we should expect for an
environment/configuration of this size? It would be helpful for us to know
what we should be expecting in our environment as far as performance is
concerned.
===========================
Our performance test
===========================

Here are results from our testing with a binary file (7103053824 bytes in
size, or 6.7GB), copying it from one client to AFS:

  client1: openSUSE 15.1
  server: AFS file server that hosts the AFS volumes used for our testing

  `scp`: client1 (local) -> server (local): 102.2MB/s (66s)
  `cp`: client1 (local) -> client1 (AFS file space): 19.2MB/s (352s)
  `cp`: client1 (AFS file space) -> client1 (AFS file space): 19.46MB/s (348s)

Here are results from our testing with the same binary file (7103053824
bytes in size, or 6.7GB), copying it in parallel from two clients to the
same AFS volume:

  client1 (local) -> server (AFS file space): 10.22MB/s (663s)
  client2 (local) -> server (AFS file space): 9.69MB/s (699s)

  client1 (AFS file space) -> client1 (AFS file space): 5.38MB/s (1258s)
  client2 (AFS file space) -> client2 (AFS file space): 7MB/s (965s)

  client1 (AFS file space) -> client1 (local): 13.15MB/s (515s)
  client2 (AFS file space) -> client2 (local): 15.57MB/s (435s)

  client1 total time taken: 2436s
  client2 total time taken: 2099s

Here is a snapshot of what `top` looks like from the AFS file server while
the copy is taking place:

  top - 16:14:14 up 5 days,  7:29,  2 users,  load average: 1.06, 0.37, 0.26
  Tasks: 297 total,   2 running, 294 sleeping,   1 stopped,   0 zombie
  %Cpu0  : 17.3 us,  6.5 sy,  0.0 ni, 69.4 id,  1.7 wa,  1.0 hi,  4.1 si,  0.0 st
  %Cpu1  : 16.2 us,  4.1 sy,  0.0 ni, 65.5 id, 13.2 wa,  0.7 hi,  0.3 si,  0.0 st
  %Cpu2  :  5.0 us,  6.7 sy,  0.3 ni, 12.4 id, 63.2 wa,  1.0 hi, 11.4 si,  0.0 st
  %Cpu3  :  7.5 us,  5.1 sy,  9.2 ni, 44.2 id, 31.5 wa,  1.4 hi,  1.0 si,  0.0 st
  %Cpu4  : 13.3 us,  6.5 sy,  2.0 ni, 67.6 id,  9.9 wa,  0.7 hi,  0.0 si,  0.0 st
  %Cpu5  : 37.4 us, 14.6 sy,  0.0 ni, 41.1 id,  6.0 wa,  0.7 hi,  0.3 si,  0.0 st
  MiB Mem :  24080.5 total,  14283.7 free,    526.5 used,   9270.3 buff/cache
  MiB Swap:   4060.0 total,   4060.0 free,      0.0 used.  23105.9 avail Mem

      PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
    22409 root      15  -5 4282356  65240   2808 S 118.3   0.3  75:55.61 dafileserver

Here is the output of `fs getcacheparms` while both clients were copying
the file to AFS:

  client1: AFS using 781060 of the cache's available 891289 1K byte blocks.
  client2: AFS using 0 of the cache's available 891289 1K byte blocks.
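As a quick sanity check on the units, the rates above are consistent with
the reported file size and elapsed times when read as MiB/s (7103053824
bytes is exactly 6774 MiB):

  6774 MiB /  66 s ≈ 102.6 MiB/s   (the scp baseline, quoted as 102.2MB/s)
  6774 MiB / 352 s ≈  19.2 MiB/s   (the single-client cp into AFS)
  6774 MiB / 663 s ≈  10.2 MiB/s   (each of the two parallel clients gets roughly half)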
***************************
Our environment
***************************

We have our environment configuration documented below, and are hoping you
might give us some pointers as to what might be a performance bottleneck.

  Our testing environment:
    - OpenAFS Servers
      - OpenAFS 1.8.9
      - DB servers (total of 3)
        - 1 master
          - Rocky Linux 8.8
          - 2 CPU
          - 4GB RAM
        - 2 replicas, with each having:
          - Rocky Linux 8.8
          - 2 CPU
          - 4GB RAM
      - FS servers (total of 3)
        - 3 fileservers, with each having:
          - Rocky Linux 8.8
          - 6 CPU
          - 24GB RAM
          - /usr/afs/local/BosConfig:
              restrictmode 0
              restarttime 16 0 0 0 0
              checkbintime 3 0 5 0 0
              bnode dafs dafs 1
              parm /usr/afs/bin/dafileserver -L -cb 640000 -abortthreshold 0 -vc 1000
              parm /usr/afs/bin/davolserver -p 64 -log
              parm /usr/afs/bin/salvageserver
              parm /usr/afs/bin/dasalvager -parallel all32
              end
              bnode simple upclientetc 1
              parm /usr/afs/bin/upclient db1 /usr/afs/etc
              end
              bnode simple upclientbin 1
              parm /usr/afs/bin/upclient db1 /usr/afs/bin
              end
    - OpenAFS Clients
      - client1
        - openSUSE 15.1
        - OpenAFS 1.8.7
        - 6 CPUs
        - 16GB RAM
        - `fs getcacheparms`
            AFS using 12 of the cache's available 891289 1K byte blocks.
        - /etc/sysconfig/openafs-client:
            AFSD_ARGS="-fakestat -stat 6000 -dcache 6000 -daemons 6 -volumes 256 -files 50000 -chunksize 17"
      - client2
        - openSUSE 13.2
        - OpenAFS 1.8.7
        - 2 CPUs
        - 2GB RAM
        - `fs getcacheparms`
            AFS using 0 of the cache's available 891289 1K byte blocks.
        - /etc/sysconfig/afs
            OPTIONS=$XXLARGE
              (and XXLARGE="-fakestat -stat 4000 -dcache 4000 -daemons 6 -volumes 256 -afsdb")

Thanks for the help!!

Regards,

Collin

Collin Gros
Staff Software Engineer
RICOH Graphic Communications - DSBC
Ricoh USA, Inc
Phone: +1 720-663-3225
Email: collin.gros@ricoh-usa.com
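With the single dafileserver process sitting near 118% CPU during the
copy, one quick way to look at whether the fileserver itself is the choke
point is to query its Rx statistics while a test runs. A sketch only: the
host name is a placeholder, 7000 is the fileserver's UDP port, and the
exact flag names may vary between releases (see the rxdebug man page):

  rxdebug fs1.example.com 7000 -version   # confirm the fileserver answers and report its build
  rxdebug fs1.example.com 7000 -noconns   # summary only: calls waiting for a thread, idle threads
  rxdebug fs1.example.com 7000 -rxstats   # detailed Rx packet and call counters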

From andreas@MPA-Garching.MPG.DE Wed Aug 16 08:51:36 2023
From: andreas@MPA-Garching.MPG.DE (Andreas Breitfeld)
Date: Wed, 16 Aug 2023 09:51:36 +0200
Subject: [OpenAFS] Advice regarding OpenAFS performance?
In-Reply-To:
References:
Message-ID:

Hi Collin,

in case network traffic encryption is enabled for your AFS client (check
with "fs getcrypt"), a huge performance improvement can be achieved by
switching it off immediately after the client daemon starts, for example
in an init script with "/usr/afsws/bin/fs setcrypt off" (specify the full
path to the "fs" command in your environment).

Thanks,
Andreas
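A minimal sketch of that check-and-disable sequence, using the full path
quoted above (the path may differ on other installations, and be aware
that file data then travels over the network unencrypted):

  /usr/afsws/bin/fs getcrypt       # show whether wire encryption is currently enabled
  /usr/afsws/bin/fs setcrypt off   # disable it on this client
  /usr/afsws/bin/fs getcrypt       # confirm the new setting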