[OpenAFS] OpenAFS speed

Matthew N. Andrews matt@slackers.net
Wed, 25 Jun 2003 16:37:41 -0700


Here's some random food for thought.

In OpenAFS, is each of the various client worker threads its own process,
i.e. does each have its own set of page tables, etc.? If so, is this
necessary? It seems silly to blow away your TLB, and potentially your
processor cache (I can't remember whether L1/L2 caches use virtual or
physical addresses), every time one of these threads needs to run. I
believe that if you make a system call and Linux switches to a kernel
thread, it keeps using the last process's page tables, so that if that
process immediately ends up being chosen for scheduling again, the TLB is
never flushed. I don't know whether this would realistically make a big
difference or not, but if the OpenAFS client threads aren't using the
user-space section of the memory map (they might be, I don't remember),
this might provide some performance benefit.
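
For reference, the scheduler logic I have in mind looks roughly like this
(paraphrased from the 2.4-era kernel/sched.c, so names and details are
approximate rather than verbatim):

    /*
     * Lazy-TLB handling at context-switch time.  A kernel thread has no
     * user address space of its own (next->mm == NULL), so the scheduler
     * lets it borrow the previous task's page tables instead of loading
     * new ones and flushing the TLB.
     */
    struct mm_struct *oldmm = prev->active_mm;

    if (!next->mm) {                        /* kernel thread */
            next->active_mm = oldmm;        /* borrow old page tables */
            atomic_inc(&oldmm->mm_count);
            enter_lazy_tlb(oldmm, next, cpu);
    } else {
            switch_mm(oldmm, next->mm, next, cpu);  /* may flush the TLB */
    }

So if the client worker threads really are kernel threads with no user mm,
they already get this treatment for free; if each one is a full process
with its own page tables, every switch to one of them can cost a TLB flush.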

Ideas?

-Matt


Nathan Ward wrote:
> On Wed, 25 Jun 2003 08:53:47 +1200, Nathan Ward <nward@esphion.com> wrote:
>
>> On Tue, 24 Jun 2003 16:25:50 -0400 (EDT), Derrick J Brashear
>> <shadow@dementia.org> wrote:
>>
>>> On Wed, 25 Jun 2003, Nathan Ward wrote:
>>>
>>>> Several times I have mentioned this and gotten no useful response
>>>> that I can remember.
>>>>
>>>> I am running OpenAFS on Linux machines. Take a look at the context
>>>> switches on the client and the server....
>>>>
>>>> (vmstat 1, look at the "cs" column)
>>>>
>>>> NFS solves this problem by having a fully kernel server and client.
>>>
>>>
>>> Our client is fully kernel, and yet it's the client that people seem to
>>> indicate is the big problem.
>>
>>
>> One moment please...
>>
>> AFS server on serv-1, client is serv-2. Gigabit ethernet between them...
>>
>> nward@serv-2:/$ vmstat 1
>> procs                      memory    swap          io     system         cpu
>> r  b  w   swpd   free   buff  cache  si  so    bi    bo   in    cs  us  sy  id
>> 0  0  0    576 349032  37444 389620   0   0     1     2   18    11   0   0   6
>> 0  0  0    576 282320  37512 456188   0   0     0     0  268  2707   0  24  76
>> 0  0  0    576 282320  37512 456188   0   0     0     0  876 14909   0  43  57
>> 0  0  0    576 282320  37512 456188   0   0     0     0  881 14884   0  40  60
>> 0  0  0    576 282320  37512 456188   0   0     0     0  865 14917   1  44  55
>> 0  0  0    576 282320  37512 456188   0   0     0     0  846 15008   0  39  61
>> 1  0  0    576 254496  37596 483928   0   0     0   128  713 12035   0  44  56
>> 0  0  2    576 238196  37612 500208   0   0     0     0  687 13162   0  43  57
>> 0  0  2    576 238196  37612 500208   0   0     0     8  821 14754   0  43  57
>> 0  0  1    576 238196  37612 500208   0   0     0     0  854 14737   0  39  61
>> 0  0  1    576 238196  37612 500208   0   0     0     8  846 14696   0  45  55
>> 0  0  0    576 238196  37612 500208   0   0     0     0  777 13821   0  34  65
>> 0  0  0    576 238264  37612 500208   0   0     0     0  238  2982   0   6  93
>> 0  0  0    576 238264  37612 500208   0   0     0     0  103   116   0   0 100
>> 0  0  0    576 238264  37612 500208   0   0     0     0  103   114   0   0 100
>>
>> serv-1:~# vmstat 1
>> procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
>> r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
>> 0  0   4452 234396  84868 308172    0    0     1    14   10     6  0  1 99  0
>> 0  0   4452 234388  84868 308172    0    0     0     0  120    45  0  0 100 0
>> 0  0   4452 234388  84868 308172    0    0     0     0  125    56  0  0 100 0
>> 0  0   4452 299904  84884 242636    0    0     0    40  114    54  0  3 97  0
>> 1  0   4452 293592  84892 248844    0    0     0     0 2193  1944 40 12 49  0
>> 1  0   4452 287040  84900 255388    0    0     0     0 2287  1994 45 12 44  0
>> 1  0   4452 280476  84904 261948    0    0     0     0 2266  1838 47 12 42  0
>> 1  0   4452 273932  84916 268476    0    0     0     0 2207  1891 41 12 47  0
>> 1  0   4452 268280  84920 274124    0    0     0   260 2050  1724 37 14 50  0
>> 1  0   4452 263988  84924 278412    0    0     0     0 1494  1231 28  7 64  0
>> 1  0   4452 257436  84932 284956    0    0     0     0 2233  2050 47  9 44  0
>> 2  0   4452 250884  84940 291500    0    0     0     0 2260  1823 48 10 42  0
>> 1  0   4452 244320  84944 298060    0    0     0     0 2251  1922 43 14 43  0
>> 1  0   4452 238168  84952 304204    0    0     0   168 2132  1857 41 10 49  0
>> 0  0   4452 234196  84956 308172    0    0     0     0 1361  1150 25  8 66  0
>> 0  0   4452 234196  84956 308172    0    0     0     0  127    60  0  0 100 0
>>
>> nward@serv-2:/$ dd if=/dev/zero of=/afs/alb-nz/public/blah bs=256k count=256
>> 256+0 records in
>> 256+0 records out
>>
>
>
> With native memcache on the client machine (-stat 2000 -memcache
> -chunksize 14 -daemons 3 -volumes 50), it's a LOT faster, but there are
> no fewer context switches. On a busy machine that could be a problem.
>
> nward@serv-2:/$ vmstat 1
> procs                      memory    swap          io     system         cpu
> r  b  w   swpd   free   buff  cache  si  so    bi    bo   in    cs  us  sy  id
> 1  0  0    400 189604  39152 496496   0   0     1     2   18    11   0   0   6
> 0  0  0    400 189604  39152 496496   0   0     0   128  137   138   0   0 100
> 0  0  1    400 221744  39152 464292   0   0     0     0  873 21399   0  37  63
> 2  0  0    400 205148  39152 480888   0   0     0     0 2378 56966   0  59  40
> 0  0  2    400 189536  39152 496496   0   0     0     0 2435 56344   0  59  41
> 0  0  0    400 189604  39152 496496   0   0     0     0  747 14851   0  10  90
> 0  0  0    400 189604  39152 496496   0   0     0   125  127   114   0   0 100
>
>
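
For anyone wanting to reproduce Nathan's memcache numbers: his client
options would correspond to an afsd invocation along these lines. The
binary path and the -blocks cache size are my guesses; the other flags
are exactly the ones he quoted. Check your init script for the real
values on your system:

    # path and -blocks value are assumptions; adjust for your setup
    /usr/vice/etc/afsd -memcache -blocks 65536 -stat 2000 \
        -chunksize 14 -daemons 3 -volumes 50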
