[OpenAFS-devel] current cvs kernel module crash on 2.6.13-rc6
Martin MOKREJŠ
mmokrejs@ribosome.natur.cuni.cz
Wed, 17 Aug 2005 15:20:47 +0200
> Yes, this cache was meant to be used for memcache, so now I have:
>
> # cat /usr/vice/etc/cacheinfo
> /afs:/usr/vice/cache:36533485
> #
>
>
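(For reference, the third cacheinfo field is the cache size in 1K blocks. To get
the "85% of the cache partition" figure I mention below, something like the line
here should be close enough, assuming df -k prints the cache partition on a
single line:)

# df -k /usr/vice/cache | awk 'NR == 2 { printf "%d\n", $2 * 0.85 }'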
>>I tried to copy a 17 GB file from a local xfs partition to the ext2-based /vicepa.
>>The cache is also ext2, btw. The machine has serious interactivity problems;
>>mouse movement stalls about once a second, probably as a result of the
>>high context switching? How can I improve the performance?
>
>
> When using the maximum of 85% of the cache partition size as the cachesize, I got
> nice numbers out:
>
> # vmstat 1
> procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
> r b swpd free buff cache si so bi bo in cs us sy id wa
> 0 0 0 2001004 88144 121404 0 0 1344 281 434 400 2 64 27 7
> 0 0 0 2001004 88144 121404 0 0 0 0 312 91 0 0 100 0
> 0 0 0 2001004 88144 121404 0 0 0 0 582 265 0 0 100 0
> 0 0 0 2001004 88144 121404 0 0 0 0 498 237 0 0 100 0
> 0 1 0 2000128 88444 121464 0 0 360 0 390 272 1 1 57 41
> 0 1 0 1998516 89152 121464 0 0 708 5 491 442 0 2 0 98
> 0 1 0 2000212 89852 121464 0 0 700 8 490 442 0 5 0 95
> 0 1 0 1998600 90560 121464 0 0 708 0 488 439 0 2 0 98
> 0 1 0 1999572 91128 121464 0 0 696 0 486 433 0 3 0 97
> 1 0 0 1906688 91300 215324 0 0 31504 0 526 521 1 61 0 38
> 2 0 0 1792884 91452 328608 0 0 37764 0 554 563 0 71 0 29
> 1 0 0 1683780 91592 435792 0 0 35712 0 526 502 2 66 0 32
> 1 1 0 1579488 91600 540952 0 0 35072 0 525 509 1 67 0 32
> 1 0 0 1472600 91740 647952 0 0 35584 0 536 525 0 69 0 31
> 1 0 0 1366096 91884 755312 0 0 35844 0 537 528 1 67 0 32
> 1 0 0 1252480 92032 868760 0 0 37760 0 545 542 0 71 0 29
> 0 1 0 1146880 92040 974872 0 0 35456 0 528 510 2 68 0 30
> 1 0 0 1037472 92188 1085296 0 0 36740 8240 560 538 0 71 0 29
> 1 0 0 928724 92328 1191584 0 0 35456 37008 613 492 2 67 0 31
> 0 1 0 828840 92408 1292056 0 0 33536 32896 606 482 0 67 0 33
> 0 1 0 722392 92420 1398808 0 0 35584 37016 621 512 0 69 0 31
> 1 0 0 635088 92540 1486840 0 0 29316 28784 569 432 1 59 0 40
> 1 0 0 528324 92676 1591248 0 0 34816 37008 613 499 0 68 0 32
> 2 0 0 427404 92684 1694704 0 0 34432 32896 613 489 1 68 0 31
> 1 0 0 321384 92816 1798560 0 0 34688 37008 619 505 0 68 0 32
> 0 1 0 242196 92928 1880348 0 0 27272 25112 643 434 1 58 0 41
> 1 0 0 132332 93068 1987708 0 0 35712 36864 596 509 0 70 0 30
> 1 0 0 24260 93208 2095540 0 0 35968 36864 597 523 1 69 0 30
> 1 0 0 57772 86368 2109936 0 0 23424 20492 527 892 0 82 0 18
> 1 0 0 79368 79172 2127128 0 0 24580 24652 533 913 1 74 0 25
> 1 0 0 79600 69952 2162700 0 0 29824 32900 584 1003 1 81 0 18
> 2 0 0 79836 59452 2199092 0 0 29184 28784 566 920 1 76 0 23
>
> and they stayed stable during part of the copy process (although it had not finished yet).
> The system was fully responsive. Later on, I see
>
> 0 1 216 84616 9472 2643144 0 0 13496 16592 479 8239 5 57 0 38
> 0 1 216 84492 9496 2642948 0 0 12496 12944 489 7958 6 58 0 36
> 0 1 216 84492 9520 2643036 0 0 14108 12444 499 8418 10 60 0 30
> 0 1 216 84616 9548 2642692 0 0 14396 16196 505 9315 8 62 0 30
> 0 1 216 84496 9552 2642968 0 0 12624 12560 478 7901 4 58 0 38
> 1 0 216 84248 9608 2643144 0 0 15008 12696 506 8984 4 65 0 31
> 2 0 216 84496 9652 2642776 0 0 13492 16636 481 8058 7 59 0 34
> 1 1 216 84372 9644 2642356 0 0 11596 15844 484 6968 9 55 0 36
> 1 0 216 84248 9660 2642688 0 0 14396 8536 492 9112 9 61 0 30
> 2 1 216 84356 9676 2641940 0 0 15448 16628 569 9253 5 67 0 28
> 1 0 216 84232 9724 2641828 0 0 13364 12508 509 8175 8 58 0 34
> 1 0 216 84852 9748 2641080 0 0 11308 11520 483 7119 9 50 0 41
> 1 1 216 84728 9768 2640456 0 0 3856 50332 445 2372 2 18 0 80
> 1 0 216 84356 9764 2641484 0 0 2828 36 413 2026 4 13 0 83
> 1 1 216 84604 9784 2641460 0 0 9516 19996 464 6275 6 43 0 51
> 1 0 216 84356 9824 2641436 0 0 18792 116 519 11410 10 78 0 13
> 2 0 216 84480 9860 2641216 0 0 20148 40 520 11664 10 83 0 7
> 1 2 216 84232 9880 2640652 0 0 4240 79508 446 2783 5 24 0 71
> 0 2 216 84480 9884 2640472 0 0 1316 27712 407 1138 4 6 0 90
> 0 1 216 84356 9892 2640416 0 0 996 27236 412 794 2 6 0 92
> 0 1 216 84604 9896 2640784 0 0 1060 0 392 899 1 9 0 90
> 2 1 216 84480 9900 2640756 0 0 10312 604 461 6284 4 45 0 51
> 0 2 216 84232 9896 2640460 0 0 2380 79340 428 1576 4 15 0 81
> 0 1 216 84356 9900 2640316 0 0 1412 23220 397 1117 3 8 0 89
> 0 1 216 84356 9900 2639980 0 0 288 59500 414 525 3 4 0 93
> 2 1 216 84728 9900 2639980 0 0 0 6112 437 206 5 2 0 93
> 1 0 216 84604 9932 2640500 0 0 8328 0 439 5576 5 38 0 57
> 0 1 216 84604 9964 2640472 0 0 6040 15200 449 4050 4 29 0 67
> procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
> r b swpd free buff cache si so bi bo in cs us sy id wa
> 2 0 216 84480 10008 2640416 0 0 16064 0 489 10658 5 71 0 24
> 0 1 216 84604 10080 2639804 0 0 20180 0 521 13161 6 89 0 5
> 0 1 216 83860 10132 2639856 0 0 15196 65608 497 10012 6 65 0 29
> 1 0 216 84232 10132 2639856 0 0 0 4 431 105 0 4 0 96
> 1 1 216 84608 10200 2639768 0 0 18740 796 505 11053 4 80 0 16
> 0 1 216 84732 10268 2639308 0 0 20368 0 524 12574 7 87 0 6
> 0 1 216 84236 10352 2639620 0 0 19728 0 522 11593 9 85 0 6
> 0 1 216 83864 10380 2639108 0 0 6968 65608 471 4154 4 36 0 60
> 1 0 216 84608 10416 2639060 0 0 6012 4 454 4294 7 29 0 64
Context switches got much worse later on, while the copy was still running, but
responsiveness was fine:
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
1 1 216 84640 9880 2606164 0 0 4116 12 431 2717 1 19 0 80
0 1 216 84392 9888 2606344 0 0 20592 44 535 12848 3 91 0 6
0 1 216 84640 9900 2606132 0 0 19532 4 513 11761 5 85 0 10
1 0 216 84764 9928 2606104 0 0 20884 4 530 12169 4 92 0 4
0 1 216 83896 9928 2606040 0 0 672 65584 417 662 0 7 0 93
1 0 216 84392 9936 2606356 0 0 12336 4 479 8122 4 52 0 44
1 0 216 84392 9944 2606484 0 0 21172 40 532 13724 5 92 0 3
1 0 216 84640 9956 2606076 0 0 20304 0 513 13211 4 88 0 8
0 1 216 83896 9940 2606364 0 0 11980 65576 467 7915 2 55 0 43
0 1 216 84144 9940 2606364 0 0 0 0 402 80 0 2 0 98
1 0 216 84640 9948 2606484 0 0 17192 4 505 10224 4 73 0 23
0 1 216 84888 9952 2606068 0 0 20816 48 527 12490 5 89 0 6
1 0 216 84516 9956 2606384 0 0 19532 0 512 11278 4 87 0 9
0 1 216 84144 9968 2605936 0 0 8256 65572 446 4807 1 39 0 60
1 0 216 84392 9972 2606580 0 0 3952 4 438 2675 0 17 0 83
1 0 216 84392 9996 2606228 0 0 21332 0 523 13897 5 86 0 9
1 1 216 84392 10020 2606140 0 0 21040 104 535 13736 4 89 0 7
0 1 216 84640 10048 2606068 0 0 18940 0 500 12196 3 79 0 18
0 1 216 83648 10048 2606148 0 0 544 65564 391 606 0 6 0 94
0 1 216 84392 10052 2606232 0 0 4164 4 419 2900 2 17 0 81
1 0 216 84764 10036 2605888 0 0 21504 0 522 13859 2 87 0 11
1 1 216 84392 10048 2606108 0 0 20644 4 520 13516 4 86 0 10
0 1 216 83648 10036 2606004 0 0 19480 65608 515 12698 4 81 0 15
And when the process finished successfully, I decided to repeat the command.
The file was in the cache, so I don't understand why the very beginning of the cp(1) run is so slow:
# vmstat 1
procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
0 1 216 268456 14336 2389336 0 0 12889 12492 525 4368 4 60 11 25
0 1 216 267092 14952 2389336 0 0 616 0 467 402 0 3 0 97
0 1 216 265992 15560 2389336 0 0 608 0 472 387 0 2 0 98
0 1 216 264752 16164 2389336 0 0 604 0 467 382 0 3 0 97
0 1 216 263520 16764 2389336 0 0 600 0 465 402 0 4 0 96
0 1 216 262280 17396 2389336 0 0 632 0 470 404 0 1 0 99
0 1 216 261040 18016 2389336 0 0 620 0 467 397 0 3 0 97
0 1 216 259676 18656 2389336 0 0 640 0 472 400 0 2 0 98
1 1 216 258436 19280 2389336 0 0 624 0 467 390 0 3 0 97
0 1 216 257072 19892 2389336 0 0 612 0 469 399 0 1 0 99
0 1 216 255956 20460 2389336 0 0 568 0 456 369 0 3 0 97
0 1 216 254592 21096 2389336 0 0 636 0 471 403 0 5 0 95
0 1 216 253352 21736 2389336 0 0 640 17 477 409 0 3 0 97
0 1 216 251988 22352 2389336 0 0 616 0 467 393 0 2 0 98
0 1 216 250748 22964 2389336 0 0 612 8 465 400 0 2 0 98
0 1 216 249508 23584 2389336 0 0 620 0 466 396 0 0 0 100
0 1 216 248268 24204 2389336 0 0 620 0 466 392 0 3 0 97
0 1 216 247028 24800 2389336 0 0 596 0 462 384 0 2 0 98
0 1 216 245664 25436 2389336 0 0 636 0 470 400 0 1 0 99
0 1 216 244424 26052 2389336 0 0 616 0 466 401 0 3 0 97
0 1 216 243184 26664 2389336 0 0 612 0 464 390 0 1 0 99
0 1 216 241200 27296 2389336 0 0 632 0 471 403 0 2 0 98
0 1 216 239216 27932 2389336 0 0 636 0 470 401 0 4 0 96
1 1 216 237356 28548 2389336 0 0 616 0 466 388 0 3 0 97
0 2 216 235620 29148 2389336 0 0 600 0 460 541 0 4 0 96
0 1 216 233760 29700 2389336 0 0 552 184 496 501 0 2 0 98
0 1 216 231776 30320 2389336 0 0 620 0 466 396 0 2 0 98
0 1 216 230040 30916 2389336 0 0 596 0 461 380 0 2 0 98
0 1 216 228056 31532 2389336 0 0 616 0 465 387 0 3 0 97
0 2 216 226444 32108 2389336 0 0 576 0 456 529 0 1 0 99
0 1 216 224584 32736 2389336 0 0 628 164 482 522 0 3 0 97
The above probably shows that something was fetched from disk and compared against the cache?
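(A simple way to check whether a re-read is really served from the cache would be
to time the same read twice; the second run should then hit /usr/vice/cache and be
noticeably faster. The path here is only a placeholder for the file in question:)

# time dd if=/afs/mycell/user/bigfile of=/dev/null bs=1M count=1024
# time dd if=/afs/mycell/user/bigfile of=/dev/null bs=1M count=1024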
...
0 1 216 84808 9760 2598000 0 0 13060 12585 527 4013 4 58 10 29
0 1 216 84932 9764 2597808 0 0 29576 28680 609 967 1 59 0 40
1 0 216 84932 9748 2597612 0 0 35200 36872 641 1109 1 69 0 30
1 0 216 84684 9752 2598232 0 0 34560 32776 579 1026 1 66 0 33
1 0 216 84840 9760 2597800 0 0 32136 33532 634 995 1 62 0 37
1 0 216 84840 9780 2597864 0 0 31364 28672 605 941 1 58 0 41
0 1 216 84972 9768 2597996 0 0 35460 36864 578 1063 1 67 0 32
1 0 216 84848 9784 2597940 0 0 34308 32772 572 1024 1 65 0 34
1 0 216 85252 9772 2598068 0 0 35972 40960 594 1125 0 72 0 28
0 1 216 85128 9776 2598396 0 0 30856 28672 560 954 1 59 0 40
1 0 216 84888 9764 2598600 0 0 35972 36864 604 1103 1 69 0 30
0 1 216 84888 9784 2598992 0 0 36356 36864 604 1119 0 72 0 28
0 1 216 84904 9788 2599056 0 0 34184 32772 694 1098 1 64 0 35
0 1 216 84904 9768 2599128 0 0 34180 36892 643 1088 2 63 0 35
1 0 216 85028 9720 2599168 0 0 35588 32896 608 1119 0 66 0 34
1 0 216 84780 9696 2599468 0 0 35080 37008 614 1070 1 71 0 28
0 2 216 84904 9660 2599628 0 0 33288 32896 595 1027 0 66 0 34
1 0 216 84904 9648 2599576 0 0 34948 32896 620 1084 1 71 0 28
1 0 216 85028 9656 2599416 0 0 35208 37008 618 1101 1 71 0 28
0 1 216 84904 9676 2599496 0 0 34564 32896 700 1132 0 70 0 30
And now we really write the data over the old file again. Still, the 'cs' values
are higher than in the first pass, when they were mostly in the 600-800 range, with
maximum values around 1200.
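(If anyone wants to verify that comparison, a quick way to average the 'cs' column
from a captured vmstat log -- with the layout above, 'cs' is the 12th field; the
filename is just a placeholder:)

# awk '$12 ~ /^[0-9]+$/ { sum += $12; n++ } END { if (n) print sum / n }' vmstat.log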
0 1 216 84932 9712 2598464 0 0 13656 12288 510 655 0 33 0 67
0 1 216 84684 9712 2598544 0 0 12348 12336 458 544 0 29 0 71
0 1 216 85180 9720 2597996 0 0 15168 16384 469 584 2 35 0 63
1 0 216 84808 9720 2598252 0 0 12976 12300 441 527 0 28 0 72
0 1 216 85056 9712 2597928 0 0 15808 16384 467 593 1 36 0 63
0 1 216 84808 9708 2598196 0 0 14776 12288 463 566 1 33 0 66
0 1 216 84932 9696 2597984 0 0 14008 16384 457 535 0 34 0 66
1 0 216 84824 9708 2598372 0 0 14524 12288 451 559 0 32 0 68
0 1 216 84700 9716 2598732 0 0 14264 16384 460 557 0 35 0 65
0 1 216 85072 9704 2598324 0 0 15288 12288 453 569 0 38 0 62
1 0 216 85320 9712 2598060 0 0 14264 16384 464 559 0 33 0 67
0 1 216 85072 9700 2598228 0 0 14656 12288 450 543 1 30 0 69
After a while 'cs' settles to much lower values, probably because of the drop
in throughput.
And lastly, it switches again to the 'batch'-style processing:
2 0 216 84852 11500 2596372 0 0 13880 12328 479 9151 4 57 0 39
0 1 216 85100 11492 2596432 0 0 12600 12456 497 8250 3 55 0 42
1 0 216 84976 11492 2596440 0 0 12724 12316 471 8432 3 55 0 42
1 0 216 84976 11488 2596488 0 0 13752 12308 489 9132 5 57 0 38
2 0 216 84852 11488 2596632 0 0 12472 11984 684 8592 3 55 0 42
0 1 216 84944 11488 2596504 0 0 12724 16600 660 8270 6 59 0 35
0 1 216 84820 11484 2596508 0 0 13880 12444 615 8778 5 59 0 36
0 1 216 84820 11468 2596288 0 0 15036 16548 532 8933 4 64 0 32
0 1 216 84944 11488 2595952 0 0 12212 12380 477 7173 1 53 0 46
0 1 216 84944 11472 2595736 0 0 13560 11976 498 8690 4 59 0 37
1 1 216 85068 11476 2595496 0 0 14588 12444 538 8552 2 64 0 34
1 0 216 84960 11472 2595452 0 0 12720 13424 465 7601 2 53 0 45
1 1 216 84836 11468 2595636 0 0 16836 16592 511 9920 5 71 0 25
2 0 216 84836 11472 2595656 0 0 14264 12444 593 8815 5 59 0 36
3 0 216 84960 11468 2595296 0 0 14524 15872 500 9020 6 60 0 34
2 0 216 84960 11452 2595052 0 0 13364 12448 529 7928 2 57 0 41
1 2 216 84836 11464 2594292 0 0 5012 74692 425 3015 2 22 0 76
0 2 216 84836 11448 2594200 0 0 772 11676 418 629 0 5 0 95
0 1 216 85084 11452 2594364 0 0 1444 56 467 1202 0 8 0 92
0 1 216 85084 11464 2594592 0 0 10284 0 459 6481 4 41 0 55
1 0 216 84960 11452 2594552 0 0 17320 0 487 10275 3 75 0 22
0 2 216 84092 11444 2594524 0 0 4272 85936 444 2700 1 21 0 78
0 1 216 85084 11448 2593396 0 0 1512 31144 408 1024 0 8 0 92
0 1 216 84836 11452 2594072 0 0 1576 0 395 1285 0 10 0 90
0 1 216 85208 11440 2593984 0 0 8580 0 428 5533 2 37 0 61
1 0 216 84836 11436 2594424 0 0 20172 0 511 13203 6 87 0 7
0 2 216 84960 11432 2593596 0 0 5916 71889 425 3912 2 26 0 72
0 1 216 85084 11436 2593852 0 0 900 4096 390 715 1 7 0 92
1 0 216 85084 11432 2594556 0 0 5912 0 413 3919 1 27 0 72
1 0 216 84836 11412 2594744 0 0 20816 0 518 13557 3 88 0 9
0 1 216 84340 11412 2594680 0 0 416 65608 436 498 0 6 0 94
1 0 216 84588 11420 2594220 0 0 616 51156 407 466 0 6 0 94
2 0 216 84832 10700 2589380 0 0 20948 0 525 12435 5 87 0 8
0 1 216 84088 10688 2589048 0 0 18664 65608 511 10805 5 82 0 13
0 1 216 84336 10688 2589056 0 0 8 12 424 94 0 4 0 96
1 0 216 84956 10696 2589448 0 0 17188 4 504 11292 3 74 0 23
2 0 216 84956 10680 2589500 0 0 21204 0 526 13835 5 91 0 4
1 0 216 85204 10680 2589184 0 0 21460 0 523 13961 5 92 0 3
0 1 216 84088 10664 2589396 0 0 5940 65608 461 4063 0 30 0 70
1 0 216 84956 10668 2589356 0 0 9864 4 473 6517 1 45 0 54
1 0 216 85080 10668 2589068 0 0 20980 0 522 13669 4 85 0 11
2 0 216 84956 10668 2589392 0 0 21556 0 709 12748 5 90 0 5
0 1 216 84212 10640 2589048 0 0 13396 65608 492 8158 2 59 0 39
Well, the machine lives its own life. I'll stop flooding your mailboxes.
Maybe nothing is wrong with this behaviour. I was just curious whether the process
would pick the file up from the cache and perform faster, but it doesn't seem so
when studying the numbers above. Yes, I know, I copied the file from a local
partition, and that should differ from copying data primarily housed on
remote machines, sure. But here it seems there is no reason to have a cache
if I just work on data housed on a local partition.
Martin
>
> but interactive responsiveness is still OK. At this very moment I see:
>
> # fs getcacheparms
> AFS using 16691389 of the cache's available 36533485 1K byte blocks.
> aquarius ~ # df -h
> Filesystem Size Used Avail Use% Mounted on
> /dev/sda2 261G 210G 51G 81% /
> udev 1.5G 216K 1.5G 1% /dev
> /dev/sda3 12G 2.8G 8.7G 25% /usr/portage
> /dev/sdb1 37G 17G 19G 47% /usr/vice/cache
> /dev/sdb2 37G 2.9G 32G 9% /vicepa
> none 1.5G 0 1.5G 0% /dev/shm
> AFS 8.6G 0 8.6G 0% /afs
> #
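(Just to put a number on it, the cache was only about 46% full at that point:)

# echo "scale=1; 100 * 16691389 / 36533485" | bc
45.6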
>
> I don't see a reason for the change of behaviour. There is enough space in the cache.
> Does afsd check at all that the file to be copied can fit into the cache? I know,
> when reading from a pipe one has no clue about the size of the dataset, but when
> reading a file ... we could take advantage of that and skip using the cache
> for the process.
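(Purely as a sketch of what I mean, not anything afsd actually does as far as I
know: a user-space check comparing the file size against the free cache blocks
reported by fs getcacheparms; /path/to/bigfile is a placeholder:)

set -- $(fs getcacheparms)   # "AFS using USED of the cache's available TOTAL 1K byte blocks."
used_kb=$3; total_kb=$8
free_kb=$(( total_kb - used_kb ))
file_kb=$(( $(stat -c %s /path/to/bigfile) / 1024 ))
if [ "$file_kb" -gt "$free_kb" ]; then
    echo "file will not fit into the AFS cache -- caching it is pointless"
fi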
>
>
>>Anyway, so here's what was going on when copying the 17GB file.
>>
>># vmstat 1
>>procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
>> r b swpd free buff cache si so bi bo in cs us sy id wa
>> 0 1 0 2135700 8276 680648 0 0 499 25 373 236 2 2 93 3
>> 1 0 0 2082504 8352 732800 0 0 28288 21 484 462 1 54 0 45
>> 1 0 0 2041948 8384 772112 0 0 4736 0 502 22645 4 92 0 4
>> 1 0 0 1939648 8416 872464 0 0 33280 0 923 23195 1 87 0 12
>> 1 0 0 1837100 8448 972816 0 0 33280 0 905 23292 4 86 0 10
>> 1 0 0 1734552 8480 1073168 0 0 33280 0 914 23345 4 85 0 11
>> 0 1 0 1632872 8516 1172520 0 0 33284 24856 972 22643 3 87 0 11
>> 1 0 0 1529208 8552 1273872 0 0 33280 37340 1020 24338 3 86 0 10
>> 1 0 0 1426784 8584 1374224 0 0 33280 33192 990 22727 3 87 0 11
>> 1 0 0 1324236 8616 1474576 0 0 33280 37348 1010 23562 2 88 0 10
>> 0 1 0 1226896 8648 1569700 0 0 33284 24892 908 19647 3 85 0 11
>> 1 0 0 1164524 8652 1631048 0 0 29568 12452 538 4215 1 82 0 17
>> 1 0 0 1118644 8684 1675280 0 0 3712 34200 740 23079 5 94 0 1
>> 1 0 0 1016220 8716 1775632 0 0 33280 53884 1039 23539 3 86 0 11
>> 1 1 0 913176 8748 1875984 0 0 33280 52188 1021 23543 3 86 0 11
>> 1 0 0 864444 8748 1924368 0 0 24320 1000 665 419 0 50 0 50
>> 0 1 0 820920 8776 1966884 0 0 8964 12436 462 16368 2 93 0 5
>> 2 1 0 770800 8788 2015048 0 0 21120 20728 488 7154 5 81 0 14
>> 1 0 0 708180 8820 2076688 0 0 12160 33160 682 23184 4 92 0 4
>> 1 0 0 605384 8852 2177040 0 0 33280 33812 1166 23540 3 87 0 11
>>...
>>procs -----------memory---------- ---swap-- -----io---- --system-- ----cpu----
>> r b swpd free buff cache si so bi bo in cs us sy id wa
>> 1 0 0 85160 7340 2669712 0 0 16672 785 759 1160 21 50 0 29
>> 3 0 124 85284 7368 2668900 0 0 16644 33160 1093 23053 3 93 0 4
>> 0 1 192 85408 7336 2669040 0 0 26496 8292 518 765 20 57 0 23
>> 3 1 196 85316 7340 2669176 0 0 6784 29584 1219 22213 3 95 0 2
>> 1 0 196 85564 7312 2669468 0 0 32256 200 816 975 3 64 0 33
>> 6 0 196 85564 7344 2669364 0 0 1024 33160 773 22118 4 95 0 0
>> 1 0 196 85284 7308 2670060 0 0 25304 4 473 834 21 53 0 26
>> 4 0 196 85160 7328 2669984 0 0 8064 33160 1369 22442 3 94 0 3
>> 2 0 196 85440 7296 2669644 0 0 32768 0 611 1113 6 69 0 25
>> 3 1 196 85904 7304 2669276 0 0 772 33172 1365 22705 9 90 0 0
>> 1 0 196 92724 7300 2662068 0 0 33024 0 675 1054 3 64 0 33
>> 4 0 196 85284 7328 2669592 0 0 24 24872 1073 22393 4 96 0 0
>> 5 0 196 85160 7300 2669924 0 0 31104 8292 612 978 5 63 0 32
>> 4 0 196 85532 7328 2669528 0 0 2416 29025 1325 22574 5 92 0 3
>> 1 1 196 85532 7304 2669328 0 0 13460 4148 693 732 38 34 0 28
>> 2 1 196 83828 7324 2669924 0 0 20236 59703 1112 21674 3 90 0 7
>> 2 1 196 85688 7304 2669516 0 0 18432 8 541 2049 2 52 0 46
>> 2 0 196 85472 7332 2669592 0 0 14848 29048 926 22828 2 94 0 4
>> 2 1 196 86960 7324 2668268 0 0 33280 12444 539 772 1 66 0 33
>> 1 0 196 85100 7352 2669812 0 0 8 25412 721 22495 3 97 0 0