[OmniOS-discuss] write amplification zvol
anthony omnios
icoomnios at gmail.com
Wed Sep 27 15:12:25 UTC 2017
Thanks, this is the result of the test:
./iscsisvrtop 1 30 >> /tmp/iscsisvrtop.txt
more /tmp/iscsisvrtop.txt :
Tracing... Please wait.
2017 Sep 27 17:01:48 load: 0.22 read_KB: 345 write_KB: 56
client         ops  reads writes   nops  rd_bw  wr_bw ard_sz awr_sz   rd_t   wr_t  nop_t align%
1.1.193.247      4      0      0      0      0      0      0      0      0      0      0      0
1.1.193.250    105     91      1      0    345     56      3     56      4    756      0    100
all            109     91      1      0    345     56      3     56      4    756      0      0
2017 Sep 27 17:01:49 load: 0.22 read_KB: 163 write_KB: 41
client         ops  reads writes   nops  rd_bw  wr_bw ard_sz awr_sz   rd_t   wr_t  nop_t align%
1.1.193.247     32     26      2      0    117     41      4     20      6    417      0    100
1.1.193.250     42     34      0      0     46      0      1      0      7      0      0      0
all             74     60      2      0    163     41      2     20      7    417      0      0
2017 Sep 27 17:01:50 load: 0.22 read_KB: 499 write_KB: 232
client         ops  reads writes   nops  rd_bw  wr_bw ard_sz awr_sz   rd_t   wr_t  nop_t align%
1.1.193.247     45     40      3      0    210    196      5     65      5    763      0    100
1.1.193.250     77     65      2      0    288     36      4     18      4    439      0    100
all            122    105      5      0    499    232      4     46      4    634      0      0
2017 Sep 27 17:01:51 load: 0.22 read_KB: 314 write_KB: 84
client         ops  reads writes   nops  rd_bw  wr_bw ard_sz awr_sz   rd_t   wr_t  nop_t align%
1.1.193.247      3      1      0      0      0      0      0      0      2      0      0      0
1.1.193.250    100     88      4      0    313     84      3     21      4    396      0    100
all            103     89      4      0    314     84      3     21      4    396      0      0
2017 Sep 27 17:01:52 load: 0.22 read_KB: 184 write_KB: 104
client         ops  reads writes   nops  rd_bw  wr_bw ard_sz awr_sz   rd_t   wr_t  nop_t align%
1.1.193.247     23     17      1      0     59     88      3     88      5    871      0    100
1.1.193.250     50     44      1      0    125     16      2     16      8    445      0    100
all             73     61      2      0    184    104      3     52      7    658      0      0
2017 Sep 27 17:01:53 load: 0.22 read_KB: 250 write_KB: 1920
client         ops  reads writes   nops  rd_bw  wr_bw ard_sz awr_sz   rd_t   wr_t  nop_t align%
1.1.193.247      7      6      0      0     12      0      2      0      6      0      0      0
1.1.193.250     71     44     16      0    263   1920      5    120      6   2531      0    100
all             78     50     16      0    276   1920      5    120      6   2531      0      0
2017 Sep 27 17:01:54 load: 0.22 read_KB: 93 write_KB: 0
client         ops  reads writes   nops  rd_bw  wr_bw ard_sz awr_sz   rd_t   wr_t  nop_t align%
1.1.193.247      7      0      0      0      0      0      0      0      0      0      0      0
1.1.193.250     38     28      0      0     70      0      2      0      6      0      0      0
all             45     28      0      0     70      0      2      0      6      0      0      0
2017 Sep 27 17:01:55 load: 0.22 read_KB: 467 write_KB: 156
client         ops  reads writes   nops  rd_bw  wr_bw ard_sz awr_sz   rd_t   wr_t  nop_t align%
1.1.193.247     23     21      0      0     23      0      1      0      6      0      0      0
1.1.193.250    115    106      4      0    441    156      4     39      5    538      0    100
all            138    127      4      0    464    156      3     39      5    538      0      0
2017 Sep 27 17:01:56 load: 0.22 read_KB: 485 write_KB: 152
client         ops  reads writes   nops  rd_bw  wr_bw ard_sz awr_sz   rd_t   wr_t  nop_t align%
1.1.193.247     16     13      0      0     22      0      1      0      2      0      0      0
1.1.193.250    133    119      4      0    462    152      3     38      4    427      0    100
all            149    132      4      0    485    152      3     38      3    427      0      0
2017 Sep 27 17:01:57 load: 0.22 read_KB: 804 write_KB: 248
client         ops  reads writes   nops  rd_bw  wr_bw ard_sz awr_sz   rd_t   wr_t  nop_t align%
1.1.193.247     36     33      1      0    137    104      4    104      6   1064      0    100
1.1.193.250    133    131      2      0    667    144      5     72      5    885      0    100
all            169    164      3      0    804    248      4     82      5    945      0      0
2017 Sep 27 17:01:58 load: 0.22 read_KB: 631 write_KB: 36
client         ops  reads writes   nops  rd_bw  wr_bw ard_sz awr_sz   rd_t   wr_t  nop_t align%
1.1.193.247     91     87      0      0    373      0      4      0      2      0      0      0
1.1.193.250     93     75      2      0    257     36      3     18      4    252      0    100
all            184    162      2      0    631     36      3     18      3    252      0      0
2017 Sep 27 17:01:59 load: 0.21 read_KB: 1472 write_KB: 764
client         ops  reads writes   nops  rd_bw  wr_bw ard_sz awr_sz   rd_t   wr_t  nop_t align%
1.1.193.250     76     68      6      0    281    636      4    106      4    803      0    100
1.1.193.247    265    262      2      0   1191    128      4     64      3    482      0    100
all            341    330      8      0   1472    764      4     95      3    723      0      0
2017 Sep 27 17:02:00 load: 0.21 read_KB: 3559 write_KB: 376
client         ops  reads writes   nops  rd_bw  wr_bw ard_sz awr_sz   rd_t   wr_t  nop_t align%
1.1.193.247     83     82      0      0    270      0      3      0      1      0      0      0
1.1.193.250    541    524      8      0   3289    376      6     47      6    359      0    100
all            624    606      8      0   3559    376      5     47      5    359      0      0
2017 Sep 27 17:02:01 load: 0.21 read_KB: 2079 write_KB: 232
client         ops  reads writes   nops  rd_bw  wr_bw ard_sz awr_sz   rd_t   wr_t  nop_t align%
1.1.193.247    120    118      0      0    612      0      5      0      2      0      0      0
1.1.193.250    418    416      2      0   1476    232      3    116      4    765      0    100
all            538    534      2      0   2088    232      3    116      4    765      0      0
2017 Sep 27 17:02:02 load: 0.21 read_KB: 2123 write_KB: 168
client         ops  reads writes   nops  rd_bw  wr_bw ard_sz awr_sz   rd_t   wr_t  nop_t align%
1.1.193.247     84     80      3      0    317    168      3     56      4    292      0    100
1.1.193.250    367    366      0      0   1812      0      4      0      6      0      0      0
all            451    446      3      0   2129    168      4     56      6    292      0      0
2017 Sep 27 17:02:03 load: 0.21 read_KB: 307 write_KB: 484
client         ops  reads writes   nops  rd_bw  wr_bw ard_sz awr_sz   rd_t   wr_t  nop_t align%
1.1.193.247     14     14      0      0     19      0      1      0      5      0      0      0
1.1.193.250     90     85      5      0    273    484      3     96      1    302      0    100
all            104     99      5      0    292    484      2     96      2    302      0      0
2017 Sep 27 17:02:04 load: 0.22 read_KB: 298 write_KB: 0
client         ops  reads writes   nops  rd_bw  wr_bw ard_sz awr_sz   rd_t   wr_t  nop_t align%
1.1.193.247     10      4      0      0      2      0      0      0      9      0      0      0
1.1.193.250     85     70      0      0    296      0      4      0      5      0      0      0
all             95     74      0      0    298      0      4      0      5      0      0      0
2017 Sep 27 17:02:05 load: 0.22 read_KB: 296 write_KB: 420
client         ops  reads writes   nops  rd_bw  wr_bw ard_sz awr_sz   rd_t   wr_t  nop_t align%
1.1.193.247      1      0      0      0      0      0      0      0      0      0      0      0
1.1.193.250     86     76      5      0    306    420      4     84      6    739      0    100
all             87     76      5      0    306    420      4     84      6    739      0      0
2017 Sep 27 17:02:06 load: 0.22 read_KB: 1149 write_KB: 379
client         ops  reads writes   nops  rd_bw  wr_bw ard_sz awr_sz   rd_t   wr_t  nop_t align%
1.1.193.247     66     56      3      0    310     75      5     25      4    538      0    100
1.1.193.250    182    166      5      0    828    304      4     60      4    581      0    100
all            248    222      8      0   1138    379      5     47      4    565      0      0
2017 Sep 27 17:02:07 load: 0.23 read_KB: 615 write_KB: 164
client         ops  reads writes   nops  rd_bw  wr_bw ard_sz awr_sz   rd_t   wr_t  nop_t align%
1.1.193.247     77     75      2      0    374     28      4     14      4    399      0    100
1.1.193.250     89     82      2      0    241    136      2     68      3    266      0    100
all            166    157      4      0    615    164      3     41      3    333      0      0
2017 Sep 27 17:02:08 load: 0.23 read_KB: 1505 write_KB: 9712
client         ops  reads writes   nops  rd_bw  wr_bw ard_sz awr_sz   rd_t   wr_t  nop_t align%
1.1.193.247      9      6      0      0      7      0      1      0      5      0      0      0
1.1.193.250    302    166    124      0   1978  14288     11    115      3   2037      0    100
all            311    172    124      0   1985  14288     11    115     11   2037      0      0
2017 Sep 27 17:02:09 load: 0.23 read_KB: 4267 write_KB: 61980
client         ops  reads writes   nops  rd_bw  wr_bw ard_sz awr_sz   rd_t   wr_t  nop_t align%
1.1.193.247     12      7      0      0     15      0      2      0      4      0      0      0
1.1.193.250    644    156    484      0   3772  57404     24    118      1   1728      0    100
all            656    163    484      0   3787  57404     23    118      2   1728      0      0
2017 Sep 27 17:02:10 load: 0.24 read_KB: 610 write_KB: 48
client         ops  reads writes   nops  rd_bw  wr_bw ard_sz awr_sz   rd_t   wr_t  nop_t align%
1.1.193.247     13     10      1      0     95     16      9     16      4    374      0    100
1.1.193.250    116    107      1      0    514     32      4     32      6    495      0    100
all            129    117      2      0    610     48      5     24      5    435      0      0
2017 Sep 27 17:02:11 load: 0.24 read_KB: 684 write_KB: 68
client         ops  reads writes   nops  rd_bw  wr_bw ard_sz awr_sz   rd_t   wr_t  nop_t align%
1.1.193.247     26     20      1      0     59     32      2     32      5    545      0    100
1.1.193.250    169    158      2      0    624     36      3     18      4    451      0    100
all            195    178      3      0    684     68      3     22      4    482      0      0
2017 Sep 27 17:02:12 load: 0.24 read_KB: 154 write_KB: 176
client         ops  reads writes   nops  rd_bw  wr_bw ard_sz awr_sz   rd_t   wr_t  nop_t align%
1.1.193.247     14     12      1      0     46     96      3     96      5    854      0    100
1.1.193.250     43     35      1      0    492     80     14     80     28    947      0    100
all             57     47      2      0    538    176     11     88     22    900      0      0
2017 Sep 27 17:02:13 load: 0.25 read_KB: 1134 write_KB: 12
client         ops  reads writes   nops  rd_bw  wr_bw ard_sz awr_sz   rd_t   wr_t  nop_t align%
1.1.193.250     36     24      1      0    191     12      7     12     16    469      0    100
1.1.193.247    122    117      0      0    558      0      4      0      5      0      0      0
all            158    141      1      0    750     12      5     12      7    469      0      0
2017 Sep 27 17:02:14 load: 0.25 read_KB: 6357 write_KB: 90908
client         ops  reads writes   nops  rd_bw  wr_bw ard_sz awr_sz   rd_t   wr_t  nop_t align%
1.1.193.247      5      2      0      0      1      0      0      0      9      0      0      0
1.1.193.250   1003    233    762      0   6357  90908     27    119     14   4844      0    100
all           1008    235    762      0   6358  90908     27    119     14   4844      0      0
2017 Sep 27 17:02:15 load: 0.25 read_KB: 243 write_KB: 0
client         ops  reads writes   nops  rd_bw  wr_bw ard_sz awr_sz   rd_t   wr_t  nop_t align%
1.1.193.247     23     18      0      0     37      0      2      0      2      0      0      0
1.1.193.250     70     58      0      0    207      0      3      0      4      0      0      0
all             93     76      0      0    244      0      3      0      3      0      0      0
2017 Sep 27 17:02:16 load: 0.25 read_KB: 382 write_KB: 16
client         ops  reads writes   nops  rd_bw  wr_bw ard_sz awr_sz   rd_t   wr_t  nop_t align%
1.1.193.247     42     38      0      0    191      0      5      0      5      0      0      0
1.1.193.250     59     50      1      0    189     16      3     16      8    427      0    100
all            101     88      1      0    381     16      4     16      7    427      0      0
2017 Sep 27 17:02:17 load: 0.25 read_KB: 23 write_KB: 0
client         ops  reads writes   nops  rd_bw  wr_bw ard_sz awr_sz   rd_t   wr_t  nop_t align%
1.1.193.247      6      3      0      0      1      0      0      0      7      0      0      0
1.1.193.250     21     13      0      0     21      0      1      0      5      0      0      0
all             27     16      0      0     23      0      1      0      6      0      0      0
How can I have 2 MB/s of network traffic (verified on both the OmniOS filer
and the KVM host) and yet write far more than that to disk?
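
As a rough cross-check, the capture above can be reduced with a one-liner
(a sketch; it assumes only the /tmp/iscsisvrtop.txt file written by the
command at the top of this message):

# Average the client-side write bandwidth (wr_bw, KB/s, field 7) over the
# "all" summary rows, to compare against the kw/s iostat reports below.
awk '$1 == "all" { n++; sum += $7 } END { if (n) printf("avg wr_bw: %.0f KB/s over %d samples\n", sum / n, n) }' /tmp/iscsisvrtop.txt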
2017-09-27 12:56 GMT+02:00 Artem Penner <apenner.it at gmail.com>:
> Use https://github.com/richardelling/tools/blob/master/iscsisvrtop to
> observe iSCSI I/O
>
>
> Wed, 27 Sep 2017 at 11:06, anthony omnios <icoomnios at gmail.com>:
>
>> Hi,
>>
>> I have a problem: I use many iSCSI zvols (one per VM). Network traffic
>> between the KVM host and the filer is 2 MB/s, but I write far more than
>> that to the disks. I use a pool with a separate mirrored ZIL (Intel
>> S3710) and 8 Samsung 850 EVO 1 TB SSDs.
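
A way to quantify how much of that traffic arrives as synchronous writes is
Richard Elling's zilstat, from the same tools repository linked above; a
minimal sketch (the exact arguments are an assumption, so check the
script's usage header first):

# Sketch: report bytes committed through the ZIL at 1s intervals, 10 samples.
# Sync writes land on the mirrored slog first and are written to the data
# vdevs again at the next transaction group commit.
./zilstat 1 10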
>>
>> zpool status
>>   pool: filervm2
>>  state: ONLINE
>>   scan: resilvered 406G in 0h22m with 0 errors on Wed Sep 20 15:45:48 2017
>> config:
>>
>>         NAME                       STATE     READ WRITE CKSUM
>>         filervm2                   ONLINE       0     0     0
>>           mirror-0                 ONLINE       0     0     0
>>             c7t5002538D41657AAFd0  ONLINE       0     0     0
>>             c7t5002538D41F85C0Dd0  ONLINE       0     0     0
>>           mirror-2                 ONLINE       0     0     0
>>             c7t5002538D41CC7105d0  ONLINE       0     0     0
>>             c7t5002538D41CC7127d0  ONLINE       0     0     0
>>           mirror-3                 ONLINE       0     0     0
>>             c7t5002538D41CD7F7Ed0  ONLINE       0     0     0
>>             c7t5002538D41CD83FDd0  ONLINE       0     0     0
>>           mirror-4                 ONLINE       0     0     0
>>             c7t5002538D41CD7F7Ad0  ONLINE       0     0     0
>>             c7t5002538D41CD7F7Dd0  ONLINE       0     0     0
>>         logs
>>           mirror-1                 ONLINE       0     0     0
>>             c4t2d0                 ONLINE       0     0     0
>>             c4t4d0                 ONLINE       0     0     0
>>
>> I used the correct ashift of 13 for the Samsung 850 EVO.
>> zdb|grep ashift :
>>
>> ashift: 13
>> ashift: 13
>> ashift: 13
>> ashift: 13
>> ashift: 13
>>
>> But I write a lot to the SSDs every 5 seconds (far more than the 2 MB/s
>> of network traffic).
>>
>> iostat -xn -d 1 :
>>
>>     r/s    w/s   kr/s      kw/s wait actv wsvc_t asvc_t  %w  %b device
>>    11.0 3067.5  288.3  153457.4  6.8  0.5    2.2    0.2   5  14 filervm2
>>     0.0    0.0    0.0       0.0  0.0  0.0    0.0    0.0   0   0 rpool
>>     0.0    0.0    0.0       0.0  0.0  0.0    0.0    0.0   0   0 c4t0d0
>>     0.0    0.0    0.0       0.0  0.0  0.0    0.0    0.0   0   0 c4t1d0
>>     0.0  552.6    0.0   17284.0  0.0  0.1    0.0    0.2   0   8 c4t2d0
>>     0.0  552.6    0.0   17284.0  0.0  0.1    0.0    0.2   0   8 c4t4d0
>>     1.0  233.3   48.1   10051.6  0.0  0.0    0.0    0.1   0   3 c7t5002538D41657AAFd0
>>     5.0  250.3  144.2   13207.3  0.0  0.0    0.0    0.1   0   3 c7t5002538D41CC7127d0
>>     2.0  254.3   24.0   13207.3  0.0  0.0    0.0    0.1   0   4 c7t5002538D41CC7105d0
>>     3.0  235.3   72.1   10051.6  0.0  0.0    0.0    0.1   0   3 c7t5002538D41F85C0Dd0
>>     0.0  228.3    0.0   16178.7  0.0  0.0    0.0    0.2   0   4 c7t5002538D41CD83FDd0
>>     0.0  225.3    0.0   16210.7  0.0  0.0    0.0    0.2   0   4 c7t5002538D41CD7F7Ed0
>>     0.0  282.3    0.0   19991.1  0.0  0.0    0.0    0.2   0   5 c7t5002538D41CD7F7Dd0
>>     0.0  280.3    0.0   19871.0  0.0  0.0    0.0    0.2   0   5 c7t5002538D41CD7F7Ad0
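
Those per-device numbers can be watched live to see whether the write
bursts line up with transaction-group commits (by default roughly every
5 seconds); a minimal sketch, using the pool name from the zpool status
above:

# Per-vdev throughput at 1s resolution: periodic bursts on the data mirrors
# point at txg commits, while steady writes on the log mirror are ZIL traffic.
zpool iostat -v filervm2 1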
>>
>> I use zvols with a 64K volblocksize; I tried 8K and the problem is the same.
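
A sketch of that experiment (the dataset name filervm2/blocktest is
hypothetical); volblocksize is fixed at creation time, so comparing block
sizes means creating a fresh zvol:

# Create a test zvol with an 8K volblocksize to compare on-disk write
# volume against the 64K zvols, keeping the same compression setting.
zfs create -V 10G -o volblocksize=8k -o compression=lz4 filervm2/blocktest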
>>
>> zfs get all filervm2/hdd-110022a :
>>
>> NAME                  PROPERTY              VALUE                  SOURCE
>> filervm2/hdd-110022a  type                  volume                 -
>> filervm2/hdd-110022a  creation              Tue May 16 10:24 2017  -
>> filervm2/hdd-110022a  used                  5.26G                  -
>> filervm2/hdd-110022a  available             2.90T                  -
>> filervm2/hdd-110022a  referenced            5.24G                  -
>> filervm2/hdd-110022a  compressratio         3.99x                  -
>> filervm2/hdd-110022a  reservation           none                   default
>> filervm2/hdd-110022a  volsize               25G                    local
>> filervm2/hdd-110022a  volblocksize          64K                    -
>> filervm2/hdd-110022a  checksum              on                     default
>> filervm2/hdd-110022a  compression           lz4                    local
>> filervm2/hdd-110022a  readonly              off                    default
>> filervm2/hdd-110022a  copies                1                      default
>> filervm2/hdd-110022a  refreservation        none                   default
>> filervm2/hdd-110022a  primarycache          all                    default
>> filervm2/hdd-110022a  secondarycache        all                    default
>> filervm2/hdd-110022a  usedbysnapshots       15.4M                  -
>> filervm2/hdd-110022a  usedbydataset         5.24G                  -
>> filervm2/hdd-110022a  usedbychildren        0                      -
>> filervm2/hdd-110022a  usedbyrefreservation  0                      -
>> filervm2/hdd-110022a  logbias               latency                default
>> filervm2/hdd-110022a  dedup                 off                    default
>> filervm2/hdd-110022a  mlslabel              none                   default
>> filervm2/hdd-110022a  sync                  standard               local
>> filervm2/hdd-110022a  refcompressratio      3.99x                  -
>> filervm2/hdd-110022a  written               216K                   -
>> filervm2/hdd-110022a  logicalused           20.9G                  -
>> filervm2/hdd-110022a  logicalreferenced     20.9G                  -
>> filervm2/hdd-110022a  snapshot_limit        none                   default
>> filervm2/hdd-110022a  snapshot_count        none                   default
>> filervm2/hdd-110022a  redundant_metadata    all                    default
>>
>> Sorry for my bad English.
>>
>> What could be the problem? Thanks.
>>
>> Best regards,
>>
>> Anthony
>>
>>