[OmniOS-discuss] Slow performance with ZeusRAM?
Matej Zerovnik
matej at zunaj.si
Fri Oct 23 11:55:03 UTC 2015
> On 22 Oct 2015, at 23:51, Bob Friesenhahn <bfriesen at simple.dallas.tx.us> wrote:
>
> On Thu, 22 Oct 2015, Matej Zerovnik wrote:
>> Bob:
>> Yes, my ZFS is ashift=12, since all my drives report 4k blocks (is that what you meant?). The pool is completely empty, so there is enough space for
>> writes, so write speed should not be limited by COW. Looking at iostat, there are no reads on the drives at all.
>> I’m not sure where fio gets its data, probably from /dev/zero or somewhere?
>
> To be clear, zfs does not overwrite blocks. Instead zfs modifies (in memory) any prior data from a block, and then it writes the block data to a new location. This is called "copy on write". If the prior data would not be entirely overwritten and is not already cached in memory, then it needs to be read from underlying disk.
>
> It is interesting that you say there are no reads on the drives at all.
I think I got it. In my case there are no reads because the whole pool is empty and fresh data is being written to it, so there are no rewrites, just pure writes...
I did some more testing with recordsize=4k (I need to repeat it with 128k as well) and it looks like I can get up to 48k IOPS when doing sequential 4k writes by running 6x dd in parallel (dd if=/dev/zero of=/pool/folder/file1 bs=4k count=100000).
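For reference, a minimal sketch of how the six parallel writers can be launched (the /pool/folder path and file names are just placeholders for my test dataset):

    # start six sequential 4k writers and wait for all of them to finish
    for i in 1 2 3 4 5 6; do
        dd if=/dev/zero of=/pool/folder/file$i bs=4k count=100000 &
    done
    wait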
When I switch to random writes (this time I tried iozone instead of fio), I can only get up to 58 MB/s, which translates to around 14,500 IOPS (although iostat is showing higher values).
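The IOPS figure is just the measured throughput divided by the 4k record size (58 MB/s / 4 KiB ≈ 14-15k writes/s, depending on whether MB is counted as 10^6 or 2^20 bytes). An iozone random-write run of that shape looks roughly like this; the flags, sizes and paths are illustrative rather than the exact command I used:

    # sequential write (-i 0) to create the files, then 4k random writes (-i 2),
    # six threads in throughput mode
    iozone -i 0 -i 2 -r 4k -s 1g -t 6 \
        -F /pool/folder/f1 /pool/folder/f2 /pool/folder/f3 \
           /pool/folder/f4 /pool/folder/f5 /pool/folder/f6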
iostat during random write:
   r/s      w/s  kr/s      kw/s  wait  actv  wsvc_t  asvc_t  %w  %b  device
   0.0  29805.4   0.0  119209.6   0.0   3.0     0.0     0.1    3  83  c9t5000A72A300B3D9Dd0
   0.0  29807.4   0.0  119217.6   0.0   4.7     0.0     0.2    4  83  c10t5000A72A300B3D7Ed0
   0.0    589.7   0.0   62794.9   0.0   0.7     0.0     1.2    1  62  c10t5000C500837549F9d0
   0.0    622.6   0.0   66173.8   0.0   0.6     0.0     1.0    1  54  c10t5000C50083759089d0
   0.0    609.6   0.0   66173.9   0.0   0.6     0.0     1.0    1  54  c10t5000C500837557EDd0
How come I can see 30k IOPS flowing to the ZeusRAMs, but only see 58 MB/s being written to the hard drives?
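One way to cross-check where the bytes actually go, aggregated per vdev (with log devices listed separately) rather than per disk, is zpool iostat; assuming the pool is simply named 'pool':

    # per-vdev bandwidth and ops, refreshed every second
    zpool iostat -v pool 1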
I tried running the scripts from http://dtrace.org/blogs/ahl/2014/08/31/openzfs-tuning: a txg flush takes around 0.3s and average txg usage is (the probe is sketched after the output below):
3 12645 txg_sync_thread:txg-syncing 742MB of 4096MB used
15 12645 txg_sync_thread:txg-syncing 1848MB of 4096MB used
21 12645 txg_sync_thread:txg-syncing 1467MB of 4096MB used
20 12645 txg_sync_thread:txg-syncing 2231MB of 4096MB used
16 12645 txg_sync_thread:txg-syncing 1237MB of 4096MB used
14 12645 txg_sync_thread:txg-syncing 1624MB of 4096MB used
1 12645 txg_sync_thread:txg-syncing 1130MB of 4096MB used
9 12645 txg_sync_thread:txg-syncing 1750MB of 4096MB used
9 12645 txg_sync_thread:txg-syncing 1300MB of 4096MB used
18 12645 txg_sync_thread:txg-syncing 2396MB of 4096MB used
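The probe behind that output is the txg-syncing one-liner from the blog post; I'm quoting it here from memory, so double-check the original if you reuse it ('pool' is a placeholder for the pool name):

    dtrace -n '
    txg-syncing { this->dp = (dsl_pool_t *)arg0; }
    txg-syncing /this->dp->dp_spa->spa_name == $$1/ {
        /* dirty data in the syncing txg vs. the zfs_dirty_data_max limit */
        printf("%4dMB of %4dMB used", this->dp->dp_dirty_total / 1024 / 1024,
            `zfs_dirty_data_max / 1024 / 1024);
    }' pool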
If I understand that correctly, the spindles have no problem writing the data from the write cache out to disk.
I have a feeling I have a real problem understanding how things work in ZFS :) I always used a simple explanation: as fast as the system can write to the ZIL is as fast as a program can write to the filesystem. I guess not…
Matej