[OmniOS-discuss] Zvol rewrite slow?

Jim Klimov jimklimov at cos.ru
Sun Apr 19 09:27:15 UTC 2015


On 19 April 2015 at 10:24:05 CEST, "邓伟权" <dwq at xmweixun.com> wrote:
>Hello all,
>
>       With OmniOS 151006 or 151014 I found that zvol rewrite throughput
>drops from 212 MB/s to 44.4 MB/s. Details are as follows.
>
> 
>
>zpool create WX raidz c8t5000C5003AEADC65d0 c8t5000C5003AE7BAD9d0
>c8t5000C5003AC4E3CDd0
>
>zfs create -V 20g WX/test01
>
>stmfadm create-lu -p wcd=false /dev/zvol/rdsk/WX/test01
>
>stmfadm add-view <lu-guid>   # add view to client host
>
> 
>
>client: Red Hat 5.8 x86
>
>pvcreate /dev/sdb
>
>vgcreate vgtest /dev/sdb
>
>lvcreate -L xxx -n lvtest vgtest
>
>mkfs /dev/vgtest/lvtest
>
>mkdir /test
>
>mount /dev/vgtest/lvtest /test
>
>[root at localhost test]# dd if=/dev/zero of=/test/1 bs=4096k count=100000
>
>dd: writing `/test/1': No space left on device
>
>5023+0 records in
>
>5022+0 records out
>
>21066981376 bytes (21 GB) copied, 99.443 seconds, 212 MB/s
>
>[root at localhost test]# ls
>
> 
>
>[root at localhost test]# dd if=/dev/zero of=/test/1 bs=4096k count=100000
>
>dd: writing `/test/1': No space left on device
>
>5023+0 records in
>
>5022+0 records out
>
>21066981376 bytes (21 GB) copied, 474.468 seconds, 44.4 MB/s
>
> 
>
>Thanks.
>
> 
>
>Best Regards,
>Deng Wei Quan
>
>Mob: +86 13906055059
>
>Mail: dwq at xmweixun.com
>
> 
>
>
>

OTOH, it may be because blocks to be freed are first put on a deadlist and then released asynchronously as the pool processes its structures in due course. This was found to be a preferable and more reliable approach than a synchronous free, which can require more resources than a particular system has. So as you rewrite your data, writes may be waiting for free space that is about to appear (with the synchronous approach you might wait even longer, for all of the space to become free at once), and some of the space may be served from the pool's internal reserve (1/64 of the pool size).
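If you want to verify that this is what you are hitting, newer pools expose a 'freeing' property that reports how many bytes are still queued up for asynchronous release (using your pool name WX; I am not sure whether the 151006 build already has it):

    # bytes still waiting to be released asynchronously
    zpool get freeing WX

    # crude way to watch it drain while the slow rewrite runs
    while true; do zpool get freeing WX; sleep 5; done

If 'freeing' stays large while the second dd is running, the new writes are indeed competing with the asynchronous frees.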

I think this is configurable as a per-zpool feature flag (async_destroy, if I recall correctly).
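For example, something like this should show the state of that flag on your pool (assuming a build recent enough to have feature flags at all):

    # state of the async destroy feature on pool WX
    zpool get feature@async_destroy WX

    # or list all of the pool's feature flags
    zpool get all WX | grep 'feature@'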

Also, searching for available 'holes' in the allocation, where ZFS might put your new writes, is itself slower on a well-used pool than on a fresh empty one. Search for 'zfs free space fragmentation' to get some history on this subject and its solutions. In particular, be aware that there is (or at least was until recently) a write-performance collapse when the pool gets too full. The threshold varies per installation and usage pattern, anywhere from 50% to 90+% full, before the lags become noticeable; the rule of thumb is to use no more than 80% of the pool and try not to exceed that, e.g. by pre-reserving the remaining 20% as a sparse volume or another dataset with a reservation (see the sketch below).
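Something like this would let you see where the pool stands and set up that safety margin (the dataset name WX/spacer and the 12G figure are just illustrative, and the fragmentation property needs a reasonably recent build):

    # how full and how fragmented the pool currently is
    zpool list -o name,size,allocated,free,fragmentation,capacity WX

    # pre-reserve roughly 20% of the pool so the other datasets
    # cannot fill it past the point where performance degrades
    # (12G is a placeholder - size it to ~20% of your pool)
    zfs create -o reservation=12G WX/spacer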

HTH, Jim
--
Typos courtesy of K-9 Mail on my Samsung Android

