[OmniOS-discuss] Poor performance on writes Zraid

Dain Bentley dain.bentley at gmail.com
Tue May 19 11:09:50 UTC 2015


Thanks for the help, guys.  I'm using the integrated CIFS service.  Reads are
fast.  The pool is only about 60% full.

Thanks for the tips!  I'll try iostat to sniff this out.

On Tuesday, May 19, 2015, Jim Klimov <jimklimov at cos.ru> wrote:

> On 18 May 2015 at 23:18:15 CEST, Dain Bentley <dain.bentley at gmail.com> wrote:
> >Hello all, I have a RAIDZ setup with 5 disks and read performance is
> >good. I have no separate ZIL (log) device and 8 GB of ECC RAM. Writes
> >are around 2 MB/s over a 1 Gb network. I'm getting faster writes on a
> >similar drive in a Windows VM over CIFS on VMware. My OmniOS box is
> >bare metal. Any tips on speeding this up?
> >
> >
>
> Do you have dedup enabled? It is pretty slow, needs lots of metadata
> reads for every write, and having little RAM and no L2ARC makes it much
> worse.
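>
> A quick check (assuming your pool is named 'tank' - substitute your own
> pool name):
>
>   # zfs get -r dedup tank     # shows the dedup property per dataset
>   # zpool status -D tank      # prints DDT (dedup table) statistics, if any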
>
> Also, very full pools can end up with very fragmented free space and
> only small 'holes' left, which hurts write speeds (writes become more
> random, and it takes longer to find an available location for each
> block). 'Very full' is a fuzzy threshold that depends on the pool's
> write history: 80% is the usual rule of thumb, though pathological
> cases can start past 50% for some workloads, while others are fine
> until 95%+.
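>
> A quick way to check (pool name 'tank' assumed again):
>
>   # zpool list tank
>
> The CAP column shows how full the pool is; on pools with the
> spacemap_histogram feature enabled there is also a FRAG column giving a
> rough estimate of free-space fragmentation.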
>
> You can also look at 'iostat -xnz 1' output to see the I/O values per
> active device. You are interested in reads/sec + writes/sec (HDDs can
> serve about 200 ops/sec total, unless the requests happen to be small
> and aimed at sequentially placed sectors - in theory you might be lucky
> to see even 20000 IOPS in such a favorable case; in practice about 500
> is not uncommon, since related block locations in ZFS are often
> coalesced). In iostat you would also watch %b (busy) and %w (time spent
> with transactions waiting in the queue) to see if some disks perform
> very differently from the others (e.g. one has internal problems and
> sector relocations to spare areas, or flaky cabling and many protocol
> re-requests per successful op). Service times (wsvc_t/asvc_t) and queue
> lengths can also be useful.
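>
> For reference, an invocation like
>
>   # iostat -xnz 1
>
> prints one block per second with columns along the lines of
>
>   r/s  w/s  kr/s  kw/s  wait  actv  wsvc_t  asvc_t  %w  %b  device
>
> so you can watch r/s + w/s against the ~200 ops/sec per spindle
> mentioned above, and compare %b and the service times across disks.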
>
> You can get similar info with 'zpool iostat -v 1' as well, though
> interactions between pool-level I/Os and component vdev I/Os can be
> tricky to compare between raidz and mirror layouts, for example. You
> might be more interested in averaged differences (over larger time
> windows) between these two iostats - e.g. whether you have I/O other
> than the pool's (say, to a raw swap partition).
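>
> For example, to average per-vdev numbers over 10-second intervals (pool
> name 'tank' assumed):
>
>   # zpool iostat -v tank 10
>
> which breaks capacity, operations and bandwidth down per raidz vdev and
> per member disk.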
>
> Finally, consider the DTraceToolkit and Richard Elling's scripts to see
> what logical (file/vdev) operations you have - and check how those
> numbers compare to the pool I/Os, at least to the order of magnitude.
> The difference can be metadata ops, or something else.
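>
> As a rough sketch (script paths may differ on your install, and the
> one-liner assumes the fbt provider can see zfs_write):
>
>   # /opt/DTT/iosnoop     # DTraceToolkit: trace physical I/O as it happens
>   # dtrace -n 'fbt::zfs_write:entry { @[execname] = count(); }'
>                          # count logical ZFS writes by process
>
> Richard Elling's zilstat script is another option for watching
> ZIL/synchronous write activity.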
>
> Hope this helps get you started,
> Jim Klimov
> --
> Typos courtesy of K-9 Mail on my Samsung Android
>

