[OmniOS-discuss] Slow performance with ZeusRAM?

Schweiss, Chip chip at innovates.com
Thu Oct 22 19:47:53 UTC 2015


The ZIL on log devices suffers a bit from not filling queues well. In
order to get the queues to fill more, try running your test against several
ZFS datasets on the pool simultaneously and measure your total I/O.

As I understand it, if you're writing to only one ZFS dataset, your queue
depth will stay at 1 on the log device and you become latency bound.
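
For example, something along these lines (a rough sketch; the dataset
names are made up, and the fio options are borrowed from your command
below):

# create a handful of datasets so the writes are spread out
for i in 1 2 3 4; do zfs create pool0/t$i; done

# one fio job per dataset, all in parallel; add up the per-job IOPS
for i in 1 2 3 4; do
  fio --filename=/pool0/t$i/test01 --size=5g --rw=randwrite \
      --ioengine=solarisaio --bs=4k --iodepth=16 --numjobs=4 \
      --runtime=60 --group_reporting --name=4ktest-$i &
done
wait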

-Chip

On Thu, Oct 22, 2015 at 2:02 PM, Matej Zerovnik <matej at zunaj.si> wrote:

> Hello,
>
> I'm building a new system and I'm having a bit of a performance problem.
> Well, it's either that or I'm not getting the whole ZIL idea :)
>
> My system is as follows:
> - IBM xServer 3550 M4 server (dual CPU with 160GB memory)
> - LSI 9207 HBA (P19 firmware)
> - Supermicro JBOD with SAS expander
> - 4TB SAS3 drives
> - ZeusRAM for ZIL
> - LTS Omnios (all patches applied)
>
> If I benchmark ZeusRAM on its own with random 4k sync writes, I can get
> 48k IOPS out of it, no problem there.
>
> If I create a new raidz2 pool with 10 hard drives, mirrored ZeusRAMs for
> ZIL and set sync=always, I can only squeeze 14k IOPS out of the system.
> Is that normal, or should I be getting 48k IOPS on the raidz2 pool as
> well, since that is the performance the ZeusRAM can deliver?
>
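> Roughly, the pool was created like this (device names here are
> placeholders, not the actual ones):
>
> # 10-disk raidz2 data vdev plus a mirrored ZeusRAM log vdev
> zpool create pool0 \
>     raidz2 disk0 disk1 disk2 disk3 disk4 disk5 disk6 disk7 disk8 disk9 \
>     log mirror slog0 slog1
> # force all writes through the ZIL
> zfs set sync=always pool0
>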
> I'm testing with fio:
> fio --filename=/pool0/test01 --size=5g --rw=randwrite --refill_buffers
> --norandommap --randrepeat=0 --ioengine=solarisaio --bs=4k --iodepth=16
> --numjobs=16 --runtime=60 --group_reporting --name=4ktest
>
> thanks, Matej
> _______________________________________________
> OmniOS-discuss mailing list
> OmniOS-discuss at lists.omniti.com
> http://lists.omniti.com/mailman/listinfo/omnios-discuss
>
>