[OmniOS-discuss] Slow performance with ZeusRAM?

Richard Elling richard.elling at richardelling.com
Fri Oct 23 16:42:10 UTC 2015


additional insight below...

> On Oct 22, 2015, at 12:02 PM, Matej Zerovnik <matej at zunaj.si> wrote:
> 
> Hello,
> 
> I'm building a new system and I'm having a bit of a performance problem. Well, it's either that or I'm not getting the whole ZIL idea. :)
> 
> My system is as follows:
> - IBM xServer 3550 M4 server (dual CPU with 160GB memory)
> - LSI 9207 HBA (P19 firmware)
> - Supermicro JBOD with SAS expander
> - 4TB SAS3 drives
> - ZeusRAM for ZIL
> - OmniOS LTS (all patches applied)
> 
> If I benchmark ZeusRAM on its own with random 4k sync writes, I can get 48k IOPS out of it, no problem there.

Do not assume writes to the slog for a 4k random write workload are only 4k in size.
You'll want to measure to be sure, but the worst case here is 8k written to slog:
   4k data + 4k chain pointer = 8k physical write

There are cases where multiple 4k writes get coalesced into a single log write, so the above is the worst case.
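As a rough illustration only (assuming the 8k worst case), the 14k IOPS reported below
would put on the order of 14,000 x 8 KB, or about 112 MB/s, of physical writes on each
slog device, double what the 4k block size alone would suggest.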
Measure to be sure. A quick back-of-the-napkin measurement can be done from
iostat -x output. More detailed measurements can be done with zilstat or other
targeted DTrace scripts.
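For example, a minimal sketch (the device names c0tXXXXd0 and sdN below are placeholders
for the ZeusRAM; substitute whatever iostat and dtrace report on your system):

   # average physical write size (KB) to the slog, sampled once per second
   # (the first sample is the since-boot average and can be ignored)
   iostat -xn 1 | awk '$NF == "c0tXXXXd0" && $2 > 0 { printf("%.1f KB avg write\n", $4 / $2) }'

   # distribution of physical write sizes issued to the slog device
   dtrace -n 'io:::start /!(args[0]->b_flags & B_READ) && args[1]->dev_statname == "sdN"/
       { @["write bytes"] = quantize(args[0]->b_bcount); }'

If the average write size comes out near 8k rather than 4k, the slog is doing roughly
twice the work the fio numbers imply.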
 -- richard

> 
> If I create a new raidz2 pool with 10 hard drives, mirrored ZeusRAMs for ZIL and set sync=always, I can only squeeze 14k IOPS out of the system.
> Is that normal or should I be getting 48k IOPS on the 2nd pool as well, since this is the performance ZeusRAM can deliver?
> 
> I'm testing with fio:
> fio --filename=/pool0/test01 --size=5g --rw=randwrite --refill_buffers --norandommap --randrepeat=0 --ioengine=solarisaio --bs=4k --iodepth=16 --numjobs=16 --runtime=60 --group_reporting --name=4ktest
> 
> thanks, Matej


