[OmniOS-discuss] Slow performance with ZeusRAM?

Matej Zerovnik matej at zunaj.si
Thu Oct 22 21:28:49 UTC 2015


Chip:
I tried running fio against multiple folders simultaneously, and it’s a little better:

When I run 7x fio (iodepth=4, numjobs=4), I get 28k IOPS on average.
When I run 7x fio (iodepth=4, numjobs=16), I get 35k IOPS on average; iostat shows a transfer rate of 140-220MB/s with an average request size of 35kB.
When I run 7x fio (iodepth=1, numjobs=1), I get 24k IOPS on average.

There are still at least 10k IOPS left to use, I guess :)
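
For reference, here is roughly how I launch the seven parallel runs. This is a sketch; the per-run dataset paths (/pool0/fs1 ... /pool0/fs7) are placeholders, with the same fio parameters as above:

for i in 1 2 3 4 5 6 7; do
  fio --filename=/pool0/fs$i/test01 --size=5g --rw=randwrite \
      --refill_buffers --norandommap --randrepeat=0 \
      --ioengine=solarisaio --bs=4k --iodepth=4 --numjobs=16 \
      --runtime=60 --group_reporting --name=4ktest$i &
done
wait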

Bob:
Yes, my ZFS is ashift=12, since all my drives report 4k blocks (is that what you meant?). The pool is completely empty, so there is enough space for writes and write speed should not be limited by COW. Looking at iostat, there are no reads on the drives at all.
I’m not sure where fio gets its data, probably from /dev/zero or somewhere?
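
This is how I verified the ashift, in case it matters (assuming zdb can read the pool config; pool0 is my pool name):

zdb -C pool0 | grep ashift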

I will try the sync engine instead of solarisaio to see if there is any difference.
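
Something like this, i.e. the same job with just the engine swapped (a sketch; note that fio ignores iodepth > 1 for synchronous engines, so I drop it):

fio --filename=/pool0/test01 --size=5g --rw=randwrite --refill_buffers --norandommap --randrepeat=0 --ioengine=sync --bs=4k --numjobs=16 --runtime=60 --group_reporting --name=4ktest-sync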

I don’t have compression enabled, since I want to test raw performance. I also disabled data caching in the ARC (primarycache=metadata), so that my read tests are as realistic as possible (and I don’t need to run tests with a 1TB test file).
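
Concretely, the relevant settings (pool0 is my pool name; compression=off is the default, shown here just to be explicit):

zfs set compression=off pool0
zfs set primarycache=metadata pool0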

> Try umounting and re-mounting your zfs filesystem (or 'zfs destroy' followed by 'zfs create') to see how performance differs on a freshly mounted filesystem.  The zfs ARC caching will be purged when the filesystem is unmounted.


If I understand you correctly, you are saying I should destroy my zfs folders, set recordsize=4k on the pool, and then recreate the folders?
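
If so, something along these lines (a sketch; the dataset name is an example):

zfs destroy pool0/test01
zfs set recordsize=4k pool0
zfs create pool0/test01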

thanks, Matej


> On 22 Oct 2015, at 21:47, Schweiss, Chip <chip at innovates.com> wrote:
> 
> The ZIL on log devices suffers a bit from not filling queues well. In order to get the queues to fill more, try running your test against several zfs folders on the pool simultaneously and measure your total I/O.
> 
> As I understand it, if you're writing to only one zfs folder, your queue depth will stay at 1 on the log device and you become latency-bound.
> 
> -Chip
> 
> On Thu, Oct 22, 2015 at 2:02 PM, Matej Zerovnik <matej at zunaj.si <mailto:matej at zunaj.si>> wrote:
> Hello,
> 
> I'm building a new system and I'm having a bit of a performance problem. Well, it's either that or I'm not getting the whole ZIL idea :)
> 
> My system is as follows:
> - IBM xServer 3550 M4 server (dual CPU with 160GB memory)
> - LSI 9207 HBA (P19 firmware)
> - Supermicro JBOD with SAS expander
> - 4TB SAS3 drives
> - ZeusRAM for ZIL
> - OmniOS LTS (all patches applied)
> 
> If I benchmark ZeusRAM on its own with random 4k sync writes, I can get 48k IOPS out of it, no problem there.
> 
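> For reference, the raw-device test was roughly this (the device path is a placeholder; fio's sync flag forces O_SYNC writes):
> 
> fio --filename=/dev/rdsk/c9t0d0 --size=5g --rw=randwrite --sync=1 --refill_buffers --norandommap --randrepeat=0 --ioengine=solarisaio --bs=4k --iodepth=16 --numjobs=16 --runtime=60 --group_reporting --name=zeusram-raw
> 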
> If I create a new raidz2 pool with 10 hard drives, mirrored ZeusRAMs for ZIL and set sync=always, I can only squeeze 14k IOPS out of the system.
> Is that normal, or should I be getting 48k IOPS on the second pool as well, since that is the performance the ZeusRAM can deliver?
> 
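> Roughly how the pool was created (device names are placeholders):
> 
> zpool create pool0 raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0 c0t8d0 c0t9d0 log mirror c8t0d0 c8t1d0
> zfs set sync=always pool0
> 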
> I'm testing with fio:
> fio --filename=/pool0/test01 --size=5g --rw=randwrite --refill_buffers --norandommap --randrepeat=0 --ioengine=solarisaio --bs=4k --iodepth=16 --numjobs=16 --runtime=60 --group_reporting --name=4ktest
> 
> thanks, Matej
