[OmniOS-discuss] ZFS Slog - force all writes to go to Slog

Martin Dojcak martin at dojcak.sk
Wed Feb 18 22:34:08 UTC 2015


Hi,

zil_slog_limit in the illumos-gate source is indeed 1024*1024 bytes per write operation. There is a nicely self-explanatory thread on the ZFS on Linux GitHub (https://github.com/zfsonlinux/zfs/issues/1012) about when a write should be logged in indirect mode versus immediate mode. Applications rarely issue writes larger than 1MB, so in most cases 1MB is enough, but it would be nice to have an option to configure the ZIL slog limit (like the ZFS on Linux module provides).
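For what it's worth, on illumos a kernel variable like this can in principle be adjusted the same way as other ZFS tunables - a sketch only, assuming zil_slog_limit is actually present and tunable in the zfs module on your OmniOS build (worth verifying against your kernel first):

```shell
# Live change on a running system via mdb; /Z writes an 8-byte value.
# The value 0x2000000 (32 MB) is an arbitrary example, not a recommendation.
echo 'zil_slog_limit/Z 0x2000000' | mdb -kw

# To persist across reboots, the usual place is /etc/system:
#   set zfs:zil_slog_limit = 0x2000000
```

As with any mdb -kw poke, test on a non-production box first.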


--
Martin Dojčák

mail: martin at dojcak.sk
jabber: martindojcak at jabbim.sk

pgp: http://pgp.dojcak.sk/



-------- Original Message --------
From: Rune Tipsmark <rt at steait.net>
Date: 18. 02. 2015  21:04  (GMT+01:00)
To: omnios-discuss at lists.omniti.com
Subject: [OmniOS-discuss] ZFS Slog - force all writes to go to Slog

hi all,
 
I found an entry about zil_slog_limit here: http://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSWritesAndZILII
it basically explains how, by default, writes larger than 1MB hit the main pool rather than my Slog device. I could not find much further information, nor the equivalent setting in OmniOS. I also read http://nex7.blogspot.ca/2013/04/zfs-intent-log.html, but it didn't truly help me understand how I can force every byte written to my ZFS box to go through the ZIL regardless of size - I never want anything to go directly to my main pool.
 
I have sync=always set and have disabled the write-back cache on my volume-based LUs.
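For reference, that setup looks roughly like this - the dataset name and LU GUID below are placeholders, not from my actual config:

```shell
# Force synchronous semantics on the zvol backing the LU
# ("tank/lu0" is a placeholder dataset name):
zfs set sync=always tank/lu0

# Disable the COMSTAR write-back cache on the LU
# (wcd=true means "write cache disabled"; the GUID is a placeholder):
stmfadm modify-lu -p wcd=true 600144F000000000000000000000
```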
 
Testing with zfs_txg_timeout set to 30 or 60 seconds seems to make no difference when I write large files to my LUs - the write speed doesn't seem consistent with the performance of the Slog devices. It looks as if the data goes straight to disk, and hence the performance is less than great, to say the least.
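One way to see whether writes are actually landing on the log device is to watch per-vdev I/O while the workload runs - the pool name "tank" here is a placeholder:

```shell
# Print per-vdev statistics every second; the "logs" section
# shows write bandwidth hitting the slog device(s):
zpool iostat -v tank 1
```

If the slog shows near-zero write bandwidth during a large sequential write, the data is bypassing it.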
 
How do I ensure, 100%, that all writes always go to my Slog devices - no exceptions?
 
br,
Rune  

