[OmniOS-discuss] No I/O on ZIL logs?
Richard Elling
richard.elling at richardelling.com
Mon Jan 7 22:15:19 EST 2013
You might also look at zilstat, which is more interesting than the zpool iostat
output :-)
http://www.richardelling.com/Home/scripts-and-programs-1/zilstat
https://github.com/richardelling/tools/
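For example (going from memory of the script's usage, so check its help text for
the exact options of whichever copy you download), it takes an optional interval
and count, iostat-style:

    # sample ZIL activity once per second, ten times
    ./zilstat.ksh 1 10

It measures ZIL activity with DTrace rather than by watching the log devices, so
it will show whether the workload is generating any synchronous writes at all.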
-- richard
On Jan 7, 2013, at 7:58 AM, steve at linuxsuite.org wrote:
> Howdy!
>
> I am doing some testing of ZFS on OmniOS. It seems that the ZIL log
> devices never get any I/O. I have a zpool configured as follows:
>
> root at live-dfs-1:~# zpool status data1
>   pool: data1
>  state: ONLINE
>   scan: resilvered 937G in 17h53m with 0 errors on Sat Jan 5 08:52:32 2013
> config:
>
>         NAME                         STATE     READ WRITE CKSUM
>         data1                        ONLINE       0     0     0
>           raidz2-0                   ONLINE       0     0     0
>             c5t5000C500525BFB31d0s0  ONLINE       0     0     0
>             c5t5000C500525C2F91d0s0  ONLINE       0     0     0
>             c5t5000C500525C72B1d0s0  ONLINE       0     0     0
>             c5t5000C500525C9673d0s0  ONLINE       0     0     0
>             c5t5000C500489956C1d0s0  ONLINE       0     0     0
>             c5t5000C500525EB2B9d0s0  ONLINE       0     0     0
>             c5t5000C500525F2297d0s0  ONLINE       0     0     0
>             c5t5000C50045561CEAd0s0  ONLINE       0     0     0
>             c5t5000C50048990BE6d0s0  ONLINE       0     0     0
>             c5t5000C500489947A8d0s0  ONLINE       0     0     0
>         logs
>           mirror-1                   ONLINE       0     0     0
>             c1t2d0                   ONLINE       0     0     0
>             c1t4d0                   ONLINE       0     0     0
>         cache
>           c1t5d0                     ONLINE       0     0     0
>
> errors: No known data errors
>
> Then I simply have a script that does some writing to the zpool.
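(For illustration only, since the actual script isn't shown in the post, a load
generator of this sort could be as simple as the sketch below; /data1 as the
pool's mount point is an assumption.)

    #!/bin/sh
    # hypothetical write loop: streams 1 GB files onto the pool's mount point
    # note: dd here issues ordinary buffered (asynchronous) writes, no fsync
    i=0
    while [ $i -lt 100 ]; do
        dd if=/dev/zero of=/data1/testfile.$i bs=1024k count=1024
        i=`expr $i + 1`
    done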
>
> If I run
>
> zpool iostat -v data1 5
>
> I get output like this (see below). Why is there never any I/O to the
> log devices? I understood that these logs would be written to before
> the data is committed to the zpool. I can't find any evidence that this
> is happening, and the activity lights on the hardware don't blink either.
> I have verified that the disks used for the ZIL are writable.
> Why is this not happening? How can I make it so?
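A rough sketch of one way to pin this down, assuming the test script issues
ordinary asynchronous writes: async writes are staged in memory and flushed to
the main vdevs at transaction-group commit, so they never touch a separate log
device. Forcing synchronous semantics on a dataset should make the slog light
up (data1/testfs below is just an example dataset name):

    # push every write through the ZIL for the duration of the test
    zfs set sync=always data1/testfs
    # ... rerun the write script and watch zilstat / zpool iostat -v ...
    # restore the default behaviour afterwards
    zfs set sync=standard data1/testfs

ZIL commits can also be confirmed from the kernel side with DTrace:

    dtrace -n 'fbt::zil_commit:entry { @[execname] = count(); }'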
>
> thanx - steve
>
> root at live-dfs-1:~# zpool iostat -v data1 5
>
>                                  capacity     operations    bandwidth
> pool                         alloc   free   read  write   read  write
> ---------------------------  -----  -----  -----  -----  -----  -----
> data1                        9.98T  16.8T      0  3.14K      0   113M
>   raidz2                     9.98T  16.8T      0  3.14K      0   113M
>     c5t5000C500525BFB31d0s0      -      -      0    146      0  16.6M
>     c5t5000C500525C2F91d0s0      -      -      0    147      0  16.6M
>     c5t5000C500525C72B1d0s0      -      -      0    146      0  16.6M
>     c5t5000C500525C9673d0s0      -      -      0    147      0  16.6M
>     c5t5000C500489956C1d0s0      -      -      0    146      0  16.6M
>     c5t5000C500525EB2B9d0s0      -      -      0    146      0  16.6M
>     c5t5000C500525F2297d0s0      -      -      0    146      0  16.6M
>     c5t5000C50045561CEAd0s0      -      -      0    146      0  16.6M
>     c5t5000C50048990BE6d0s0      -      -      0    146      0  16.6M
>     c5t5000C500489947A8d0s0      -      -      0    146      0  16.6M
> logs                             -      -      -      -      -      -
>   mirror                         0   148G      0      0      0      0
>     c1t2d0                       -      -      0      0      0      0
>     c1t4d0                       -      -      0      0      0      0
> cache                            -      -      -      -      -      -
>   c1t5d0                      111G  7.51M      0    179      0  18.6M
> ---------------------------  -----  -----  -----  -----  -----  -----
>
>
>                                  capacity     operations    bandwidth
> pool                         alloc   free   read  write   read  write
> ---------------------------  -----  -----  -----  -----  -----  -----
> data1                        9.99T  16.8T      1  3.56K  10.3K   105M
>   raidz2                     9.99T  16.8T      1  3.56K  10.3K   105M
>     c5t5000C500525BFB31d0s0      -      -      0    158  1.58K  15.9M
>     c5t5000C500525C2F91d0s0      -      -      0    168  2.38K  15.9M
>     c5t5000C500525C72B1d0s0      -      -      0    163  2.38K  15.9M
>     c5t5000C500525C9673d0s0      -      -      0    154    810  15.9M
>     c5t5000C500489956C1d0s0      -      -      0    163  1.58K  15.9M
>     c5t5000C500525EB2B9d0s0      -      -      0    161  1.58K  15.9M
>     c5t5000C500525F2297d0s0      -      -      0    158      0  15.9M
>     c5t5000C50045561CEAd0s0      -      -      0    165      0  15.9M
>     c5t5000C50048990BE6d0s0      -      -      0    149      0  15.9M
>     c5t5000C500489947A8d0s0      -      -      0    153      0  15.9M
> logs                             -      -      -      -      -      -
>   mirror                         0   148G      0      0      0      0
>     c1t2d0                       -      -      0      0      0      0
>     c1t4d0                       -      -      0      0      0      0
> cache                            -      -      -      -      -      -
>   c1t5d0                      111G     8M      1    260  1.38K  26.0M
>
>
>
> _______________________________________________
> OmniOS-discuss mailing list
> OmniOS-discuss at lists.omniti.com
> http://lists.omniti.com/mailman/listinfo/omnios-discuss
--
Richard.Elling at RichardElling.com
+1-760-896-4422