[OmniOS-discuss] L2ARC actual size and zpool iostat -v output discrepancy?
wuffers
moo at wuffers.net
Wed Feb 12 15:31:17 UTC 2014
So I only recently upgraded to r151008, and was wondering whether the new
L2ARC compression was working. I first grabbed an updated arcstat script
that added the l2asize field (which returned 0); a few rounds in IRC then
led me to the correct kstat (zfs:0:arcstats:l2_asize) and to an even more
updated arcstat that fixed the 0 result.
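For reference, both counters can also be read directly with the stock
kstat(1M) utility; -p prints statistics as parseable name/value pairs:

$ kstat -p zfs:0:arcstats:l2_size zfs:0:arcstats:l2_asize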
Now both kstat and arcstat are outputting the same info:
zfs:0:arcstats:l2_asize 864682956800
zfs:0:arcstats:l2_size 1374605708288
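That works out to roughly a 1.6x compression ratio; a quick sanity check
of the arithmetic with bc (my math, not arcstat's):

$ echo "scale=2; 1374605708288 / 864682956800" | bc
1.58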
arcstat:
read  hits  miss  hit%  l2read  l2hits  l2miss  l2hit%  arcsz  l2size  l2asize
2.7K  2.6K    53    98      53      44       9      83   229G    1.3T     806G
5.1K  4.8K   282    94     282      17     265       6   229G    1.3T     806G
7.3K  7.3K    10    99      10       4       6      40   229G    1.3T     806G
...
But why is zpool iostat -v showing my cache devices using up ~1.25T
(4 x ~314G), which is close to the 1.3T l2size rather than the 806G l2asize?
                              capacity     operations    bandwidth
pool                       alloc   free   read  write   read  write
-------------------------  -----  -----  -----  -----  -----  -----
[snip]
cache                          -      -      -      -      -      -
  c2t500117310015D579d0     313G  59.4G     19     15   711K   833K
  c2t50011731001631FDd0     314G  58.1G     18     15   712K   836K
  c12t500117310015D59Ed0    314G  58.8G     19     15   710K   835K
  c12t500117310015D54Ed0    313G  59.7G     18     15   709K   832K
-------------------------  -----  -----  -----  -----  -----  -----
What's with the discrepancy? Is zpool iostat calculating the free capacity
incorrectly now (my cache drives are 400GB)?
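For what it's worth, the alloc columns sum to roughly the uncompressed
l2_size, not the compressed l2_asize (my arithmetic, everything in TiB):

$ echo "scale=2; (313 + 314 + 314 + 313) / 1024" | bc   # cache alloc total
1.22
$ echo "scale=2; 1374605708288 / 1024^4" | bc           # l2_size
1.25
$ echo "scale=2; 864682956800 / 1024^4" | bc            # l2_asize
.78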