[OmniOS-discuss] No I/O on ZIL logs?

steve at linuxsuite.org
Mon Jan 7 12:21:55 EST 2013


> On Mon, Jan 7, 2013 at 12:08 PM,  <steve at linuxsuite.org> wrote:
>>        zfs set sync=always data1
>>
>>       default is sync=standard
>
> Note that this will cause a substantial performance penalty, even when
> a slog device is present.  You're effectively turning *all* I/O to the
> data1 pool into synchronous I/O, even if the application didn't ask
> for it.

        Yes I am aware of this.

        We have a special requirement that demands data integrity, and we are
willing to trade performance for it.
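
        For reference, the change discussed above amounts to the commands below.
The zfs get line is only a verification step added here for completeness, and
the revert command is commented out since we intend to keep sync=always:

        zfs set sync=always data1
        zfs get sync data1             # VALUE should read "always", SOURCE "local"
        # zfs set sync=standard data1  # revert to the default behaviour later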

     thanx - steve

       Here are some numbers (zpool iostat -v output):

                                capacity     operations    bandwidth
pool                         alloc   free   read  write   read  write
---------------------------  -----  -----  -----  -----  -----  -----
data1                        10.2T  16.6T      0  4.74K      0  79.8M
  raidz2                     10.2T  16.6T      0  1.60K      0  35.7M
    c5t5000C500525BFB31d0s0      -      -      0     55      0  5.74M
    c5t5000C500525C2F91d0s0      -      -      0     55      0  5.73M
    c5t5000C500525C72B1d0s0      -      -      0     55      0  5.73M
    c5t5000C500525C9673d0s0      -      -      0     55      0  5.73M
    c5t5000C500489956C1d0s0      -      -      0     55      0  5.73M
    c5t5000C500525EB2B9d0s0      -      -      0     54      0  5.73M
    c5t5000C500525F2297d0s0      -      -      0     54      0  5.73M
    c5t5000C50045561CEAd0s0      -      -      0     55      0  5.73M
    c5t5000C50048990BE6d0s0      -      -      0     55      0  5.73M
    c5t5000C500489947A8d0s0      -      -      0     54      0  5.74M
logs                             -      -      -      -      -      -
  mirror                     1.53G   146G      0  3.14K      0  44.1M
    c1t2d0                       -      -      0  3.14K      0  44.1M
    c1t4d0                       -      -      0  3.14K      0  44.1M
cache                            -      -      -      -      -      -
  c1t5d0                      111G  48.5K      0    297      0  30.7M
---------------------------  -----  -----  -----  -----  -----  -----

                                capacity     operations    bandwidth
pool                         alloc   free   read  write   read  write
---------------------------  -----  -----  -----  -----  -----  -----
data1                        10.2T  16.6T      0  5.77K      0  83.0M
  raidz2                     10.2T  16.6T      0  2.04K      0  39.4M
    c5t5000C500525BFB31d0s0      -      -      0     70      0  6.60M
    c5t5000C500525C2F91d0s0      -      -      0     70      0  6.60M
    c5t5000C500525C72B1d0s0      -      -      0     70      0  6.60M
    c5t5000C500525C9673d0s0      -      -      0     69      0  6.60M
    c5t5000C500489956C1d0s0      -      -      0     69      0  6.60M
    c5t5000C500525EB2B9d0s0      -      -      0     70      0  6.59M
    c5t5000C500525F2297d0s0      -      -      0     70      0  6.60M
    c5t5000C50045561CEAd0s0      -      -      0     70      0  6.59M
    c5t5000C50048990BE6d0s0      -      -      0     69      0  6.59M
    c5t5000C500489947A8d0s0      -      -      0     69      0  6.59M
logs                             -      -      -      -      -      -
  mirror                     2.40G   146G      0  3.72K      0  43.6M
    c1t2d0                       -      -      0  3.72K      0  43.6M
    c1t4d0                       -      -      0  3.72K      0  43.6M
cache                            -      -      -      -      -      -
  c1t5d0                      111G    99K      0    264  4.13K  26.0M
---------------------------  -----  -----  -----  -----  -----  -----

                                capacity     operations    bandwidth
pool                         alloc   free   read  write   read  write
---------------------------  -----  -----  -----  -----  -----  -----
data1                        10.2T  16.6T      0  4.48K  1.58K  90.1M
  raidz2                     10.2T  16.6T      0  1.80K  1.58K  38.5M
    c5t5000C500525BFB31d0s0      -      -      0     63      0  6.28M
    c5t5000C500525C2F91d0s0      -      -      0     62    808  6.28M
    c5t5000C500525C72B1d0s0      -      -      0     62      0  6.29M
    c5t5000C500525C9673d0s0      -      -      0     62    808  6.29M
    c5t5000C500489956C1d0s0      -      -      0     62      0  6.28M
    c5t5000C500525EB2B9d0s0      -      -      0     62      0  6.28M
    c5t5000C500525F2297d0s0      -      -      0     61      0  6.29M
    c5t5000C50045561CEAd0s0      -      -      0     62      0  6.29M
    c5t5000C50048990BE6d0s0      -      -      0     62      0  6.29M
    c5t5000C500489947A8d0s0      -      -      0     63      0  6.29M
logs                             -      -      -      -      -      -
  mirror                     2.61G   145G      0  2.68K      0  51.6M
    c1t2d0                       -      -      0  2.68K      0  51.6M
    c1t4d0                       -      -      0  2.68K      0  51.6M
cache                            -      -      -      -      -      -
  c1t5d0                      111G    80K      0    292    303  31.6M
---------------------------  -----  -----  -----  -----  -----  -----
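
        (Samples like the above can be reproduced with something along the lines
of the command below; the 5-second interval is an assumption, the original
capture interval isn't stated.)

        zpool iostat -v data1 5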


>
> A better solution to test ZIL would be to leave the sync property as
> default but run a test that does synchronous I/O.
>
> Eric
>
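
        One command-line way to generate genuinely synchronous writes without
touching the pool-wide default is to scope sync=always to a throwaway dataset,
as sketched below. This is only an approximation of Eric's suggestion (the more
direct route is an application that opens its files with O_DSYNC or calls
fsync()), and the dataset name, the /data1 mountpoint and the dd sizes are
assumptions:

        zfs create data1/synctest
        zfs set sync=always data1/synctest
        dd if=/dev/zero of=/data1/synctest/testfile bs=8k count=100000
        zfs destroy data1/synctest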



