[OmniOS-discuss] 8TB Seagates under load = PANIC?

Artem Penner apenner.it at gmail.com
Fri Mar 17 11:14:57 UTC 2017


After reboot the pool is in the ONLINE state and it continues resilvering data:

root@zns2-n2:/root# zpool status pool5
  pool: pool5
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Fri Mar 17 02:04:45 2017
    213G scanned out of 5.15T at 42.4M/s, 33h55m to go
    7.15G resilvered, 4.04% done
config:

        NAME                         STATE     READ WRITE CKSUM
        pool5                        ONLINE       0     0     0
          raidz2-0                   ONLINE       0     0     0
            c9t5000CCA031039F18d0    ONLINE       0     0     0
            c9t5000CCA03103A054d0    ONLINE       0     0     0
            c9t5000CCA031048B88d0    ONLINE       0     0     0
            c9t5000CCA031039E80d0    ONLINE       0     0     0
            c9t5000CCA031048C18d0    ONLINE       0     0     0
            c9t5000CCA031039F50d0    ONLINE       0     0     0
            c9t5000CCA03104AA40d0    ONLINE       0     0     0
            c9t5000CCA03104AD44d0    ONLINE       0     0     0
            c9t5000CCA03103B274d0    ONLINE       0     0     0
            c9t5000CCA03103B2C8d0    ONLINE       0     0     0
          raidz2-1                   ONLINE       0     0     0
            c9t5000CCA03103B34Cd0    ONLINE       0     0     0
            c9t5000CCA03104AE18d0    ONLINE       0     0     0
            c9t5000CCA03104AE54d0    ONLINE       0     0     0
            spare-3                  ONLINE       0     0     0
              c9t5000CCA03103B2D4d0  ONLINE       0     0     0
              c9t5000CCA03104AD84d0  ONLINE       0     0     0  (resilvering)
            c9t5000CCA03103B26Cd0    ONLINE       0     0     0
            c9t5000CCA03103B3ACd0    ONLINE       0     0     0
            c9t5000CCA03103B338d0    ONLINE       0     0     0
            c9t5000CCA03104DFACd0    ONLINE       0     0     0
            c9t5000CCA03104ABA8d0    ONLINE       0     0     0
            c9t5000CCA03103A10Cd0    ONLINE       0     0     0
          raidz2-2                   ONLINE       0     0     0
            c9t5000CCA02C413D04d0    ONLINE       0     0     0
            c9t5000CCA03104D40Cd0    ONLINE       0     0     0
            c9t5000CCA03104D7A8d0    ONLINE       0     0     0
            c9t5000CCA03104D768d0    ONLINE       0     0     0
            c9t5000CCA031046494d0    ONLINE       0     0     0
            c9t5000CCA03104D458d0    ONLINE       0     0     0
            c9t5000CCA03104D408d0    ONLINE       0     0     0
            c9t5000CCA03104D798d0    ONLINE       0     0     0
            c9t5000CCA03103D28Cd0    ONLINE       0     0     0
            c9t5000CCA031047A30d0    ONLINE       0     0     0
          raidz2-3                   ONLINE       0     0     0
            spare-0                  ONLINE       0     0     0
              c9t5000CCA03104D2ECd0  ONLINE       0     0     0
              c9t5000CCA02C414524d0  ONLINE       0     0     0  (resilvering)
            c9t5000CCA03104D370d0    ONLINE       0     0     0
            c9t5000CCA031038248d0    ONLINE       0     0     0
            c9t5000CCA03104D448d0    ONLINE       0     0     0
            c9t5000CCA03104D81Cd0    ONLINE       0     0     0
            c9t5000CCA031038470d0    ONLINE       0     0     0
            c9t5000CCA03104D400d0    ONLINE       0     0     0
            c9t5000CCA03104D330d0    ONLINE       0     0     0
            c9t5000CCA03104D7F8d0    ONLINE       0     0     0
            c9t5000CCA031046C68d0    ONLINE       0     0     0
        logs
          mirror-4                   ONLINE       0     0     0
            c9t500003979C88FF91d0    ONLINE       0     0     0
            c9t500003973C88B6F5d0    ONLINE       0     0     0
          mirror-5                   ONLINE       0     0     0
            c9t500003979C88FF5Dd0    ONLINE       0     0     0
            c9t500003979C890019d0    ONLINE       0     0     0
        cache
          c9t5000CCA04B12952Cd0      ONLINE       0     0     0
          c9t5000CCA04B12A1D0d0      ONLINE       0     0     0
        spares
          c9t5000CCA03104AD84d0      INUSE     currently in use
          c9t5000CCA02C414524d0      INUSE     currently in use

errors: No known data errors

But the filesystem pool5/fs1, which I used for performance / crash testing, is not available:

root@zns2-n2:/root# zfs list pool5/fs1
NAME        USED  AVAIL  REFER  MOUNTPOINT
pool5/fs1  3.90T  44.1T  3.90T  /pool5/fs1

root@zns2-n2:/root# zfs mount pool5/fs1
cannot mount 'pool5/fs1': mountpoint or dataset is busy
root@zns2-n2:/root# zfs umount pool5/fs1
cannot unmount 'pool5/fs1': not currently mounted
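
The "mountpoint or dataset is busy" error usually means something is still
holding the mountpoint, or a stale mount survived the panic. A few generic
checks that may help narrow this down (a sketch only, not yet verified on
this box):

fuser /pool5/fs1           # any processes holding the mountpoint directory open?
ls -A /pool5/fs1           # was a non-empty mountpoint directory left behind?
zfs mount -O pool5/fs1     # overlay mount, to confirm the dataset itself is OK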

root@zns2-n2:/root# df -h
Filesystem                      Size  Used Avail Use% Mounted on
/usr/lib/libc/libc_hwcap1.so.1  470G  2.8G  467G   1% /lib/libc.so.1
swap                            225G  272K  225G   1% /etc/svc/volatile
swap                            225G     0  225G   0% /tmp
swap                            225G   24K  225G   1% /var/run
rpool/export                    467G   96K  467G   1% /export
rpool/export/home               467G   96K  467G   1% /export/home
pool5                            45T  220K   45T   1% /pool5
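
Since the node panicked, the panic stack should still be recoverable from the
crash dump, assuming dumpadm/savecore are enabled and the dump landed in
/var/crash/<hostname>. A rough sketch of pulling it out:

dumpadm                        # shows the configured savecore directory
cd /var/crash/`hostname`
savecore -vf vmdump.0          # expand vmdump.N into unix.N + vmcore.N
mdb unix.0 vmcore.0            # open the dump in mdb, then at the mdb prompt:
  > ::status                   # panic string
  > ::msgbuf                   # console messages leading up to the panic
  > $C                         # stack trace of the panicking thread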




On Fri, 17 Mar 2017 at 11:44, Artem Penner <apenner.it at gmail.com> wrote:

> Hi,
> we have a similar problem. We created a pool with the following configuration:
> 4 x Toshiba PX04SHB020 for the ZFS slog (two mirrors)
> 2 x HGST HUSMH8040BSS204 for L2ARC
> 4 raidz2 groups (8D + 2P)
>
> and when we perform crash tests (removing disks), OmniOS sometimes goes into
> a kernel panic:
> [image: pasted1]
>
> Any ideas?
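>
> When a pulled disk takes the node down, FMA normally records the events
> leading up to it, so it is worth checking after the next crash test (a
> sketch, assuming the default fmd setup):
>
> fmadm faulty        # resources currently flagged as faulted
> fmdump -e           # error telemetry (scsi/zfs transport ereports)
> fmdump -eV          # the same, with full ereport detail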
>
> -------------------------------------------------
> My configuration is:
> *FILE: /etc/system*
>
> * Disable cpu c-state
> * Change scheduler frequency
> set idle_cpu_prefer_mwait = 0
> set idle_cpu_no_deep_c = 1
> set hires_tick=1
>
> * Scanning for iBFT may conflict with devices which use memory
> * in 640-1024KB of physical address space. The iBFT
> * specification suggests use of low RAM method - scanning
> * physical memory 512-1024 KB for iBFT table.
> * Disable low RAM method.
> set ibft_noprobe=1
>
> * SECURITY PARAMETERS
> set noexec_user_stack=1
> set noexec_user_stack_log=1
>
> * IP / TCP Parameters
> set ip:ip_squeue_fanout=1
> set ndd:tcp_wscale_always = 1
> set ndd:tstamp_if_wscale = 1
> set ndd:tcp_max_buf = 166777216
>
> * APIC
> set pcplusmp:apic_panic_on_nmi=1
> set apix:apic_panic_on_nmi=1
>
> * Kernel DUMP
> set dump_plat_mincpu=0
> set dump_bzip2_level=1
> set dump_metrics_on=1
>
> * SATA
> set sata:sata_auto_online=1
>
> * SAS
> set sd:sd_io_time=10
>
> * Max open files by single process
> set rlim_fd_max = 131072
> set rlim_fd_cur = 65536
>
> * Maximum size of physical I/O requests
> set maxphys=1048576
>
> **
> *  NFS Tuning
> **
> set nfs:nfs_allow_preepoch_time = 1
>
> * Maximum I/O threads (client).
> set nfs:nfs3_max_threads = 256
> set nfs:nfs4_max_threads = 256
>
> * Read ahead operations by the NFS client.
> set nfs:nfs3_nra = 32
> set nfs:nfs4_nra = 32
>
> * Logical block size used by the NFS client
> set nfs:nfs3_bsize = 1048576
> set nfs:nfs4_bsize = 1048576
>
> * Controls the maximum size of the data portion of an NFS client request
> set nfs3:max_transfer_size = 1048576
> set nfs4:max_transfer_size = 1048576
>
> * Controls the mix of asynchronous requests that are generated by the NFS
> version 4 client
> set nfs:nfs4_async_clusters = 256
>
> * RPC Tuning
> * Sets the size of the kernel stack for kernel RPC service threads
> set rpcmod:svc_default_stksize=0x6000
>
> * Controls the size of the duplicate request cache that detects RPC
> *   level retransmissions on connection oriented transports
> set rpcmod:cotsmaxdupreqs = 4096
>
> * Controls the size of the duplicate request cache that detects RPC
> *   level retransmissions on connectionless transports
> set rpcmod:maxdupreqs = 4096
>
> * Controls the number of TCP connections that the NFS client uses when
> communicating with each
> *   NFS server. The kernel RPC is constructed so that it can multiplex
> RPCs over a single connection
> set rpcmod:clnt_max_conns = 8
>
> * ZFS Tuning
> set zfs:zfs_dirty_data_max = 0x300000000
> set zfs:zfs_txg_timeout = 0xa
> set zfs:zfs_dirty_data_sync = 0x200000000
> set zfs:zfs_arc_max = 0x3480000000
> set zfs:zfs_arc_shrink_shift=11
> set zfs:l2arc_write_max = 0x9600000
> set zfs:l2arc_write_boost = 0xC800000
> set zfs:l2arc_headroom = 12
> set zfs:l2arc_norw=0
> set zfs:zfs_arc_shrink_shift=11
>
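> To confirm the ZFS tunables above actually took effect after boot, the live
> values can be read back with mdb (a quick sketch):
>
> echo "zfs_txg_timeout/D" | mdb -k         # 4-byte decimal
> echo "zfs_dirty_data_max/E" | mdb -k      # 8-byte decimal
> echo "zfs_arc_max/E" | mdb -k
>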
> -----------------------------------------------------
> *FILE: /kernel/drv/ixgbe.conf*
> flow_control = 0;
> tx_ring_size = 4096;
> rx_ring_size = 4096;
> tx_queue_number = 16;
> rx_queue_number = 16;
> rx_limit_per_intr = 1024;
> tx_copy_threshold = 1024;
> rx_copy_threshold = 512;
> rx_group_number = 8;
> mr_enable = 0;
> intr_throttling = 0;
>
> -----------------------------------------------------
> sd-config-list from* /kernel/drv/sd.conf*
> sd-config-list=
>     "",
> "retries-timeout:1,retries-busy:1,retries-reset:1,retries-victim:2",
>     "DELL    PERC H710", "cache-nonvolatile:true",
>     "DELL    PERC H700", "cache-nonvolatile:true",
>     "DELL    PERC/6i", "cache-nonvolatile:true",
>     "ATA     Samsung SSD 830", "physical-block-size:4096",
>     "ATA     Samsung SSD 840", "physical-block-size:4096",
>     "ATA     Samsung SSD 850", "physical-block-size:4096",
>     "HGST    HUH721008AL4204", "physical-block-size:4096",
>     "HGST    HUH721010AL5204", "physical-block-size:4096",
>     "HGST    HUH721008AL5204", "physical-block-size:4096",
>     "HGST    HUH721010AL4204", "physical-block-size:4096",
>     "HGST    HUH728080AL4204", "physical-block-size:4096",
>     "HGST    HUH728060AL4204", "physical-block-size:4096",
>     "HGST    HUH728080AL5204", "physical-block-size:4096",
>     "HGST    HUH728060AL5204", "physical-block-size:4096",
>     "HGST    HUS724030ALS640", "physical-block-size:4096",
>     "HGST    HMH7210A0AL4604", "physical-block-size:4096",
>     "HGST    HUS726060AL4214", "physical-block-size:4096",
>     "HGST    HUS726050AL4214", "physical-block-size:4096",
>     "HGST    HUS726040AL4214", "physical-block-size:4096",
>     "HGST    HUS726020AL4214", "physical-block-size:4096",
>     "HGST    HUS726060AL5214", "physical-block-size:4096",
>     "HGST    HUS726050AL5214", "physical-block-size:4096",
>     "HGST    HUS726040AL5214", "physical-block-size:4096",
>     "HGST    HUS726020AL5214", "physical-block-size:4096",
>     "HGST    HUS726060ALS214", "physical-block-size:4096",
>     "HGST    HUS726050ALS214", "physical-block-size:4096",
>     "HGST    HUS726040ALS214", "physical-block-size:4096",
>     "HGST    HUS726020ALS214", "physical-block-size:4096",
>     "HGST    HUC101818CS4204", "physical-block-size:4096",
>     "HGST    HUC101812CS4204", "physical-block-size:4096",
>     "HGST    HUC101890CS4204", "physical-block-size:4096",
>     "HGST    HUC101860CS4204", "physical-block-size:4096",
>     "HGST    HUC101845CS4204", "physical-block-size:4096",
>     "HGST    HUC156060CS4204", "physical-block-size:4096",
>     "HGST    HUC156045CS4204", "physical-block-size:4096",
>     "HGST    HUC156030CS4204", "physical-block-size:4096",
>     "HGST    HUSMH8010BSS204",
> "throttle-max:32,disksort:false,cache-nonvolatile:true,power-condition:false,physical-block-size:4096",
>     "HGST    HUSMH8020BSS204",
> "throttle-max:32,disksort:false,cache-nonvolatile:true,power-condition:false,physical-block-size:4096",
>     "HGST    HUSMH8040BSS204",
> "throttle-max:32,disksort:false,cache-nonvolatile:true,power-condition:false,physical-block-size:4096",
>     "HGST    HUSMH8080BSS204",
> "throttle-max:32,disksort:false,cache-nonvolatile:true,power-condition:false,physical-block-size:4096",
>     "HGST    HUSMM1616ASS204",
> "throttle-max:32,disksort:false,cache-nonvolatile:true,power-condition:false,physical-block-size:4096",
>     "HGST    HUSMM1680ASS204",
> "throttle-max:32,disksort:false,cache-nonvolatile:true,power-condition:false,physical-block-size:4096",
>     "HGST    HUSMM1640ASS204",
> "throttle-max:32,disksort:false,cache-nonvolatile:true,power-condition:false,physical-block-size:4096",
>     "HGST    HUSMM1620ASS204",
> "throttle-max:32,disksort:false,cache-nonvolatile:true,power-condition:false,physical-block-size:4096",
>     "TOSHIBA PX04SHB160",
> "throttle-max:32,disksort:false,cache-nonvolatile:true,power-condition:false,physical-block-size:4096",
>     "TOSHIBA PX04SHB080",
> "throttle-max:32,disksort:false,cache-nonvolatile:true,power-condition:false,physical-block-size:4096",
>     "TOSHIBA PX04SHB040",
> "throttle-max:32,disksort:false,cache-nonvolatile:true,power-condition:false,physical-block-size:4096",
>     "TOSHIBA PX04SHB020",
> "throttle-max:32,disksort:false,cache-nonvolatile:true,power-condition:false,physical-block-size:4096";
>
> -----------------------------------------------------
> scsi-vhci-failover-override from */kernel/drv/scsi_vhci.conf*
> scsi-vhci-failover-override =
>         "HGST    HUS724030ALS640",      "f_sym",
>         "HGST    HUSMH8010BSS204",      "f_sym",
>         "HGST    HUSMH8020BSS204",      "f_sym",
>         "HGST    HUSMH8040BSS204",      "f_sym",
>         "HGST    HUSMH8080BSS204",      "f_sym",
>         "HGST    HUSMM1616ASS204",      "f_sym",
>         "HGST    HUSMM1680ASS204",      "f_sym",
>         "HGST    HUSMM1640ASS204",      "f_sym",
>         "HGST    HUSMM1620ASS204",      "f_sym",
>         "HGST    HUH721008AL4204",      "f_sym",
>         "HGST    HUH721010AL5204",      "f_sym",
>         "HGST    HUH721008AL5204",      "f_sym",
>         "HGST    HUH721010AL4204",      "f_sym",
>         "HGST    HUH728080AL4204",      "f_sym",
>         "HGST    HUH728060AL4204",      "f_sym",
>         "HGST    HUH728080AL5204",      "f_sym",
>         "HGST    HUH728060AL5204",      "f_sym",
>         "HGST    HMH7210A0AL4604",      "f_sym",
>         "HGST    HUS726060AL4214",      "f_sym",
>         "HGST    HUS726050AL4214",      "f_sym",
>         "HGST    HUS726040AL4214",      "f_sym",
>         "HGST    HUS726020AL4214",      "f_sym",
>         "HGST    HUS726060AL5214",      "f_sym",
>         "HGST    HUS726050AL5214",      "f_sym",
>         "HGST    HUS726040AL5214",      "f_sym",
>         "HGST    HUS726020AL5214",      "f_sym",
>         "HGST    HUS726060ALS214",      "f_sym",
>         "HGST    HUS726050ALS214",      "f_sym",
>         "HGST    HUS726040ALS214",      "f_sym",
>         "HGST    HUS726020ALS214",      "f_sym",
>         "HGST    HUC101818CS4204",      "f_sym",
>         "HGST    HUC101812CS4204",      "f_sym",
>         "HGST    HUC101890CS4204",      "f_sym",
>         "HGST    HUC101860CS4204",      "f_sym",
>         "HGST    HUC101845CS4204",      "f_sym",
>         "HGST    HUC156060CS4204",      "f_sym",
>         "HGST    HUC156045CS4204",      "f_sym",
>         "HGST    HUC156030CS4204",      "f_sym",
>         "TOSHIBA PX04SHB160",           "f_sym",
>         "TOSHIBA PX04SHB080",           "f_sym",
>         "TOSHIBA PX04SHB040",           "f_sym",
>         "TOSHIBA PX04SHB020",           "f_sym";
>
>
>
[Attachment: pasted1 (image/png, 310722 bytes) — panic screenshot: <https://omniosce.org/ml-archive/attachments/20170317/5ae7f858/attachment-0001.png>]

