[OmniOS-discuss] Zpool export while resilvering?
Schweiss, Chip
chip at innovates.com
Tue Jun 9 20:25:13 UTC 2015
I went through this problem a while back. There are some gotchas in
getting them back online and the firmware upgraded. The OS will not talk
to the drive until it has its firmware upgraded or has been cleared from
the fault manager's database.
These drives will not flash with multipath enabled, either.
I ended up clearing the fault manager's database, disabling it and
disconnecting half the SAS cables to get them flashed.
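Roughly, the moving parts were these (a from-memory sketch, not an exact
transcript -- substitute the UUIDs that 'fmadm faulty' reports on your system):

  # list the active faults and note each event UUID
  fmadm faulty

  # mark a fault as repaired so the retire agent lets go of that disk
  # (the UUID below is the one from the fault report later in this thread)
  fmadm repair 710768e8-2f2b-4b3d-9d4b-a85ef5617219

  # temporarily stop fmd so it does not immediately re-retire the drives
  # while they are being flashed
  svcadm disable -t svc:/system/fmd:default

  # heavy-handed: with fmd stopped, its saved state can be cleared outright
  rm -rf /var/fm/fmd/*

  # re-enable fmd once the firmware is sorted out
  svcadm enable svc:/system/fmd:default

Multipath can also be turned off system-wide with 'stmsboot -d', but that
needs a reboot, which is why pulling one set of SAS cables was quicker.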
-Chip
On Jun 9, 2015 2:32 PM, "Robert A. Brock" <Robert.Brock at 2hoffshore.com>
wrote:
> They are failed as far as OmniOS is concerned, from what I can tell:
>
> Jun 08 01:08:54 710768e8-2f2b-4b3d-9d4b-a85ef5617219 DISK-8000-12 Major
>
> Host        : 2hus291
> Platform    : S5500BC       Chassis_id  : ............
> Product_sn  :
>
> Fault class : fault.io.disk.over-temperature
> Affects     : dev:///:devid=id1,sd@n5000c5007242271f//scsi_vhci/disk@g5000c5007242271f
>               faulted and taken out of service
> FRU         : "Slot 21"
>               (hc://:product-id=LSI-SAS2X36:server-id=:chassis-id=500304800033213f:serial=S1Z02A8M0000K4361NF4:part=SEAGATE-ST4000NM0023:revision=0003/ses-enclosure=1/bay=20/disk=0)
>               faulty
>
> Description : A disk's temperature exceeded the limits established by
>               its manufacturer.
>               Refer to http://illumos.org/msg/DISK-8000-12 for more
>               information.
>
> root@2hus291:/root# zpool status pool0
>   pool: pool0
>  state: DEGRADED
> status: One or more devices is currently being resilvered. The pool will
>         continue to function, possibly in a degraded state.
> action: Wait for the resilver to complete.
>   scan: resilver in progress since Tue Jun 9 11:11:16 2015
>         18.8T scanned out of 91.7T at 667M/s, 31h48m to go
>         591G resilvered, 20.55% done
> config:
>
>         NAME                         STATE     READ WRITE CKSUM
>         pool0                        DEGRADED     0     0     0
>           raidz2-0                   DEGRADED     0     0     0
>             c0t5000C50055ECA49Bd0    ONLINE       0     0     0
>             c0t5000C50055ECA4B3d0    ONLINE       0     0     0
>             c0t5000C50055ECA587d0    ONLINE       0     0     0
>             c0t5000C50055ECA6CFd0    ONLINE       0     0     0
>             c0t5000C50055ECA7F3d0    ONLINE       0     0     0
>             spare-5                  REMOVED      0     0     0
>               c0t5000C5007242271Fd0  REMOVED      0     0     0
>               c0t5000C50055EF8A6Fd0  ONLINE       0     0     0  (resilvering)
>             c0t5000C50055ECAB23d0    ONLINE       0     0     0
>             c0t5000C50055ECABABd0    ONLINE       0     0     0
>           raidz2-1                   ONLINE       0     0     0
>             c0t5000C50055EE9D87d0    ONLINE       0     0     0
>             c0t5000C50055EE9E43d0    ONLINE       0     0     0
>             c0t5000C50055EEA5ABd0    ONLINE       0     0     0
>             c0t5000C50055EEBA5Fd0    ONLINE       0     0     0
>             c0t5000C50055EEC1E3d0    ONLINE       0     0     0
>             c0t5000C500636670BFd0    ONLINE       0     0     0
>             c0t5000C50055EF8CBBd0    ONLINE       0     0     0
>             c0t5000C50055EF8D33d0    ONLINE       0     0     0
>           raidz2-2                   ONLINE       0     0     0
>             c0t5000C50055F7942Fd0    ONLINE       0     0     0
>             c0t5000C50055F79E03d0    ONLINE       0     0     0
>             c0t5000C50055F7A8DFd0    ONLINE       0     0     0
>             c0t5000C50055F81C1Bd0    ONLINE       0     0     0
>             c0t5000C5005604A42Bd0    ONLINE       0     0     0
>             c0t5000C5005604A487d0    ONLINE       0     0     0
>             c0t5000C5005604A74Bd0    ONLINE       0     0     0
>             c0t5000C5005604A91Bd0    ONLINE       0     0     0
>           raidz2-4                   DEGRADED     0     0     0
>             c0t5000C500562ED6A3d0    ONLINE       0     0     0
>             c0t5000C500562F8DEFd0    ONLINE       0     0     0
>             c0t5000C500562F92D7d0    ONLINE       0     0     0
>             c0t5000C500562FA0DFd0    ONLINE       0     0     0
>             c0t5000C500636679EBd0    ONLINE       0     0     0
>             spare-5                  DEGRADED     0     0    14
>               c0t5000C50057FBB127d0  REMOVED      0     0     0
>               c0t5000C5006366906Bd0  ONLINE       0     0     0
>             c0t5000C5006366808Fd0    ONLINE       0     0     0
>             spare-7                  REMOVED      0     0     0
>               c0t5000C50057FC84F3d0  REMOVED      0     0     0
>               c0t5000C50063669937d0  ONLINE       0     0     0
>         logs
>           mirror-3                   ONLINE       0     0     0
>             c13t5003048000308398d0   ONLINE       0     0     0
>             c13t5003048000308399d0   ONLINE       0     0     0
>         cache
>           c13t5E83A97000005BC3d0     ONLINE       0     0     0
>         spares
>           c0t5000C5006366906Bd0      INUSE     currently in use
>           c0t5000C50063669937d0      INUSE     currently in use
>           c0t5000C50055EF8A6Fd0      INUSE     currently in use
>           c0t5000C5006366994Bd0      AVAIL
>
> Seems to be this that’s got me:
>
> http://www.bigdatajunkie.com/index.php/10-hardware/19-seagate-constellation-es-3-firmware-0003
>
> Can’t interact with them unless I bring them ‘out of retirement’:
>
> root@2hus291:/root# cat /etc/devices/retire_store
> ▒ܱY^P"/scsi_vhci/disk at g5000c500724eacb70rio-store-version(rio-store-magic
> ▒▒`(rio-store-flagsP"/scsi_vhci/disk at g5000c50057fbc1c30rio-store-version
> (rio-store-magic▒▒
> `(rio-store-flagsP"/scsi_vhci/disk at g5000c50057fbaf030rio-store-version
> (rio-store-magic▒▒
> `(rio-store-flagsP"/scsi_vhci/disk at g5000c50057fbb1270rio-store-version
> (rio-store-magic▒▒
> `(rio-store-flagsP"/scsi_vhci/disk at g5000c50057fc84f30rio-store-version
> (rio-store-magic▒▒
> `(rio-store-flagsP"/scsi_vhci/disk at g5000c5007242271f0rio-store-version
> (rio-store-magic▒▒`(rio-store-flags
>
> Which, as I understand it, will necessitate a reboot. I’m going to try
> fwflash to see if I can get the firmware upgraded without having to pull
> all the disks and hook them up to a Windows machine – has anyone else tried
> fwflash and Seagate disk firmware with any success? Fwflash seems to be
> able to see the disks, so I’m hopeful:
>
> root@2hus291:/root# fwflash -l | grep ST4000NM0023
> Product : ST4000NM0023
> Product : ST4000NM0023
> Product : ST4000NM0023
> Product : ST4000NM0023
> Product : ST4000NM0023
> Product : ST4000NM0023
> Product : ST4000NM0023
> Product : ST4000NM0023
> Product : ST4000NM0023
> Product : ST4000NM0023
> Product : ST4000NM0023
> Product : ST4000NM0023
> Product : ST4000NM0023
> Product : ST4000NM0023
> Product : ST4000NM0023
> Product : ST4000NM0023
> Product : ST4000NM0023
> Product : ST4000NM0023
> Product : ST4000NM0023
> Product : ST4000NM0023
> Product : ST4000NM0023
> Product : ST4000NM0023
> Product : ST4000NM0023
> Product : ST4000NM0023
> Product : ST4000NM0023
> Product : ST4000NM0023
> Product : ST4000NM0023
> Product : ST4000NM0023
> Product : ST4000NM0023
> Product : ST4000NM0023
> Product : ST4000NM0023
> Product : ST4000NM0023
> Product : ST4000NM0023
>
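> For what it’s worth, flashing one of these through fwflash would presumably
> look something like this (the firmware file name is made up, and the exact
> flags and the device path -- as printed in the full 'fwflash -l' listing --
> should be double-checked against fwflash(1M) on the installed release):
>
>    # hypothetical firmware image name; device_path comes from 'fwflash -l'
>    fwflash -y -f ST4000NM0023_new.lod -d <device_path>
>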
> From: OmniOS-discuss [mailto:omnios-discuss-bounces at lists.omniti.com] On
> Behalf Of Richard Elling
> Sent: 09 June 2015 20:17
> To: Narayan Desai
> Cc: omnios-discuss at lists.omniti.com
> Subject: Re: [OmniOS-discuss] Zpool export while resilvering?
>
> On Jun 9, 2015, at 12:00 PM, Narayan Desai <narayan.desai at gmail.com>
> wrote:
>
> You might also crank up the priority on your resilver, particularly if it
> is getting tripped all of the time:
>
>
> http://broken.net/uncategorized/zfs-performance-tuning-for-scrubs-and-resilvers/
>
> -nld
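>
> For reference, the classic pre-rewrite knobs in that area can be changed at
> runtime with mdb (values below are only illustrative, and they revert at
> reboot unless also set in /etc/system):
>
>    echo zfs_resilver_delay/W0t0 | mdb -kw         # stop throttling resilver I/O
>    echo zfs_scrub_delay/W0t0 | mdb -kw            # same for scrubs
>    echo zfs_top_maxinflight/W0t64 | mdb -kw       # more in-flight scan I/Os per top-level vdev
>    echo zfs_resilver_min_time_ms/W0t5000 | mdb -kw  # spend more of each txg resilvering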
>
> In general, yes, this is a very good post. However, for more recent ZFS, and
> certainly the latest OmniOS something-14 release, the write throttle has been
> completely rewritten, positively impacting resilvers. And with that rewrite
> there are a few more tunables at your disposal, while the old ones fade to
> the bucket of bad memories :-)
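>
> As a concrete example of the newer knobs (names as in stock illumos; treat
> the values as illustrative), the scrub/resilver share of each vdev's I/O
> queue can be raised either live with mdb or persistently in /etc/system:
>
>    echo zfs_vdev_scrub_max_active/W0t5 | mdb -kw
>
>    # /etc/system equivalent, takes effect on the next boot
>    set zfs:zfs_vdev_scrub_max_active = 5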
>
> In most cases, resilver is capped by the time to write to the resilvering
> device. You can see this in iostat "-x" as the device that is 100% busy with
> a write workload.
>
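> (A quick way to watch for that is something like:
>
>    iostat -xnz 5
>
> and look for the disk pinned near 100 in the %b column with the traffic
> almost entirely in kw/s.)
>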
> That said, for this specific case, the drives are not actually failed, just
> taken offline, so you could have a short resilver session, once they are
> brought back online.
>
> -- richard
>
> >This is probably a silly question, but I've honestly never tried this
> and
> >don't have a test machine handy at the moment: can a pool be safely
> >exported and re-imported later if it is currently resilvering?
> >
> >
>
>
>
> _______________________________________________
> OmniOS-discuss mailing list
> OmniOS-discuss at lists.omniti.com
> http://lists.omniti.com/mailman/listinfo/omnios-discuss
>
>