[OmniOS-discuss] ZFS Questions
"Daniel D. Gonçalves"
daniel at dgnetwork.com.br
Thu Aug 22 20:20:02 UTC 2013
Thanks Saso,
To stop the RESILVER, which device do I set to OFFLINE?
I don't know how the device "c17t33d1" ended up in
mirror-1/replacing-1; how do I remove it from there?
Daniel
On 22/08/2013 16:17, Saso Kiselkov wrote:
> On 8/22/13 8:00 PM, "Daniel D. Gonçalves" wrote:
>> Hi all,
>>
>> I have many questions about the ZFS on OmniOS:
> Hi Daniel,
>
>> 1) Is it possible to stop the RESILVER?
> Offline the device.
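For reference, the generic form is "zpool offline <pool> <device>", with "zpool online" to bring the disk back into service. The commands below are only a sketch, using device names from the status output further down; whether ZFS accepts the offline depends on the redundancy that remains in that vdev:

    # take the resilver target out of service (add -t to make it temporary until reboot)
    zpool offline STORAGE01 c17t33d1
    # bring it back later
    zpool online STORAGE01 c17t33d1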
>
>> 2) I cannot remove the FAULTED device:
>>
>> " c17t22d1 FAULTED 0 0 0 corrupted data"
>>
>> root at storage01:~# zpool offline STORAGE01 c17t22d1
>> cannot offline c17t22d1: no valid replicas
>> root at storage01:~# zpool detach STORAGE01 c17t22d1
>> cannot detach c17t22d1: no valid replicas
>> root at storage01:~# zpool remove STORAGE01 c17t22d1
>> cannot remove c17t22d1: only inactive hot spares, cache, top-level, or
>> log devices can be removed
>> root at storage01:~#
> Detach only works on vdevs in a mirror. If your disk is in a raidz or is
> in no redundancy group at all (aka a top-level vdev), you have to replace
> it. ZFS doesn't at the moment allow for removing drives in ways that
> reduce the total capacity of the pool.
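A sketch of such a replacement, assuming a hypothetical fresh disk named c17t99d1 (not one of the devices in the pool below):

    # swap the faulted disk for a new one (c17t99d1 is a placeholder name);
    # ZFS then resilvers onto the replacement device
    zpool replace STORAGE01 c17t22d1 c17t99d1

A replacing-N vdev itself behaves like a temporary mirror, so detaching the new half (zpool detach STORAGE01 c17t33d1) would normally cancel an in-progress replacement, though ZFS may refuse that as well while it believes the data has no other valid replica.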
>
>> 3) The "replacing-1" status never changes; what should I do?
>>
>> Below, the status of my POOL:
>>
>> root at storage01:~# zpool status
>> pool: STORAGE01
>> state: DEGRADED
>> status: One or more devices is currently being resilvered. The pool will
>> continue to function, possibly in a degraded state.
>> action: Wait for the resilver to complete.
>> scan: resilver in progress since Thu Aug 22 14:28:34 2013
>> 696G scanned out of 18.3T at 113M/s, 45h31m to go
>> 166G resilvered, 3.71% done
>> config:
>>
>> NAME             STATE     READ WRITE CKSUM
>> STORAGE01        DEGRADED     0     0     0
>>   mirror-0       ONLINE       0     0     0
>>     c17t36d1     ONLINE       0     0     0
>>     c17t19d1     ONLINE       0     0     0
>>   mirror-1       DEGRADED     0     0     0
>>     c17t24d1     ONLINE       0     0     0  (resilvering)
>>     replacing-1  DEGRADED     0     0     0
>>       c17t22d1   FAULTED      0     0     0  corrupted data
>>       c17t33d1   ONLINE       0     0     0  (resilvering)
>>     c17t21d1     ONLINE       0     0     0  (resilvering)
>>   mirror-2       ONLINE       0     0     0
>>     c17t18d1     ONLINE       0     0     0
>>     c17t17d1     ONLINE       0     0     0
>>   mirror-3       ONLINE       0     0     0
>>     c17t20d1     ONLINE       0     0     0
>>     c17t22d1     ONLINE       0     0     0
>>   mirror-5       ONLINE       0     0     0
>>     c17t25d1     ONLINE       0     0     0
>>     c17t27d1     ONLINE       0     0     0
>>   mirror-6       ONLINE       0     0     0
>>     c17t26d1     ONLINE       0     0     0
>>     c17t28d1     ONLINE       0     0     0
>>   mirror-7       ONLINE       0     0     0
>>     c17t29d1     ONLINE       0     0     0
>>     c17t31d1     ONLINE       0     0     0
>>   mirror-8       ONLINE       0     0     0
>>     c17t32d1     ONLINE       0     0     0
>>     c17t30d1     ONLINE       0     0     0
>>   mirror-9       ONLINE       0     0     0
>>     c17t23d1     ONLINE       0     0     0
>>     c17t34d1     ONLINE       0     0     0
>> logs
>>   mirror-4       ONLINE       0     0     0
>>     c14t1d0      ONLINE       0     0     0
>>     c14t3d0      ONLINE       0     0     0
>> cache
>>   c14t4d0        ONLINE       0     0     0
>>
>> errors: 11 data errors, use '-v' for a list
>>
>> pool: rpool
>> state: ONLINE
>> scan: none requested
>> config:
>>
>> NAME         STATE     READ WRITE CKSUM
>> rpool        ONLINE       0     0     0
>>   c14t5d0s0  ONLINE       0     0     0
>>
>> errors: No known data errors
>> root at storage01:~#
> You somehow managed to damage your pool by shuffling devices around too
> liberally, leaving you with both halves of the mirror in a bad state.
> It's quite difficult to tell at this stage what's going on. Let it run
> for a while and you'll see if it's making any progress. By the looks of
> it, it should take another 45 hours to complete (large pools take a long
> time to resilver; that's just a fact of life).
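A few standard commands for keeping an eye on it (nothing specific to this pool, and the clear/scrub step only makes sense once the resilver has actually finished):

    # watch resilver progress and list the files behind the 11 data errors
    zpool status -v STORAGE01
    # after the resilver completes: reset the error counters, then verify the pool
    zpool clear STORAGE01
    zpool scrub STORAGE01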
>