[OmniOS-discuss] zpool import strange informations
Richard Elling
richard.elling at richardelling.com
Wed Dec 3 17:02:08 UTC 2014
On Dec 3, 2014, at 6:53 AM, Filip Marvan <filip.marvan at aira.cz> wrote:
> Hi,
>
> thank you for the clarification. Yes, it is possible that these disks were
> used in some ZFS pool before; they aren't new and I don't know their history.
>
> So I checked both the c3t2d0 and c3t5d0 disks (the disks that are "online" in
> the importable "tank" pool) and found nothing strange.
>
> Partition  Status  Type          Start  End     Length  %
> =========  ======  ============  =====  ======  ======  ===
>         1          EFI               0   60800   60801  100
>
> There is only one partition on each disk, which should be online and used by
> my "raid_10" pool (as I posted before). So do you have any idea where the
> information that they are part of some other pool is stored on the disks, and
> how I can get rid of it?
Try this instead:
for i in /dev/dsk/c3t2d0*
do
    echo $i
    zdb -l $i
done
This will show you the ZFS labels on each and every slice/partition for the disk.
Ideally, you'll only see one set of ZFS labels for each disk.
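If a stale "tank" label does turn up on a slice or partition that raid_10 is not
using, and your version of zpool has the labelclear subcommand, it can be wiped
with that. This is only a sketch -- the device name below is an example, and you
must be certain the partition really is unused before running it, because
labelclear destroys the label:

    # make sure the partition is not part of any imported pool first
    zpool status | grep c3t2d0p0
    # then wipe the stale ZFS label from that partition only
    zpool labelclear -f /dev/dsk/c3t2d0p0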
-- richard
>
> Thank you very much,
> Filip
>
>
> -----Original Message-----
> From: Richard Elling [mailto:richard.elling at richardelling.com]
> Sent: Tuesday, December 02, 2014 6:38 PM
> To: Filip Marvan
> Cc: Zach Malone; omnios-discuss at lists.omniti.com
> Subject: Re: [OmniOS-discuss] zpool import strange informations
>
>
> On Dec 2, 2014, at 7:20 AM, Filip Marvan <filip.marvan at aira.cz> wrote:
>
>> Yes, you are probably right. But I'm wondering why that information
>> wasn't overwritten when I added the whole c3t2d0 to a new mirror.
>
> p0 is an fdisk partition for the whole disk.
> c3t2d0 is really a shorthand notation for c3t2d0s0.
> c3t2d0 also sits inside an fdisk partition of type "Solaris2".
>
> So you'll need to check your partitioning and see where exactly the blocks
> are allocated to solve the mystery.
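>
> For example, something along these lines (a sketch only; the device paths
> assume the disk in question is c3t2d0):
>
>     # dump the fdisk partition table for the whole disk to stdout
>     fdisk -W - /dev/rdsk/c3t2d0p0
>     # print the VTOC/EFI label and slice layout
>     prtvtoc /dev/rdsk/c3t2d0s0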
>
>> Is it possible to use
>> c3t2d0 in one pool, and c3t2d0p0 in another?
>
> In general, no. The p0 fdisk partition overlaps whatever fdisk partition is
> being used for c3t2d0 -- an invitation to data corruption.
>
> In general, it is considered bad practice to use p0 (or any p*) for ZFS pool
> creation.
> There is a darn good reason the ZFS docs refer to disks as c3t2d0 -- if you
> do this consistently, then you'll never have partition nightmares.
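>
> For instance, a pool like raid_10 built strictly on whole-disk names (shown
> here only to illustrate the naming convention, not as your original command)
> would be created like this:
>
>     zpool create raid_10 mirror c3t2d0 c3t3d0 mirror c3t4d0 c3t5d0 log c3t0d0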
>
> HTH
> -- richard
>
>>
>> Thank you,
>> Filip
>>
>> -----Original Message-----
>> From: Zach Malone [mailto:zmalone at omniti.com]
>> Sent: Tuesday, December 02, 2014 4:12 PM
>> To: Filip Marvan
>> Cc: omnios-discuss at lists.omniti.com
>> Subject: Re: [OmniOS-discuss] zpool import strange informations
>>
>> If I were to guess, c3t2d0p0 was once part of another pool called
>> "tank", and when you do an import, ZFS attempts to reassemble the
>> rest of tank and sees that a drive named c3t5d0 used to be in the pool it
>> knew as tank.
>>
>> Tank is a pretty standard pool name, so I'm guessing the drive used to
>> be in another system, or the system itself used to be running a different
>> OS.
>> --Zach
>>
>> On Tue, Dec 2, 2014 at 9:31 AM, Filip Marvan <filip.marvan at aira.cz> wrote:
>>> Hi,
>>>
>>>
>>>
>>> I have one storage server on OmniOS 12 with one data pool:
>>>
>>>
>>>
>>>   pool: raid_10
>>>  state: ONLINE
>>>   scan: none requested
>>> config:
>>>
>>>         NAME        STATE     READ WRITE CKSUM
>>>         raid_10     ONLINE       0     0     0
>>>           mirror-0  ONLINE       0     0     0
>>>             c3t2d0  ONLINE       0     0     0
>>>             c3t3d0  ONLINE       0     0     0
>>>           mirror-1  ONLINE       0     0     0
>>>             c3t4d0  ONLINE       0     0     0
>>>             c3t5d0  ONLINE       0     0     0
>>>         logs
>>>           c3t0d0    ONLINE       0     0     0
>>>
>>> errors: No known data errors
>>>
>>>   pool: rpool
>>>  state: ONLINE
>>>   scan: none requested
>>> config:
>>>
>>>         NAME        STATE     READ WRITE CKSUM
>>>         rpool       ONLINE       0     0     0
>>>           c3t1d0s0  ONLINE       0     0     0
>>>
>>> errors: No known data errors
>>>
>>>
>>>
>>> There are no other disks in this storage server. But if I run the
>>> command "zpool import", I can see:
>>>
>>>   pool: tank
>>>     id: 4916680739993977334
>>>  state: UNAVAIL
>>> status: The pool was last accessed by another system.
>>> action: The pool cannot be imported due to damaged devices or data.
>>>    see: http://illumos.org/msg/ZFS-8000-EY
>>> config:
>>>
>>>         tank                      UNAVAIL  insufficient replicas
>>>           raidz2-1                UNAVAIL  insufficient replicas
>>>             /dev/label/mfront8    UNAVAIL  cannot open
>>>             /dev/label/mfront9    UNAVAIL  cannot open
>>>             /dev/label/mfront10   UNAVAIL  cannot open
>>>             c3t2d0p0              ONLINE
>>>             /dev/label/mfront12   UNAVAIL  cannot open
>>>             /dev/label/mfront13   UNAVAIL  cannot open
>>>             /dev/label/mfront14   UNAVAIL  cannot open
>>>             c3t5d0                ONLINE
>>>
>>>
>>>
>>> I never created such a "tank" pool. Where is the information about this
>>> (un)importable pool stored, and how can I delete it? Why does c3t5d0 seem
>>> to be part of two different pools at the same time?
>>>
>>>
>>>
>>> Thank you for any help.
>>>
>>> Filip Marvan
>>>
>>>
>>> _______________________________________________
>>> OmniOS-discuss mailing list
>>> OmniOS-discuss at lists.omniti.com
>>> http://lists.omniti.com/mailman/listinfo/omnios-discuss
>>>
>> _______________________________________________
>> OmniOS-discuss mailing list
>> OmniOS-discuss at lists.omniti.com
>> http://lists.omniti.com/mailman/listinfo/omnios-discuss
>