[OmniOS-discuss] zpool import strange informations
Richard Elling
richard.elling at richardelling.com
Tue Dec 2 17:38:23 UTC 2014
On Dec 2, 2014, at 7:20 AM, Filip Marvan <filip.marvan at aira.cz> wrote:
> Yes, you are probably right. But I'm wondering why that information wasn't
> overwritten when I added the whole c3t2d0 to a new mirror?
p0 is the fdisk-level device that covers the whole disk.
c3t2d0 is really a shorthand notation for c3t2d0s0, and that slice lives
inside an fdisk partition of type "Solaris2".
So you'll need to check your partitioning and see exactly where the blocks are
allocated to solve the mystery.
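A quick way to look (just a sketch -- the device names below assume your
current layout, so adjust to taste):

   # dump the fdisk partition table for the whole disk
   fdisk -W - /dev/rdsk/c3t2d0p0

   # show the VTOC/EFI slice map inside the Solaris2 partition
   prtvtoc /dev/rdsk/c3t2d0s0

   # print any ZFS labels still sitting on the p0 device
   zdb -l /dev/rdsk/c3t2d0p0

If zdb finds a label on p0 that names a pool you never created here, you've
found where the stale metadata lives.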
> Is it possible to use
> c3t2d0 in one pool, and c3t2d0p0 in another?
In general, no. The p0 device overlaps whatever fdisk partition is being used
for c3t2d0 -- an invitation to data corruption.
In general, it is considered bad practice to use p0 (or any p*) for ZFS pool creation.
There is a darn good reason the ZFS docs refer to disks as c3t2d0 -- if you do this
consistently, then you'll never have partition nightmares.
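For instance (a sketch only -- the pool names here are hypothetical):

   # good: hand ZFS the whole disk; it writes its own EFI label
   # and places the pool data in s0
   zpool create newpool mirror c3t2d0 c3t3d0

   # asking for trouble: p0 spans the entire disk, overlapping
   # whatever fdisk partitions and slices are already on it
   zpool create badpool c3t2d0p0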
HTH
-- richard
>
> Thank you,
> Filip
>
> -----Original Message-----
> From: Zach Malone [mailto:zmalone at omniti.com]
> Sent: Tuesday, December 02, 2014 4:12 PM
> To: Filip Marvan
> Cc: omnios-discuss at lists.omniti.com
> Subject: Re: [OmniOS-discuss] zpool import strange informations
>
> If I were to guess, c3t2d0p0 was once part of another pool called "tank",
> and when you run an import, ZFS attempts to reassemble the rest of tank and
> sees that a drive named c3t5d0 used to be in the pool it knew as tank.
>
> Tank is a pretty standard pool name, so I'm guessing the drive used to be in
> another system, or the system itself used to be running a different OS.
> --Zach
>
> On Tue, Dec 2, 2014 at 9:31 AM, Filip Marvan <filip.marvan at aira.cz> wrote:
>> Hi,
>>
>> I have one storage server on OmniOS 12 with one data pool:
>>
>>   pool: raid_10
>>  state: ONLINE
>>   scan: none requested
>> config:
>>
>>         NAME        STATE     READ WRITE CKSUM
>>         raid_10     ONLINE       0     0     0
>>           mirror-0  ONLINE       0     0     0
>>             c3t2d0  ONLINE       0     0     0
>>             c3t3d0  ONLINE       0     0     0
>>           mirror-1  ONLINE       0     0     0
>>             c3t4d0  ONLINE       0     0     0
>>             c3t5d0  ONLINE       0     0     0
>>         logs
>>           c3t0d0    ONLINE       0     0     0
>>
>> errors: No known data errors
>>
>>   pool: rpool
>>  state: ONLINE
>>   scan: none requested
>> config:
>>
>>         NAME        STATE     READ WRITE CKSUM
>>         rpool       ONLINE       0     0     0
>>           c3t1d0s0  ONLINE       0     0     0
>>
>> errors: No known data errors
>>
>> There are no other disks in this storage server. But if I run
>> zpool import, I see:
>>
>>    pool: tank
>>      id: 4916680739993977334
>>   state: UNAVAIL
>>  status: The pool was last accessed by another system.
>>  action: The pool cannot be imported due to damaged devices or data.
>>     see: http://illumos.org/msg/ZFS-8000-EY
>>  config:
>>
>>         tank                     UNAVAIL  insufficient replicas
>>           raidz2-1               UNAVAIL  insufficient replicas
>>             /dev/label/mfront8   UNAVAIL  cannot open
>>             /dev/label/mfront9   UNAVAIL  cannot open
>>             /dev/label/mfront10  UNAVAIL  cannot open
>>             c3t2d0p0             ONLINE
>>             /dev/label/mfront12  UNAVAIL  cannot open
>>             /dev/label/mfront13  UNAVAIL  cannot open
>>             /dev/label/mfront14  UNAVAIL  cannot open
>>             c3t5d0               ONLINE
>>
>> I never created any such "tank" pool. Where is the information about this
>> (un)importable pool stored, and how can I delete it? Why does c3t5d0
>> appear to be part of two different pools at the same time?
>>
>> Thank you for any help.
>>
>> Filip Marvan