[OmniOS-discuss] 2x actual disk quantity
Hafiz Rafibeyli
rafibeyli at gmail.com
Mon Dec 2 14:19:50 UTC 2013
Saso, I do not see my 4 newly added disks in the "mpathadm list lu" output.
Here is my output:
~# mpathadm list lu
/dev/rdsk/c1t5000C5005600A6B3d0s2
        Total Path Count: 2
        Operational Path Count: 2
/dev/rdsk/c1t5001517BB2747BE7d0s2
        Total Path Count: 2
        Operational Path Count: 2
/dev/rdsk/c1t5001517959627219d0s2
        Total Path Count: 2
        Operational Path Count: 2
/dev/rdsk/c1t5000C50055A607E3d0s2
        Total Path Count: 2
        Operational Path Count: 2
/dev/rdsk/c1t5001517BB2AFB592d0s2
        Total Path Count: 1
        Operational Path Count: 1
/dev/rdsk/c1t5000C50041E9D9A7d0s2
        Total Path Count: 2
        Operational Path Count: 2
/dev/rdsk/c1t5000C5004253FF87d0s2
        Total Path Count: 2
        Operational Path Count: 2
/dev/rdsk/c1t5000C50055A62F57d0s2
        Total Path Count: 2
        Operational Path Count: 2
/dev/rdsk/c1t5001517803D007D8d0s2
        Total Path Count: 1
        Operational Path Count: 1
/dev/rdsk/c1t5000C5005600B43Bd0s2
        Total Path Count: 2
        Operational Path Count: 2
/dev/rdsk/c1t5000C50041F1A5EFd0s2
        Total Path Count: 2
        Operational Path Count: 2
/dev/rdsk/c1t5000C50055A628EFd0s2
        Total Path Count: 2
        Operational Path Count: 2
These disks are from my first ZFS pool, which is now in production:
NAME                       STATE     READ WRITE CKSUM   CAP     Product
zpool1                     ONLINE       0     0     0
  raidz1-0                 ONLINE       0     0     0
    c1t5000C50041E9D9A7d0  ONLINE       0     0     0   3 TB    ST33000650SS
    c1t5000C50041F1A5EFd0  ONLINE       0     0     0   3 TB    ST33000650SS
    c1t5000C5004253FF87d0  ONLINE       0     0     0   3 TB    ST33000650SS
    c1t5000C50055A607E3d0  ONLINE       0     0     0   3 TB    ST33000650SS
    c1t5000C50055A628EFd0  ONLINE       0     0     0   3 TB    ST33000650SS
    c1t5000C50055A62F57d0  ONLINE       0     0     0   3 TB    ST33000650SS
logs
  mirror-1                 ONLINE       0     0     0
    c1t5001517959627219d0  ONLINE       0     0     0   32 GB   SSDSA2SH032G1GN
    c1t5001517BB2747BE7d0  ONLINE       0     0     0   32 GB   SSDSA2SH032G1GN
cache
  c1t5001517803D007D8d0    ONLINE       0     0     0   120 GB  INTEL SSDSC2CW12
  c1t5001517BB2AFB592d0    ONLINE       0     0     0   120 GB  INTEL SSDSC2CW12
spares
  c1t5000C5005600A6B3d0    AVAIL                        3 TB    ST33000650SS
  c1t5000C5005600B43Bd0    AVAIL                        3 TB    ST33000650SS
Is multipathing disabled on my system? And if I enable multipathing, will I lose my pool because the disk names will change?
Will "zpool import" work with the new disk names so that I can get my pool and data back?
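For reference, this is the rough sequence I had in mind (only a sketch of my assumption, not something I have tested; zpool1 is the pool shown above, and I am assuming stmsboot is the right tool for toggling MPxIO):

~# zpool export zpool1   # stop using the pool under the old device names
~# stmsboot -e           # enable MPxIO on supported controllers; asks for a reboot
~# zpool import zpool1   # ZFS should find the disks again by their on-disk labels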
regards
Hafiz.
You should use neither. The underlying disk paths are automatically
managed by the scsi_vhci driver. You need to consult "mpathadm list lu"
and use the device names printed there as your ZFS device nodes, for
example (using the example from
http://docs.oracle.com/cd/E19253-01/820-1931/agkax/index.html):
# mpathadm list lu
/dev/rdsk/c4t60020F20000035AF4267CCCB0002CEE2d0s2
        Total Path Count: 2
        Operational Path Count: 2
...
Here, c4t60020F20000035AF4267CCCB0002CEE2d0 is the correct disk node, so
you should use that. If you are getting "Total Path Count: 1" and twice
the number of nodes in "mpathadm list lu", then scsi_vhci is not
detecting your disks as multi-pathed and you need to tell it to do that.
See http://docs.oracle.com/cd/E19253-01/820-1931/gfpva/index.html on how
to do that. My guess, however, is that this should not be needed: you say
you have two LSI 9211-8i HBAs, which use the mpt_sas driver, and that
driver has multipathing enabled by default.
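For instance, to see the per-path detail for a single disk, and to confirm which driver your HBAs are bound to, something along these lines should work (reusing one of the device names from your pool; output omitted):

# mpathadm show lu /dev/rdsk/c1t5000C50041E9D9A7d0s2   # detailed path listing for one LU
# prtconf -D | grep mpt                                # show devices bound to the mpt/mpt_sas drivers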
Cheers,
--
Saso