[OmniOS-discuss] [zfs] an interesting survey -- the zpool with most disks you have ever built
Fred Liu
fred.fliu at gmail.com
Mon Mar 7 05:30:18 UTC 2016
2016-03-05 0:01 GMT+08:00 Freddie Cash <fjwcash at gmail.com>:
> On Mar 4, 2016 2:05 AM, "Fred Liu" <fred.fliu at gmail.com> wrote:
> > 2016-03-04 13:47 GMT+08:00 Freddie Cash <fjwcash at gmail.com>:
> >>
> >> Currently, I just use a simple coordinate system. Columns are letters,
> >> rows are numbers.
> >> "smartos-discuss at lists.smartos.org" <smartos-discuss at lists.smartos.org
> >、
>
> developer <developer at open-zfs.org>、
>
> illumos-developer <developer at lists.illumos.org>、
>
> omnios-discuss <omnios-discuss at lists.omniti.com>、
>
> Discussion list for OpenIndiana <openindiana-discuss at openindiana.org>、
>
> illumos-zfs <zfs at lists.illumos.org>、
>
> "zfs-discuss at list.zfsonlinux.org" <zfs-discuss at list.zfsonlinux.org>、
>
> "freebsd-fs at FreeBSD.org" <freebsd-fs at freebsd.org>、
>
> "zfs-devel at freebsd.org" <zfs-devel at freebsd.org>
>
> >> Each disk is partitioned using GPT with the first (only) partition
> >> starting at 1 MB and covering the whole disk, and labelled with the
> >> column/row where it is located (disk-a1, disk-g6, disk-p3, etc).
> >
> > [Fred]: So you manually pull out the drives one by one to locate
> > them?
>
> When putting the system together for the first time, I insert each disk
> one at a time, wait for it to be detected, partition it, then label it
> based on physical location. Then do the next one. It's just part of the
> normal server build process, whether it has 2 drives, 20 drives, or 200
> drives.
>
> We build all our own servers from off-the-shelf parts; we don't buy
> anything pre-built from any of the large OEMs.
>
[Fred]: Gotcha!
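
(As a concrete illustration of the per-disk step described above: on FreeBSD,
partitioning and labelling one disk by its physical slot would look roughly
like this sketch; the device name da0 and the label disk-a1 are placeholders,
not taken from the thread.)

  # sketch: GPT-partition one disk and label it by physical slot
  gpart create -s gpt da0
  gpart add -t freebsd-zfs -a 1m -l disk-a1 da0
  # the labelled partition then appears as /dev/gpt/disk-a1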
> >> The pool is created using the GPT labels, so the label shows in "zpool
> >> list" output.
> >
> > [Fred]: What will the output look like?
>
> From our smaller backups server, with just 24 drive bays:
>
> $ zpool status storage
>   pool: storage
>  state: ONLINE
> status: Some supported features are not enabled on the pool. The pool can
>         still be used, but some features are unavailable.
> action: Enable all features using 'zpool upgrade'. Once this is done,
>         the pool may no longer be accessible by software that does not
>         support the features. See zpool-features(7) for details.
>   scan: scrub canceled on Wed Feb 17 12:02:20 2016
> config:
>
>         NAME             STATE     READ WRITE CKSUM
>         storage          ONLINE       0     0     0
>           raidz2-0       ONLINE       0     0     0
>             gpt/disk-a1  ONLINE       0     0     0
>             gpt/disk-a2  ONLINE       0     0     0
>             gpt/disk-a3  ONLINE       0     0     0
>             gpt/disk-a4  ONLINE       0     0     0
>             gpt/disk-a5  ONLINE       0     0     0
>             gpt/disk-a6  ONLINE       0     0     0
>           raidz2-1       ONLINE       0     0     0
>             gpt/disk-b1  ONLINE       0     0     0
>             gpt/disk-b2  ONLINE       0     0     0
>             gpt/disk-b3  ONLINE       0     0     0
>             gpt/disk-b4  ONLINE       0     0     0
>             gpt/disk-b5  ONLINE       0     0     0
>             gpt/disk-b6  ONLINE       0     0     0
>           raidz2-2       ONLINE       0     0     0
>             gpt/disk-c1  ONLINE       0     0     0
>             gpt/disk-c2  ONLINE       0     0     0
>             gpt/disk-c3  ONLINE       0     0     0
>             gpt/disk-c4  ONLINE       0     0     0
>             gpt/disk-c5  ONLINE       0     0     0
>             gpt/disk-c6  ONLINE       0     0     0
>           raidz2-3       ONLINE       0     0     0
>             gpt/disk-d1  ONLINE       0     0     0
>             gpt/disk-d2  ONLINE       0     0     0
>             gpt/disk-d3  ONLINE       0     0     0
>             gpt/disk-d4  ONLINE       0     0     0
>             gpt/disk-d5  ONLINE       0     0     0
>             gpt/disk-d6  ONLINE       0     0     0
>         cache
>           gpt/cache0     ONLINE       0     0     0
>           gpt/cache1     ONLINE       0     0     0
>
> errors: No known data errors
>
> The 90-bay systems look the same, just that the letters go all the way to
> p (so disk-p1 through disk-p6). And there's one vdev that uses 3 drives
> from each chassis (7x 6-disk vdevs only use 42 drives of a 45-bay
> chassis, so there are lots of spares when using a single chassis; with two
> chassis, there are enough drives to add an extra 6-disk vdev).
>
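
(To make the label-based layout concrete: creating such a pool on FreeBSD
would look roughly like the sketch below, truncated to the first two vdevs;
the exact command is not quoted from the thread.)

  # sketch: build the pool from GPT labels so "zpool status" shows physical slots
  zpool create storage \
      raidz2 gpt/disk-a1 gpt/disk-a2 gpt/disk-a3 gpt/disk-a4 gpt/disk-a5 gpt/disk-a6 \
      raidz2 gpt/disk-b1 gpt/disk-b2 gpt/disk-b3 gpt/disk-b4 gpt/disk-b5 gpt/disk-b6
  zpool add storage cache gpt/cache0 gpt/cache1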
[Fred]: It looks like showing the GPT label in "zpool status" only works on
FreeBSD/FreeNAS. Are you using FreeBSD/FreeNAS? I can't find a similar
capability in Illumos/Linux.
Thanks,
Fred
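
(On Linux with ZFS on Linux, a roughly comparable result can usually be had
by mapping physical slots to friendly names in /etc/zfs/vdev_id.conf and
creating the pool from those names; the device paths below are hypothetical,
not taken from this thread.)

  # sketch: /etc/zfs/vdev_id.conf mapping slots to friendly names (paths are hypothetical)
  alias disk-a1  /dev/disk/by-path/pci-0000:03:00.0-sas-phy0-lun-0
  alias disk-a2  /dev/disk/by-path/pci-0000:03:00.0-sas-phy1-lun-0
  # after "udevadm trigger", the names appear under /dev/disk/by-vdev/
  # and can be used at creation time:  zpool create storage raidz2 disk-a1 disk-a2 ...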