[OmniOS-discuss] slices & zfs
Jim Klimov
jimklimov at cos.ru
Wed Jun 8 16:45:42 UTC 2016
On 8 June 2016 16:37:50 CEST, Martijn Fennis <martijn at fennis.tk> wrote:
>Hi Dale and Jim,
>
>Thanks for your time.
>
>The reason is to store VMs on the Quick-pool and downloads on the
>Slow-pool.
>
>I would personally not assign the Slow-pool any of my memory, except
>perhaps for metadata.
>
>That’s why I would like to assign the inner part to the Slow-pool.
>
>
>
>Also, I’ve read about a maximum of 2 TB per slice - can anyone confirm?
>
>
>But maybe I should not do the slicing at all?
>
>
>Thanks in advance
>
>
>From: Dale Ghent<mailto:daleg at omniti.com>
>Sent: Wednesday, 8 June 2016 08:21
>To: Martijn Fennis<mailto:martijn at fennis.tk>
>CC:
>omnios-discuss at lists.omniti.com<mailto:omnios-discuss at lists.omniti.com>
>Subject: Re: [OmniOS-discuss] slices & zfs
>
>
>> On Jun 8, 2016, at 1:00 AM, Martijn Fennis <martijn at fennis.tk> wrote:
>>
>> Hi,
>>
>>
>> Does someone have some info about setting up ZFS on top of slices?
>>
>> I have 10 drives on which I would like the outer ring configured as
>> RAID 10 for about 1 TB with dedup, and the rest of the drives in RAID 50.
>>
>>
>> What I find on Google is advice to use complete devices, plus slicing
>> info that is difficult to interpret.
>
>ZFS is designed to operate with full control of the drive - this means
>no slicing. Yes, one can use ZFS with slices, but you must first
>understand why that is not optimal, and what you give up when such a
>configuration is used.
>
>When consuming only a partition on a disk, ZFS can no longer assume that
>it has complete control over the entire disk, and so it will not enable
>(and proactively manage) the disk's own write cache. This incurs a
>performance penalty, the magnitude of which depends on your specific IO
>patterns.
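
(For illustration, a minimal sketch of the difference, assuming a
hypothetical disk c2t0d0; the "cache" menu in format -e is only offered
for devices/drivers that expose it:)

# Whole disk: ZFS writes an EFI label and manages the drive's write cache
zpool create quick c2t0d0

# Slice only: ZFS leaves the write cache alone, since other slices may
# hold data that is not protected by ZFS
zpool create quick c2t0d0s0

# Inspecting the cache state by hand (expert mode):
format -e c2t0d0
format> cache
cache> write_cache
write_cache> display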
>
>I am curious why you think you need to slice up your drives like that
>in such a scheme... mixing RAID levels across the same devices seems
>unwieldy given that ZFS was designed to avoid this kind of scenario,
>and ZFS will compete with itself because it is now operating two pools
>across the same physical devices, which is absolutely terrible for IO
>scheduling.
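
(To make the concern concrete, a hypothetical sketch of such a layout --
slice s0 as the outer ~1 TB and s1 as the remainder on each of ten disks;
the device names are made up:)

# "Quick" pool: five 2-way mirrors on the outer slices (RAID 10 analogue)
zpool create quick \
    mirror c2t0d0s0 c2t1d0s0 \
    mirror c2t2d0s0 c2t3d0s0 \
    mirror c2t4d0s0 c2t5d0s0 \
    mirror c2t6d0s0 c2t7d0s0 \
    mirror c2t8d0s0 c2t9d0s0
zfs set dedup=on quick

# "Slow" pool: two raidz vdevs on the inner slices (ZFS's closest analogue
# to RAID 50)
zpool create slow \
    raidz c2t0d0s1 c2t1d0s1 c2t2d0s1 c2t3d0s1 c2t4d0s1 \
    raidz c2t5d0s1 c2t6d0s1 c2t7d0s1 c2t8d0s1 c2t9d0s1

# Every write to "slow" now competes for seeks with "quick" on the very
# same spindles.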
>
>/dale
>
>
The 2 TB limit is not true; this pool worked back in SXCE:
[root@thumper ~]# format c1t2d0s0
selecting c1t2d0s0
[disk formatted]
/dev/dsk/c1t2d0s0 is part of active ZFS pool temp. Please see zpool(1M).
/dev/dsk/c1t2d0s1 is part of active ZFS pool pond. Please see zpool(1M).
partition> p
Current partition table (original):
Total disk sectors available: 5860516717 + 16384 (reserved sectors)
Part      Tag    Flag     First Sector         Size         Last Sector
  0        usr    wm               256       2.50TB          5372126207
  1        usr    wm        5372126400     232.87GB          5860500351
  2 unassigned    wm                 0            0                   0
  3 unassigned    wm                 0            0                   0
  4 unassigned    wm                 0            0                   0
  5 unassigned    wm                 0            0                   0
  6        usr    wm        5860500367       8.00MB          5860516750
partition>
There is a 2 TB issue for boot disks, though: GRUB cannot see beyond 2 TB,
so you must ensure rpool stays below 2 TB, and perhaps add a compatibility
MBR partition table alongside the real EFI layout, with the same partition
offsets and sizes.
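
(A quick way to sanity-check this, assuming a hypothetical SMI-labelled
boot disk c2t0d0 with 512-byte sectors -- the rpool slice must stay below
about 4294967296 sectors to remain visible to GRUB:)

# Show slice offsets and sizes in sectors
prtvtoc /dev/rdsk/c2t0d0s2

# Confirm the root pool itself reports under 2 TB
zpool list rpool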
Jim
--
Typos courtesy of K-9 Mail on my Samsung Android