[OmniOS-discuss] slices & zfs

Jim Klimov jimklimov at cos.ru
Wed Jun 8 09:40:43 UTC 2016


On 8 June 2016 at 8:20:42 CEST, Dale Ghent <daleg at omniti.com> wrote:
>
>> On Jun 8, 2016, at 1:00 AM, Martijn Fennis <martijn at fennis.tk> wrote:
>> 
>> Hi,
>> 
>> 
>> Does someone have some info about setting up ZFS on top of slices ?
>> 
>> I have 10 drives, and I would like the outer ring configured as
>> RAID 10 for about 1 TB with dedup, and the rest of the drives in RAID 50.
>> 
>> 
>> What I find on Google is advice to use complete devices, plus slicing
>> info that is difficult to interpret...
>
>ZFS is designed to operate with full control of the drive - this means
>no slicing. Yes, one can use ZFS with slices, but you must first
>understand why that is not optimal, and what you give up when such a
>configuration is used.
>
>When consuming a partition on a disk, ZFS can no longer assume that it
>has complete control over the entire disk and cannot enable (and
>proactively manage) the disk's own write caching capabilities. This
>will incur a performance penalty, the magnitude of which depends on
>your specific IO patterns.
>
>I am curious why you think you need to slice up your drives like that
>in such a scheme... the mixing of RAID levels across the same devices
>seems unwieldy given that ZFS was designed to avoid this kind of
>scenario, and ZFS will compete with itself because it is now operating
>two pools across the same physical devices, which is absolutely terrible
>for IO scheduling.
>
>/dale

This setup sort of makes sense on servers with a limited number of disks (OK, 10 is not that - rather the older/smaller 4-disk boxes). There you have a low-traffic rpool mirror (4-way, or a 2-way rpool plus a 2-way swap of the same size) in the small slices, and a raidzN or raid10 in the bigger partitions/slices for zones and data. I did it that way on Sol10 from the get-go. The rpool can stay minimal, like 4 GB there. With IPS and its package caches I'd recommend no less than 16 GB for an rpool now, more if you want substantial GZ (global zone) software and/or multiple versions to roll back to.
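As a rough sketch of such a layout (device names, slice numbers and the vdev layout below are just placeholders - adjust for your controller and disk count): partition each disk with format(1M) first, a small slice 0 for rpool/swap and the rest in slice 1, then something like:

    # the installer normally lays out the rpool mirror on the small slices;
    # the data pool then goes on the big slices of all the disks, e.g.:
    zpool create data raidz2 c0t0d0s1 c0t1d0s1 c0t2d0s1 c0t3d0s1

A raid10-style pool works the same way, just with repeated "mirror" vdevs instead of raidz2. The point is only that the pool references cXtYdZsN slices instead of whole disks, which is why ZFS then leaves the disks' write caches alone by default.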

The disks' write caches you can enable/disable manually too, perhaps automated in an init-script (IIRC I posted an example in a wiki or on git somewhere).
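Something along these lines can go into such a script - the disk name is a placeholder, and the exact expert-mode menu items and confirmation prompts in format(1M) vary per disk type and firmware, so walk through it interactively once before scripting it:

    # enable the on-disk write cache through format's expert mode
    printf 'cache\nwrite_cache\nenable\ny\nquit\nquit\n' | format -e -d c0t0d0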

Queuing commands is, in a way, a headache for the disks, assuming SCSI/SAS or (e)SATA. It may make sense to keep a fast-tracked scratch area for bursty fast I/O as a separate pool, something like the sketch below...
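For example (again, device names are placeholders and the property choices are just one possibility):

    # small separate pool for throwaway bursty data
    zpool create scratch mirror c0t8d0s1 c0t9d0s1
    zfs set compression=lz4 scratch
    # only if losing in-flight scratch data on a crash is acceptable
    zfs set sync=disabled scratch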

Jim
--
Typos courtesy of K-9 Mail on my Samsung Android

