[OmniOS-discuss] slices & zfs

Martijn Fennis martijn at fennis.tk
Wed Jun 8 14:37:50 UTC 2016


Hi Dale and Jim,

Thanks for your time.

The reason is to store VMs on the Quick-pool and downloads on the Slow-pool.

Personally, I would not assign the Slow-pool any of my memory, except perhaps for metadata.
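
Something like the following would, I assume, express that intent in ZFS terms (a sketch; the pool name "slow" is hypothetical):

    # Limit the ARC to caching only metadata for this pool:
    zfs set primarycache=metadata slow
    # The same idea for an L2ARC cache device, if one is attached:
    zfs set secondarycache=metadata slow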

That's why I would like to assign the inner (slower) part of the disks to the Slow-pool.



Also, I've read that there is a maximum of 2 TB per slice. Can anyone confirm?


But maybe I should not do the slicing at all?


Thanks in advance


From: Dale Ghent<mailto:daleg at omniti.com>
Sent: Wednesday, June 8, 2016 08:21
To: Martijn Fennis<mailto:martijn at fennis.tk>
CC: omnios-discuss at lists.omniti.com<mailto:omnios-discuss at lists.omniti.com>
Subject: Re: [OmniOS-discuss] slices & zfs


> On Jun 8, 2016, at 1:00 AM, Martijn Fennis <martijn at fennis.tk> wrote:
>
> Hi,
>
>
> Does someone have some info about setting up ZFS on top of slices ?
>
> I have 10 drives. I would like the outer ring of each drive configured as RAID 10 (about 1 TB, with dedup) and the rest of each drive in RAID 50.
>
>
> What I find on Google is advice to use complete devices, plus slicing info that is difficult to interpret.

ZFS is designed to operate with full control of the drive, which means no slicing. Yes, one can use ZFS with slices, but you must first understand why that is not optimal and what you give up when such a configuration is used.
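
To make the distinction concrete, here is a sketch with hypothetical device names:

    # What slicing looks like (the configuration being discouraged):
    zpool create quick mirror c1t0d0s0 c1t1d0s0
    # versus whole disks, where ZFS writes an EFI label and takes
    # full ownership of each drive:
    zpool create quick mirror c1t0d0 c1t1d0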

When consuming only a partition on a disk, ZFS can no longer assume that it has complete control over the entire disk, so it will not enable (and proactively manage) the disk's own write cache. This incurs a performance penalty, the magnitude of which depends on your specific IO patterns.
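
If you want to check this on an existing pool, the cached pool configuration records whether ZFS claimed each disk whole (a sketch; the pool name "quick" is hypothetical):

    # whole_disk: 1 means ZFS owns the disk and manages its write cache;
    # whole_disk: 0 means it was given a slice and leaves the cache alone.
    zdb -C quick | grep whole_disk
    # On illumos, the drive's write cache can also be inspected
    # interactively with format -e: select the disk, then
    # cache -> write_cache -> display.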

I am curious why you think you need to slice up your drives in such a scheme... mixing RAID levels across the same devices seems unwieldy given that ZFS was designed to avoid this kind of scenario, and ZFS will compete with itself because it is now operating two pools across the same physical devices, which is absolutely terrible for IO scheduling.
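
If you really want a fast pool and a slow pool, the usual approach is to give each pool its own whole disks rather than carving both out of slices on the same spindles. A sketch with hypothetical device names, splitting the ten drives four and six:

    # Fast pool: striped mirrors, the ZFS equivalent of RAID 10.
    zpool create quick mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0
    zfs set dedup=on quick
    # Slow pool: a single raidz vdev across the remaining six disks.
    zpool create slow raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0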

/dale