[OmniOS-discuss] Slow scrub on SSD-only pool
Richard Elling
richard.elling at richardelling.com
Fri Apr 22 17:13:18 UTC 2016
> On Apr 22, 2016, at 5:00 AM, Stephan Budach <stephan.budach at jvm.de> wrote:
>
> On Apr 21, 2016, at 18:36, Richard Elling wrote:
>>> On Apr 21, 2016, at 7:47 AM, Chris Siebenmann <cks at cs.toronto.edu> wrote:
>>>
>>> [About ZFS scrub tunables:]
>>>> Interesting read - and it surely works. If you set the tunable before
>>>> you start the scrub, you can immediately see the throughput being much
>>>> higher than with the standard setting. [...]
>>> It's perhaps worth noting here that the scrub rate shown in 'zpool
>>> status' is a cumulative one, i.e. the average scrub rate since the scrub
>>> started. As far as I know, the only way to get the current scrub rate is
>>> to run 'zpool status' twice with some time in between and then look at
>>> how much progress the scrub has made during that time.
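For completeness, a rough way to sample that delta (a sketch only; the exact
wording of the "scanned" line varies between releases, so adjust the grep to
whatever your version prints):

    pool=tank                            # substitute your pool name
    zpool status $pool | grep scanned    # first sample
    sleep 60
    zpool status $pool | grep scanned    # second sample
    # subtract the two "scanned" figures and divide by the interval;
    # script the parsing once you know your release's exact line format
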
>> Scrub rate measured in IOPS or bandwidth is not useful. Neither reflects
>> the work being performed in ZFS or by the drives.
>>
>>> As such, increasing the scrub speed in the middle of what had been a
>>> slow scrub up to that point probably won't make a massive or immediate
>>> difference in the reported scrub rate. You should see it rising over
>>> time, especially if you drastically sped it up, but it's not any sort
>>> of instant jump.
>>>
>>> (You can always monitor iostat, but that mixes in other pool I/O. There's
>>> probably something clever that can be done with DTrace.)
>> I've got some dtrace that will show progress. However, it is only marginally
>> useful when you've got multiple datasets.
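Roughly like this — a minimal sketch, assuming fbt can see dsl_scan_scrub_cb()
in your kernel (an assumption; the function is internal and the name could
differ across builds). It only counts blocks visited per second, so treat it
as a progress pulse, not a bandwidth figure:

    #!/usr/sbin/dtrace -s
    /* fire once per block the scrub visits; count them */
    fbt::dsl_scan_scrub_cb:entry
    {
            @blocks = count();
    }

    /* print and reset the counter every second */
    tick-1sec
    {
            printa("scrub blocks visited/sec: %@d\n", @blocks);
            trunc(@blocks);
    }
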
>>
>>> This may already be obvious and well known to people, but I figured
>>> I'd mention it just in case.
>> People fret about scrubs and resilvers when they really shouldn't. In ZFS,
>> accessing data also checks it and does recovery, so anything that is regularly
>> accessed will be unaffected by the subsequent scan. Over the years, I've tried
>> several ways to approach teaching people about failures and scrubs/resilvers,
>> but with limited success: some people just like to be afraid... Hollywood makes
>> a lot of money on them :-)
>> -- richard
>>
>>
> No… not afraid, but I actually do think that I can judge whether or not I want to speed scrubs up and trade some performance for that. As long as I can do that, I am fine with it. And the same applies to resilvers, I guess.
For current OmniOS, the priority scheduler can be adjusted using mdb to change
the priority of scrubs relative to other types of I/O. There is no userland
interface. See Adam's blog for more details:
http://dtrace.org/blogs/ahl/2014/08/31/openzfs-tuning/
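For example — these are the scrub I/O class tunables Adam's post walks through;
the values below are illustrative, not recommendations:

    # inspect the current scrub concurrency limits on the live kernel
    echo "zfs_vdev_scrub_min_active/D" | mdb -k
    echo "zfs_vdev_scrub_max_active/D" | mdb -k

    # raise the ceiling so scrub I/Os compete harder with other classes
    # ("0t" marks a decimal value; -w is required to write the running kernel)
    echo "zfs_vdev_scrub_max_active/W0t5" | mdb -kw
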
If you're running Solaris 11 or pre-2015 OmniOS, the old write throttle is effectively
impossible to control, and you'll chase your tail trying to balance scrubs/resilvers
against any other workload. From a control theory perspective, it is unstable.
> If you need to resilver one half of a mirrored zpool, most people will want that to run as fast as feasible, won't they?
It depends. I've had customers on both sides of the fence, and one customer for whom
we cron'ed the priority changes to match their peak hours. Suffice it to say, nobody
seems to want resilvers to dominate real work.
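Conceptually it looked like this — a hypothetical crontab using the same tunable
as above (values illustrative; in this vintage of OpenZFS the scrub I/O class
covers resilver I/O as well):

    # throttle scrubs during weekday business hours...
    0 8 * * 1-5   echo "zfs_vdev_scrub_max_active/W0t1" | mdb -kw
    # ...and let them run harder overnight
    0 20 * * 1-5  echo "zfs_vdev_scrub_max_active/W0t5" | mdb -kw
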
-- richard