[OmniOS-discuss] background snapshot destruction impact

Sean Doran smd at mis.use.net
Sat Dec 21 23:20:08 UTC 2013


On 21 Dec, 2013, at 13:15, Tobias Oetiker <tobi at oetiker.ch> wrote:

> But now as background destruction is progressing, the
> system remains very sluggish when doing I/O on the pool where
> the destruction is taking place.

Each dataset block to be freed requires an update to the deduplication table, so freeing each block involves several reads from and writes to the pool.
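As a rough sketch of why that is (this is a simplified model, not ZFS source code): every freed block means a lookup in the dedup table keyed by the block's checksum, a refcount decrement, and a write-back; the block is only truly freed when the count hits zero. Each dictionary operation below stands in for on-disk DDT reads and writes:

```python
# Simplified model of DDT maintenance during writes and frees.
# Assumption: checksums and the dict-based "DDT" are illustrative only.

ddt = {}    # checksum -> reference count (stand-in for the on-disk DDT)
io_ops = 0  # counts the extra pool I/O the DDT costs us

def write_block(checksum):
    global io_ops
    io_ops += 1                      # DDT read to check for a match
    ddt[checksum] = ddt.get(checksum, 0) + 1
    io_ops += 1                      # DDT write with the new refcount

def free_block(checksum):
    global io_ops
    io_ops += 1                      # DDT read -- required even on destroy
    ddt[checksum] -= 1
    if ddt[checksum] == 0:
        del ddt[checksum]            # last reference: block really freed
    io_ops += 1                      # DDT write (or entry removal)

for cs in ["a", "b", "a", "c"]:      # "a" is written twice (deduped)
    write_block(cs)
for cs in ["a", "b", "a", "c"]:
    free_block(cs)

print(len(ddt), io_ops)  # -> 0 16: table empty, but every free cost DDT I/O
```

The point of the toy model: even blocks that deduplicated "for free" on write still cost DDT I/O on free, which is why a background destroy of a deduplicated dataset drags on pool performance.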

> a) would it be faster to send/receive the content of the
>   deduplicated filesystem to a new non-deduplicated one
>   and then destroy the entire filesystem (not the pool).

Destroying the pool would be faster, yes, but destroying a ZFS dataset still requires visiting the deduplication table entries for every block referenced by it.

> b) is there a way to monitor progress on the background
>   destruction?

zpool get freeing poolname
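If you want to watch that number go down from a script, a small helper can parse the command's machine-readable output (a sketch; the pool name "tank" and the byte figure are illustrative, and `-H`/`-p` suppress the header and give exact byte values):

```python
import subprocess

def pool_freeing(pool, output=None):
    """Return bytes still queued for background destruction.

    If `output` is None, runs `zpool get -Hp freeing <pool>`;
    otherwise parses the given string (handy for testing).
    Line format: NAME<TAB>PROPERTY<TAB>VALUE<TAB>SOURCE.
    """
    if output is None:
        output = subprocess.check_output(
            ["zpool", "get", "-Hp", "freeing", pool], text=True)
    name, prop, value, _source = output.strip().split("\t")
    assert name == pool and prop == "freeing"
    return int(value)

# Example against captured output rather than a live pool:
sample = "tank\tfreeing\t73014444032\t-\n"
print(pool_freeing("tank", sample))  # -> 73014444032 (about 68 GiB left)
```

Polling this in a loop gives a crude progress meter: when `freeing` reaches 0, the background destroy is done.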

> c) shouldn't the smarter write throttle change
>   https://github.com/illumos/illumos-gate/commit/69962b5647e4a8b9b14998733b765925381b727e
>   have helped with this by making zfs do its internal things
>   with a lower priority?

It does.  Things are much worse on pools without it: not only would you need more patience, but so would anything else hoping to get IOPS to and from the pool.

Finally, a really big L2ARC for the pool helps, although it takes a while to heat up.
Even hundreds of gigabytes of ordinary USB 3 keys will make a palpable difference, since the deduplication data is spread across a table with many, many blocks that are visited and revisited in an effectively unpredictable order.  L2ARC compression is great for DDT data: I've seen close to half a terabyte cached on a 128 GiB vdev.  Persisting the L2ARC across reboots would make a lot of people happier, and would make deduplication and snapshot removal (and other write-intensive jobs) less time-consuming.
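To get a feel for why the DDT blows out ARC and wants a big L2ARC, here is a back-of-the-envelope sizing.  All figures are assumptions: the commonly cited ~320 bytes per in-core DDT entry, a 128 KiB average block size, and a 10 TiB pool of unique data are illustrative, not measured:

```python
# Rough DDT footprint estimate (illustrative numbers, see lead-in).
ENTRY_BYTES = 320            # commonly cited in-core size of one DDT entry
pool_bytes = 10 * 2**40      # 10 TiB of unique data (assumed)
avg_block = 128 * 2**10      # 128 KiB average block size (assumed)

entries = pool_bytes // avg_block
ddt_bytes = entries * ENTRY_BYTES
print(f"{entries} entries, DDT ~{ddt_bytes / 2**30:.1f} GiB")
# -> 83886080 entries, DDT ~25.0 GiB
```

Tens of gigabytes of table, touched at random, is exactly the access pattern that a large (ideally compressed) L2ARC absorbs once it has warmed up.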

	Sean.
