[OmniOS-discuss] Fragmentation

Guenther Alka alka at hfg-gmuend.de
Fri Jun 23 15:25:25 UTC 2017


Yes, but if you grow a pool by adding a new vdev, the existing data is 
not auto-rebalanced. Rebalancing happens only gradually, as data is 
newly written or modified.

If you want the best performance right away, you must copy the existing 
data over, e.g. by renaming a filesystem, replicating it back under the 
former name, and then deleting the renamed original.
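
As a rough sketch, that rename-replicate-delete rebalance could look like 
the following. The pool and filesystem names `tank` and `tank/data` are 
made up for illustration; try this on test data first, since `zfs destroy` 
is irreversible:

```shell
# Illustrative only: assumes a pool "tank" that just gained a new vdev
# and a fragmented filesystem "tank/data". Run as root.
zfs rename tank/data tank/data.old                        # move the old copy aside
zfs snapshot tank/data.old@rebalance                      # freeze it for replication
zfs send tank/data.old@rebalance | zfs receive tank/data  # rewrite across all vdevs
zfs destroy -r tank/data.old                              # drop the old, fragmented copy
```

The receive writes everything anew, so the blocks are spread across all 
vdevs, including the newly added one.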

Gea


Am 23.06.2017 um 17:19 schrieb Artyom Zhandarovsky:
> So basically I just need to add more drives...?
>
> 2017-06-23 18:09 GMT+03:00 Guenther Alka <alka at hfg-gmuend.de>:
>
>     The fragmentation value does not describe the fragmentation of the
>     data on the pool but the fragmentation of its free space. A high
>     fragmentation value results in high data fragmentation only when
>     you write or modify data.
>
>     https://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSZpoolFragmentationMeaning
>
>     So the best, and only, way to reduce data fragmentation is to avoid
>     filling a pool beyond roughly 70-80%.
>
>     You should also know that copy-on-write filesystems, where a complete
>     data block (e.g. 128k) is written anew even if you only change a
>     "house" to a "mouse" in a text file, are more vulnerable to
>     fragmentation than older filesystems. This is the price of crash
>     resilience: a power outage during a write cannot leave a corrupted
>     filesystem, as it can with older filesystems where the data may be
>     modified in place while the corresponding metadata update never
>     happens. ZFS compensates for this with its advanced RAM-based read
>     and write caches. A "defrag" tool is not available for ZFS.
>
>     Gea
>
>
>     Am 23.06.2017 um 16:13 schrieb Artyom Zhandarovsky:
>
>         Is there any way to decrease the fragmentation of dr_tank?
>
>
>     -- 
>
>     _______________________________________________
>     OmniOS-discuss mailing list
>     OmniOS-discuss at lists.omniti.com
>     http://lists.omniti.com/mailman/listinfo/omnios-discuss
>
>
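
The free-space fragmentation (FRAG) and fill level (CAP) that Gea refers 
to can be read per pool with `zpool list`; the pool name `tank` below is 
a placeholder:

```shell
# FRAG = fragmentation of the pool's free space, CAP = how full the pool is.
# Keeping CAP below roughly 70-80% avoids heavy fragmentation on new writes.
zpool list -o name,size,alloc,free,frag,cap tank
```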
-- 
