[OmniOS-discuss] ZFS Volumes and vSphere Disks - Storage vMotion Speed
Richard Elling
richard.elling at richardelling.com
Mon Jan 19 12:57:28 UTC 2015
> On Jan 19, 2015, at 3:55 AM, Rune Tipsmark <rt at steait.net> wrote:
>
> hi all,
>
> just in case there are other people out there using their ZFS box against vSphere 5.1 or later... I found my storage vMotions were slow... really slow... There isn't much info available, so after a while of trial and error I found a nice combo that works very well in terms of performance, latency and throughput as well as storage vMotion.
>
> - Use ZFS volumes instead of thin-provisioned LUs - volumes support two of the VAAI features.
>
AFAIK, ZFS is not available in VMware. Do you mean running iSCSI to connect the ESX box to
the server running ZFS? If so...
> - Use thick-provisioned, lazy-zeroed disks. In my case this reduced storage vMotion time by roughly 90% - machine 1 dropped from 8½ minutes to 23 seconds and machine 2 dropped from ~7 minutes to 54 seconds... a rather nice improvement simply by changing from thin to thick provisioning.
>
This makes no difference on the ZFS side. A "thick provisioned" volume is simply a volume with a reservation;
all allocations are copy-on-write either way. So the only behavioral difference between a "thick" and a "thin"
volume shows up when you run out of space in the pool.
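To make the point concrete, here is a sketch with the ZFS CLI (the pool and volume names are illustrative): the only thing distinguishing a "thick" zvol from a "thin" one is the refreservation.

```shell
# "Thick" volume: by default, zfs create -V sets refreservation to the
# full volume size, so the space is guaranteed up front
zfs create -V 100G tank/vmware-thick

# "Thin" (sparse) volume: -s skips the refreservation entirely
zfs create -s -V 100G tank/vmware-thin

# Compare the two: volsize is identical; only refreservation differs.
# Block allocation is copy-on-write in both cases.
zfs get volsize,refreservation tank/vmware-thick tank/vmware-thin
```

Whether the guest disk (VMDK) on top of the datastore is thin or thick is a separate, VMware-side setting, which is where the vMotion behavior changes.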
> - I dropped my QLogic HBA max queue depth from the default 64 to 16 on all ESXi hosts, and now I see an average latency of less than 1ms per datastore (on 8G fibre channel). Of course there are spikes when doing storage vMotion at these speeds, but it's well worth it.
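For reference, on ESXi the QLogic queue depth is typically changed via a driver module parameter. This is a hedged sketch - the module name varies by driver generation (e.g. qla2xxx vs. qlnativefc), so check what your host actually loads first:

```shell
# Find the loaded QLogic FC driver module (name varies by driver version)
esxcli system module list | grep -i ql

# Set the maximum queue depth to 16 (ql2xmaxqdepth is the QLogic parameter)
esxcli system module parameters set -m qla2xxx -p "ql2xmaxqdepth=16"

# Verify; a host reboot is required for the new value to take effect
esxcli system module parameters list -m qla2xxx | grep ql2xmaxqdepth
```
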
>
I usually see storage vMotion running at wire speed on well-configured systems. Once you get
into the 2 GByte/sec range this can get tricky, because sustaining that flow through RAM
and disks requires nontrivial amounts of hardware.
More likely, you're seeing the effects of caching, which is very useful for storage vMotion and
allows you to hit line rate.
>
> I am getting to the point where I am almost happy with my ZFS backend for vSphere.
>
excellent!
-- richard