[OmniOS-discuss] Windows crashes my ZFS box

Rune Tipsmark rt at steait.net
Mon Feb 2 00:21:46 UTC 2015


Hi all,



I have a major problem: with Windows connected over Fibre Channel I can take my ZFS box down completely for at least 15 minutes; it simply drops all FC connections to every attached host. This happens under load, for example when writing backups to the ZFS box or running IOMeter against it.



I have looked at queue depth/length and other settings, but I cannot work out how this happens. I have tested with 3 different Windows machines and 3 different ZFS boxes. I also have ESXi servers connected to these same ZFS boxes, and there is no problem there no matter how much load I put on the LUNs.



I tried sync=always, sync=disabled, with and without log devices, dedup=on, dedup=off, a volume-backed (zvol) LUN, a thin-provisioned LUN, a ZFS file-based thin-provisioned LUN... you name it, I tried it.
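For reference, the permutations look roughly like this on the target side (pool, zvol, and device names below are made-up placeholders, and this is standard zfs/zpool/stmfadm usage rather than my exact command history):

  # sync and dedup permutations on the dataset backing the LUN
  zfs set sync=always tank/fc-lun
  zfs set sync=disabled tank/fc-lun
  zfs set dedup=on tank/fc-lun
  zfs set dedup=off tank/fc-lun

  # with and without a log device
  zpool add tank log c4t0d0
  zpool remove tank c4t0d0

  # volume-backed LUN, fully provisioned vs. sparse (thin)
  zfs create -V 2T tank/fc-lun
  zfs create -s -V 2T tank/fc-lun-thin
  stmfadm create-lu /dev/zvol/rdsk/tank/fc-lun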



I also tried all kinds of queue depths from the Windows side; the default is 65535, and I tried 16, 32, 64, 256, etc. If I put enough stress on the LUN presented to Windows, the ZFS box drops all FC connections for up to 15 minutes. A reboot is not an option either, since the shutdown hangs for roughly the same number of minutes; I might as well just wait for it to come back.
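If it helps anyone reproduce this, the target side can be watched while the stress test runs with something like the following (a sketch of where to look, not a capture from my boxes; "tank" and friends are placeholders):

  # per-device queue lengths (wait/actv) and service times, 1-second samples
  iostat -xn 1

  # COMSTAR target port and logical unit state during the hang
  stmfadm list-target -v
  stmfadm list-lu -v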



Everything is on the latest firmware: HBAs, switch, etc. Monitoring shows the load distributed across the ports as expected with Round Robin and MPIO.
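For completeness, the MPIO policy can be double-checked from the Windows side with mpclaim, e.g. (disk numbers will differ per machine):

  rem list MPIO disks and their load-balance policy
  mpclaim -s -d

  rem detail for one MPIO disk, including its paths (replace 0 as needed)
  mpclaim -s -d 0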



One thing that irritates me is that I never get more than ~80-120 MB/sec (sync=always) writing to this LUN from Windows, whereas I get 600-700 MB/sec (sync=always) writing from a VM on ESXi. The abysmal performance is a pain, but the fact that I can outright crash or hang my ZFS box just by running IOMeter is what really disturbs me.
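Those throughput numbers are what the client reports; the same runs can be watched from the pool side with zpool iostat (again, "tank" is a placeholder):

  # per-vdev bandwidth and IOPS in 1-second samples while the test runs
  zpool iostat -v tank 1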



Any ideas why this might happen? It looks like a queueing problem to me, but I can't get any closer than that. Maybe Windows just handles Fibre Channel poorly; then again, the same machine running the same tests against HP EVA storage shows no problems at all.
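In case anyone wants to dig in, commands along these lines should show the target port state and any error reports right after a drop (a sketch, not output I have captured from every hang):

  # are the COMSTAR target ports still online?
  stmfadm list-target -v

  # any FMA error reports around the time of the drop?
  fmdump -e

  # kernel messages from the hang window
  tail -200 /var/adm/messages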



br,

Rune



