[OmniOS-discuss] Re: Best infrastructure for VSphere/Hyper-V

Johan Kragsterman johan.kragsterman at capvert.se
Tue Apr 7 08:08:35 UTC 2015


Hi!


-----"OmniOS-discuss" <omnios-discuss-bounces at lists.omniti.com> skrev: -----
Till: danmcd at omniti.com, thomas.wiese at ovis-consulting.de
Från: "Nate Smith" 
Sänt av: "OmniOS-discuss" 
Datum: 2015-04-06 20:46
Kopia: omnios-discuss at lists.omniti.com
Ärende: Re: [OmniOS-discuss] Best infrastructure for VSphere/Hyper-V

Update: I disabled C-states and MWAIT, and that fixed the crashes I was getting when the system was slow. But I still got the QLogic target dropouts during certain I/O patterns. After a lot of research, I guessed I was having a queuing problem.
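
For reference, disabling MWAIT idling and deep C-states on OmniOS is normally done with /etc/system tunables along these lines (a sketch of the standard illumos settings; double-check against your own box, and reboot for them to take effect):

    * /etc/system: use the non-MWAIT idle loop and stay out of deep C-states
    set idle_cpu_prefer_mwait = 0
    set idle_cpu_no_deep_c = 1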

To alleviate that, I did a couple of things:

1. Reconfigured all my host and target views so that each host port only saw one target port. Formerly I had all four of my target ports mapped to both initiators on each host, which led to a situation where I had eight MPIO paths on each host. (See the stmfadm sketch after this list.)

2. Assuming an incoming queue depth of 256 (right?) for each Fibre Channel target port, a round-robin MPIO setup was bound to flood the target ports. QLogic documentation indicated that the default per-LUN outgoing queue depth was 32. After some calculations I dropped this to 16. (See the queue-depth math after this list.)
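
For anyone who wants to replicate the view changes: on COMSTAR this is done with host groups, target groups, and views. A rough sketch with made-up WWNs and LU GUID (take your real ones from fcinfo hba-port and stmfadm list-lu; note that a target usually has to be offlined before it can be added to a target group):

    # One host group per host initiator port, one target group per target port.
    stmfadm create-hg hyperv1-p0
    stmfadm add-hg-member -g hyperv1-p0 wwn.2100001B32AAAAAA
    stmfadm create-tg tgt-p0
    stmfadm offline-target wwn.2100001B32BBBBBB
    stmfadm add-tg-member -g tgt-p0 wwn.2100001B32BBBBBB
    stmfadm online-target wwn.2100001B32BBBBBB
    # Expose the LU only through that one pairing, as LUN 0.
    stmfadm add-view -h hyperv1-p0 -t tgt-p0 -n 0 600144F0AAAAAAAAAAAAAAAAAAAAAAAA
    # Verify the resulting view:
    stmfadm list-view -l 600144F0AAAAAAAAAAAAAAAAAAAAAAAA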
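
To spell out the arithmetic behind point 2: if the per-LUN depth of 32 applies per path, the old layout allowed 8 paths x 32 = 256 outstanding commands per LUN from a single host, enough to fill one target port's 256-entry queue on its own before any other host gets a word in. The new layout is 2 paths x 16 = 32 per host per LUN, which leaves headroom for several hosts. On the Windows/Hyper-V side, the QLogic per-LUN queue depth is set through the miniport driver's DriverParameter registry value; a sketch, assuming the ql2300 driver (adjust the service name to whatever your HBA uses), with a reboot afterwards:

    REM Set the QLogic per-LUN queue depth to 16.
    reg add "HKLM\SYSTEM\CurrentControlSet\Services\ql2300\Parameters\Device" ^
        /v DriverParameter /t REG_SZ /d "qd=16" /f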

As of now, each host only has two MPIO paths in round robin with an outgoing queue depth of 16. I tested it under I/O load and was unable to get it to drop out in scenarios that caused qlt to drop out in the past (multiple I/O operations while running a backup and destroying a Hyper-V checkpoint, etc.). The system appears stable, so far. Thanks for the help, everyone. Will update.
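
If anyone wants to sanity-check the same thing on the Hyper-V side, the MPIO path count and load-balance policy per disk can be read with mpclaim:

    REM Summary: one line per MPIO disk with its load-balance policy.
    mpclaim -s -d
    REM Detail for MPIO disk 0: individual paths and their states.
    mpclaim -s -d 0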
-Nate





Thanks, Nate!

VERY HELPFUL indeed, for all of us maintaining storage infrastructure!

Looking forward to any update on this subject!

Regards, Johan
