[OmniOS-discuss] The ixgbe driver, Lindsay Lohan, and the Greek economy

Joerg Goltermann jg at osn.de
Mon Feb 23 09:15:20 UTC 2015


Hi,

I remember there was a problem with the flow control settings in the ixgbe
driver, so a long time ago I updated it to 2.5.8 for our internal servers.
Last weekend I integrated the latest changes from the FreeBSD driver to
bring the illumos ixgbe up to 2.5.25, but I have not had time to test it
yet, so it is completely untested!


If you would like to give the latest driver a try, you can fetch the
kernel modules from https://cloud.osn.de/index.php/s/Fb4so9RsNnXA7r9

Clone your boot environment, place the modules in the new environment,
and update the boot archive of the new BE.
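
If it helps, here is a rough sketch of that procedure; the BE name,
mount point and module path are only examples, so adjust them to your
system:

  beadm create ixgbe-test                # clone the current BE
  beadm mount ixgbe-test /a              # mount the clone
  cp ixgbe /a/kernel/drv/amd64/ixgbe     # 64-bit module from the archive above
  bootadm update-archive -R /a           # rebuild the clone's boot archive
  beadm umount ixgbe-test
  beadm activate ixgbe-test              # becomes the default BE on next boot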

  - Joerg




On 23.02.2015 02:54, W Verb wrote:
> By the way, to those of you who have working setups: please send me
> your pool/volume settings, interface linkprops, and any kernel tuning
> parameters you may have set.
>
> Thanks,
> Warren V
>
> On Sat, Feb 21, 2015 at 7:59 AM, Schweiss, Chip <chip at innovates.com> wrote:
>> I can't say I totally agree with your performance assessment. I run Intel
>> X520s in all my OmniOS boxes.
>>
>> Here is a capture of nfssvrtop I made while running many storage vMotions
>> between two OmniOS boxes hosting NFS datastores. This is a 10-host VMware
>> cluster. Both OmniOS boxes are dual 10G connected with copper twinax to
>> the in-rack Nexus 5010.
>>
>> VMware does 100% sync writes; I use ZeusRAM SSDs for log devices.
>>
>> -Chip
>>
>> 2014 Apr 24 08:05:51, load: 12.64, read: 17330243 KB, swrite: 15985 KB, awrite: 1875455 KB
>>
>> Ver  Client         NFSOPS  Reads  SWrites  AWrites  Commits    Rd_bw  SWr_bw  AWr_bw  Rd_t  SWr_t  AWr_t  Com_t  Align%
>> 4    10.28.17.105        0      0        0        0        0        0       0       0     0      0      0      0       0
>> 4    10.28.17.215        0      0        0        0        0        0       0       0     0      0      0      0       0
>> 4    10.28.17.213        0      0        0        0        0        0       0       0     0      0      0      0       0
>> 4    10.28.16.151        0      0        0        0        0        0       0       0     0      0      0      0       0
>> 4    all                 1      0        0        0        0        0       0       0     0      0      0      0       0
>> 3    10.28.16.175        3      0        3        0        0        1      11       0  4806     48      0      0      85
>> 3    10.28.16.183        6      0        6        0        0        3     162       0   549    124      0      0      73
>> 3    10.28.16.180       11      0       10        0        0        3      27       0   776     89      0      0      67
>> 3    10.28.16.176       28      2       26        0        0       10     405       0  2572    198      0      0     100
>> 3    10.28.16.178     4606   4602        4        0        0   294534       3       0   723     49      0      0      99
>> 3    10.28.16.179     4905   4879       26        0        0   312208     311       0   735    271      0      0      99
>> 3    10.28.16.181     5515   5502       13        0        0   352107      77       0    89     87      0      0      99
>> 3    10.28.16.184    12095  12059       10        0        0   763014      39       0   249    147      0      0      99
>> 3    10.28.58.1      15401   6040      116     6354       53   191605     474  202346   192     96    144     83      99
>> 3    all             42574  33086      217     6354       53  1913488    1582  202300   348    138    153    105      99
>>
>>
>>
>>
>>
>> On Fri, Feb 20, 2015 at 11:46 PM, W Verb <wverb73 at gmail.com> wrote:
>>>
>>> Hello All,
>>>
>>> Thank you for your replies.
>>> I tried a few things, and found the following:
>>>
>>> 1: Disabling hyperthreading support in the BIOS drops performance overall
>>> by a factor of 4.
>>> 2: Disabling VT support also seems to have some effect, although it appears
>>> to be minor. But it has the amusing side effect of fixing the hangs I've
>>> been experiencing with fast reboot, probably because it disables KVM.
>>> 3: The performance tests are a bit tricky to quantify because of caching
>>> effects. In fact, I'm not entirely sure what is happening here. It's just
>>> best to describe what I'm seeing:
>>>
>>> The commands I'm using to test are
>>> dd if=/dev/zero of=./test.dd bs=2M count=5000
>>> dd of=/dev/null if=./test.dd bs=2M count=5000
>>> The test VM is running CentOS 6.6 and has the latest VMware Tools
>>> installed. There is also a host cache on an SSD local to the ESXi host.
>>> Disabling the host cache didn't have an immediate effect as far as I
>>> could see.
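>>>
>>> (One way to take the caches out of the picture, which I have not
>>> verified here, would be to repeat the runs with direct I/O; oflag=direct,
>>> iflag=direct and conv=fdatasync are standard GNU dd options, assuming the
>>> guest filesystem supports O_DIRECT:
>>>
>>> dd if=/dev/zero of=./test.dd bs=2M count=5000 oflag=direct
>>> dd if=./test.dd of=/dev/null bs=2M count=5000 iflag=direct
>>>
>>> The first forces writes past the page cache; the second bypasses it on
>>> reads.)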
>>>
>>> The host MTU was set to 3000 on all iSCSI interfaces for all tests.
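>>>
>>> (The OmniOS-side MTU changes in the tests below are done with ifconfig
>>> and are not persistent; for a permanent setting the usual route would be
>>> the dladm link property, roughly:
>>>
>>> dladm set-linkprop -p mtu=3000 ixgbe0
>>> dladm show-linkprop -p mtu ixgbe0
>>>
>>> dladm may refuse to change the MTU while the link is plumbed, in which
>>> case the IP interface has to be unplumbed first.)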
>>>
>>> Test 1: Right after reboot, with an ixgbe MTU of 9000, the write test
>>> yields an average speed over three tests of 137MB/s. The read test yields an
>>> average over three tests of 5MB/s.
>>>
>>> Test 2: After setting "ifconfig ixgbe0 mtu 3000", the write tests yield
>>> 140MB/s, and the read tests yield 53MB/s. It's important to note here that
>>> if I cut the read test short at only 2-3GB, I get results upwards of
>>> 350MB/s, which I assume is local cache-related distortion.
>>>
>>> Test 3: MTU of 1500. Read tests are up to 156 MB/s. Write tests yield
>>> about 142MB/s.
>>> Test 4: MTU of 1000: Read test at 182MB/s.
>>> Test 5: MTU of 900: Read test at 130 MB/s.
>>> Test 6: MTU of 1000: Read test at 160MB/s. Write tests are now
>>> consistently at about 300MB/s.
>>> Test 7: MTU of 1200: Read test at 124MB/s.
>>> Test 8: MTU of 1000: Read test at 161MB/s. Write at 261MB/s.
>>>
>>> A few final notes:
>>> L1ARC grabs about 10GB of RAM during the tests, so there's definitely some
>>> read caching going on.
>>> The write operations are easier to observe with iostat, and I'm seeing I/O
>>> rates that closely correlate with the network write speeds.
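>>>
>>> (For anyone who wants to watch the same things, roughly, with the stock
>>> OmniOS tools:
>>>
>>> iostat -xn 1                     # per-device throughput and service times
>>> dladm show-link -s -i 1 ixgbe0   # link-level byte/packet/error counters
>>> arcstat 1                        # ARC size and hit rate, if installed
>>>
>>> The arcstat script name varies by release, arcstat vs. arcstat.pl.)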
>>>
>>>
>>> Chris, thanks for your specific details. I'd appreciate it if you could
>>> tell me which copper NIC you tried, and pass on the iSCSI tuning
>>> parameters.
>>>
>>> I've ordered an Intel EXPX9502AFXSR, which uses the 82598 chip instead of
>>> the 82599 in the X520. If I get similar results with my fiber transceivers,
>>> I'll see if I can get hold of copper ones.
>>>
>>> But I should mention that I did indeed look at PHY/MAC error rates, and
>>> they are nil.
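>>>
>>> (The counters in question are visible with, e.g.:
>>>
>>> dladm show-link -s ixgbe0
>>> kstat -p ixgbe:0 | grep -i err
>>>
>>> and both report zero errors here.)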
>>>
>>> -Warren V
>>>
>>> On Fri, Feb 20, 2015 at 7:25 PM, Chris Siebenmann <cks at cs.toronto.edu>
>>> wrote:
>>>>
>>>>> After installation and configuration, I observed all kinds of bad
>>>>> behavior
>>>>> in the network traffic between the hosts and the server. All of this
>>>>> bad
>>>>> behavior is traced to the ixgbe driver on the storage server. Without
>>>>> going
>>>>> into the full troubleshooting process, here are my takeaways:
>>>> [...]
>>>>
>>>>   For what it's worth, we managed to achieve much better line rates on
>>>> copper 10G ixgbe hardware of various descriptions between OmniOS
>>>> and CentOS 7 (I don't think we ever tested OmniOS to OmniOS). I don't
>>>> believe OmniOS could do TCP at full line rate, but I think we managed 700+
>>>> Mbytes/sec on both transmit and receive, and we got basically disk-limited
>>>> speeds with iSCSI (across multiple disks on multi-disk mirrored pools,
>>>> OmniOS iSCSI initiator, Linux iSCSI targets).
>>>>
>>>>   I don't believe we did any specific kernel tuning (and in fact some of
>>>> our attempts to fiddle ixgbe driver parameters blew up in our face).
>>>> We did tune iSCSI connection parameters to increase various buffer
>>>> sizes so that ZFS could do even large single operations in single iSCSI
>>>> transactions. (More details available if people are interested.)
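>>>>
>>>> (For anyone who wants to poke at this themselves: on the OmniOS
>>>> initiator the negotiated values can be inspected per target with
>>>>
>>>>          iscsiadm list target-param -v <target-iqn>
>>>>
>>>> and the buffer-related login parameters (maxrecvdataseglen,
>>>> maxburstlength, firstburstlength) raised via
>>>> 'iscsiadm modify target-param -p <param>=<value> <target-iqn>'.
>>>> These examples are from memory; check iscsiadm(1M) for the exact
>>>> parameter names before relying on them.)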
>>>>
>>>>> 10: At the wire level, the speed problems are clearly due to pauses in
>>>>> response time by omnios. At 9000 byte frame sizes, I see a good number
>>>>> of duplicate ACKs and fast retransmits during read operations (when
>>>>> omnios is transmitting). But below about a 4100-byte MTU on omnios
>>>>> (which seems to correlate to 4096-byte iSCSI block transfers), the
>>>>> transmission errors fade away and we only see the transmission pause
>>>>> problem.
>>>>
>>>>   This is what really attracted my attention. In our OmniOS setup, our
>>>> specific Intel hardware had ixgbe driver issues that could cause
>>>> activity stalls during once-a-second link heartbeat checks. This
>>>> obviously had an effect at the TCP and iSCSI layers. My initial message
>>>> to illumos-developer sparked a potentially interesting discussion:
>>>>
>>>>
>>>> http://www.listbox.com/member/archive/182179/2014/10/sort/time_rev/page/16/entry/6:405/20141003125035:6357079A-4B1D-11E4-A39C-D534381BA44D/
>>>>
>>>> If you think this is a possibility in your setup, I've put the DTrace
>>>> script I used to hunt for this up on the web:
>>>>
>>>>          http://www.cs.toronto.edu/~cks/src/omnios-ixgbe/ixgbe_delay.d
>>>>
>>>> This isn't the only potential source of driver stalls by any means; it's
>>>> just the one I found. You may also want to look at lockstat in general,
>>>> as the information it reported is what led us to look specifically at the
>>>> ixgbe code here.
>>>>
>>>> (If you suspect kernel/driver issues, lockstat combined with kernel
>>>> source is a really excellent resource.)
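>>>>
>>>> (If you go down that road, a reasonable starting point is lockstat's
>>>> kernel profiling mode, e.g.
>>>>
>>>>          lockstat -kIW -D 20 sleep 30
>>>>
>>>> which samples where the kernel spends its time for 30 seconds and
>>>> prints the top 20 call sites.)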
>>>>
>>>>          - cks
>>>
>>>
>>>

-- 
OSN Online Service Nuernberg GmbH, Bucher Str. 78, 90408 Nuernberg
Tel: +49 911 39905-0 - Fax: +49 911 39905-55 - http://www.osn.de
HRB 15022 Nuernberg, USt-Id: DE189301263, GF: Joerg Goltermann

