[OmniOS-discuss] slowness in aggr - bonded 10G
Bob Friesenhahn
bfriesen at simple.dallas.tx.us
Mon Sep 17 16:37:18 UTC 2018
On Mon, 17 Sep 2018, Lee Damon wrote:
>
> One theory I had was that fs2 wasn't actually using the full throughput of
> the bond so I physically removed one of the two cables. Sure enough, the
> bandwidth reported by iperf remained around 3.2Gb/s. I tested this with
> both aggr0 and aggr_nas0 with the same result. There are times where it
> gets closer to the 6Gb of the other hosts so it is clearly getting more
> than just one link's worth but not often.
Other than the bond group not forming properly, the most likely cause
of only one link being used is that the involved hardware tries to
avoid out-of-order packet delivery, while still achieving useful
(per-connection/per-client) load sharing, by selecting a link with a
policy based on the MAC addresses on both ends, the IP addresses on
both ends, or the properties of the TCP connection.
Your speed test uses a single TCP connection between two hosts (much
like a client NFS mount), so the policy may send all of the data over
just one link.
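To illustrate the mechanism (a hypothetical hash policy, not illumos's
actual implementation): a per-flow hash deterministically maps a
connection's addresses and ports to one member link, so a single iperf
flow can never exceed one link's bandwidth, while many flows spread out.

```python
# Sketch of an L3/L4-style aggregation hash policy (illustrative only).
import hashlib

NUM_LINKS = 2  # two 10G members in the aggregation

def pick_link(src_ip, dst_ip, src_port, dst_port):
    """Deterministically map a TCP flow's 4-tuple to one member link."""
    key = f"{src_ip}:{src_port}-{dst_ip}:{dst_port}".encode()
    return int(hashlib.sha256(key).hexdigest(), 16) % NUM_LINKS

# One iperf connection: every packet hashes to the same link,
# so it is limited to a single member's bandwidth.
flow_link = pick_link("10.0.0.1", "10.0.0.2", 50000, 5001)
assert all(pick_link("10.0.0.1", "10.0.0.2", 50000, 5001) == flow_link
           for _ in range(1000))

# Many connections (different source ports) spread across both links.
links = {pick_link("10.0.0.1", "10.0.0.2", p, 5001)
         for p in range(50000, 50100)}
print(sorted(links))
```

This is why per-link utilization only evens out with many concurrent
clients or connections, never for one bulk transfer.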
Take a close look at what POLICY/ADDRPOLICY are doing on the Juniper
switch. Perhaps the newer Illumos kernel has changed its defaults in
this area.
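On the OmniOS side, the policy in effect can be checked with dladm; the
POLICY/ADDRPOLICY columns correspond to the settings mentioned above
(aggr0 here stands in for whatever aggregation name is in use):

```shell
# Show the aggregation's configuration; the POLICY column reports the
# hash policy (L2 = MAC, L3 = IP, L4 = TCP/UDP ports).
dladm show-aggr aggr0

# Extended view including member ports and their state.
dladm show-aggr -x aggr0

# Hash on IP addresses plus TCP/UDP ports so that multiple flows can
# spread across members (a single flow still uses one link).
dladm modify-aggr -P L3,L4 aggr0
```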
Bob
--
Bob Friesenhahn
bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/