[OmniOS-discuss] hardware specs for illumos storage/virtualization server
Denis Cheong
denis at denisandyuki.net
Sat Dec 1 19:33:22 EST 2012
On Mon, Nov 19, 2012 at 1:20 PM, Paul B. Henson <henson at acm.org> wrote:
> > I have 10 of them in a raidz2 (i.e. 8 data drives). Arguably with
>
> That's big for a raidz2.
>
Not really. I've seen bigger, and if you are running with 2 parity drives,
the only other logical config is 4 data drives + 2 parity, and a 33%
overhead is ridiculous.
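For comparison (pool and device names below are just examples, not my
actual layout), the two raidz2 shapes look like this:

  # 10-wide raidz2: 8 data + 2 parity, ~20% of raw capacity lost to parity
  zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
                           c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0

  # 6-wide raidz2: 4 data + 2 parity, ~33% lost to parity
  zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0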
> > Sub-100Mbit speeds on a VM that was connected to native dual gigabit
> > ports in the host.
>
> Hmm. I'm almost tempted to go into work on my vacation week to cut my
> test box over to a gig port to see what happens :).
>
I've booted up the KVMs now, fixed a few network configuration issues, done
some more testing, and found this (iperf server is running on the VM):
------------------------------------------------------------
Server listening on UDP port 5001
Receiving 1470 byte datagrams
UDP buffer size: 224 KByte (default)
------------------------------------------------------------
[ 3] local X port 5001 connected with *Y* port 46307
[ ID] Interval       Transfer     Bandwidth        Jitter   Lost/Total Datagrams
[ 3]  0.0-10.0 sec   687 MBytes   576 Mbits/sec    0.046 ms 202862/692593 (29%)
[ 3]  0.0-10.0 sec   1 datagrams received out-of-order
[ 4] local X port 5001 connected with *Z* port 41118
[ 4]  0.0-10.2 sec   517 MBytes   423 Mbits/sec   15.583 ms 539423/908178 (59%)
Note:
*Y* is on my local network, connected over gigabit LAN, and is getting
29% packet loss.
*Z* is the *host* and is getting 59% packet loss.
This is clearly NQR (not quite right) ... host-to-VM transfers should not
get any packet loss.
Going from the VM is even worse:
------------------------------------------------------------
Client connecting to Y, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: 224 KByte (default)
------------------------------------------------------------
[ 3] local X port 34263 connected with *Y* port 5001
[ ID] Interval       Transfer     Bandwidth
[ 3] 0.0-10.0 sec 167 MBytes 140 Mbits/sec
[ 3] Sent 119441 datagrams
[ 3] Server Report:
[ 3] 0.0-10.0 sec 167 MBytes 140 Mbits/sec 0.123 ms 0/119440 (0%)
[ 3] 0.0-10.0 sec 1 datagrams received out-of-order
This is the 100Mbit-class throughput on gigabit ethernet that I was
mentioning before.
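For reference, these numbers come from plain UDP iperf runs; the
invocations were basically of this shape (flags shown are the usual ones
rather than a paste of my exact command lines):

  # on the iperf server side (the VM in the first test):
  iperf -s -u
  # on the client side, pushing roughly a gigabit of UDP for 10 seconds:
  iperf -c <server> -u -b 1000M -t 10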
In my further experimentation, I've found that this appears to be caused by
the virtio network device - when using e1000 emulation, it all works much
better.
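The only difference between the two runs is the NIC model handed to qemu,
i.e. something of this shape (just the relevant fragment; the MAC address
and the vnic backend options are omitted):

  # the problematic case:
  -net nic,model=virtio,macaddr=...
  # the workaround that behaves:
  -net nic,model=e1000,macaddr=...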
> So what switch was on the other side of the aggregation? Port
> aggregation can be pretty flaky sometimes :(.
Dell PowerConnect 5448 (the one with iSCSI acceleration), so nothing cheap
or dodgy.
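For what it's worth, the aggregation on the illumos side is just the usual
dladm setup, along these lines (interface names are examples only):

  dladm create-aggr -l igb0 -l igb1 aggr0
  dladm show-aggr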
> > NFS still goes over the ethernet connectivity, and despite being host-VM
> > native traffic that should not go through the hardware, it was still far
> > less than gigabit throughput, when I would expect it to exceed gigabit ...
>
> Hmm, my quick guest-to-host test showed over 700Mb/s. With a virtual
> e1000, it might be limited to gigabit no matter what, since it's
> pretending to be a gig card. The vmxnet3 driver in esx claims to be a
> 10G card. I'm not sure what virtio reports to the OS, I haven't tried it
> yet.
>
My experiments over the last few days show it to be tied to virtio, with no
issue under e1000 emulation, so this is consistent.
The problem with virtio could be in either the Solaris KVM implementation or
the virtio drivers in my guest (Linux version 3.2.12-gentoo
(root at slate) (gcc version 4.5.3 (Gentoo 4.5.3-r2 p1.5, pie-0.4.7) ) #1 SMP).
Now I'm off to do a rebuild of the guest kernel, to see if that fixes the
virtio issues ... if so, then it's just an issue with the Linux kernel
version I was using and therefore easily fixed. If not, then there is
a pretty significant issue with the illumos KVM virtio ethernet, with the
workaround being to use e1000 instead. That is still not ideal though ...
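Before the rebuild I'll double-check what the current guest kernel actually
has; something like this shows it quickly (assuming the kernel was built
with /proc/config.gz support):

  # inside the Linux guest:
  zcat /proc/config.gz | grep -i virtio
  # or against the source tree the kernel was built from:
  grep -i 'CONFIG_VIRTIO' /usr/src/linux/.config
  # and check whether the NIC driver is loaded as a module:
  lsmod | grep virtio_net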
A related iperf network speed test under KVM with virtio:
http://vmstudy.blogspot.com.au/2010/04/network-speed-test-iperf-in-kvm-virtio.html