[OmniOS-discuss] Network throughput 1GB/sec
Davide Poletto
davide.poletto at gmail.com
Fri Sep 16 22:40:04 UTC 2016
I hope I'm not wrong here, but port trunking deserves its place in the whole
picture too: be aware that using port trunking (with LACP, per IEEE 802.3ad)
between your servers' NICs and your 10Gb switching infrastructure - done, as
you wrote, by aggregating "n" identical ports together on both ends of the
link - doesn't mean that your one-host-to-one-host traffic will be able to
use, let alone saturate, all "n" 10Gb links concurrently. LACP hashes each
flow onto a single member link (based on L2/L3/L4 header fields, depending on
the policy), so any single TCP connection - one rsync, one NFS copy - is
bounded by the bandwidth of one member link. More on that in the sketch
below; I've also put a note on your multi-stream rsync question after the
quoted message.
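
A quick way to see this per-flow behavior for yourself - a minimal sketch,
assuming an illumos/OmniOS host whose aggregation is named aggr0 and a peer
at 192.168.1.20 (both are placeholders for your setup):

  # Show the aggregation, its member ports and the current hash policy
  dladm show-aggr -x aggr0

  # Hash on L3+L4 headers so that different TCP connections between the
  # same two hosts can land on different member links
  dladm modify-aggr -P L3,L4 aggr0

  # A single iperf stream stays bounded by one member link...
  iperf -c 192.168.1.20 -t 30

  # ...while eight parallel streams give the hash a chance to spread
  # the load across the aggregation
  iperf -c 192.168.1.20 -t 30 -P 8

Keep in mind the distribution is only statistical - with few flows, several
can still hash onto the same member link - and the hash policy on the switch
side matters just as much as the one on the host side.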
On Sep 16, 2016 7:45 PM, "Ergi Thanasko" <ergi.thanasko at avsquad.com> wrote:
> Hi all,
> We have a few servers connected via 10G NICs with LACP; some of them have
> 4 NICs and some have 6 NICs in link aggregation mode. We have been moving a
> lot of data around and are trying to get the maximum performance. I have
> seen the zpools deliver 2-3GB/sec of aggregate throughput. iperf does about
> 600-800MB/sec between those two servers.
> Given the hardware that we have and the zpool performance, we expected
> to see some serious data transfer rates; however, we only see around
> 200-300MB/sec on average using rsync or copy-paste over NFS, with the
> standard MTU of 1500 and the default NFS block size. I want to ask the
> community what to do to get higher throughput at the application level. I
> hear ZFS send/receive or ZFS shadow migration works faster, but it involves
> snapshots. Our data (terabytes) is constantly evolving, and we would prefer
> something in the nature of rsync that still utilizes the network hardware.
>
> Does anyone have a hardware setup that can see 1GB/sec throughput and not
> mind sharing?
> Is there any software that uses multithreaded sessions to move data around
> in a ZFS-friendly way? We would not mind going with a commercial solution
> like Commvault or Veeam if they work.
>
> Thank you for your time
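
On the multithreaded question: rsync itself is single-threaded per
invocation, so one common workaround is to run several rsyncs in parallel
over disjoint subtrees, giving LACP separate flows to hash across the member
links. A rough sketch, assuming GNU xargs on the sending side and data laid
out in top-level subdirectories (all paths and host names are placeholders):

  # Jumbo frames also help at these rates; the MTU must match end to end
  # (both hosts and every switch port in the path), and the IP interface
  # may need to be unplumbed before the MTU can change
  dladm set-linkprop -p mtu=9000 aggr0

  # One rsync per top-level subdirectory, four at a time; each rsync is
  # a separate TCP flow that LACP can hash onto a different member link
  ls /tank/data | xargs -P 4 -I {} \
      rsync -a /tank/data/{} remotehost:/tank/data/

This won't help if the bottleneck is rsync's checksumming, ssh encryption, or
NFS sync semantics rather than the wire, so it's worth confirming with the
parallel iperf test above before reaching for a commercial tool.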