[OmniOS-discuss] Network throughput 1GB/sec
Michael Talbott
mtalbott at lji.org
Fri Sep 16 19:53:42 UTC 2016
Jumbo frames are a major help. Also try using multiple streams (break a single rsync job into multiple jobs), and be sure to use rsync's native protocol rather than tunneling it over ssh. Then there's bbcp, which can push a single copy operation over multiple streams to fully saturate your disks/network ;) https://www.slac.stanford.edu/~abh/bbcp/
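
For the multi-stream idea, here is a minimal sketch in Python (the host name "nfs-server", the rsync daemon module "pool0", the source path, and the stream count are placeholder assumptions, not anything from this thread): it fans one copy job out over several concurrent rsync processes, one per top-level subdirectory, speaking rsync's native daemon protocol (rsync://) instead of going over ssh.

#!/usr/bin/env python3
# Sketch only: split one large rsync job into parallel streams, one per
# top-level subdirectory, using the rsync daemon protocol (rsync://),
# not ssh. SRC_ROOT, DEST and STREAMS below are placeholder assumptions;
# adjust them for your environment.
import os
import subprocess
from concurrent.futures import ThreadPoolExecutor

SRC_ROOT = "/pool0/data"            # local source tree (placeholder)
DEST = "rsync://nfs-server/pool0"   # rsync daemon module (placeholder)
STREAMS = 4                         # concurrent rsync processes

def copy_subtree(subdir):
    """Run one rsync stream for a single subdirectory."""
    src = os.path.join(SRC_ROOT, subdir)
    return subprocess.call(["rsync", "-a", "--whole-file", src, DEST + "/"])

if __name__ == "__main__":
    subdirs = [d for d in os.listdir(SRC_ROOT)
               if os.path.isdir(os.path.join(SRC_ROOT, d))]
    with ThreadPoolExecutor(max_workers=STREAMS) as pool:
        codes = list(pool.map(copy_subtree, subdirs))
    print("rsync exit codes:", codes)

The same splitting idea is what bbcp does for you inside a single invocation.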
Michael
> On Sep 16, 2016, at 12:38 PM, Ergi Thanasko <ergi.thanasko at avsquad.com> wrote:
>
> Hi Bob,
> We were testing it between two similar servers; rsync and copy/paste in both directions (read/write) gave the same result, around 300MB/sec average. Of course, the speed tests on the pools themselves show higher throughput, around 600MB/sec.
>
>> On Sep 16, 2016, at 12:33 PM, Bob Friesenhahn <bfriesen at simple.dallas.tx.us> wrote:
>>
>> On Fri, 16 Sep 2016, Ergi Thanasko wrote:
>>
>>> Given the hardware that we have and the zpool performance, we expected to see some serious data transfer rates; however, we only see around 200-300MB/sec average using rsync or copy/paste over NFS, with standard MTU 1500 and NFS block size. I want to ask the community
>>
>> Are these read rates or write rates? Read rates should be able to come close to pool or wire limits. Write rates over NFS are primarily dominated by latency on a per-writer basis.
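
As a rough worked example of that per-writer bound (the NFS write size, per-write latency, and the 10GbE figure below are illustrative assumptions, not measurements from this thread), in Python:

# One synchronous NFS writer moves at most one write per round trip,
# so per-writer throughput is roughly wsize / latency, and parallel
# writers are needed to approach wire speed. Numbers are assumed.
wsize = 128 * 1024                 # assumed NFS write size: 128 KiB
latency = 0.0005                   # assumed 0.5 ms per write round trip
per_writer = wsize / latency       # ~262 MB/s for a single writer
link = 10e9 / 8                    # ~1.25 GB/s ceiling on 10GbE
print(per_writer / 1e6, link / per_writer)   # ~262 MB/s, ~4.8 writers

Under those assumed numbers a single writer tops out in the same 200-300MB/sec range reported above, while several writers in parallel could get close to the wire limit.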
>>
>> Increasing the MTU to 9k has been shown to improve throughput quite a lot for large transfers.
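
On OmniOS/illumos the datalink MTU is normally raised with dladm; a minimal sketch follows, wrapped in Python to match the example above. The link name "e1000g0" is a placeholder, and it is assumed the switch and every peer also accept 9000-byte frames and that the interface is unplumbed if the driver requires it.

import subprocess
LINK = "e1000g0"   # placeholder datalink name; list yours with dladm show-link
# Raise the datalink MTU to 9000, then read the property back to confirm.
subprocess.check_call(["dladm", "set-linkprop", "-p", "mtu=9000", LINK])
subprocess.check_call(["dladm", "show-linkprop", "-p", "mtu", LINK])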
>>
>> Bob
>> --
>> Bob Friesenhahn
>> bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
>> GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
>