[OmniOS-discuss] questions

Richard Jahnel rjahnel at ellipseinc.com
Fri Sep 15 14:10:06 UTC 2017


Agreed; for internal network transfers, netcat is the way to go. I have used it in the past for ZFS sends between internal machines with SSD-backed volumes. In those cases, all the crypto work done by ssh/scp and other secure copy methods is the bottleneck.
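
As a rough sketch (the pool, snapshot, host, and port names are placeholders, and listen-mode flags differ between netcat variants), a ZFS send over netcat looks something like:

# on the receiving machine
nc -l 9000 | zfs receive -F tank/backup

# on the sending machine
zfs send tank/data@snap | nc receiver-host 9000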

Obviously, if your transfers leave the secure network, you'll want to suck it up and take the CPU hit. ;)

From: OmniOS-discuss [mailto:omnios-discuss-bounces at lists.omniti.com] On Behalf Of Ludovic Orban
Sent: Thursday, September 14, 2017 8:28 AM
To: Dirk Willems <dirk.willems at exitas.be>
Cc: omnios-discuss <omnios-discuss at lists.omniti.com>
Subject: Re: [OmniOS-discuss] questions

Here's a quick thought: if you are copying your files with scp, you might be CPU-bound because of all the crypto work. Try copying with a much more CPU-lightweight tool; I personally use netcat for this kind of thing.
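
For a plain file copy, that could look something like this (the address and port are just examples, and listen-mode flags differ between netcat variants):

# on the receiver (192.168.1.2 here)
nc -l 9000 > test10G

# on the sender
nc 192.168.1.2 9000 < test10G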

Just my 2 cents.

--
Ludovic

On Thu, Sep 14, 2017 at 2:26 PM, Dirk Willems <dirk.willems at exitas.be> wrote:

Hello,



I'm trying to understand something; let me explain.



Oracle always told me that if you create an etherstub switch, it runs at InfiniBand-like speed, 40 Gb/s.

But I have a customer running on Solaris (yeah, I know, but let me explain) who is copying from one NGZ to another NGZ on the same GZ over the LAN (I know, I told him to use an etherstub).
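
(For reference, an etherstub with per-zone VNICs is created with dladm, roughly like this; the stub and VNIC names are placeholders:)

dladm create-etherstub stub0
dladm create-vnic -l stub0 vnic0   # assign to the first zone
dladm create-vnic -l stub0 vnic1   # assign to the second zone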

The copy is performed for an Oracle database with a SQL command. The DBA, who runs 5 streams, says it's waiting on the disks; the disks are 50-60% busy and the speed is 30 MB/s.



So I did some tests just to see and understand whether it's the database or the system, but doing those tests left me very confused.



On another Solaris box at my work, a copy over an etherstub switch runs at 185 MB/s. I expected much closer to InfiniBand speed?



root at test1:/export/home/Admin# scp test10G Admin at 192.168.1.2:/export/home/Admin/
Password:
test10G              100% |****************************************************************| 10240 MB    00:59



root at test2:~# dlstat -i 2

    LINK    IPKTS   RBYTES    OPKTS   OBYTES
    net1   25.76K  185.14M   10.08K    2.62M
    net1   27.04K  187.16M   11.23K    3.22M
    net1   26.97K  186.37M   11.24K    3.23M
    net1   26.63K  187.67M   10.82K    2.99M
    net1   27.94K  186.65M   12.17K    3.75M
    net1   27.45K  187.46M   11.70K    3.47M
    net1   26.01K  181.95M   10.63K    2.99M
    net1   27.95K  188.19M   12.14K    3.69M
    net1   27.91K  188.36M   12.08K    3.64M

The disks are all separate LUNs, each in its own pool => the disks are 20-30% busy.



On my OmniOSce box in my lab, over an etherstub:



root at GNUHealth:~# scp test10G witte at 192.168.20.3:/export/home/witte/
Password:
test10G                                                                 76% 7853MB 116.4MB/s



=> the copy is 116.4 MB/s. I expected much more from InfiniBand speed; it's just the same as the LAN?



It's not that my disks can't keep up; at 17% busy they're practically sleeping ...

   extended device statistics
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
    0,0  248,4    0,0    2,1  0,0  1,3    0,0    5,3   0 102 c1
    0,0   37,5    0,0    0,7  0,0  0,2    0,0    4,7   0  17 c1t0d0 => rpool
    0,0   38,5    0,0    0,7  0,0  0,2    0,0    4,9   0  17 c1t1d0 => rpool
    0,0   40,5    0,0    0,1  0,0  0,2    0,0    5,6   0  17 c1t2d0 => data pool
    0,0   43,5    0,0    0,2  0,0  0,2    0,0    5,4   0  17 c1t3d0 => data pool
    0,0   44,5    0,0    0,2  0,0  0,2    0,0    5,5   0  18 c1t4d0 => data pool
    0,0   44,0    0,0    0,2  0,0  0,2    0,0    5,4   0  17 c1t5d0 => data pool
    0,0   76,0    0,0    1,5  7,4  0,4   97,2    4,9  14  18 rpool
    0,0  172,4    0,0    0,6  2,0  0,9   11,4    5,5  12  20 DATA





root at NGINX:/root# dlstat show-link NGINX1 -i 2

      LINK  TYPE     ID  INDEX     PKTS    BYTES
    NGINX1    rx  bcast     --        0        0
    NGINX1    rx     sw     --        0        0
    NGINX1    tx  bcast     --        0        0
    NGINX1    tx     sw     --    9.26K  692.00K
    NGINX1    rx  local     --   26.00K  216.32M
    NGINX1    rx  bcast     --        0        0
    NGINX1    rx     sw     --        0        0
    NGINX1    tx  bcast     --        0        0
    NGINX1    tx     sw     --    7.01K  531.38K
    NGINX1    rx  local     --   30.65K  253.73M
    NGINX1    rx  bcast     --        0        0
    NGINX1    rx     sw     --        0        0
    NGINX1    tx  bcast     --        0        0
    NGINX1    tx     sw     --    8.95K  669.32K
    NGINX1    rx  local     --   29.10K  241.15M



Yet on the other NGZ I receive 250 MB/s?



- So my question is: how come the speed equals LAN speed (100 MB/s) on OmniOSce, yet I receive 250 MB/s on the other end?

- Why is the etherstub so slow, if it is supposed to run at InfiniBand speed, 40 Gb/s?
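
One way to measure the raw etherstub throughput, with both the disks and scp's crypto taken out of the picture, would be to stream zeros through netcat and watch dlstat on the receiving side (the address and port are just examples, and listen flags differ between netcat variants):

# in the receiving zone (192.168.20.3 here)
nc -l 9000 > /dev/null

# in the sending zone: push 10 GB of zeros across the stub
dd if=/dev/zero bs=1024k count=10240 | nc 192.168.20.3 9000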



I'm very confused right now ...



And I want to know for sure how to understand and read this the right way, because this customer will be the first of my customers to switch completely over to OmniOSce in production, and since they are one of the biggest companies in Belgium, I really don't want to mess up!



So any help and clarification will be highly appreciated!



Thank you very much.



Kind Regards,



Dirk


--

Dirk Willems
System Engineer


+32 (0)3 443 12 38
Dirk.Willems at exitas.be

Quality. Passion. Personality


www.exitas.be | Veldkant 31 | 2550 Kontich


Illumos OmniOS Installation and Configuration Implementation Specialist.
Oracle Solaris 11 Installation and Configuration Certified Implementation Specialist.



_______________________________________________
OmniOS-discuss mailing list
OmniOS-discuss at lists.omniti.com
http://lists.omniti.com/mailman/listinfo/omnios-discuss


