[OmniOS-discuss] questions

Dirk Willems dirk.willems at exitas.be
Thu Sep 14 16:41:23 UTC 2017


I just executed => dladm create-etherstub Backend_Switch0


and got => Backend_Switch0 etherstub 9000 up
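
For completeness, the rest of the setup is roughly the following; the vnic
name below is only a placeholder for the ones actually attached to the zones:

dladm create-etherstub Backend_Switch0
dladm create-vnic -l Backend_Switch0 vnic0   # one vnic per zone, assigned via zonecfg
dladm show-link Backend_Switch0              # shows: Backend_Switch0 etherstub 9000 up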


On 14-09-17 18:26, Ian Kaufman wrote:
> Networking has always used *bps - that's been the standard for many 
> years. Megabits, Gigabits ...
>
> Disk tools have always measured in bytes since that is how the 
> capacity is defined.
>
> How did you create your etherstub? I know you can set a maxbw (maximum 
> bandwidth), but I don't know what the default behavior is.
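>
> For what it's worth, something like this should show whether a cap is set,
> and how to set or clear one (the vnic name here is just an example):
>
> dladm show-linkprop -p maxbw vnic0
> dladm set-linkprop -p maxbw=2000 vnic0    # cap the vnic at ~2 Gbps
> dladm reset-linkprop -p maxbw vnic0       # back to the default (no cap)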
>
> Ian
>
>
>
> On Thu, Sep 14, 2017 at 9:13 AM, Dirk Willems <dirk.willems at exitas.be> wrote:
>
>     Thank you all, things are already clearing up :)
>
>
>     So InfiniBand is 40 Gbps and not 40 GB/s; GB/s versus Gbps is very
>     confusing. Why not pick one standard and express everything in GB/s or MB/s?
>
>     A lot of people mix the two up, me included ...
>
>     If it is 40 Gbps, dividing by a factor of 8 gives a theoretical maximum
>     of 5 GB/s throughput.
>
>     Just a small difference, 40 or 5 :)
>
>     So Ian, you get the full 36 Gbps, very cool; that looks more like it :)
>
>     Did I play with the frame size? I'm not really sure what you mean by
>     that, sorry, but I think it's at the default of 9000:
>
>     Backend_Switch0 etherstub 9000 up
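>
>     If by frame size you mean the MTU, something like this should confirm it
>     on the etherstub and the vnics on top of it (the vnic name is a placeholder):
>
>     dladm show-linkprop -p mtu Backend_Switch0
>     dladm show-linkprop -p mtu vnic0
>     dladm set-linkprop -p mtu=9000 vnic0    # only if a vnic is still at 1500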
>
>
>     I do understand that if we use UDP streams from process to process it
>     will be much quicker over the etherstub; I'll need to do more tests.
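>
>     Something like iperf should do for that, assuming it is installed in
>     both zones (the package name depends on the repo); server in one NGZ,
>     client in the other, the address below is just the one from my lab:
>
>     iperf3 -s                                 # on the receiving NGZ
>     iperf3 -c 192.168.20.3 -u -b 5G -t 10     # UDP, 5 Gbit/s target rate, 10 s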
>
>     For a customer we used mbuffer with zfs send over the LAN, which is also
>     very quick; I sometimes use it at home as well, a very good program.
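>
>     The pattern we use is roughly this (pool, snapshot, host and port are
>     made-up examples; the buffer sizes are just typical values):
>
>     # on the receiving side
>     mbuffer -s 128k -m 1G -I 9090 | zfs receive data/backup
>     # on the sending side
>     zfs send data/fs@snap | mbuffer -s 128k -m 1G -O otherhost:9090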
>
>     But I still do not understand how it is that I copy from one NGZ at
>     100 MB/s yet receive 250 MB/s on the other NGZ, very strange?
>
>
>     Does the dlstat command differ between OmniOSce and Solaris?
>
>     RBYTES => bytes received
>
>     OBYTES => bytes sent
>
>     root at test2:~# dlstat -i 2
>     >
>     >  LINK    IPKTS   RBYTES    OPKTS   OBYTES
>     >            net1   25.76K  185.14M   10.08K    2.62M
>     >            net1   27.04K  187.16M   11.23K    3.22M
>
>
>
>     BYTES => both received and sent?
>
>     But still, when the copy is not running I see 0, so that doesn't explain
>     why I see 216 MB; where does the rest, the other 116 MB, come from, is it
>     compression? (A quick way to check that is sketched after the output below.)
>
>     root at NGINX:/root# dlstat show-link NGINX1 -i 2
>     >
>     >  LINK  TYPE      ID  INDEX     PKTS    BYTES
>     >          NGINX1    rx   bcast     --        0     0
>     >          NGINX1    rx      sw     --        0     0
>     >          NGINX1    tx   bcast     --        0     0
>     >          NGINX1    tx      sw     --    9.26K 692.00K
>     >          NGINX1    rx   local     --   26.00K 216.32M
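>
>     One way to rule compression in or out would be to compare the logical
>     and the compressed sizes of the dataset (the dataset name below is just
>     an example, not the real one):
>
>     zfs get compression,compressratio,used,logicalused data/nginx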
>
>
>     Thank you all for your feedback, much appreciated !
>
>
>     Kind Regards,
>
>
>     Dirk
>
>
>
>     On 14-09-17 17:07, Ian Kaufman wrote:
>>     Some other things you need to take into account:
>>
>>     QDR InfiniBand is 40 Gbps, not 40 GB/s. That is a factor of 8
>>     difference. That is also a theoretical maximum throughput; there
>>     is some overhead. In reality, you will never see 40 Gbps.
>>
>>     My system tested out at 6Gbps - 8Gbps using NFS over IPoIB, with
>>     DDR (20Gbps) nodes and a QDR (40Gbps) storage server. IPoIB drops
>>     the theoretical max rates to 18Gbps and 36Gbps respectively.
>>
>>     If you are getting 185MB/s, you are seeing 1.48Gbps.
>>
>>     Keep your B's and b's straight. Did you play with your frame size
>>     at all?
>>
>>     Ian
>>
>>     On Thu, Sep 14, 2017 at 7:10 AM, Jim Klimov <jimklimov at cos.ru> wrote:
>>
>>         On September 14, 2017 2:26:13 PM GMT+02:00, Dirk Willems
>>         <dirk.willems at exitas.be> wrote:
>>         >Hello,
>>         >
>>         >
>>         >I'm trying to understand something let me explain.
>>         >
>>         >
>>         >Oracle always told me that if you create an etherstub switch it
>>         >has InfiniBand speed, 40 GB/s.
>>         >
>>         >But I have a customer running on Solaris (yeah I know, but let me
>>         >explain) who is copying from one NGZ to another NGZ on the same GZ
>>         >over the LAN (I know, I told him to use an etherstub).
>>         >
>>         >The copy, which is performed for an Oracle database with a SQL
>>         >command, is, according to the DBA who has 5 streams, waiting on
>>         >the disks; the disks are 50 - 60% busy and the speed is 30 MB/s.
>>         >
>>         >
>>         >So I did some tests just to see and understand whether it's the
>>         >database or the system, but in doing those tests I got very
>>         >confused ???
>>         >
>>         >
>>         >On another Solaris box at my work, a copy over an etherstub switch
>>         >=> copy speed is 185 MB/s; I expected much more from InfiniBand
>>         >speed ???
>>         >
>>         >
>>         >root at test1:/export/home/Admin# scp test10G Admin at 192.168.1.2:/export/home/Admin/
>>         >Password:
>>         >test10G              100% |****************************************************************| 10240 MB    00:59
>>         >
>>         >
>>         >root at test2:~# dlstat -i 2
>>         >
>>         >  LINK    IPKTS   RBYTES    OPKTS  OBYTES
>>         >            net1   25.76K 185.14M 10.08K    2.62M
>>         >            net1   27.04K  187.16M  11.23K    3.22M
>>         >            net1   26.97K  186.37M  11.24K    3.23M
>>         >            net1   26.63K  187.67M  10.82K    2.99M
>>         >            net1   27.94K  186.65M  12.17K    3.75M
>>         >            net1   27.45K  187.46M  11.70K    3.47M
>>         >            net1   26.01K  181.95M  10.63K    2.99M
>>         >            net1   27.95K  188.19M  12.14K    3.69M
>>         >            net1   27.91K  188.36M  12.08K    3.64M
>>         >
>>         >The disks are all separate LUNs, each with its own pool => disks
>>         >are 20 - 30% busy
>>         >
>>         >
>>         >On my OmniOSce at my lab over etherstub
>>         >
>>         >
>>         >root at GNUHealth:~# scp test10G witte at 192.168.20.3:/export/home/witte/
>>         >Password:
>>         >test10G 76% 7853MB 116.4MB/s
>>         >
>>         >
>>         >=> the copy is 116.4 MB/s => I expected much more from InfiniBand;
>>         >the speed is just the same as the LAN ???
>>         >
>>         >
>>         >It's not that my disks cannot keep up; at 17% busy they are sleeping ...
>>         >
>>         >    extended device statistics
>>         >     r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
>>         >     0,0  248,4    0,0    2,1  0,0  1,3    0,0    5,3   0 102 c1
>>         >     0,0   37,5    0,0    0,7  0,0  0,2    0,0    4,7   0  17 c1t0d0 => rpool
>>         >     0,0   38,5    0,0    0,7  0,0  0,2    0,0    4,9   0  17 c1t1d0 => rpool
>>         >     0,0   40,5    0,0    0,1  0,0  0,2    0,0    5,6   0  17 c1t2d0 => data pool
>>         >     0,0   43,5    0,0    0,2  0,0  0,2    0,0    5,4   0  17 c1t3d0 => data pool
>>         >     0,0   44,5    0,0    0,2  0,0  0,2    0,0    5,5   0  18 c1t4d0 => data pool
>>         >     0,0   44,0    0,0    0,2  0,0  0,2    0,0    5,4   0  17 c1t5d0 => data pool
>>         >     0,0   76,0    0,0    1,5  7,4  0,4   97,2    4,9  14  18 rpool
>>         >     0,0  172,4    0,0    0,6  2,0  0,9   11,4    5,5  12  20 DATA
>>         >
>>         >
>>         >
>>         >root at NGINX:/root# dlstat show-link NGINX1 -i 2
>>         >
>>         >  LINK  TYPE      ID  INDEX     PKTS   BYTES
>>         >          NGINX1    rx   bcast     --       0        0
>>         >          NGINX1    rx      sw     --       0        0
>>         >          NGINX1    tx   bcast     --       0        0
>>         >          NGINX1    tx      sw     --   9.26K  692.00K
>>         >          NGINX1    rx   local     --  26.00K 216.32M
>>         >          NGINX1    rx   bcast     --       0        0
>>         >          NGINX1    rx      sw     --       0        0
>>         >          NGINX1    tx   bcast     --       0        0
>>         >          NGINX1    tx      sw     --   7.01K  531.38K
>>         >          NGINX1    rx   local     --  30.65K 253.73M
>>         >          NGINX1    rx   bcast     --       0        0
>>         >          NGINX1    rx      sw     --       0        0
>>         >          NGINX1    tx   bcast     --       0        0
>>         >          NGINX1    tx      sw     --   8.95K  669.32K
>>         >          NGINX1    rx   local     --  29.10K 241.15M
>>         >
>>         >
>>         >On the other NGZ I receive 250MB/s ????
>>         >
>>         >
>>         >- So my question is: how come the speed is equal to LAN speed,
>>         >100 MB/s, on OmniOSce, but I receive 250 MB/s?
>>         >
>>         >- Why is the etherstub so slow if InfiniBand speed is 40 GB/s ???
>>         >
>>         >
>>         >I'm very confused right now ...
>>         >
>>         >
>>         >And I want to know for sure how to understand and read this the
>>         >right way, because this customer will be my first customer to
>>         >switch completely over to OmniOSce in production, and because this
>>         >customer is one of the biggest companies in Belgium I really don't
>>         >want to mess up !!!
>>         >
>>         >
>>         >So any help and clarification will be highly appreciated !!!
>>         >
>>         >
>>         >Thank you very much.
>>         >
>>         >
>>         >Kind Regards,
>>         >
>>         >
>>         >Dirk
>>
>>         I am not sure where the InfiniBand claim comes from, but when
>>         copying data disk to disk you involve the slow layers like the
>>         disks themselves, skewed by faster layers like the cache of
>>         already-read data and delayed writes :)
>>
>>         Having a wide pipe that you may fill doesn't mean you have the
>>         means to fill it with just a few disks.
>>
>>         To estimate the speeds, try pure UDP streams from process to
>>         process (no disk), large-packet floodping, etc.
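>>
>>         For example, something along these lines (the address is a
>>         placeholder; note that illumos ping takes the packet size as an
>>         argument, there is no flood flag as far as I recall):
>>
>>         ping -s 192.168.20.3 8000 1000    # ~8 KB payloads, 1000 packets, with statistics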
>>
>>         I believe etherstub is not constrained artificially, and
>>         defaults to jumbo frames. Going to LAN and back can in fact
>>         use external hardware (IIRC there may be a system option to
>>         disable that, not sure) and so is constrained by that.
>>
>>         Jim
>>         --
>>         Typos courtesy of K-9 Mail on my Android
>>
>>
>>
>>
>>     -- 
>>     Ian Kaufman
>>     Research Systems Administrator
>>     UC San Diego, Jacobs School of Engineering ikaufman AT ucsd DOT edu
>>
>>
>
>     -- 
>     	Dirk Willems
>     System Engineer
>
>
>     +32 (0)3 443 12 38
>     Dirk.Willems at exitas.be
>
>     Quality. Passion. Personality
>
>     www.exitas.be | Veldkant 31 | 2550 Kontich
>
>     Illumos OmniOS Installation and Configuration Implementation
>     Specialist.
>     Oracle Solaris 11 Installation and Configuration Certified
>     Implementation Specialist. 	
>
>
>
>
>
>
> -- 
> Ian Kaufman
> Research Systems Administrator
> UC San Diego, Jacobs School of Engineering ikaufman AT ucsd DOT edu

-- 
	Dirk Willems
System Engineer


+32 (0)3 443 12 38
Dirk.Willems at exitas.be

Quality. Passion. Personality

www.exitas.be | Veldkant 31 | 2550 Kontich

Illumos OmniOS Installation and Configuration Implementation Specialist.
Oracle Solaris 11 Installation and Configuration Certified 
Implementation Specialist. 	
