[OmniOS-discuss] questions
Dirk Willems
dirk.willems at exitas.be
Thu Sep 14 16:55:51 UTC 2017
Actually I have tested it with this etherstub => NGINX_Switch0.
All the etherstub switches were created in exactly the same way.
Indeed, on the etherstub the DEFAULT for mtu shows 1500, but the VALUE
is 9000, so 9000 is what is in use?
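(To query individual properties directly, something like

    dladm show-linkprop -p mtu,maxbw NGINX_Switch0

shows just those rows; VALUE is what is actually in effect, DEFAULT is
only the built-in default, and "--" under maxbw means no bandwidth cap
is set.)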
root at OmniOS:/root# dladm show-linkprop NGINX_Switch0
LINK PROPERTY PERM VALUE DEFAULT POSSIBLE
NGINX_Switch0 autopush rw -- -- --
NGINX_Switch0 zone rw -- -- --
NGINX_Switch0 state r- up up up,down
NGINX_Switch0 mtu rw 9000 1500 576-9000
NGINX_Switch0 maxbw rw -- -- --
NGINX_Switch0 cpus rw -- -- --
NGINX_Switch0 cpus-effective r- -- -- --
NGINX_Switch0 pool rw -- -- --
NGINX_Switch0 pool-effective r- -- -- --
NGINX_Switch0 priority rw high high low,medium,high
NGINX_Switch0 forward rw 1 1 1,0
NGINX_Switch0 default_tag rw 1 1 --
NGINX_Switch0 learn_limit rw 1000 1000 --
NGINX_Switch0 learn_decay rw 200 200 --
NGINX_Switch0 stp rw 1 1 1,0
NGINX_Switch0 stp_priority rw 128 128 --
NGINX_Switch0 stp_cost rw auto auto --
NGINX_Switch0 stp_edge rw 1 1 1,0
NGINX_Switch0 stp_p2p rw auto auto true,false,auto
NGINX_Switch0 stp_mcheck rw 0 0 1,0
NGINX_Switch0 protection rw -- -- mac-nospoof,
restricted,
ip-nospoof,
dhcp-nospoof
NGINX_Switch0 allowed-ips rw -- -- --
NGINX_Switch0 allowed-dhcp-cids rw -- -- --
NGINX_Switch0 rxrings rw -- -- --
NGINX_Switch0 rxrings-effective r- -- -- --
NGINX_Switch0 txrings rw -- -- --
NGINX_Switch0 txrings-effective r- -- -- --
NGINX_Switch0 txrings-available r- 0 -- --
NGINX_Switch0 rxrings-available r- 0 -- --
NGINX_Switch0 rxhwclnt-available r- 0 -- --
NGINX_Switch0 txhwclnt-available r- 0 -- --
root at OmniOS:/root# dladm show-linkprop NGINX1
LINK PROPERTY PERM VALUE DEFAULT POSSIBLE
NGINX1 autopush rw -- -- --
NGINX1 zone rw NGINX -- --
NGINX1 state r- up up up,down
NGINX1 mtu rw 9000 9000 576-9000
NGINX1 secondary-macs rw -- -- --
NGINX1 maxbw rw -- -- --
NGINX1 cpus rw -- -- --
NGINX1 cpus-effective r- 0,23 -- --
NGINX1 pool rw -- -- --
NGINX1 pool-effective r- -- -- --
NGINX1 priority rw high high low,medium,high
NGINX1 tagmode rw vlanonly vlanonly normal,vlanonly
NGINX1 protection rw ip-nospoof -- mac-nospoof,
restricted,
ip-nospoof,
dhcp-nospoof
NGINX1 promisc-filtered rw on on off,on
NGINX1 allowed-ips rw 192.168.20.3/32 -- --
NGINX1 allowed-dhcp-cids rw -- -- --
NGINX1 rxrings rw -- -- --
NGINX1 rxrings-effective r- -- -- --
NGINX1 txrings rw -- -- --
NGINX1 txrings-effective r- -- -- --
NGINX1 txrings-available r- 0 -- --
NGINX1 rxrings-available r- 0 -- --
NGINX1 rxhwclnt-available r- 0 -- --
NGINX1 txhwclnt-available r- 0 -- --
root at OmniOS:/root# dladm show-linkprop GNUHealth0
LINK PROPERTY PERM VALUE DEFAULT POSSIBLE
GNUHealth0 autopush rw -- -- --
GNUHealth0 zone rw GNUHealth -- --
GNUHealth0 state r- up up up,down
GNUHealth0 mtu rw 9000 9000 576-9000
GNUHealth0 secondary-macs rw -- -- --
GNUHealth0 maxbw rw -- -- --
GNUHealth0 cpus rw -- -- --
GNUHealth0 cpus-effective r- 21-22 -- --
GNUHealth0 pool rw -- -- --
GNUHealth0 pool-effective r- -- -- --
GNUHealth0 priority rw high high low,medium,high
GNUHealth0 tagmode rw vlanonly vlanonly normal,vlanonly
GNUHealth0 protection rw ip-nospoof -- mac-nospoof,
restricted,
ip-nospoof,
dhcp-nospoof
GNUHealth0 promisc-filtered rw on on off,on
GNUHealth0 allowed-ips rw 192.168.20.5/32 -- --
GNUHealth0 allowed-dhcp-cids rw -- -- --
GNUHealth0 rxrings rw -- -- --
GNUHealth0 rxrings-effective r- -- -- --
GNUHealth0 txrings rw -- -- --
GNUHealth0 txrings-effective r- -- -- --
GNUHealth0 txrings-available r- 0 -- --
GNUHealth0 rxrings-available r- 0 -- --
GNUHealth0 rxhwclnt-available r- 0 -- --
GNUHealth0 txhwclnt-available r- 0 -- --
root at OmniOS:/root#
On 14-09-17 18:46, Ian Kaufman wrote:
> What about the attached vnics?
>
> Can you do:
>
> dladm show-linkprop vnic# for the vnics connected to the etherstub?
> There may be a maxbw setting ...
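>
> (If a cap were set, it could be cleared again with something like
> "dladm reset-linkprop -p maxbw <vnic>" -- just as an example.)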
>
> On Thu, Sep 14, 2017 at 9:41 AM, Dirk Willems
> <dirk.willems at exitas.be> wrote:
>
> I just executed => dladm create-etherstub Backend_Switch0
>
>
> and got => Backend_Switch0 etherstub 9000 up
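>
> (The vnics were then attached with something along the lines of
> "dladm create-vnic -l Backend_Switch0 vnic0", where vnic0 is a
> placeholder name; a vnic picks up the etherstub's 9000-byte MTU as
> its default, which matches the NGINX1 output above.)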
>
>
> On 14-09-17 18:26, Ian Kaufman wrote:
>> Networking has always used *bps - that's been the standard for
>> many years. Megabits, Gigabits ...
>>
>> Disk tools have always measured in bytes since that is how the
>> capacity is defined.
>>
>> How did you create your etherstub? I know you can set a maxbw
>> (maximum bandwidth), but I don't know what the default behavior is.
>>
>> Ian
>>
>>
>>
>> On Thu, Sep 14, 2017 at 9:13 AM, Dirk Willems
>> <dirk.willems at exitas.be> wrote:
>>
>> Thank you all, the water is already clearing up :)
>>
>>
>> So InfiniBand is 40 Gbps and not 40 GB/s. GB/s versus Gbps is very
>> confusing; why don't they pick one standard and express everything
>> in GB/s or MB/s?
>>
>> A lot of people mix them up, me too ...
>>
>> If it is 40 Gbps, that is a factor of 8, so theoretically we have a
>> maximum of 5 GB/s throughput.
>>
>> Quite a difference, 40 versus 5 :)
>>
>> So Ian, you have the full pipe with 36 Gbps, very cool, that looks
>> more like it :)
>>
>> Did I play with the frame size? Not really sure what you mean by
>> that, sorry, but I think it is at the default of 9000:
>>
>> Backend_Switch0 etherstub 9000 up
>>
>>
>> I do understand that if we use UDP streams from process to process
>> it will be much quicker over the etherstub; I need to do more tests.
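>>
>> (A quick process-to-process test without disks, assuming iperf3 is
>> installed in both zones, could look like this -- just a sketch:
>>
>>    iperf3 -s                        # receiving zone
>>    iperf3 -c 192.168.20.3 -u -b 0   # sending zone: UDP, uncapped
>>
>> with the address taken from my scp test.)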
>>
>> For a customer we used mbuffer with zfs send over the LAN, which is
>> also very quick; I sometimes use it at home as well, a very good
>> program.
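>>
>> (For reference, a typical pipeline -- host and dataset names are
>> placeholders:
>>
>>    mbuffer -I 9090 -s 128k -m 1G | zfs receive data/copy       # receiver
>>    zfs send data/fs@snap | mbuffer -s 128k -m 1G -O host:9090  # sender
>> )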
>>
>> But I still do not understand how it is that I copy from one NGZ at
>> 100 MB/s yet receive 250 MB/s on the other NGZ. Very strange?
>>
>>
>> Is the dlstat command different between OmniOSce and Solaris?
>>
>> RBYTES => received bytes
>>
>> OBYTES => sent bytes
>>
>> root at test2:~# dlstat -i 2
>> >
>> > LINK IPKTS RBYTES OPKTS OBYTES
>> > net1 25.76K 185.14M 10.08K 2.62M
>> > net1 27.04K 187.16M 11.23K 3.22M
>>
>>
>>
>> BYTES => receiving and sending ?
>>
>> But still, if the copy is not running I see 0, so that doesn't
>> explain why I see 216 MB. Where does the other 116 MB come from?
>> Is it compression?
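>>
>> (Whether compression plays a role could be checked with something
>> like "zfs get compression,compressratio" on the datasets involved --
>> just a thought.)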
>>
>> root at NGINX:/root# dlstat show-link NGINX1 -i 2
>> >
>> > LINK TYPE ID INDEX PKTS BYTES
>> > NGINX1 rx bcast -- 0 0
>> > NGINX1 rx sw -- 0 0
>> > NGINX1 tx bcast -- 0 0
>> > NGINX1 tx sw -- 9.26K 692.00K
>> > NGINX1 rx local -- 26.00K 216.32M
>>
>>
>> Thank you all for your feedback, much appreciated!
>>
>>
>> Kind Regards,
>>
>>
>> Dirk
>>
>>
>>
>> On 14-09-17 17:07, Ian Kaufman wrote:
>>> Some other things you need to take into account:
>>>
>>> QDR Infiniband is 40Gbps, not 40GB/s. That is a factor of 8
>>> difference. That is also a theoretical maximum throughput,
>>> there is some overhead. In reality, you will never see 40Gbps.
>>>
>>> My system tested out at 6Gbps - 8Gbps using NFS over IPoIB,
>>> with DDR (20Gbps) nodes and a QDR (40Gbps) storage server.
>>> IPoIB drops the theoretical max rates to 18Gbps and 36Gbps
>>> respectively.
>>>
>>> If you are getting 185MB/s, you are seeing 1.48Gbps.
>>>
>>> Keep your B's and b's straight. Did you play with your frame
>>> size at all?
>>>
>>> Ian
>>>
>>> On Thu, Sep 14, 2017 at 7:10 AM, Jim Klimov
>>> <jimklimov at cos.ru> wrote:
>>>
>>> On September 14, 2017 2:26:13 PM GMT+02:00, Dirk Willems
>>> <dirk.willems at exitas.be> wrote:
>>> >Hello,
>>> >
>>> >
>>> >I'm trying to understand something let me explain.
>>> >
>>> >
>>> >Oracle always told me that if you create an etherstub switch, it
>>> >has InfiniBand speed, 40 GB/s.
>>> >
>>> >But I have a customer running on Solaris (yeah I know, but let me
>>> >explain) who is copying from one NGZ to another NGZ in the same GZ
>>> >over the LAN (I know, I told him to use an etherstub).
>>> >
>>> >The copy is performed for an Oracle database with a SQL command;
>>> >the DBA, who has 5 streams, says it's waiting on the disks. The
>>> >disks are 50-60% busy and the speed is 30 MB/s.
>>> >
>>> >
>>> >So I did some tests just to see and understand whether it's the
>>> >database or the system, but while doing my tests I got very
>>> >confused ???
>>> >
>>> >
>>> >On another Solaris box at my work, a copy over an etherstub switch
>>> >=> the copy speed is 185 MB/s; I expected much more from InfiniBand
>>> >speed ???
>>> >
>>> >
>>> >root at test1:/export/home/Admin# scp test10G Admin at 192.168.1.2:/export/home/Admin/
>>> >Password:
>>> >test10G 100% |****************************************************************| 10240 MB 00:59
>>> >
>>> >
>>> >root at test2:~# dlstat -i 2
>>> >
>>> > LINK IPKTS RBYTES OPKTS OBYTES
>>> > net1 25.76K 185.14M 10.08K 2.62M
>>> > net1 27.04K 187.16M 11.23K 3.22M
>>> > net1 26.97K 186.37M 11.24K 3.23M
>>> > net1 26.63K 187.67M 10.82K 2.99M
>>> > net1 27.94K 186.65M 12.17K 3.75M
>>> > net1 27.45K 187.46M 11.70K 3.47M
>>> > net1 26.01K 181.95M 10.63K 2.99M
>>> > net1 27.95K 188.19M 12.14K 3.69M
>>> > net1 27.91K 188.36M 12.08K 3.64M
>>> >
>>> >The disks are all separate LUNs in separate pools => the disks are
>>> >20-30% busy
>>> >
>>> >
>>> >On my OmniOSce box in my lab, over the etherstub:
>>> >
>>> >
>>> >root at GNUHealth:~# scp test10G witte at 192.168.20.3:/export/home/witte/
>>> >Password:
>>> >test10G 76% 7853MB 116.4MB/s
>>> >
>>> >
>>> >=> the copy is 116.4 MB/s => I expected much more from InfiniBand;
>>> >it's just the same as the LAN ???
>>> >
>>> >
>>> >It's not that my disks can't keep up; at 17% busy they are
>>> >sleeping ...
>>> >
>>> >   extended device statistics
>>> >   r/s    w/s   Mr/s  Mw/s  wait  actv  wsvc_t  asvc_t  %w   %b  device
>>> >   0,0  248,4    0,0   2,1   0,0   1,3     0,0     5,3   0  102  c1
>>> >   0,0   37,5    0,0   0,7   0,0   0,2     0,0     4,7   0   17  c1t0d0 => rpool
>>> >   0,0   38,5    0,0   0,7   0,0   0,2     0,0     4,9   0   17  c1t1d0 => rpool
>>> >   0,0   40,5    0,0   0,1   0,0   0,2     0,0     5,6   0   17  c1t2d0 => data pool
>>> >   0,0   43,5    0,0   0,2   0,0   0,2     0,0     5,4   0   17  c1t3d0 => data pool
>>> >   0,0   44,5    0,0   0,2   0,0   0,2     0,0     5,5   0   18  c1t4d0 => data pool
>>> >   0,0   44,0    0,0   0,2   0,0   0,2     0,0     5,4   0   17  c1t5d0 => data pool
>>> >   0,0   76,0    0,0   1,5   7,4   0,4    97,2     4,9  14   18  rpool
>>> >   0,0  172,4    0,0   0,6   2,0   0,9    11,4     5,5  12   20  DATA
>>> >
>>> >
>>> >
>>> >root at NGINX:/root# dlstat show-link NGINX1 -i 2
>>> >
>>> > LINK TYPE ID INDEX PKTS BYTES
>>> > NGINX1 rx bcast -- 0 0
>>> > NGINX1 rx sw -- 0 0
>>> > NGINX1 tx bcast -- 0 0
>>> > NGINX1 tx sw -- 9.26K 692.00K
>>> > NGINX1 rx local -- 26.00K 216.32M
>>> > NGINX1 rx bcast -- 0 0
>>> > NGINX1 rx sw -- 0 0
>>> > NGINX1 tx bcast -- 0 0
>>> > NGINX1 tx sw -- 7.01K 531.38K
>>> > NGINX1 rx local -- 30.65K 253.73M
>>> > NGINX1 rx bcast -- 0 0
>>> > NGINX1 rx sw -- 0 0
>>> > NGINX1 tx bcast -- 0 0
>>> > NGINX1 tx sw -- 8.95K 669.32K
>>> > NGINX1 rx local -- 29.10K 241.15M
>>> >
>>> >
>>> >On the other NGZ I receive 250MB/s ????
>>> >
>>> >
>>> >- So my question is: how come the speed equals the LAN's 100 MB/s
>>> >on OmniOSce, but I receive 250 MB/s?
>>> >
>>> >- Why is the etherstub so slow if InfiniBand speed is 40 GB/s ???
>>> >
>>> >
>>> >I'm very confused right now ...
>>> >
>>> >
>>> >And I want to know for sure how to understand and read this
>>> >correctly, because this customer will be the first of mine to
>>> >switch completely over to OmniOSce in production, and because this
>>> >customer is one of the biggest companies in Belgium I really don't
>>> >want to mess up !!!
>>> >
>>> >
>>> >So any help and clarification will be highly appreciated !!!
>>> >
>>> >
>>> >Thank you very much.
>>> >
>>> >
>>> >Kind Regards,
>>> >
>>> >
>>> >Dirk
>>>
>>> I am not sure where the infiniband claim comes from, but
>>> copying data disk to disk, you involve the slow layers
>>> like disk, skewed by faster layers like cache of
>>> already-read data and delayed writes :)
>>>
>>> If you have a wide pipe that you may fill, it doesn't
>>> mean you do have the means to fill it with a few disks.
>>>
>>> To estimate the speeds, try pure UDP streams from
>>> process to process (no disk), large-packet floodping, etc.
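>>>
>>> (For example, a large-packet continuous ping on illumos could be
>>> "ping -s 192.168.20.3 8972": 8972 data bytes plus 28 bytes of
>>> IP/ICMP headers fill a 9000-byte jumbo frame. The address is just
>>> the one from the earlier tests.)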
>>>
>>> I believe etherstub is not constrained artificially, and
>>> defaults to jumbo frames. Going to LAN and back can in
>>> fact use external hardware (IIRC there may be a system
>>> option to disable that, not sure) and so is constrained
>>> by that.
>>>
>>> Jim
>>> --
>>> Typos courtesy of K-9 Mail on my Android
>
>
>
>
>
> --
> Ian Kaufman
> Research Systems Administrator
> UC San Diego, Jacobs School of Engineering ikaufman AT ucsd DOT edu
--
Dirk Willems
System Engineer
+32 (0)3 443 12 38
Dirk.Willems at exitas.be
Quality. Passion. Personality
www.exitas.be | Veldkant 31 | 2550 Kontich
Illumos OmniOS Installation and Configuration Implementation Specialist.
Oracle Solaris 11 Installation and Configuration Certified
Implementation Specialist.