[OmniOS-discuss] CIFS ignores TCP Buffer Settings
Mini Trader
miniflowtrader at gmail.com
Wed Mar 9 01:33:47 UTC 2016
Bad:

root@storage1:/root# dtrace -s tcp_tput.d
^C

  unacked(bytes)  10.250.0.3  2049
           value  ------------- Distribution ------------- count
              32 |                                         0
              64 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 11
             128 |                                         0

  unacked(bytes)  10.250.0.2  2049
           value  ------------- Distribution ------------- count
              -1 |                                         0
               0 |@@@@@@@@                                 63
               1 |                                         0
               2 |                                         0
               4 |                                         0
               8 |                                         0
              16 |                                         0
              32 |                                         0
              64 |@                                        5
             128 |@@@@@@@@@@@@@@@@@@@@@@@                  195
             256 |@@@@@@                                   54
             512 |@@                                       13
            1024 |@                                        5
            2048 |                                         0

  unacked(bytes)  10.255.0.55  445
           value  ------------- Distribution ------------- count
              -1 |                                         0
               0 |                                         7
               1 |                                         0
               2 |                                         0
               4 |                                         0
               8 |                                         0
              16 |                                         0
              32 |                                         10
              64 |                                         48
             128 |                                         9
             256 |                                         0
             512 |                                         0
            1024 |                                         1
            2048 |                                         0
            4096 |                                         0
            8192 |                                         0
           16384 |                                         2
           32768 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 46626
           65536 |                                         6
          131072 |                                         0

  SWND(bytes)  10.255.0.55  22
           value  ------------- Distribution ------------- count
           16384 |                                         0
           32768 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 1
           65536 |                                         0

  SWND(bytes)  10.250.0.3  2049
           value  ------------- Distribution ------------- count
           32768 |                                         0
           65536 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 17
          131072 |                                         0

  SWND(bytes)  10.250.0.2  2049
           value  ------------- Distribution ------------- count
          131072 |                                         0
          262144 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 369
          524288 |                                         0

  SWND(bytes)  10.255.0.55  445
           value  ------------- Distribution ------------- count
           16384 |                                         0
           32768 |                                         367
           65536 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 49052
          131072 |                                         0
Good:

root@storage1:/root# svcadm restart smb/server
root@storage1:/root# dtrace -s tcp_tput.d
^C

  unacked(bytes)  10.255.0.55  22
           value  ------------- Distribution ------------- count
              32 |                                         0
              64 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 1
             128 |                                         0

  unacked(bytes)  10.250.0.3  2049
           value  ------------- Distribution ------------- count
              32 |                                         0
              64 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 11
             128 |                                         0

  unacked(bytes)  10.250.0.2  2049
           value  ------------- Distribution ------------- count
              -1 |                                         0
               0 |@@@@@@@                                  23
               1 |                                         0
               2 |                                         0
               4 |                                         0
               8 |                                         0
              16 |                                         0
              32 |                                         0
              64 |@                                        4
             128 |@@@@@@@@@@@@@@@@@@@@@@@@@                84
             256 |@@@@                                     12
             512 |@                                        4
            1024 |@@                                       6
            2048 |                                         0

  unacked(bytes)  10.255.0.55  445
           value  ------------- Distribution ------------- count
              -1 |                                         0
               0 |                                         9
               1 |                                         0
               2 |                                         0
               4 |                                         0
               8 |                                         0
              16 |                                         0
              32 |                                         10
              64 |                                         50
             128 |                                         9
             256 |                                         0
             512 |                                         0
            1024 |                                         1
            2048 |                                         0
            4096 |                                         0
            8192 |                                         0
           16384 |                                         0
           32768 |                                         1
           65536 |                                         2
          131072 |                                         260
          262144 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 27135
          524288 |                                         0

  SWND(bytes)  10.255.0.55  22
           value  ------------- Distribution ------------- count
           16384 |                                         0
           32768 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 1
           65536 |                                         0

  SWND(bytes)  10.250.0.3  2049
           value  ------------- Distribution ------------- count
           32768 |                                         0
           65536 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 17
          131072 |                                         0

  SWND(bytes)  10.250.0.2  2049
           value  ------------- Distribution ------------- count
          131072 |                                         0
          262144 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ 155
          524288 |                                         0

  SWND(bytes)  10.255.0.55  445
           value  ------------- Distribution ------------- count
           16384 |                                         0
           32768 |                                         54
           65536 |                                         63
          131072 |                                         306
          262144 |                                         330
          524288 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@     210636
         1048576 |@@@@@                                    28037
         2097152 |                                         0
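
The contrast is visible above: before the restart, the send window on the
CIFS connection (port 445) stays pinned at 64k and the unacked data tops out
around 32k, while after the restart the window grows to 512k-1M. For anyone
wanting to check the configured caps, here is a minimal sketch of how the
TCP buffer properties can be inspected and raised with ipadm; the property
names are from illumos ipadm(1M) and the values are only examples, so verify
both on your release:

root@storage1:/root# ipadm show-prop -p send_buf,recv_buf,max_buf tcp
root@storage1:/root# ipadm set-prop -p max_buf=4194304 tcp   # raise the ceiling first
root@storage1:/root# ipadm set-prop -p send_buf=1048576 tcp  # default send buffer
root@storage1:/root# ipadm set-prop -p recv_buf=1048576 tcp  # default receive buffer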
On Tue, Mar 8, 2016 at 8:20 PM, Mini Trader <miniflowtrader at gmail.com>
wrote:
> Running the following DTrace script:
>
> #!/usr/sbin/dtrace -s
>
> #pragma D option quiet
>
> /* Bytes sent but not yet acknowledged (send-next minus send-unacked). */
> tcp:::send
> / (args[4]->tcp_flags & (TH_SYN|TH_RST|TH_FIN)) == 0 /
> {
>     @unacked["unacked(bytes)", args[2]->ip_daddr, args[4]->tcp_sport] =
>         quantize(args[3]->tcps_snxt - args[3]->tcps_suna);
> }
>
> /* Peer's advertised receive window, scaled by the negotiated shift. */
> tcp:::receive
> / (args[4]->tcp_flags & (TH_SYN|TH_RST|TH_FIN)) == 0 /
> {
>     @swnd["SWND(bytes)", args[2]->ip_saddr, args[4]->tcp_dport] =
>         quantize((args[4]->tcp_window) * (1 << args[3]->tcps_snd_ws));
> }
>
> It shows that the window sizes are not going above 64k when things are not
> working properly.
>
> On Tue, Mar 8, 2016 at 7:56 PM, Mini Trader <miniflowtrader at gmail.com>
> wrote:
>
>> If it helps: this doesn't happen with NFS from the exact same client. How
>> do I file a bug?
>>
>> On Tue, Mar 8, 2016 at 1:51 PM, Mini Trader <miniflowtrader at gmail.com>
>> wrote:
>>
>>> Simple example.
>>>
>>> 1 server, 1 client.
>>>
>>> Restart the service and everything is fast. A few hours later, from the
>>> same client (with nothing happening concurrently), speed is slow. Restart
>>> the service again and speed is fast.
>>>
>>> It's like CIFS starts off fast, then somehow, for whatever reason, if it
>>> is not used for a while, the connection for my CIFS drives to the server
>>> becomes slow. Also, this only happens when the client is downloading, not
>>> when uploading to the server - that is always fast.
>>>
>>> On Tue, Mar 8, 2016 at 1:42 AM, Jim Klimov <jimklimov at cos.ru> wrote:
>>>
>>>> On 8 March 2016 at 6:42:13 CET, Mini Trader <miniflowtrader at gmail.com>
>>>> wrote:
>>>> >Is it possible that CIFS will ignore TCP buffer settings after a while?
>>>> >
>>>> >I've confirmed my system's max transfer rate using iperf and have tuned
>>>> >my buffers accordingly. For whatever reason, CIFS seems to forget these
>>>> >settings after a while, as speed drops significantly. Issuing a restart
>>>> >of the service immediately appears to restore the settings, as transfer
>>>> >speed becomes normal again.
>>>> >
>>>> >Any ideas why this would happen?
>>>>
>>>> As a random guess from experience with other network stuff - does the
>>>> speed drop happen on a running connection, or on new ones too? Do you
>>>> have concurrent transfers at that time?
>>>>
>>>> Some other subsystems (no idea if this one does too) use the best speeds
>>>> for new or recently awakened dormant connections, so short-lived bursts
>>>> are fast - at the expense of long-running active bulk transfers (deemed
>>>> to be bulk because they run for a long time).
>>>>
>>>> HTH, Jim
>>>> --
>>>> Typos courtesy of K-9 Mail on my Samsung Android
>>>>
>>>
>>>
>>
>
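
P.S. To catch when the window collapses without restarting anything, here is
an untested sketch that reuses the same probe arguments as the script above,
but watches only the CIFS port and prints the largest scaled window seen from
each client every 10 seconds (the 10-second interval and the port 445 filter
are my assumptions):

#!/usr/sbin/dtrace -s

#pragma D option quiet

/* Largest scaled advertised window seen per peer on the CIFS port. */
tcp:::receive
/ args[4]->tcp_dport == 445 /
{
    @swnd[args[2]->ip_saddr] =
        max((args[4]->tcp_window) * (1 << args[3]->tcps_snd_ws));
}

/* Print and clear every 10 seconds so the drop can be timestamped. */
tick-10s
{
    printf("%Y\n", walltimestamp);
    printa("  %s  max SWND %@d bytes\n", @swnd);
    trunc(@swnd);
}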