Improving Data Transfer Protocols in Congested Networks
• Network congestion in data networking and queueing theory is the reduced quality of service that occurs
when a network node or link is carrying more data than it can handle. Typical effects include queueing
delay, packet loss, or the blocking of new connections. A consequence of congestion is that an incremental
increase in offered load leads only to a small increase, or even to a decrease, in network throughput.[1]
• Network protocols that use aggressive retransmissions to compensate for packet loss due to congestion can
increase congestion, even after the initial load has been reduced to a level that would not normally have
induced network congestion. Such networks exhibit two stable states under the same level of load. The
stable state with low throughput is known as congestive collapse.
• Networks use congestion control and congestion avoidance techniques to try to avoid collapse. These
include: exponential backoff in protocols such as CSMA/CA in 802.11 and the similar CSMA/CD in the
original Ethernet, window reduction in TCP, and fair queueing in devices such as routers and network
switches. Other techniques that address congestion include priority schemes which transmit some packets
with higher priority ahead of others and the explicit allocation of network resources to specific flows
through the use of admission control.
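The exponential backoff mentioned above can be sketched briefly. The following is a minimal illustration of truncated binary exponential backoff in the CSMA style: after the k-th collision, the station waits a random number of slots drawn from a range whose upper bound doubles with each attempt (function name and the cap of 10 are illustrative assumptions, not taken from any particular standard's parameters).

```python
import random

def backoff_slots(attempt, max_exponent=10):
    """Truncated binary exponential backoff (illustrative sketch):
    after the attempt-th collision, wait a random number of slots
    drawn uniformly from [0, 2^k - 1], with the exponent k capped."""
    k = min(attempt, max_exponent)
    return random.randint(0, 2 ** k - 1)

# The upper bound on the wait grows exponentially with each collision:
print([2 ** min(k, 10) - 1 for k in range(1, 6)])  # [1, 3, 7, 15, 31]
```

Randomizing the wait spreads retries out in time, so colliding stations are unlikely to collide again on their next attempt.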
FLOW CONTROL
To implement flow control, i.e. to ensure a smooth data transmission rate without overwhelming a slow
receiver, TCP uses a sliding-window protocol with three windows: the advertised window, the
congestion window, and the transmission window. These windows are adjusted through mutual
coordination between the sender and receiver on the number of segments sent, e.g. the receiver
notifies the sender, via the advertised window, of the number of segments it can accept in the next
transmission cycle. The advertised window, which the receiver calculates from its available buffer
size, helps the receiver avoid buffer overflow when accepting subsequent data segments. The sender
determines the congestion window, i.e. the maximum number of data segments it can send without
causing congestion in the network, based on feedback from the network. Finally, the transmission
window is the minimum of the advertised window and the congestion window, so that both receiver
buffer overflow and network congestion are avoided.
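The relationship between the three windows reduces to a single expression, sketched below (the function name is illustrative):

```python
def transmission_window(advertised_window, congestion_window):
    """The sender may keep at most min(awnd, cwnd) unacknowledged
    segments in flight: awnd protects the receiver's buffer,
    cwnd protects the network."""
    return min(advertised_window, congestion_window)

print(transmission_window(64, 32))  # network-limited: 32
print(transmission_window(16, 32))  # receiver-limited: 16
```

Whichever constraint is tighter at the moment, the receiver's buffer or the network's capacity, determines how much the sender may transmit.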
A. TCP SLOW START PHASE PROBLEMS
• Slow start is the phase in which the initial congestion window of 1 MSS grows by 1 MSS for every
ACK received, roughly doubling each round-trip time. This small initial value causes TCP to probe
slowly for more throughput and increases the time TCP needs to utilise the large bandwidth that is
available to it [10].
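The cost of the small initial window can be made concrete: since the window roughly doubles each RTT during slow start, reaching a large bandwidth-delay product takes a logarithmic number of round trips. A minimal sketch (function name is an assumption for illustration):

```python
def rtts_to_reach(target_segments, initial_cwnd=1):
    """Count the round trips slow start needs to grow the congestion
    window from initial_cwnd to at least target_segments, under the
    idealised assumption that cwnd doubles every RTT with no losses."""
    rtts = 0
    cwnd = initial_cwnd
    while cwnd < target_segments:
        cwnd *= 2
        rtts += 1
    return rtts

# A bandwidth-delay product of 4096 segments needs 12 RTTs from 1 MSS:
print(rtts_to_reach(4096))  # 12
```

On a high-latency path, a dozen round trips spent just ramping up is a significant fraction of a short transfer's lifetime, which is the inefficiency the text describes.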
B. PROBLEM WITH AIMD PHASE (CONGESTION
AVOIDANCE PHASE)
• Another issue with TCP is that, after detecting a packet loss, it either halves the congestion window
or resets it to 1 MSS, depending on the loss event. This reduction is required for congestion control;
however, it results in a small window size, which reduces the effective throughput and is inefficient
for fast bulk data transfer [58].
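The two loss responses above can be sketched as a single decision (a simplified sketch of classic TCP behaviour, not any specific stack's implementation):

```python
def on_loss(cwnd, loss_event, mss=1):
    """Multiplicative decrease on packet loss (simplified sketch):
    three duplicate ACKs halve the congestion window (fast recovery),
    while a retransmission timeout resets it to 1 MSS (slow start)."""
    if loss_event == "timeout":
        return mss
    elif loss_event == "triple_dup_ack":
        return max(cwnd // 2, mss)
    raise ValueError(f"unknown loss event: {loss_event}")

print(on_loss(64, "triple_dup_ack"))  # 32
print(on_loss(64, "timeout"))         # 1
```

Either way, a single loss discards much of the window the sender spent many RTTs building, which is why bulk transfers over lossy links suffer.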
Basic Idea of SDTCP
• Step 1
• Network Congestion Trigger. We design a network congestion trigger module at the OF-switch to leverage the queue length to determine if the
network is congested. Once network congestion is discovered, it will send a congestion notification message to our controller.
• Step 2
• Flow Selection. Our flow selection module differentiates the background flows and burst flows by leveraging all of the TCP flow information, e.g.,
TTL (time-to-live), flow size, and IP addresses of TCP flows, gained from OF-switches through the OpenFlow protocol. Upon receiving a congestion
notification message from a congested OF-switch, our controller will select all of the background flows passing through the OF-switch.
• Step 3
• Flow Rate Control. A flow rate control module at the controller side estimates the current bandwidth of these chosen background flows and then
degrades their bandwidth to the desired one. We assess the desired bandwidth in terms of the network congestion level. Then, our controller
generates new flow table entries (called regulation awnd entries) that are used to regulate the background flow bandwidth to the desired one, and
sends them to the OF-switch.
• Step 4
• Flow Match and Regulation. Once TCP ACK packets from the receiver match the regulation awnd entry at the OF-switch, the awnd field of these
packets is modified to the desired value and the packets are forwarded to the sender. After receiving these modified ACK packets, the
sender adjusts swnd in terms of Equation (1). In this way, the sending rate is decreased to the desired one.
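The four steps above can be sketched in miniature. The following is a hedged illustration of the controller-side selection and the ACK rewriting, not SDTCP's actual implementation: the flow representation, the size threshold for classifying background flows, and all function names are assumptions made for the example.

```python
def select_background_flows(flows, size_threshold=1_000_000):
    """Step 2 (illustrative): classify long-lived, large flows as
    background. The byte threshold is an assumed stand-in for
    SDTCP's use of TTL, flow size, and address information."""
    return [f for f in flows if f["bytes_sent"] >= size_threshold]

def regulate_ack_awnd(ack, desired_awnd):
    """Steps 3-4 (illustrative): clamp the ACK's advertised window so
    the sender's swnd = min(awnd, cwnd) drops to the desired rate,
    mimicking the regulation awnd entry at the OF-switch."""
    regulated = dict(ack)
    regulated["awnd"] = min(regulated["awnd"], desired_awnd)
    return regulated

flows = [
    {"id": "bg", "bytes_sent": 5_000_000},    # long-lived background flow
    {"id": "burst", "bytes_sent": 20_000},    # short burst flow
]
background = select_background_flows(flows)
ack = regulate_ack_awnd({"flow": "bg", "awnd": 64}, desired_awnd=8)
print([f["id"] for f in background], ack["awnd"])  # ['bg'] 8
```

The key design point survives even in this toy form: the sender's stack is untouched, and the rate reduction is induced purely by rewriting the advertised window field in returning ACKs.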
Conclusion
• In this paper, we present SDTCP, a novel transport protocol for providing high-
throughput transmission service for IoT applications. When burst flows arrive at the
switch and queue length is larger than the threshold, SDTCP reduces the sending rate
of background flows proactively to guarantee burst flows by adjusting the advertised
window of TCP ACK packets of the corresponding background flows. SDTCP needs no
modification to the existing TCP stack and makes use of an extended OpenFlow
protocol, a technology already available in current commodity switches. We evaluate
SDTCP through extensive experiments. Our results demonstrate that the SDTCP
mechanism guarantees high throughput for burst flows effectively without starving
background flows. Moreover, the flow completion time (FCT) of SDTCP is much better than that of
other protocols. Therefore, SDTCP can deal effectively with the TCP incast problem.