Lecture 4 Transmission Control Protocol
Contents
• Overview of TCP (Review)
• TCP and Congestion Control
– The Causes of Congestion
– Approaches to Congestion Control
– TCP Congestion Control
• TCP Performance Issues
– Window Size
– RTT Estimation
– Fairness
Transmission Control Protocol
• The most commonly used transport protocol today
– Almost all Internet applications that require reliability use TCP
• Web browsing, email, file sharing, instant messaging, file transfer,
database access, proprietary business applications, some multimedia
applications (at least for control purposes), …
• TCP provides a reliable, stream-oriented transport service:
– A stream of bytes flows between the end-points
• The stream is unstructured
– Connection-oriented data transfer
• Set up a connection before sending data
– Buffered transfer
• Applications can generate messages of any size
• TCP may buffer data until a sufficiently large segment is formed
• Option to force (push) immediate transmission
– Full duplex connection
• Once the connection is set up, data can be sent in both directions
– Reliability
• Positive acknowledgement with retransmission
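To make the service model above concrete, here is a minimal sketch of using TCP's byte-stream service through the Python sockets API; the host name and port are placeholders, not from the lecture.

import socket

# Connect: TCP performs its connection setup (three-way handshake) here.
with socket.create_connection(("example.com", 80)) as s:
    # The application writes bytes; TCP may buffer and segment them as it sees fit.
    s.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    # The same connection carries data in the other direction (full duplex).
    # recv() returns whatever bytes have arrived; the stream has no message
    # boundaries, so one sendall() may correspond to several recv() calls.
    data = s.recv(4096)
    print(data[:80])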
TCP Segment
TCP Segment Fields
What is Congestion?
• Congestion occurs when the number of packets being transmitted
through the network approaches the packet-handling capacity of
the network
– What is the packet-handling capacity of a network?
– What happens when capacity is approached?
• Congestion control aims to keep the number of packets below a level at
which performance falls off dramatically
– How do we keep the number of packets below that level?
Congestion Scenario 1
• Two senders (and receivers); a router with infinite buffers
Congestion Scenario 2
• Let's assume a sender times out too early and retransmits a packet
even though the original packet was forwarded by the router to the destination
– The output link from the router will be used to send both the
original and the retransmitted packet
– But the retransmitted packet will be ignored (discarded)
by the destination
Congestion Scenario 3
Costs of Congestion
• Large queuing delays are experienced as the sending rate nears
the output link capacity at a router
Approaches to Congestion Control
• End-to-end Congestion Control: in Transport Layer
– Network layer provides no feedback on congestion in network
– End systems (source/destination hosts) infer (guess) congestion based
on detected events such as packet loss and/or delay
TCP Congestion Control
Limiting the TCP Sending Rate
• The number of bytes a TCP sender can have outstanding is limited by the
Advertised Window from Flow Control
– Outstanding Bytes ≤ Advertised Window (this must be
maintained)
– Outstanding Bytes = bytes sent but not yet acknowledged
– Advertised Window = how many bytes the receiver can buffer
• In fact, TCP sender also maintains Congestion Window:
– Outstanding Bytes ≤ min (Advertised Window, Congestion
Window)
• When an ACK is received, more bytes can be sent by TCP
sender
• Assume the Advertised Window is very large (buffer at receiver
is very large)
– Sending rate ≈ Congestion Window/RTT (Round trip time)
• By adjusting the Congestion Window, the TCP sender can adjust
its sending rate (see the sketch below)
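As a rough illustration of these two limits, the sketch below (illustrative Python with made-up helper names, not part of any real TCP implementation) computes how many further bytes a sender may transmit and the approximate sending rate:

def usable_window(awnd: int, cwnd: int, outstanding: int) -> int:
    # Bytes the sender may still transmit now: outstanding bytes must stay
    # below min(Advertised Window, Congestion Window).
    return max(0, min(awnd, cwnd) - outstanding)

def approx_sending_rate_bps(cwnd_bytes: int, rtt_s: float) -> float:
    # Sending rate ~ Congestion Window / RTT when the advertised window is large.
    return cwnd_bytes * 8 / rtt_s

# Assumed example: cwnd = 10,000 bytes, RTT = 200 ms  ->  400 kb/s
print(usable_window(awnd=65535, cwnd=10000, outstanding=4000))  # 6000 bytes left
print(approx_sending_rate_bps(10000, 0.2))                      # 400000.0 bits/s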
Perceiving Network Congestion
• TCP sender assumes a loss indicates increased network
congestion
– A loss:
• TCP sender times out: has not received ACK within timeout
period
• TCP sender receives 3 duplicate ACKs (the same ACK number arrives
repeatedly, indicating a missing segment)
– Is this a valid assumption?
• Most packet losses occur at routers, i.e. congestion
• However, in some networks (e.g. wireless), packets may be lost due
to link errors, not congestion
• TCP sender assumes arrival of ACKs indicates decreased
network congestion
– The faster the arrival rate of ACKs, the larger the assumed decrease
in congestion
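A minimal sketch of how these two loss indications could be detected (illustrative structure and names; real TCP stacks differ):

import time

DUP_ACK_THRESHOLD = 3     # the third duplicate ACK is taken as a loss signal

class LossDetector:
    def __init__(self, rto: float):
        self.rto = rto                    # retransmission timeout (seconds)
        self.last_ack = None
        self.dup_acks = 0
        self.last_send = time.monotonic()

    def on_ack(self, ack_no: int) -> bool:
        # Returns True when the 3rd duplicate ACK arrives.
        if ack_no == self.last_ack:
            self.dup_acks += 1
            return self.dup_acks == DUP_ACK_THRESHOLD
        self.last_ack, self.dup_acks = ack_no, 0
        return False

    def timed_out(self) -> bool:
        # True if no ACK has arrived within the timeout period.
        return time.monotonic() - self.last_send > self.rto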
TCP Congestion Control Algorithm
• Three main components:
1. Additive Increase, Multiplicative Decrease (AIMD)
2. Slow Start (SS): start sending at a slow rate
3. Reaction to Loss Events
• Terminology
– Maximum Segment Size (MSS): the largest amount of data, in bytes, that
can be carried in one segment; determined or assumed by the TCP
sender for the network path
– Congestion Window (cwnd): measured in bytes
– Round Trip Time (RTT): time from sending a segment, until
corresponding ACK is received
• We will assume:
– TCP receiver sends an ACK for every segment received
AIMD (Additive increase multiplicative decrease)
• Additive Increase
– If no congestion detected, then TCP sender assumes there is available
(unused) capacity in the network; hence increases its sending rate
• However, TCP slowly increases its sending rate
– TCP sender aims to increase Congestion Window by 1 x MSS every
RTT (round trip time)
– One approach:
• For every new ACK received, increase Congestion Window (cwnd)
by:
cwnd_new = cwnd_old + MSS * MSS / cwnd_old
– MSS (maximum segment size)
– Additive Increase phase is also called Congestion Avoidance
• Multiplicative Decrease
– If congestion detected, then TCP sender decreases its sending rate
– TCP sender aims to halve the Congestion Window for each loss
• For each loss detected:
cwnd_new = cwnd_old / 2
(a short sketch of both AIMD rules follows below)
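A sketch of the two AIMD rules above in Python (illustrative only; a real sender tracks much more state, and the starting window here is an assumed value):

MSS = 1000            # maximum segment size in bytes (assumed)

def on_new_ack(cwnd: float) -> float:
    # Additive increase: ~1 MSS per RTT, applied per ACK.
    return cwnd + MSS * MSS / cwnd

def on_loss(cwnd: float) -> float:
    # Multiplicative decrease: halve the window.
    return cwnd / 2

cwnd = 10 * MSS
# One RTT's worth of ACKs (about cwnd/MSS of them) grows cwnd by roughly one MSS.
for _ in range(int(cwnd // MSS)):
    cwnd = on_new_ack(cwnd)
print(round(cwnd))    # roughly one MSS larger than before
cwnd = on_loss(cwnd)
print(round(cwnd))    # roughly half after a loss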
AIMD Example
[Figure: congestion window over time, with loss events (timeout or 3 duplicate ACKs) marked]
Slow Start Phase
• At start of a TCP connection, the TCP sender sends at a slow rate
– By default: cwnd = MSS
– e.g. approximate sending rate for MSS = 1000 bytes, RTT =
200ms is 40kb/s
• If large capacity is available for the connection, using additive
increase (congestion avoidance) will be too slow
[Figure: plots of window size (bytes) and sending rate (kb/s) versus time (msec), annotated with the Slow Start phase, the Slow Start Threshold (ssthresh), the Additive Increase/Congestion Avoidance phase, and the Advertised Window (awnd)]
Reaction to Loss Events
• Why?
– TCP assumes a loss indicates congestion in the network (and
therefore slows down)
– Loss due to 3rd Duplicate ACK
• Some TCP segments are being delivered (since some
ACKs are coming back)
• TCP assumes a low level of congestion, and therefore
immediately enters the Congestion Avoidance phase
– Loss due to Timeout
• Most TCP segments were lost (since not even
duplicate ACKs are received)
• TCP assumes heavy congestion, and therefore goes back to the start
of Slow Start with a very slow sending rate
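Putting Slow Start, Congestion Avoidance and the two loss reactions together, a simplified TCP-Reno-style sketch (the ssthresh handling and the initial values are common conventions, not stated on the slides):

MSS = 1000

class CongestionState:
    def __init__(self):
        self.cwnd = MSS               # Slow Start begins at one MSS
        self.ssthresh = 64 * MSS      # initial threshold (assumed value)

    def on_new_ack(self):
        if self.cwnd < self.ssthresh:
            self.cwnd += MSS                    # Slow Start: +1 MSS per ACK
        else:
            self.cwnd += MSS * MSS / self.cwnd  # Congestion Avoidance (AIMD)

    def on_triple_dup_ack(self):
        # Mild congestion: halve, then continue in Congestion Avoidance.
        self.ssthresh = max(self.cwnd / 2, 2 * MSS)
        self.cwnd = self.ssthresh

    def on_timeout(self):
        # Heavy congestion: back to Slow Start at the minimum rate.
        self.ssthresh = max(self.cwnd / 2, 2 * MSS)
        self.cwnd = MSS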
TCP Congestion Control in Practice
• The TCP Congestion Control algorithm works well in networks where
losses are mainly due to congestion
– Note that in a congested network, the throughput of a TCP connection can
be severely limited
TCP Versions and Options
Sliding Window Performance
• Parameters:
– Payload: size of data in a segment
– Header: size of header
– Rate: data rate of the link/network
– Prop: propagation delay between sender and receiver
– W: window size, i.e. the number of segments that can be sent before an
ACK must be received
[Figure: sender/receiver timing diagrams for sliding-window transfer: the sender transmits Data(0) onward, up to Wmax segments; in one case Ack(1) arrives but the sender cannot yet send the next segment]
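A back-of-the-envelope estimate of the throughput these parameters imply, ignoring ACK transmission time, processing delay and losses; this is the standard sliding-window calculation sketched in Python, with example numbers that are assumptions:

def throughput_bps(payload: int, header: int, rate_bps: float,
                   prop_s: float, w: int) -> float:
    seg_bits = (payload + header) * 8
    tx_time = seg_bits / rate_bps          # time to put one segment on the link
    cycle = tx_time + 2 * prop_s           # first segment plus the wait for its ACK
    window_limited = w * payload * 8 / cycle
    link_limited = rate_bps * payload / (payload + header)
    return min(window_limited, link_limited)

# Assumed example: 1000-byte payload, 40-byte header, 10 Mb/s link, 25 ms one-way delay
print(throughput_bps(1000, 40, 10e6, 0.025, 1))    # W=1:  ~0.16 Mb/s (window limited)
print(throughput_bps(1000, 40, 10e6, 0.025, 65))   # W=65: ~9.6 Mb/s (link limited)

This is also why window size matters for performance: without window scaling, the 16-bit Window field caps the advertised window at 65,535 bytes, which limits throughput on paths with a large bandwidth-delay product.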
Basic TCP Performance
TCP Window Scaling
RTT Estimation
• TCP sender needs to estimate Round-Trip Time (RTT)
– Why? To determine the appropriate timeout interval.
• Timeout interval too large: Sender will wait too long
before retransmitting
• Timeout interval too short: Sender may retransmit
when data is not lost
• RTT estimation is a complex yet important function in TCP
– Delay in the Internet can vary significantly over time
– TCP samples the RTT of segments
– TCP uses a weighted average function to estimate
RTT
• Different algorithms/options are used in different versions and
implementations of TCP
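The weighted average is typically an exponentially weighted moving average; a sketch in the commonly used Jacobson/Karels style (constants as in RFC 6298; actual implementations vary):

ALPHA = 0.125    # weight of a new sample in the smoothed RTT
BETA = 0.25      # weight of a new sample in the RTT deviation

class RttEstimator:
    def __init__(self, first_sample: float):
        self.srtt = first_sample
        self.rttvar = first_sample / 2

    def update(self, sample: float) -> float:
        # Feed one measured RTT sample; return the new retransmission timeout.
        self.rttvar = (1 - BETA) * self.rttvar + BETA * abs(self.srtt - sample)
        self.srtt = (1 - ALPHA) * self.srtt + ALPHA * sample
        return self.srtt + 4 * self.rttvar   # timeout with a safety margin

est = RttEstimator(0.200)      # first sample: 200 ms
print(est.update(0.250))       # a more variable delay leads to a larger timeout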
RTT Estimation (regression line)
TCP Fairness
Example: TCP Fairness
TCP Fairness
• If TCP is fair, with N TCP connections sharing an R bps link
– Each TCP connection should achieve R/N bps
• Does TCP achieve fairness?
– In ideal conditions, yes. If all TCP connections have same RTT
and same sized segments, with no other traffic, fairness is
achieved
– In practice:
• If the RTTs of connections vary, connections with a small RTT are
able to capture a higher proportion of the bandwidth than connections
with a large RTT
• If other non-TCP traffic is also present (such as multimedia over
UDP, which does not reduce its rate), then TCP connections may
receive less than their fair share
• Applications can use multiple TCP connections: each TCP
connection gets fair treatment, but an application using
multiple connections gets more bandwidth than an application
using a single connection
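As an illustrative example with assumed numbers: ten TCP connections sharing a 10 Mb/s link should each receive about 1 Mb/s; if one application opens nine of those connections and another opens one, each connection is still treated fairly, but the first application ends up with roughly 9 Mb/s.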
Example: TCP Fairness
TCP and The Internet
• IP does not include any built-in congestion control mechanisms
– If every host sent IP datagrams as fast as possible, the Internet
would not work
• The Internet relies on TCP mechanisms to avoid collapse
– TCP comprises about 90% of all traffic on the Internet
– As a means for congestion control, TCP has been very successful
• But …
– If hosts/applications choose not to follow TCP’s congestion
control rules, then congestion can become a major problem in
the Internet
– Challenges:
• Web browsers opening many TCP connections at once.
• Growth of multimedia applications that use UDP (and thus bypass
TCP's congestion control).
• Growth of P2P applications using multiple connections
and/or UDP.