
Transmission Control Protocol

Dr. Md Abul Kalam Azad


Professor
Dept of CSE, JU, Dhaka

1
Contents
• Overview of TCP (Review)
• TCP and Congestion Control
– The Causes of Congestion
– Approaches to Congestion Control
– TCP Congestion Control
• TCP Performance Issues
– Window Size
– RTT Estimation
– Fairness

2
Transmission Control Protocol
• The most commonly used transport protocol today
– Almost all Internet applications that require reliability use TCP
• Web browsing, email, file sharing, instant messaging, file transfer,
database access, proprietary business applications, some multimedia
applications (at least for control purposes), …
• TCP provides a reliable, stream-oriented transport service:
– A stream of bytes flows between the end-points
• Stream is unstructured
– Connection-oriented data transfer
• Set up a connection before sending data
– Buffered transfer
• Applications generate messages of any size
• TCP may buffer data until a sufficiently large segment is formed
• Option to force (push) the transmission
– Full duplex connection
• Once the connection is setup, data can be sent in both directions
– Reliability
• Positive acknowledgement with retransmission
3
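To make the stream-oriented, connection-oriented service above concrete, here is a minimal Python sketch of a TCP client using the standard socket API; the server name, port, and request bytes are hypothetical examples, not part of the slides.

```python
import socket

# Connect (the three-way handshake happens inside connect), write a byte stream,
# read whatever bytes the peer sends back, then close the connection.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:    # SOCK_STREAM = TCP
    s.connect(("example.com", 80))                               # hypothetical server
    s.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")    # TCP sees only a byte stream
    reply = s.recv(4096)                                         # may return any number of bytes
print(reply[:60])
```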
TCP Segment

• Header contains 20 bytes, plus optional fields
– Optional fields must be padded out to a multiple of 4 bytes
[Figure: TCP segment header layout; the fixed part of the header is 20 bytes.]

4
TCP Segment Fields

• Source/Destination port: 16-bit port number of the source/destination
• Sequence number: sequence number of the first data byte in this segment
– Unless the SYN flag is set, in which case the sequence number is the Initial Sequence Number (ISN)
• Acknowledgement number: sequence number of the next data byte TCP expects to receive
• Header Length: size of the header (measured in 4-byte words)
• Reserved: for future use
• Flags: see next slide
• Window: contains the number of bytes the receiver is willing to accept (for flow control)
• Checksum: for detecting errors in the TCP segment
• Urgent pointer: points to the sequence number of the last byte of urgent data in the segment
• Options: such as maximum segment size, window scaling, selective acknowledgement, …
5
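As an illustration of the field layout above, a small Python sketch that unpacks the 20-byte fixed TCP header (the dictionary keys are my own names, and options are not decoded):

```python
import struct

def parse_tcp_header(segment: bytes) -> dict:
    """Parse the 20-byte fixed TCP header of a segment (options not decoded)."""
    (src_port, dst_port, seq, ack,
     offset_flags, window, checksum, urgent) = struct.unpack("!HHIIHHHH", segment[:20])
    header_len = (offset_flags >> 12) * 4      # Header Length field is in 4-byte words
    flags = offset_flags & 0x01FF              # low 9 bits: NS, CWR, ECE, URG, ACK, PSH, RST, SYN, FIN
    return {
        "src_port": src_port, "dst_port": dst_port,
        "seq": seq, "ack": ack,
        "header_len": header_len,
        "syn": bool(flags & 0x002), "ack_flag": bool(flags & 0x010),
        "rst": bool(flags & 0x004), "fin": bool(flags & 0x001),
        "window": window, "checksum": checksum, "urgent_ptr": urgent,
    }
```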
TCP Segment Flags
• Flags (1 bit each, if 1 the flag is true or on):
– CWR: Congestion Window Reduced
– ECE: Explicit Congestion Notification Echo
• CWR and ECE are used in an optional congestion control mechanism (Explicit Congestion Notification) – we do not cover this here
– URG: segment carries urgent data, use the urgent pointer field; receiver
should notify application program of urgent data as soon as possible
– ACK: segment carries ACK, use the ACK field
– PSH: push function
– RST: reset the connection
– SYN: synchronise the sequence numbers
– FIN: no more data from sender
• Note
– There is only one type of TCP packet
• However the purpose of that packet may differ depending on the flags
set
• If SYN flag is set, we may call it a “SYN packet or TCP SYN”
• If the ACK flag is set, we may call it an “ACK packet”
• If the packet carries data, we may call it a “DATA packet”
• If the packet carries data and the ACK flag is set, it is both a DATA and an ACK packet
6
Main TCP Features
• Connection Management
– Aim: Initialise parameters for data transfer
– Setup a connection before sending data
– Teardown a connection when finished
• Reliability
– Aim: ensure all data is delivered intact and in order to the receiver
– Sequence numbers
– Re-transmission schemes
• Basic (retransmit after timeout), Fast Retransmit (retransmit after receiving 3
duplicate ACKs)
• Flow Control
– Aim: ensure the sender does not overflow the receiver
– Receiver indicates free space in buffer in Advertised Window of ACK
– Sender cannot send more than the Advertised Window
• Congestion Control
– Aim: ensure the sender does not overflow the network (routers)

7
What is Congestion?
• Congestion occurs when the number of packets being transmitted through the network approaches the packet-handling capacity of the network
– What is the packet handling capacity of a network?
– What happens when capacity is approached?
• Congestion control aims to keep the number of packets below a level at which performance falls off dramatically
– How do we keep the number of packets below that level?

• Congestion is caused by too many sources trying to send data at too high a rate
– In IP networks, this typically results in routers dropping packets
– For TCP, lost packets (and larger delays) result in retransmissions
• Retransmissions cause more congestion, more packet losses, and more retransmissions, …
• Congestion control aims to reduce the rate at which sources send data
8
Congestion Scenario 1
• Two senders (and receivers); a router with infinite buffers

– Router outgoing link has capacity R


– Host A sends packets to the router at rate λin bytes per second; so does Host B
– Router has infinite buffer space to store packets when input rate exceeds output rate
– λout is the throughput for a connection
9
Congestion Scenario 1
• Plot of throughput (λout) and delay for each connection
– While the sending rate is less than the output capacity at the router, each connection achieves its full sending rate as throughput
– When the sending rate is greater than the output capacity at the router, each connection is limited to half of the router output capacity
– However, as the sending rate approaches the output capacity, delay rises significantly (packets must wait in the router buffer)
10
Congestion Scenario 2
• Two senders (and receivers); a router with finite buffers

– If buffer is full and new packets arrive, packets will be dropped/lost


– Lost packets lead to retransmissions by the source hosts
– λ'in is the offered load: original sending rate + retransmission rate
11
Congestion Scenario 2
• Let's assume the source retransmits only when a packet is known to be lost
– Offered load (λ'in): original sent + retransmissions
– If every second packet is lost (due to buffer at router being
full):
• For every 2 original packets, 1 retransmitted packet;
• Offered load is 3 packets
• 2 packets successfully received at destination

12
Congestion Scenario 2
• Let's assume a sender times out too early and retransmits a packet even though the original packet was forwarded by the router to the destination
– The output link from the router will be used to send the
original and retransmitted packet
– But the retransmitted packet will be ignored (discarded)
by the destination

13
Congestion Scenario 3

14
Congestion Scenario 3

• Throughput can go to 0 with a large amount of traffic


• The network spends all its time sending unneeded/wasted packets

15
Costs of Congestion
• Large queuing delays are experienced as the sending rate nears
the output link capacity at a router

• Sender must perform retransmissions in order to compensate


for dropped (lost) packets due to buffer overflow

• Router may forward unneeded copies of packets if the sender retransmits due to large delays (even though the packets were not actually lost)

• With multiple routers in a path, if a packet is dropped by a router,


all links leading up to that router have been wasted

16
Approaches to Congestion Control

17
Approaches to Congestion Control
• End-to-end Congestion Control: in Transport Layer
– Network layer provides no feedback on congestion in network
– End systems (source/destination hosts) infer (guess) congestion based
on detected events such as packet loss and/or delay

• Network Assisted Congestion Control: in Network Layer


– Network devices (mainly routers) provide explicit feedback to the source host
about congestion
• Routers may provide direct feedback to source
• Feedback from routers may be provided via the destination host
– Feedback may be:
• Backpressure: router A tells previous router B to slow down; router B
tells previous router C to slow down; and so on (backward direction)
• Explicit signalling: routers or the destination host send special packets to the source informing it of congestion and/or indicating the appropriate rate
18
Network Assisted Congestion Control
• Feedback may come direct from routers, or via the
destination (receiver)

• ATM (Asynchronous Transfer Mode) is an example network


technology using Network Assisted Congestion Control

19
TCP Congestion Control

20
TCP Congestion Control

• TCP sender limits the rate at which it sends based on


perceived network congestion
• How does TCP sender limit its sending rate?

• How does TCP sender perceive there is network


congestion?
• How does TCP sender respond to congestion?

– TCP congestion control algorithm

21
Limiting the TCP Sending Rate
• Amount of bytes TCP sender can send is limited by
Advertised Window from Flow Control
– Outstanding Bytes ≤ Advertised Window (this must be maintained)
– Outstanding Bytes = bytes sent but not yet acknowledged
– Advertised Window = how many bytes the receiver can accept (free buffer space)
• In fact, TCP sender also maintains a Congestion Window:
– Outstanding Bytes ≤ min (Advertised Window, Congestion Window)
• When an ACK is received, more bytes can be sent by the TCP sender
• Assume the Advertised Window is very large (buffer at receiver is very large)
– Sending rate ≈ Congestion Window / RTT (Round trip time)
• By adjusting the Congestion Window, the TCP sender can adjust its sending rate
22
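A minimal sketch of the two relationships on this slide (the min() of the two windows, and the rate approximation); the function names and example numbers are hypothetical:

```python
def effective_window(advertised_window: int, cwnd: int) -> int:
    """Sender may have at most this many unacknowledged bytes in flight."""
    return min(advertised_window, cwnd)

def approx_sending_rate(cwnd: int, rtt_s: float) -> float:
    """Approximate sending rate in bits per second (assumes a very large Advertised Window)."""
    return cwnd * 8 / rtt_s

# Hypothetical numbers: cwnd = 16000 bytes, RTT = 100 ms  ->  about 1.28 Mb/s
print(approx_sending_rate(16000, 0.1))
```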
Perceiving Network Congestion
• TCP sender assumes a loss indicates increased network
congestion
– A loss:
• TCP sender times out: has not received ACK within timeout
period
• TCP sender receives 3 duplicate ACKs (the same acknowledgement number received repeatedly, indicating a segment is missing)
– Is this a valid assumption?
• Most packet losses occur at routers, i.e. congestion
• However, in some networks (e.g. wireless), packets may be lost due
to link errors, not congestion
• TCP sender assumes arrival of ACKs indicates decreased
network congestion
– The faster the arrival rate of ACKs, the larger the decrease in congestion that is assumed
23
TCP Congestion Control Algorithm
• Three main components:
1. Additive Increase, Multiplicative Decrease (AIMD)
2. Slow Start (SS) (start sending at a slow rate)
3. Reaction to Loss Events

• Terminology
– Maximum Segment Size (MSS): determined or assumed by the TCP sender for the network path; measured in bytes (the maximum amount of data carried in one segment)
– Congestion Window (cwnd): measured in bytes
– Round Trip Time (RTT): time from sending a segment, until
corresponding ACK is received

• We will assume:
– TCP receiver sends an ACK for every segment received
24
AIMD (Additive increase multiplicative decrease)
• Additive Increase
– If no congestion detected, then TCP sender assumes there is available
(unused) capacity in the network; hence increases its sending rate
• However, TCP slowly increases its sending rate
– TCP sender aims to increase Congestion Window by 1 x MSS every
RTT (round trip time)
– One approach:
• For every new ACK received, increase Congestion Window (cwnd)
by:
cwnd_new = cwnd_old + MSS × MSS / cwnd_old
– MSS (maximum segment size)
– Additive Increase phase is also called Congestion Avoidance
• Multiplicative Decrease
– If congestion detected, then TCP sender decreases its sending rate
– TCP sender aims to halve the Congestion Window for each loss
• For each loss detected:

cwnd_new = cwnd_old / 2
25
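A minimal Python sketch of the two AIMD rules as stated on this slide, using the per-ACK form of additive increase; the MSS value and function names are assumptions:

```python
MSS = 1000  # assumed maximum segment size in bytes

def on_new_ack(cwnd: float) -> float:
    """Additive increase (congestion avoidance): grow cwnd by about 1 MSS per RTT.
    Applied per ACK this is MSS*MSS/cwnd, as on the slide."""
    return cwnd + MSS * MSS / cwnd

def on_loss(cwnd: float) -> float:
    """Multiplicative decrease: halve cwnd on each detected loss."""
    return max(cwnd / 2, MSS)   # never shrink below 1 MSS
```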
AIMD Example
Loss events (timeout or 3 duplicate ACKs)

26
Slow Start Phase
• At start of a TCP connection, the TCP sender sends at a slow rate
– By default: cwnd = MSS
– e.g. approximate sending rate for MSS = 1000 bytes, RTT =
200ms is 40kb/s
• If large capacity is available for the connection, using additive
increase (congestion avoidance) will be too slow

• Therefore Slow Start phase involves very fast increase of


Congestion Window
– cwnd is increased exponentially (roughly doubling every RTT)
– For every ACK received in the Slow Start phase, increase cwnd by 1 MSS
cwnd_new = cwnd_old + MSS
– Slow Start phase is continued until a loss event (then multiplicative
decrease) or Congestion Window reaches a threshold (ssthresh)
value (then additive increase)
27
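Combining this slide with the AIMD slide, a sketch of the per-ACK window update when no loss occurs (Slow Start below ssthresh, Congestion Avoidance at or above it); the constants are hypothetical:

```python
MSS = 1000        # assumed segment size in bytes
ssthresh = 16000  # assumed slow start threshold in bytes

def on_ack(cwnd: float) -> float:
    """Per-ACK congestion window update when no loss is detected."""
    if cwnd < ssthresh:
        return cwnd + MSS              # Slow Start: exponential growth
    return cwnd + MSS * MSS / cwnd     # Congestion Avoidance: ~1 MSS per RTT
```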
AIMD and Slow Start: Window Size

[Figure: Window size (bytes) vs. time (msec). The window grows exponentially during Slow Start, switches to linear growth (Additive Increase / Congestion Avoidance) at the Slow Start Threshold (ssthresh), and is finally capped by the Advertised Window (awnd).]

MSS = 1000 B; RTT = 100 ms; ssthresh = 16000 B; Advertised Window = 24000 B; Window = min (Congestion Window, Advertised Window)
28
AIMD and Slow Start: Sending Rate

[Figure: Sending rate (kb/s) vs. time (msec) for the same scenario.]

Assumes Sending Rate = Window / RTT


29
Reaction to Loss Events
• Upon a loss, Multiplicative Decrease halves the current congestion window
• The next action then depends on the type of loss event:
– Loss detected by a 3rd Duplicate ACK
• Slow start threshold (ssthresh) is halved
ssthresh_new = ssthresh_old / 2
• Congestion window is set to the slow start threshold
cwnd_new = ssthresh
• TCP enters the Additive Increase (Congestion Avoidance) phase
– Loss detected by a timeout
• Slow start threshold is halved
ssthresh_new = ssthresh_old / 2
• Congestion window is set to the initial value of 1 MSS
cwnd_new = MSS
• TCP enters the Slow Start phase again
30
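A sketch of the two loss reactions described above; the function and its return values are illustrative, not an actual TCP implementation:

```python
MSS = 1000  # assumed segment size in bytes

def on_loss_event(cwnd: float, ssthresh: float, kind: str):
    """React to a loss as on the slide: kind is 'dup_ack' (3rd duplicate ACK)
    or 'timeout'. Returns the new (cwnd, ssthresh, phase)."""
    ssthresh = ssthresh / 2                                      # threshold halved in both cases
    if kind == "dup_ack":
        return ssthresh, ssthresh, "congestion_avoidance"        # cwnd set to ssthresh
    return MSS, ssthresh, "slow_start"                           # timeout: restart from 1 MSS
```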
Reaction to Loss Events
[Figure: Window size (bytes) vs. time (msec). After a loss detected by a 3rd duplicate ACK, the window drops to ssthresh and then grows linearly; after a loss detected by a timeout, the window drops to 1 MSS and Slow Start restarts.]
31
Reaction to Loss Events

• Why?
– TCP assumes a loss indicates congestion in the network (and
therefore slows down)
– Loss due to 3rd Duplicate ACK
• Some TCP segments are being delivered (since some
ACKs are coming back)
• TCP assumes small level of congestion, therefore
immediately enters Congestion Avoidance phase
– Loss due to Timeout
• Most TCP segments were lost (since not even
duplicate ACKs are received)
• TCP assumes heavy congestion, therefore it goes back to the start (of Slow Start) with a very slow sending rate

32
TCP Congestion Control in Practice
• The TCP Congestion Control algorithm works well in networks where losses are mainly due to congestion
– Note that in a congested network, the throughput of a TCP connection can be severely limited

• In networks with losses due to errors on links, TCP Congestion


Control has problems
– Example: a wireless link may lose segments due to poor link quality
• TCP slows down (thinking it is congestion) when it should maintain its
sending rate
– Several variants of TCP have been developed specifically for wireless links

• In high-speed networks (>10Gb/s), TCP may perform poorly even


with very few link packet losses

33
TCP Versions and Options

• TCP RFC 793 (1981)


– Reliability (sequence numbers), Flow control (receiver window),
Connection management
• TCP Tahoe (1988)
– Adds Slow Start, Congestion Avoidance, Fast Retransmit
• TCP Reno (1990)
– Adds Fast Recovery
• TCP NewReno (1995)
– Only halves congestion window once
• Other Options:
– Selective Acknowledgement (SACK)
– TCP Vegas
34
TCP Performance Issues

35
Sliding Window Performance

• Sliding Window operation:


– Sender can send W segments before having to wait for ACK
– When ACK is received then more segments can be sent

• Note: remember TCP actually counts in bytes, not segments. However


for simplicity we will often refer to ACK of a segment

• Parameters:
– Payload: size of data in a segment
– Header: size of header
– Rate: data rate of the link/network
– Prop: propagation delay between sender and receiver
– W: number of segments that can be sent

36
[Figure: Sender/receiver timing diagrams for sliding window operation, showing Data(0), Ack(1) and Data(511): the source must stop sending once its window of Wmax segments is outstanding, and can send the next segment only after an ACK is received.]
37
Basic TCP Performance

• Data = (Payload + Header) / Rate
• Ack = Header / Rate

• If (Data + 2 × Prop + Ack > W × Data)
Then Throughput = (W × Payload) / (Data + 2 × Prop + Ack)
Else Throughput = (Rate × Payload) / (Payload + Header)

Prop = propagation delay


Payload = size of data in a segment

38
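The throughput formula above can be written directly as a small function; parameter names follow the slide and the example values are hypothetical:

```python
def sliding_window_throughput(payload, header, rate, prop, w):
    """Throughput in bits/s for a window of W segments (formula from the slide).
    payload and header in bits, rate in bits/s, prop in seconds."""
    data = (payload + header) / rate          # time to transmit one data segment
    ack = header / rate                       # time to transmit an ACK
    if data + 2 * prop + ack > w * data:      # window empties before the first ACK returns
        return w * payload / (data + 2 * prop + ack)
    return rate * payload / (payload + header)   # link kept busy; limited only by rate

# Hypothetical example: 1000-byte payload, 40-byte headers, 2 Mb/s, 250 ms, W = 65
print(sliding_window_throughput(8000, 320, 2e6, 0.25, 65) / 1e6, "Mb/s")   # ~1.03 Mb/s
```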
Basic TCP Performance

39
Basic TCP Performance

• If transmission delay >> propagation delay, performance is good
• The relationship between transmission and propagation delay is often referred to as the:
– Bandwidth Delay Product
• BD = Data Rate × 2 × Propagation Delay

• A link or network may be characterised by its Bandwidth Delay product
– For a given maximum window size, a larger BD leads to lower throughput
• Examples:
– 1 Mb/s ADSL link with 0.1 ms propagation: BD = 200 bits
– 100 Mb/s Ethernet with 1 ms propagation: BD = 200,000 bits
– 1 Gb/s satellite link with 250 ms propagation: BD = 500 Mbits

40
Basic TCP Performance

41
Basic TCP Performance

• Consider a high speed satellite link with the satellite in


geostationary orbit
– Data rate: 2Mb/s
– Propagation delay: 250ms
– BD: 1,000,000 bits (bandwidth delay product)
– Maximum TCP window size: 65535 bytes = 524,280 bits
– Efficiency: 524,280 / 1,000,000 ≈ 52%

• On high BD links, TCP performs poorly because of the limited


maximum window size

• Solution: Allow larger window sizes


– TCP Window Scaling Option

42
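A small sketch reproducing the satellite calculation, using the Bandwidth Delay product definition from the earlier slide; function names are my own:

```python
def bandwidth_delay_product(rate_bps: float, prop_s: float) -> float:
    """BD = Data Rate × 2 × Propagation Delay, in bits."""
    return rate_bps * 2 * prop_s

def window_limited_efficiency(window_bytes: int, rate_bps: float, prop_s: float) -> float:
    """Fraction of the link capacity usable when the window, not the link, is the bottleneck."""
    bd = bandwidth_delay_product(rate_bps, prop_s)
    return min(1.0, window_bytes * 8 / bd)

# Geostationary satellite example from the slide: 2 Mb/s, 250 ms, 65535-byte window
print(window_limited_efficiency(65535, 2e6, 0.25))   # ~0.52
```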
TCP Window Scaling

• TCP Segment header has window field of 16 bits


– Maximum window size: 65535 bytes
• TCP Window Scaling is a TCP option
– Scale factor: 2^0, 2^1, 2^2, …, 2^14
– Maximum window size: 2^14 × 65535 bytes ≈ 1 GByte

• Allows higher speeds (efficiency) for links with large


Bandwidth Delay product
– Satellite links
– Long optical links

43
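A one-function sketch of how the scale factor enlarges the 16-bit window field (the Window Scale option shifts the advertised value left by the scale factor):

```python
def scaled_window(window_field: int, scale: int) -> int:
    """Effective receive window in bytes: the 16-bit window field shifted left
    by the scale factor (0..14) carried in the Window Scale option."""
    assert 0 <= scale <= 14
    return window_field << scale

print(scaled_window(65535, 14))   # 1,073,725,440 bytes, about 1 GByte
```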
RTT Estimation

44
RTT Estimation
• TCP sender needs to estimate Round-Trip Time (RTT)
– Why? To determine the appropriate timeout interval.
• Timeout interval too large: Sender will wait too long
before retransmitting
• Timeout interval too short: Sender may retransmit
when data is not lost
• RTT estimation is a complex, yet important, function in TCP
– Delay in Internet can vary significantly over time
– TCP samples the RTT of segments
– TCP uses a weighted average function to estimate
RTT
• Different algorithms/options used in different versions of
TCP and its implementation

45
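The slides do not give the exact weighted-average function, but a common form (the exponentially weighted moving average used by most TCP implementations, with typical weights) looks like this sketch:

```python
ALPHA, BETA = 0.125, 0.25   # typical smoothing weights

def update_rtt(srtt, rttvar, sample):
    """One weighted-average update step.
    srtt: smoothed RTT estimate, rttvar: RTT variation, sample: new measurement (seconds)."""
    if srtt is None:                      # first measurement
        return sample, sample / 2
    rttvar = (1 - BETA) * rttvar + BETA * abs(srtt - sample)
    srtt = (1 - ALPHA) * srtt + ALPHA * sample
    return srtt, rttvar

def retransmission_timeout(srtt, rttvar, granularity=0.001):
    """Timeout interval: smoothed RTT plus a safety margin based on its variation."""
    return srtt + max(granularity, 4 * rttvar)
```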
RTT Estimation
[Figure: RTT samples measured over time, with the estimated RTT shown as a line.]

46
TCP Fairness

47
Example: TCP Fairness

48
Example: TCP Fairness

49
TCP Fairness
• If TCP is fair, with N TCP connections sharing an R bps link
– Each TCP connection should achieve R/N bps
• Does TCP achieve fairness?
– In ideal conditions, yes. If all TCP connections have same RTT
and same sized segments, with no other traffic, fairness is
achieved
– In practice:
• If the RTTs of connections vary, connections with a small RTT are able to obtain a higher proportion of the bandwidth than connections with a large RTT
• If other non-TCP data is also present (such as multimedia
using UDP), then TCP connections receive unfair treatment
• Applications can use multiple TCP connections: each TCP
connection gets fair treatment, but the application using
multiple connections gets more bandwidth than application
using single connection
50
Example: TCP Fairness

51
TCP and The Internet
• IP does not include any built-in congestion control mechanisms
– If every host sent IP datagrams as fast as possible, the Internet
would not work
• The Internet relies on TCP mechanisms to avoid collapse
– TCP comprises about 90% of all traffic on the Internet
– As a means for congestion control, TCP has been very successful

• But …
– If hosts/applications choose not to follow TCP’s congestion
control rules, then congestion can become a major problem in
the Internet
– Challenges:
• Web browsers opening many TCP connections at once.
• Growth of multimedia applications that use UDP (not TCP).
• Growth of P2P applications using multiple connections and/or UDP.
52
