
Chapter 3
Transport Layer

Computer Networking: A Top-Down Approach
8th edition, Jim Kurose, Keith Ross
Pearson, 2020

A note on the use of these PowerPoint slides:
We’re making these slides freely available to all (faculty, students, readers). They’re in PowerPoint form so you see the animations; and can add, modify, and delete slides (including this one) and slide content to suit your needs. They obviously represent a lot of work on our part. In return for use, we only ask the following:
 If you use these slides (e.g., in a class), that you mention their source (after all, we’d like people to use our book!)
 If you post any slides on a www site, that you note that they are adapted from (or perhaps identical to) our slides, and note our copyright of this material.
For a revision history, see the slide note for this page.
Thanks and enjoy! JFK/KWR
All material copyright 1996-2020 J.F. Kurose and K.W. Ross, All Rights Reserved
Transport Layer: 3-1
Chapter 3: roadmap
 Transport-layer services
 Multiplexing and demultiplexing
 Connectionless transport: UDP
 Principles of reliable data transfer
 Connection-oriented transport: TCP
 Principles of congestion control
 TCP congestion control
 Evolution of transport-layer functionality
Transport Layer: 3-2
Principles of congestion control
Congestion:
 informally: “too many sources sending too much data too fast for network to handle”
 manifestations:
• long delays (queueing in router buffers)
• packet loss (buffer overflow at routers)
 different from flow control!
• congestion control: too many senders, sending too fast
• flow control: one sender too fast for one receiver
 a top-10 problem!
Transport Layer: 3-3
Causes/costs of congestion: scenario 1
Simplest scenario:
 one router, infinite buffers
 input, output link capacity: R
 two flows (Hosts A and B), sharing infinite output link buffers
 no retransmissions needed

Q: What happens as arrival rate λin approaches R/2?

[Figure: Hosts A and B send original data λin through one router (link capacity R in, R out) with infinite shared output link buffers; throughput λout. Two plots vs. λin: throughput saturates at the maximum per-connection throughput R/2; delay grows large as arrival rate λin approaches capacity.]
Transport Layer: 3-4
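The delay blow-up near capacity can be made concrete with a small sketch. Assuming an M/M/1-style queue (an illustrative assumption on my part; the slide only claims delay grows without bound), normalized queueing delay scales as 1/(R/2 − λin):

```python
# Illustrative sketch, NOT from the slides: assumes M/M/1-style queueing,
# where normalized delay grows as 1/(service rate - arrival rate).
R = 1.0                      # link capacity (normalized)
cap = R / 2                  # per-connection capacity with two flows

def queueing_delay(lam_in, service=cap):
    """Normalized queueing delay; blows up as lam_in approaches service rate."""
    assert lam_in < service
    return 1.0 / (service - lam_in)

for lam in [0.1, 0.3, 0.45, 0.49, 0.499]:
    print(f"lambda_in={lam:.3f}  delay={queueing_delay(lam):10.1f}")
```

Whatever the exact queueing model, the qualitative behavior matches the slide's plot: delay grows slowly at low load, then explodes as λin nears R/2.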
Causes/costs of congestion: scenario 2
 one router, finite buffers
 sender retransmits lost, timed-out packet
• application-layer input = application-layer output: λin = λout
• transport-layer input includes retransmissions: λ'in ≥ λin

[Figure: Host A sends original data λin; λ'in is original data plus retransmitted data; router with finite shared output link buffers, capacity R in, R out; throughput λout to Host B.]
Transport Layer: 3-5
Causes/costs of congestion: scenario 2
Idealization: perfect knowledge
 sender sends only when router buffers available

[Figure: Host A keeps a copy of each packet but sends only when there is free buffer space; λin original data, λ'in original plus retransmitted data; finite shared output link buffers. Throughput plot: λout equals λin all the way up to R/2.]
Transport Layer: 3-6
Causes/costs of congestion: scenario 2
Idealization: some perfect knowledge
 packets can be lost (dropped at router) due to full buffers
 sender knows when a packet has been dropped: only resends if packet known to be lost

[Figure: Host A keeps a copy of each sent packet; “no buffer space!” at the router, so the packet is dropped; finite shared output link buffers.]
Transport Layer: 3-7
Causes/costs of congestion: scenario 2
Idealization: some perfect knowledge
 packets can be lost (dropped at router) due to full buffers
 sender knows when a packet has been dropped: only resends if packet known to be lost

[Figure: throughput λout vs. λin. “Wasted” capacity due to retransmissions: when sending at R/2, some packets are needed retransmissions, so goodput stays below R/2.]
Transport Layer: 3-8
Causes/costs of congestion: scenario 2
Realistic scenario: un-needed duplicates
 packets can be lost, dropped at router due to full buffers – requiring retransmissions
 but sender can time out prematurely, sending two copies, both of which are delivered

[Figure: Host A times out and retransmits a copy even though free buffer space exists, so both copies are delivered. Throughput plot: “wasted” capacity due to un-needed retransmissions; when sending at R/2, some delivered packets are retransmissions, including needed and un-needed duplicates.]
Transport Layer: 3-9
Causes/costs of congestion: scenario 2
Realistic scenario: un-needed duplicates
 packets can be lost, dropped at router due to full buffers – requiring retransmissions
 but sender can time out prematurely, sending two copies, both of which are delivered

“costs” of congestion:
 more work (retransmission) for given receiver throughput
 un-needed retransmissions: link carries multiple copies of a packet
• decreasing maximum achievable throughput
Transport Layer: 3-10


Causes/costs of congestion: scenario 3
 four senders
 multi-hop paths
 timeout/retransmit

Q: what happens as λin and λ'in increase?
A: as red λ'in increases, all arriving blue packets at the upper queue are dropped; blue throughput → 0

[Figure: Hosts A–D send original data λin (plus retransmissions, λ'in) over multi-hop paths through routers with finite shared output link buffers; throughput λout.]
Transport Layer: 3-11


Causes/costs of congestion: scenario 3

[Figure: λout vs. λ'in: throughput collapses toward 0 as λ'in approaches R/2.]

another “cost” of congestion:
 when packet dropped, any upstream transmission capacity and buffering used for that packet was wasted!
Transport Layer: 3-12


Causes/costs of congestion: insights
 throughput can never exceed capacity
 delay increases as capacity approached
 loss/retransmission decreases effective throughput
 un-needed duplicates further decrease effective throughput
 upstream transmission capacity / buffering wasted for packets lost downstream
Transport Layer: 3-13
Approaches towards congestion control
End-end congestion control:
 no explicit feedback from network
 congestion inferred from observed loss, delay
 approach taken by TCP

[Figure: sender and receiver exchange data and ACKs; the network provides no explicit congestion feedback.]
Transport Layer: 3-14


Approaches towards congestion control
Network-assisted congestion control:
 routers provide direct feedback to sending/receiving hosts with flows passing through congested router
 may indicate congestion level or explicitly set sending rate
 TCP ECN, ATM, DECbit protocols

[Figure: routers mark explicit congestion info onto the data and ACKs flowing between sender and receiver.]
Transport Layer: 3-15
Chapter 3: roadmap
 Transport-layer services
 Multiplexing and demultiplexing
 Connectionless transport: UDP
 Principles of reliable data transfer
 Connection-oriented transport: TCP
 Principles of congestion control
 TCP congestion control
 Evolution of transport-layer functionality
Transport Layer: 3-16
TCP congestion control: AIMD
 approach: senders can increase sending rate until packet loss (congestion) occurs, then decrease sending rate on loss event

Additive Increase: increase sending rate by 1 maximum segment size every RTT until loss detected
Multiplicative Decrease: cut sending rate in half at each loss event

[Figure: TCP sender sending rate over time: the AIMD sawtooth, probing for bandwidth.]
Transport Layer: 3-17
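The AIMD sawtooth can be sketched in a few lines (an illustrative toy: window in MSS units, one step per RTT, and a fixed hypothetical loss threshold standing in for actual packet loss, which real TCP of course cannot know in advance):

```python
# Toy AIMD sketch (window in MSS units, one step per RTT). The loss
# threshold is a stand-in for "packet loss occurs"; real TCP reacts to
# loss signals (dup ACKs, timeouts), not a known threshold.
def aimd(rounds, loss_threshold=32, cwnd=16):
    trace = []
    for _ in range(rounds):
        trace.append(cwnd)
        if cwnd >= loss_threshold:       # loss event detected
            cwnd = max(1, cwnd // 2)     # multiplicative decrease: halve
        else:
            cwnd += 1                    # additive increase: +1 MSS per RTT
    return trace

print(aimd(40))   # sawtooth oscillating between 16 and 32
```

Plotting the returned trace reproduces the slide's sawtooth: linear climbs punctuated by halvings.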


TCP AIMD: more
Multiplicative decrease detail: sending rate is
 cut in half on loss detected by triple duplicate ACK (TCP Reno)
 cut to 1 MSS (maximum segment size) when loss detected by timeout (TCP Tahoe)

Why AIMD?
 AIMD – a distributed, asynchronous algorithm – has been shown to:
• optimize congested flow rates network wide!
• have desirable stability properties
Transport Layer: 3-18


TCP congestion control: details

[Figure: sender sequence number space: last byte ACKed, bytes sent but not-yet-ACKed (“in-flight”), bytes available but not used; cwnd spans the in-flight and usable bytes, ending at last byte sent.]

TCP sending behavior:
 roughly: send cwnd bytes, wait RTT for ACKs, then send more bytes
TCP rate ≈ cwnd / RTT bytes/sec
 TCP sender limits transmission: LastByteSent − LastByteAcked ≤ cwnd
 cwnd is dynamically adjusted in response to observed network congestion (implementing TCP congestion control)
Transport Layer: 3-19
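As a worked example of the rate approximation above (the cwnd and RTT values here are assumed, for illustration only):

```python
# Worked example of the slide's approximation: TCP rate ~ cwnd / RTT.
# Example values (assumptions): 10 in-flight segments of 1460-byte MSS,
# 100 ms round-trip time.
cwnd_bytes = 10 * 1460
rtt_s = 0.100
rate_bps = 8 * cwnd_bytes / rtt_s      # bits per second
print(f"{rate_bps / 1e6:.3f} Mbit/s")  # 1.168 Mbit/s
```

Note the knob TCP actually turns is cwnd: doubling cwnd (or halving RTT) doubles the sending rate under this approximation.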
TCP slow start
 when connection begins, increase rate exponentially until first loss event:
• initially cwnd = 1 MSS
• double cwnd every RTT
• done by incrementing cwnd for every ACK received
 summary: initial rate is slow, but ramps up exponentially fast

[Figure: Host A sends one segment, then two segments, then four segments in successive RTTs to Host B.]
Transport Layer: 3-20
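The "double every RTT via per-ACK increments" mechanism can be sketched directly (a toy with cwnd in MSS units, assuming one ACK per segment and no loss):

```python
# Slow-start sketch: cwnd (in MSS units) doubles each RTT because cwnd is
# incremented by 1 MSS for every ACK received. Assumes one ACK per segment
# and a lossless path (illustration only).
def slow_start(rtts, cwnd=1):
    trace = [cwnd]
    for _ in range(rtts):
        acks = cwnd              # one ACK arrives for each segment sent this RTT
        for _ in range(acks):
            cwnd += 1            # +1 MSS per ACK => cwnd doubles per RTT
        trace.append(cwnd)
    return trace

print(slow_start(4))   # [1, 2, 4, 8, 16]
```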


TCP: from slow start to congestion avoidance
Q: when should the exponential increase switch to linear?
A: when cwnd gets to 1/2 of its value before timeout.

Implementation:
 variable ssthresh
 on loss event, ssthresh is set to 1/2 of cwnd just before loss event

[Figure: cwnd grows exponentially until loss (X), restarts at 1 MSS, then switches from exponential to linear growth at ssthresh.]

* Check out the online interactive exercises for more examples: http://gaia.cs.umass.edu/kurose_ross/interactive/
Transport Layer: 3-21
Summary: TCP congestion control

[Figure: FSM with states slow start, congestion avoidance, and fast recovery. Transitions:]

slow start (entry: cwnd = 1 MSS, ssthresh = 64 KB, dupACKcount = 0):
• new ACK: cwnd = cwnd + MSS; dupACKcount = 0; transmit new segment(s), as allowed
• duplicate ACK: dupACKcount++
• when cwnd > ssthresh: go to congestion avoidance

congestion avoidance:
• new ACK: cwnd = cwnd + MSS · (MSS/cwnd); dupACKcount = 0; transmit new segment(s), as allowed
• duplicate ACK: dupACKcount++

slow start or congestion avoidance:
• timeout: ssthresh = cwnd/2; cwnd = 1 MSS; dupACKcount = 0; retransmit missing segment; go to slow start
• dupACKcount == 3: ssthresh = cwnd/2; cwnd = ssthresh + 3; retransmit missing segment; go to fast recovery

fast recovery:
• duplicate ACK: cwnd = cwnd + MSS; transmit new segment(s), as allowed
• new ACK: cwnd = ssthresh; dupACKcount = 0; go to congestion avoidance
• timeout: ssthresh = cwnd/2; cwnd = 1 MSS; dupACKcount = 0; retransmit missing segment; go to slow start
Transport Layer: 3-22
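The FSM's transitions can be sketched as a small event-driven class (a toy, not a full TCP: window and ssthresh are in MSS units here, whereas the slide's initial ssthresh is 64 KB of bytes; events are fed in by hand):

```python
# Toy sketch of the TCP congestion-control FSM (Reno-style). Units: MSS.
MSS = 1

class RenoSender:
    def __init__(self):
        self.state = "slow_start"
        self.cwnd = 1 * MSS
        self.ssthresh = 64        # slide uses 64 KB; simplified to MSS units
        self.dupacks = 0

    def new_ack(self):
        if self.state == "slow_start":
            self.cwnd += MSS                     # exponential growth
            if self.cwnd > self.ssthresh:
                self.state = "congestion_avoidance"
        elif self.state == "congestion_avoidance":
            self.cwnd += MSS * MSS / self.cwnd   # ~ +1 MSS per RTT
        else:                                    # fast recovery -> CA
            self.cwnd = self.ssthresh
            self.state = "congestion_avoidance"
        self.dupacks = 0

    def dup_ack(self):
        if self.state == "fast_recovery":
            self.cwnd += MSS                     # inflate per duplicate ACK
            return
        self.dupacks += 1
        if self.dupacks == 3:                    # triple duplicate ACK
            self.ssthresh = self.cwnd / 2
            self.cwnd = self.ssthresh + 3
            self.state = "fast_recovery"         # (retransmit missing segment)

    def timeout(self):
        self.ssthresh = self.cwnd / 2
        self.cwnd = 1 * MSS
        self.dupacks = 0
        self.state = "slow_start"                # (retransmit missing segment)

s = RenoSender()
for _ in range(8): s.new_ack()     # slow start: cwnd 1 -> 9
for _ in range(3): s.dup_ack()     # triple dup ACK -> fast recovery
print(s.state, s.cwnd, s.ssthresh) # fast_recovery 7.5 4.5
```

Feeding it a new ACK from here would deflate cwnd back to ssthresh and re-enter congestion avoidance, exactly as the FSM's "new ACK" arc out of fast recovery specifies.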


TCP CUBIC
 Is there a better way than AIMD to “probe” for usable bandwidth?
 Insight/intuition:
• Wmax: sending rate at which congestion loss was detected
• congestion state of bottleneck link probably (?) hasn’t changed much
• after cutting rate/window in half on loss, initially ramp to Wmax faster, but then approach Wmax more slowly

[Figure: window vs. time after a loss at Wmax: classic TCP climbs linearly from Wmax/2; TCP CUBIC ramps quickly, then flattens near Wmax – higher throughput in this example.]
Transport Layer: 3-23


TCP CUBIC
 K: point in time when TCP window size will reach Wmax
• K itself is tuneable
 increase W as a function of the cube of the distance between current time and K
• larger increases when further away from K
• smaller increases (cautious) when nearer K
 TCP CUBIC default in Linux, most popular TCP for popular Web servers
(check with: sudo sysctl -a | grep tcp_congestion_control)

[Figure: TCP sending rate vs. time (t0 … t4), comparing TCP Reno’s linear climb with TCP CUBIC’s fast-then-cautious approach to Wmax.]
Transport Layer: 3-24
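The cubic growth function itself can be written down directly. Per RFC 8312 (the CUBIC spec, with its default constants C = 0.4 and β = 0.7), the window t seconds after a loss is W(t) = C·(t − K)³ + Wmax, with K chosen so the curve starts at the post-loss window β·Wmax:

```python
# CUBIC window growth, per RFC 8312: W(t) = C*(t - K)^3 + Wmax,
# K = cbrt(Wmax * (1 - beta) / C). Defaults C = 0.4, beta = 0.7.
C, BETA = 0.4, 0.7

def cubic_K(w_max):
    """Time at which the window climbs back to w_max."""
    return (w_max * (1 - BETA) / C) ** (1 / 3)

def cubic_window(t, w_max):
    return C * (t - cubic_K(w_max)) ** 3 + w_max

w_max = 100.0
k = cubic_K(w_max)
print(cubic_window(0, w_max))   # ~70.0: beta * Wmax, just after the loss
print(cubic_window(k, w_max))   # ~100.0: plateau at Wmax, fast growth past it
```

Note the plateau around t = K matches the slide's "cautious near K" intuition: the derivative of the cubic is zero exactly at K, and growth accelerates again beyond it.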
Delay-based TCP congestion control
Keeping sender-to-receiver pipe “just full enough, but no fuller”: keep bottleneck link busy transmitting, but avoid high delays/buffering

measured throughput = (# bytes sent in last RTT interval) / RTTmeasured

Delay-based approach:
 RTTmin – minimum observed RTT (uncongested path)
 uncongested throughput with congestion window cwnd is cwnd/RTTmin

if measured throughput “very close” to uncongested throughput
    increase cwnd linearly /* since path not congested */
else if measured throughput “far below” uncongested throughput
    decrease cwnd linearly /* since path is congested */
Transport Layer: 3-25
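The pseudocode above can be turned into a runnable sketch. The "very close"/"far below" thresholds and the linear step size are illustrative choices of mine; the slide leaves them unspecified:

```python
# Sketch of the slide's delay-based rule. The 0.9/0.5 thresholds and the
# 1460-byte step are assumptions for illustration, not from the slide.
def update_cwnd(cwnd, bytes_last_rtt, rtt_measured, rtt_min,
                close=0.9, far=0.5, step=1460):
    measured = bytes_last_rtt / rtt_measured     # measured throughput
    uncongested = cwnd / rtt_min                 # throughput if path uncongested
    if measured >= close * uncongested:          # "very close": not congested
        return cwnd + step                       # increase cwnd linearly
    elif measured <= far * uncongested:          # "far below": congested
        return max(step, cwnd - step)            # decrease cwnd linearly
    return cwnd

# Uncongested path: RTTmeasured == RTTmin, so measured == uncongested -> grow.
print(update_cwnd(cwnd=14600, bytes_last_rtt=14600,
                  rtt_measured=0.050, rtt_min=0.050))   # 16060
```

The key design point survives the toy: the sender never needs a loss to detect congestion; inflated RTTs (queueing delay) alone drive cwnd down.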
Explicit congestion notification (ECN)
TCP deployments often implement network-assisted congestion control:
 two bits in IP header (ToS field) marked by network router to indicate congestion
• policy to determine marking chosen by network operator
 congestion indication carried to destination
 destination sets ECE bit on ACK segment to notify sender of congestion
 involves both IP (IP header ECN bit marking) and TCP (TCP header C, E bit marking)

[Figure: a congested router re-marks the IP datagram’s ECN field from 10 to 11 on the data path from source to destination; the destination returns a TCP ACK segment with ECE=1 to the source.]
Transport Layer: 3-26
TCP fairness
Fairness goal: if K TCP sessions share same bottleneck link of bandwidth R, each should have average rate of R/K

[Figure: TCP connections 1 and 2 share a bottleneck router of capacity R.]
Transport Layer: 3-27


Q: is TCP Fair?
Example: two competing TCP sessions:
 additive increase gives slope of 1, as throughput increases
 multiplicative decrease decreases throughput proportionally

[Figure: connection 2 throughput vs. connection 1 throughput, with the equal bandwidth share line and capacity R. The trajectory alternates congestion avoidance (additive increase) with loss (decrease window by factor of 2), converging toward the equal-share line.]

Is TCP fair?
A: Yes, under idealized assumptions:
 same RTT
 fixed number of sessions, only in congestion avoidance
Transport Layer: 3-28
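The convergence argument in the figure can be simulated (a toy under the slide's idealized assumptions: identical RTTs, synchronized losses, rates in arbitrary units). Additive increase preserves the gap between the two flows, while each multiplicative decrease halves it, so the gap shrinks toward zero:

```python
# Toy simulation of two AIMD flows sharing a link of capacity R.
# Both add 1 unit per RTT; when their sum exceeds R, both halve
# (synchronized loss - an idealization matching the slide's assumptions).
def aimd_pair(r1, r2, R=100, rounds=200):
    for _ in range(rounds):
        if r1 + r2 > R:                 # loss: multiplicative decrease
            r1, r2 = r1 / 2, r2 / 2     # halving also halves the gap |r1 - r2|
        else:                           # additive increase: gap unchanged
            r1, r2 = r1 + 1, r2 + 1
    return r1, r2

r1, r2 = aimd_pair(80, 10)              # very unequal starting rates
print(r1, r2, abs(r1 - r2))             # gap has shrunk to well under 1 unit
```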
Fairness: must all network apps be “fair”?

Fairness and UDP
 multimedia apps often do not use TCP
• do not want rate throttled by congestion control
 instead use UDP:
• send audio/video at constant rate, tolerate packet loss
 there is no “Internet police” policing use of congestion control

Fairness, parallel TCP connections
 application can open multiple parallel connections between two hosts
 web browsers do this, e.g., link of rate R with 9 existing connections:
• new app asks for 1 TCP, gets rate R/10
• new app asks for 11 TCPs, gets R/2
Transport Layer: 3-29
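The parallel-connection arithmetic above, assuming the bottleneck splits its rate evenly per connection (the slide's idealization):

```python
# Slide's arithmetic: a bottleneck of rate R shared equally per connection.
# With 9 existing connections, a new app opening k parallel connections
# gets k/(9 + k) of R.
def new_app_share(k, existing=9, R=1.0):
    return R * k / (existing + k)

print(new_app_share(1))    # 0.1  -> R/10
print(new_app_share(11))   # 0.55 -> slightly more than R/2
```

This is why per-connection fairness is a weak notion of fairness: an app can grab an arbitrarily large share of R simply by opening more connections.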
