Chapter_3-part3
Transport Layer
Computer Networking: A Top-Down Approach
8th edition
Jim Kurose, Keith Ross
Pearson, 2020
Chapter 3: roadmap
Transport-layer services
Multiplexing and demultiplexing
Connectionless transport: UDP
Principles of reliable data transfer
Connection-oriented transport: TCP
• segment structure
• reliable data transfer
• flow control
• connection management
Principles of congestion control
TCP congestion control
2-way handshake scenarios
Scenario: no problem. The client chooses an initial value x and sends req_conn(x); the server accepts, replies acc_conn(x), and both sides reach ESTAB. The client sends data(x+1), which the server accepts and acknowledges with ACK(x+1); connection x completes. No problem!
Scenario: half-open connection. The client chooses x and sends req_conn(x); the server replies acc_conn(x) and both sides reach ESTAB, but the req_conn(x) is also retransmitted. Connection x completes, the client terminates, and the server forgets x. When the retransmitted req_conn(x) later arrives, the server accepts it and enters ESTAB again.
Problem: half-open connection! (no client)
2-way handshake scenarios
Scenario: duplicate data accepted. The client chooses x and sends req_conn(x) (which is also retransmitted); the server replies acc_conn(x), both sides reach ESTAB, and the client sends data(x+1), which is accepted; the data(x+1) is also retransmitted. Connection x completes, the client terminates, and the server forgets x. The retransmitted req_conn(x) re-establishes the connection, and the retransmitted data(x+1) arrives and is accepted again.
Problem: dup data accepted!
A human 3-way handshake protocol
1. On belay?
2. Belay on.
3. Climbing.
TCP 3-way handshake
The client sends a SYN (seq=x) and enters SYN_SENT; the server, in SYN_RCVD, replies with SYNACK(seq=y, ACKnum=x+1); the client enters ESTAB and returns ACK(ACKnum=y+1); on receiving this ACK, the server enters ESTAB as well.
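To make the exchange concrete, here is a minimal client-side sketch in Python of the handshake above; send_segment and recv_segment are hypothetical helpers (real TCP runs in the kernel and additionally handles retransmission, simultaneous open, and many error cases).

```python
# Minimal sketch of the TCP 3-way handshake from the client's side.
# send_segment / recv_segment are hypothetical helpers, not a real socket API.
import random

def client_handshake(send_segment, recv_segment):
    x = random.randint(0, 2**32 - 1)         # choose initial sequence number x
    send_segment(SYN=1, seq=x)               # state: SYN_SENT
    reply = recv_segment()                   # expect SYNACK(seq=y, ACKnum=x+1)
    assert reply["SYN"] == 1 and reply["ACK"] == 1 and reply["ACKnum"] == x + 1
    y = reply["seq"]
    send_segment(ACK=1, ACKnum=y + 1)        # state: ESTAB; this ACK may carry client data
    return x, y
```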
Closing a TCP connection
Each side closes its side of the connection by sending a FIN segment; after sending its FIN, the client can no longer send data. The server, in LAST_ACK, sends its own FIN (FINbit=1, seq=y); the client acknowledges it (ACKbit=1, ACKnum=y+1), enters TIMED_WAIT, waits for 2 * max segment lifetime, and then moves to CLOSED. The server moves to CLOSED when its FIN is ACKed.
TCP throughput
avg TCP throughput = (3/4) · W / RTT bytes/sec
where W is the window size (in bytes) when loss occurs; the window oscillates between W/2 and W, so the average window is 3W/4.
TCP over “long, fat pipes”
example: 1500 byte segments, 100ms RTT, want 10 Gbps throughput
requires W = 83,333 in-flight segments
throughput in terms of segment loss probability, L [Mathis 1997]:
TCP throughput = (1.22 · MSS) / (RTT · √L)
➜ to achieve 10 Gbps throughput, need a loss rate of L = 2·10⁻¹⁰ – a very small loss rate!
versions of TCP for long, high-speed scenarios
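As a quick check of the two numbers above, a few lines of Python reproduce them from throughput ≈ W·MSS/RTT and the Mathis relation, using the slide's assumptions of 1500-byte segments and a 100 ms RTT:

```python
# Verify the "long, fat pipe" numbers: 1500-byte segments, 100 ms RTT, 10 Gbps target.
MSS = 1500 * 8          # segment size in bits
RTT = 0.100             # seconds
target = 10e9           # 10 Gbps

# Throughput ~ W * MSS / RTT  =>  W = target * RTT / MSS
W = target * RTT / MSS
print(f"in-flight segments W = {W:,.0f}")     # ~83,333

# Mathis: throughput = 1.22 * MSS / (RTT * sqrt(L))  =>  L = (1.22*MSS/(RTT*throughput))**2
L = (1.22 * MSS / (RTT * target)) ** 2
print(f"required loss rate L = {L:.1e}")      # ~2e-10
```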
Causes/costs of congestion: one router, infinite buffers
Two flows share a router of capacity R. Per-flow throughput λout approaches, but never exceeds, R/2 as the sending rate λin grows, while queueing delay grows without bound as the arrival rate approaches capacity.
Q: What happens as the arrival rate λin approaches R/2? Large delays, as the arrival rate approaches the link capacity.
Causes/costs of congestion: one router, finite buffers
Host A sends original data at rate λin; the offered load λ'in is the original data plus retransmitted data. Packets arriving to full buffers are dropped (no buffer space!), so the sender must retransmit.
Idealized case: the sender knows when a packet has been dropped and resends only if a packet is known to be lost. Even so, when sending at R/2, some of the delivered packets are (needed) retransmissions, so throughput λout falls below R/2.
Realistic case: the sender can also time out prematurely, sending two copies, both of which are delivered. When sending at R/2, some packets are retransmissions, including needed and un-needed duplicates that are delivered; this "wasted" capacity due to un-needed retransmissions reduces λout further.
“costs” of congestion:
more work (retransmission) for given receiver throughput
unneeded retransmissions: link carries multiple copies of a packet
• decreasing maximum achievable throughput
Causes/costs of congestion: multiple senders (Hosts C and D in the figure), multi-hop paths, finite shared output link buffers
As λ'in grows, traffic from one host can crowd out another's at a shared congested router, and packets dropped downstream waste the upstream transmission capacity and buffering already spent carrying them, so λout can collapse.
Insights:
throughput λout can never exceed the link capacity (R/2 per flow over a shared link)
delay increases as capacity is approached
loss/retransmission decreases effective throughput
upstream transmission capacity and buffering are wasted for packets lost downstream
Network-assisted congestion control:
routers provide direct feedback to sending/receiving hosts whose flows pass through the congested router
explicit congestion info (carried along with data segments and ACKs) may indicate the congestion level or explicitly set the sending rate
AIMD sawtooth behavior: probing for bandwidth
Why AIMD?
AIMD – a distributed, asynchronous algorithm – has been shown to:
• optimize congested flow rates network wide!
• have desirable stability properties
TCP slow start:
• double cwnd every RTT
• done by incrementing cwnd for every ACK received
(first one segment, then two segments, then four segments, ...)
Implementation:
variable ssthresh
on loss event, ssthresh is set to 1/2 of cwnd just before the loss event
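A minimal sketch of these rules in Python, counting cwnd in segments; the initial ssthresh value is an arbitrary assumption, and real TCP additionally distinguishes timeouts from triple duplicate ACKs:

```python
# Simplified sender-side congestion window update (units: MSS-sized segments).
class AimdSender:
    def __init__(self):
        self.cwnd = 1.0          # start with one segment (slow start)
        self.ssthresh = 64.0     # arbitrary initial threshold (assumption)

    def on_ack(self):
        if self.cwnd < self.ssthresh:
            self.cwnd += 1.0               # slow start: +1 per ACK -> cwnd doubles each RTT
        else:
            self.cwnd += 1.0 / self.cwnd   # congestion avoidance: ~+1 per RTT (additive increase)

    def on_loss(self):
        self.ssthresh = self.cwnd / 2      # remember half of cwnd just before loss
        self.cwnd = self.ssthresh          # multiplicative decrease (Reno-style halving)
```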
TCP CUBIC: after cutting the window on loss (the classic sawtooth cuts to Wmax/2), CUBIC ramps back up toward Wmax, the window size at which loss was last detected, faster than the classic linear increase, then probes cautiously near Wmax – higher throughput in this example (window vs. time, t0 … t4).
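For reference, a sketch of the cubic growth curve that the figure suggests: the window grows as a cubic function of the time since the last loss, flattening near Wmax. The constants C and beta below are the published CUBIC defaults, not values taken from the slide:

```python
# CUBIC window as a function of time t (seconds) since the last loss event.
def cubic_window(t, w_max, C=0.4, beta=0.7):
    # K: time at which the window would return to w_max after being cut to beta*w_max
    K = ((w_max * (1 - beta)) / C) ** (1.0 / 3.0)
    return C * (t - K) ** 3 + w_max

# Example: with w_max = 100 segments, the window climbs quickly at first,
# flattens as it approaches w_max (around t = K), then probes beyond it.
for t in range(0, 11, 2):
    print(t, round(cubic_window(t, 100), 1))
```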
TCP and the congested “bottleneck link”
TCP senders (classic and CUBIC) increase their sending rate until packet loss occurs at some router's output: the bottleneck link
At the bottleneck link (almost always busy), the packet queue is almost never empty and sometimes overflows, causing packet loss. (Figure: source and destination protocol stacks – application, TCP, network, link, physical – connected through the bottleneck link.)
TCP and the congested “bottleneck link”
insight: once the bottleneck link is busy, increasing TCP's sending rate will not increase end-end throughput, but will increase the measured RTT
Delay-based TCP congestion control
Keeping sender-to-receiver pipe “just full enough, but no fuller”: keep
bottleneck link busy transmitting, but avoid high delays/buffering
measured throughput = (# bytes sent in last RTT interval) / RTTmeasured
Delay-based approach:
RTTmin - minimum observed RTT (uncongested path)
uncongested throughput with congestion window cwnd is cwnd/RTTmin
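A minimal sketch of this delay-based rule: compare the throughput measured over the last RTT with the uncongested throughput cwnd/RTTmin and nudge cwnd up or down. The threshold and step size are illustrative assumptions:

```python
# Delay-based cwnd adjustment (sketch). Rates in bytes/sec, cwnd in bytes.
def adjust_cwnd(cwnd, bytes_sent_last_rtt, rtt_measured, rtt_min,
                step=1460, close_enough=0.9):
    measured = bytes_sent_last_rtt / rtt_measured   # throughput over the last RTT
    uncongested = cwnd / rtt_min                    # what cwnd could deliver if uncongested
    if measured >= close_enough * uncongested:
        return cwnd + step      # pipe not yet full: increase cwnd linearly
    else:
        return cwnd - step      # queueing delay building up: decrease cwnd linearly
```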
Explicit congestion notification (ECN): two bits in the IP datagram header carry the ECN codepoint; ECN=10 marks an ECN-capable transport, and a congested router sets ECN=11 ("congestion experienced") to signal congestion to the receiver.
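For concreteness, the ECN codepoint is the two low-order bits of the IPv4 ToS/Traffic Class byte; a tiny sketch (assuming a raw IPv4 header supplied as bytes) reads it out:

```python
# Read the ECN codepoint from the ToS/Traffic Class byte of an IPv4 header.
ECN_CODEPOINTS = {0b00: "Not-ECT", 0b10: "ECT(0)", 0b01: "ECT(1)", 0b11: "CE (congestion experienced)"}

def ecn_codepoint(ip_header: bytes) -> str:
    tos = ip_header[1]          # second byte of the IPv4 header: DSCP (6 bits) + ECN (2 bits)
    return ECN_CODEPOINTS[tos & 0b11]
```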
TCP fairness
Fairness goal: if K TCP sessions share same bottleneck link of
bandwidth R, each should have average rate of R/K
(Figure: TCP connection 1 and TCP connection 2 share a bottleneck router of capacity R; the accompanying plot shows connection 1 throughput vs. connection 2 throughput, each up to R.)
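Why additive-increase, multiplicative-decrease pushes two competing flows toward this equal share can be seen in a toy simulation; the starting rates and step sizes below are arbitrary, and this is not a TCP implementation:

```python
# Toy model: two AIMD flows sharing a bottleneck of capacity R.
R = 100.0
rate1, rate2 = 10.0, 70.0        # start far from the fair share (arbitrary values)

for _ in range(200):
    if rate1 + rate2 > R:        # "loss": combined rate exceeds capacity
        rate1 /= 2               # multiplicative decrease for both flows
        rate2 /= 2
    else:
        rate1 += 1               # additive increase for both flows
        rate2 += 1

print(round(rate1, 1), round(rate2, 1))   # the two rates converge toward an equal (fair) share
```

Each halving shrinks the gap between the two rates, while additive increase keeps the gap constant, so the flows drift toward the equal-share line.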
Fairness: must all network apps be “fair”?
Fairness and UDP
multimedia apps often do not use TCP
• do not want rate throttled by congestion control
instead use UDP:
• send audio/video at constant rate, tolerate packet loss
there is no “Internet police” policing use of congestion control

Fairness, parallel TCP connections
application can open multiple parallel connections between two hosts
web browsers do this, e.g., link of rate R with 9 existing connections:
• new app asks for 1 TCP, gets rate R/10
• new app asks for 11 TCPs, gets ≈ R/2 (11 of the now-20 connections)
Scenario / challenges:
• Long, fat pipes (large data transfers): many packets “in flight”; loss shuts down pipeline
• Wireless networks: loss due to noisy wireless links, mobility; TCP treats this as congestion loss
• Long-delay links: extremely long RTTs
• Data center networks: latency sensitive
• Background traffic flows: low-priority, “background” TCP flows
moving transport-layer functions to application layer, on top of UDP
• HTTP/3: QUIC
QUIC: Quick UDP Internet Connections
application-layer protocol, on top of UDP
• increase performance of HTTP
• deployed on many Google servers, apps (Chrome, mobile YouTube app)
Connection establishment: a TCP handshake (transport layer) followed by a TLS handshake (security) requires two serial handshakes before data can flow; the QUIC handshake establishes connection state and security parameters together, so data can flow after a single handshake.
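A back-of-the-envelope comparison of the setup delay implied by this picture, assuming one RTT per handshake and no 0-RTT resumption (both simplifying assumptions):

```python
# Rough setup-delay comparison (units: RTTs before the first request can be sent).
def setup_rtts(protocol: str) -> int:
    if protocol == "tcp+tls":
        return 1 + 1      # TCP 3-way handshake, then a TLS handshake, done serially
    if protocol == "quic":
        return 1          # QUIC sets up reliability, congestion control, and crypto together
    raise ValueError(protocol)

rtt = 0.050  # assume a 50 ms round-trip time
for p in ("tcp+tls", "quic"):
    print(p, setup_rtts(p) * rtt * 1000, "ms before first request")
```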
QUIC streams (figure): with HTTP over TCP, all HTTP GETs share one connection – TLS encryption, a single TCP reliable data transfer (RDT) instance, and TCP congestion control over UDP-less IP – so an error on one object blocks the data queued behind it. With HTTP over QUIC (over UDP), each HTTP GET maps to its own QUIC stream with per-stream encryption and RDT, on top of a common QUIC congestion control; an error in one stream does not block delivery on the others.
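A toy illustration of the head-of-line-blocking difference sketched in the figure, with invented object names and delivery logic: on a single ordered byte stream, one lost segment stalls everything behind it; with independent streams, only the stream that lost a segment waits.

```python
# Toy head-of-line blocking illustration: three objects, one segment of object 0 is "lost".
objects = {0: ["a0", "a1"], 1: ["b0", "b1"], 2: ["c0", "c1"]}
lost = ("a1",)   # segment a1 must be retransmitted and arrives last

def deliver_single_ordered_stream():
    # TCP-like: all objects share one ordered stream; data after a loss waits for the gap to fill.
    sent = [seg for segs in objects.values() for seg in segs]
    blocked_from = sent.index(lost[0])
    return sent[:blocked_from]          # only data before the lost segment is delivered now

def deliver_independent_streams():
    # QUIC-like: each object is its own stream; only the stream with the loss waits.
    return {obj: [s for s in segs if s not in lost] if lost[0] in segs else segs
            for obj, segs in objects.items()}

print(deliver_single_ordered_stream())   # ['a0'] - objects 1 and 2 are blocked behind the loss
print(deliver_independent_streams())     # streams 1 and 2 delivered in full; stream 0 waits for a1
```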