
CSC2B10/CSC02B2

Data Communications

Chapter 3.5
Chapter 3 outline

3.1 Transport-layer services
3.2 Multiplexing and demultiplexing
3.3 Connectionless transport: UDP
3.4 Principles of reliable data transfer
3.5 Connection-oriented transport: TCP
• segment structure
• reliable data transfer
• flow control
• connection management
3.6 Principles of congestion control
3.7 TCP congestion control
Principles of Congestion Control

Congestion:
• informally: “too many sources sending too much data too fast for the network to handle”
• different from flow control!
• manifestations:
• lost packets (buffer overflow at routers)
• long delays (queueing in router buffers)
• a top-10 problem!
Causes/costs of congestion: scenario 1

• two senders, two receivers
• one router, infinite buffers
• no retransmission

consequences:
• large delays when congested
• maximum achievable throughput

[figure: Hosts A and B send original data (λin) through one router with unlimited shared output link buffers; λout is the delivered throughput]
Causes/costs of congestion: scenario 2

• one router, finite buffers
• sender retransmission of timed-out packet
• application-layer input = application-layer output: λin = λout
• transport-layer input includes retransmissions: λ'in ≥ λin

[figure: Hosts A and B send original data (λin) plus retransmitted data (λ'in) through a router with finite shared output link buffers; λout is the delivered throughput]
Congestion scenario 2a: ideal case

• sender sends only when router buffers available

[graph: λout = λin, up to R/2; free buffer space at the router, so no copies are needed]
Congestion scenario 2b: known loss

• packets may get dropped at router due to full buffers, so copies are sometimes lost
• sender only resends if packet known to be lost (admittedly idealized)

[figure: no buffer space at the router; a copy of Host A's packet is dropped; λ'in = original data plus retransmitted data]
Congestion scenario 2b: known loss

• packets may get dropped at router due to full buffers, sometimes not lost
• sender only resends if packet known to be lost (admittedly idealized)

[graph: when sending at R/2, some packets are retransmissions, but asymptotic goodput is still R/2 (why?)]
Congestion scenario 2c: duplicates

• packets may get dropped at router due to full buffers
• sender times out prematurely, sending two copies, both of which are delivered

[graph: when sending at R/2, some packets are retransmissions, including duplicates that are delivered]
Congestion scenario 2c: duplicates

• packets may get dropped at router due to full buffers
• sender times out prematurely, sending two copies, both of which are delivered

“costs” of congestion:
 more work (retransmissions) for a given “goodput”
 unneeded retransmissions: link carries multiple copies of a packet, decreasing goodput
Causes/costs of congestion: scenario 3

• four senders
• multihop paths
• timeout/retransmit

Q: what happens as λin and λ'in increase?

[figure: Hosts A and B send original data (λin) and original plus retransmitted data (λ'in) over multihop paths through routers with finite shared output link buffers]
Causes/costs of congestion: scenario 3

[graph: λout for Hosts A and B as offered load λ'in grows]

another “cost” of congestion:
 when packet dropped, any upstream transmission capacity used for that packet was wasted!
Approaches towards congestion control

Two broad approaches towards congestion control:

end-end congestion control:
• no explicit feedback from network
• congestion inferred from end-system observed loss, delay
• approach taken by TCP

network-assisted congestion control:
• routers provide feedback to end systems
• single bit indicating congestion (SNA, DECbit, TCP/IP ECN, ATM)
• explicit rate sender should send at
Case study: ATM ABR congestion control

ABR: available bit rate:
• “elastic service”
• if sender’s path “underloaded”: sender should use available bandwidth
• if sender’s path congested: sender throttled to minimum guaranteed rate

RM (resource management) cells:
• sent by sender, interspersed with data cells
• bits in RM cell set by switches (“network-assisted”)
• NI bit: no increase in rate (mild congestion)
• CI bit: congestion indication
• RM cells returned to sender by receiver, with bits intact
Case study: ATM ABR congestion control

• two-byte ER (explicit rate) field in RM cell
• congested switch may lower ER value in cell
• sender’s send rate is thus the maximum supportable rate on the path
• EFCI bit in data cells: set to 1 in congested switch
• if data cell preceding RM cell has EFCI set, sender sets CI bit in returned RM cell
Chapter 3 outline

3.1 Transport-layer services
3.2 Multiplexing and demultiplexing
3.3 Connectionless transport: UDP
3.4 Principles of reliable data transfer
3.5 Connection-oriented transport: TCP
• segment structure
• reliable data transfer
• flow control
• connection management
3.6 Principles of congestion control
3.7 TCP congestion control
TCP Congestion Control: details

• sender limits transmission: LastByteSent - LastByteAcked ≤ cwnd
• roughly, rate = cwnd/RTT bytes/sec
• cwnd is dynamic, function of perceived network congestion

How does sender perceive congestion?
• loss event = timeout or 3 duplicate ACKs
• TCP sender reduces rate (cwnd) after loss event

three mechanisms:
• AIMD
• slow start
• conservative after timeout events
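The rate relation on this slide can be expressed directly. A trivial Python sketch (the function name and the cwnd/RTT values are illustrative, not from the slides):

```python
def sending_rate(cwnd_bytes, rtt_seconds):
    """Roughly, rate = cwnd / RTT bytes per second, as on the slide."""
    return cwnd_bytes / rtt_seconds

# e.g. a 16 KB congestion window with a 100 ms RTT:
rate = sending_rate(16 * 1024, 0.100)
print(rate)  # 163840.0 (bytes/sec)
```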
TCP congestion control: additive increase, multiplicative decrease

 approach: increase transmission rate (window size), probing for usable bandwidth, until loss occurs
 additive increase: increase cwnd by 1 MSS every RTT until loss detected
 multiplicative decrease: cut cwnd in half after loss

cwnd: congestion window size

[figure: saw tooth behavior, probing for bandwidth; congestion window (8/16/24 Kbytes) vs time]
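The AIMD rule above can be sketched as a per-RTT update. A minimal illustration (not real TCP code; cwnd is kept in MSS units and the loss pattern is made up to show the sawtooth):

```python
MSS = 1  # count cwnd in MSS units for simplicity

def aimd_update(cwnd, loss):
    """One RTT of AIMD: halve on loss, else add one MSS."""
    if loss:
        return max(cwnd / 2, 1)   # multiplicative decrease, floor of 1 MSS
    return cwnd + MSS             # additive increase per RTT

# sawtooth: grow for a few RTTs, then a loss halves the window
cwnd = 8
trace = []
for rtt in range(6):
    loss = (rtt == 3)             # pretend a loss is detected on RTT 3
    cwnd = aimd_update(cwnd, loss)
    trace.append(cwnd)
print(trace)  # [9, 10, 11, 5.5, 6.5, 7.5]
```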
TCP Slow Start

• when connection begins, increase rate exponentially until first loss event:
• initially cwnd = 1 MSS
• double cwnd every RTT
• done by incrementing cwnd for every ACK received
• summary: initial rate is slow but ramps up exponentially fast

[figure: Host A sends one segment to Host B, then two, then four, one round per RTT]
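The per-ACK mechanism on this slide can be sketched to show why it doubles cwnd each RTT. An illustrative Python sketch (one ACK per segment is an idealizing assumption):

```python
MSS = 1
cwnd = 1  # in MSS units, as when the connection begins

per_rtt = []
for rtt in range(4):
    acks = cwnd          # idealized: one ACK returns per segment sent this RTT
    for _ in range(acks):
        cwnd += MSS      # increment cwnd for every ACK received
    per_rtt.append(cwnd)
print(per_rtt)  # [2, 4, 8, 16] -- cwnd doubles every RTT
```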
Refinement: inferring loss

• after 3 dup ACKs:
• cwnd is cut in half
• window then grows linearly
• but after timeout event:
• cwnd instead set to 1 MSS;
• window then grows exponentially
• to a threshold, then grows linearly

Philosophy:
 3 dup ACKs indicates network capable of delivering some segments
 timeout indicates a “more alarming” congestion scenario
Summary: TCP Congestion Control

[state diagram: slow start, congestion avoidance, fast recovery]

slow start (initial state: cwnd = 1 MSS, ssthresh = 64 KB, dupACKcount = 0):
• new ACK: cwnd = cwnd + MSS; dupACKcount = 0; transmit new segment(s), as allowed
• duplicate ACK: dupACKcount++
• cwnd > ssthresh: enter congestion avoidance
• timeout: ssthresh = cwnd/2; cwnd = 1 MSS; dupACKcount = 0; retransmit missing segment
• dupACKcount == 3: ssthresh = cwnd/2; cwnd = ssthresh + 3; retransmit missing segment; enter fast recovery

congestion avoidance:
• new ACK: cwnd = cwnd + MSS (MSS/cwnd); dupACKcount = 0; transmit new segment(s), as allowed
• duplicate ACK: dupACKcount++
• timeout: ssthresh = cwnd/2; cwnd = 1 MSS; dupACKcount = 0; retransmit missing segment; enter slow start
• dupACKcount == 3: ssthresh = cwnd/2; cwnd = ssthresh + 3; retransmit missing segment; enter fast recovery

fast recovery:
• duplicate ACK: cwnd = cwnd + MSS; transmit new segment(s), as allowed
• new ACK: cwnd = ssthresh; dupACKcount = 0; enter congestion avoidance
• timeout: ssthresh = cwnd/2; cwnd = 1 MSS; dupACKcount = 0; retransmit missing segment; enter slow start
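The state machine on this slide can be sketched in code. A simplified, illustrative Python sketch (class and method names are my own; cwnd and ssthresh are kept in MSS units rather than bytes, and segment transmission is omitted):

```python
MSS = 1

class TcpCongestion:
    """Toy model of the slide's FSM: slow start / congestion avoidance / fast recovery."""

    def __init__(self):
        self.cwnd = 1 * MSS
        self.ssthresh = 64          # "64 KB" on the slide; MSS units here
        self.dup_acks = 0
        self.state = "slow_start"

    def on_new_ack(self):
        self.dup_acks = 0
        if self.state == "fast_recovery":
            self.cwnd = self.ssthresh            # deflate window
            self.state = "congestion_avoidance"
        elif self.state == "slow_start":
            self.cwnd += MSS                     # exponential growth
            if self.cwnd >= self.ssthresh:
                self.state = "congestion_avoidance"
        else:                                    # congestion avoidance
            self.cwnd += MSS * MSS / self.cwnd   # ~ +1 MSS per RTT

    def on_dup_ack(self):
        if self.state == "fast_recovery":
            self.cwnd += MSS                     # window inflation per dup ACK
            return
        self.dup_acks += 1
        if self.dup_acks == 3:                   # triple duplicate ACK
            self.ssthresh = self.cwnd / 2
            self.cwnd = self.ssthresh + 3
            self.state = "fast_recovery"

    def on_timeout(self):
        self.ssthresh = self.cwnd / 2
        self.cwnd = 1 * MSS
        self.dup_acks = 0
        self.state = "slow_start"
```

For example, three duplicate ACKs from the initial state move the sketch into fast recovery, and the next new ACK drops it back into congestion avoidance with cwnd = ssthresh.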
Refinement

Q: when should the exponential increase switch to linear?
A: when cwnd gets to 1/2 of its value before timeout.

Implementation:
• variable ssthresh
• on loss event, ssthresh is set to 1/2 of cwnd just before loss event
TCP throughput

what’s the average throughput of TCP as a function of window size and RTT?
• ignore slow start
• let W be the window size when loss occurs
• when window is W, throughput is W/RTT
• just after loss, window drops to W/2, throughput to W/(2 RTT)
• average throughput: 0.75 W/RTT
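The 0.75 W/RTT figure comes from averaging the sawtooth between W/2 and W. A quick numeric check (W and RTT values are illustrative):

```python
W, RTT = 16.0, 0.1           # illustrative window (segments) and RTT (seconds)

# between losses, the window ramps linearly from W/2 back up to W
samples = [W / 2 + i * (W / 2) / 1000 for i in range(1001)]
avg_throughput = sum(w / RTT for w in samples) / len(samples)
print(avg_throughput)         # ~120.0, i.e. 0.75 * W / RTT
```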
TCP Fairness

fairness goal: if K TCP sessions share same bottleneck link of bandwidth R, each should have average rate of R/K

[figure: TCP connection 1 and TCP connection 2 share a bottleneck router of capacity R]
Why is TCP fair?

two competing sessions:
• additive increase gives slope of 1, as throughput increases
• multiplicative decrease decreases throughput proportionally

[figure: Connection 1 throughput vs Connection 2 throughput, each axis 0 to R; alternating congestion-avoidance additive increase and loss-triggered window halving moves the pair toward the equal bandwidth share line]
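The convergence argument above can be simulated. A minimal sketch, assuming synchronized losses and equal additive steps (all constants are illustrative):

```python
R = 100.0                    # bottleneck capacity
x1, x2 = 80.0, 10.0          # two flows with very unequal starting rates

for _ in range(500):
    if x1 + x2 > R:          # loss: both halve (multiplicative decrease)
        x1, x2 = x1 / 2, x2 / 2
    else:                    # both gain equally (additive increase)
        x1, x2 = x1 + 1, x2 + 1

print(round(x1 - x2, 4))     # gap shrinks toward 0: halving shrinks the
                             # difference, additive steps preserve it
```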
Fairness (more)

Fairness and UDP:
• multimedia apps often do not use TCP
• do not want rate throttled by congestion control
• instead use UDP: pump audio/video at constant rate, tolerate packet loss

Fairness and parallel TCP connections:
• nothing prevents app from opening parallel connections between 2 hosts
• web browsers do this
• example: link of rate R supporting 9 connections;
• new app asks for 1 TCP, gets rate R/10
• new app asks for 11 TCPs, gets R/2!
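The slide's arithmetic, as a quick check (assuming the idealized model that the link rate R is shared equally per TCP connection):

```python
R = 1.0        # normalized link rate
existing = 9   # connections already on the link

# new app opens 1 connection: 10 equal shares, it gets one of them
share_one = R / (existing + 1)
print(share_one)       # 0.1 -> R/10

# new app opens 11 parallel connections: it holds 11 of 20 shares
share_eleven = 11 * R / (existing + 11)
print(share_eleven)    # 0.55 -> about R/2
```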
Chapter 3: Summary

• principles behind transport layer services:
• multiplexing, demultiplexing
• reliable data transfer
• flow control
• congestion control
• instantiation and implementation in the Internet
• UDP
• TCP

Next:
• leaving the network “edge” (application, transport layers)
• into the network “core”
