Transport Layer: TCP/UDP: Chapter 24, 16

The document discusses the transport layer and transport layer protocols TCP and UDP. It describes the purposes of transport layer services like multiplexing, demultiplexing, reliable data transfer, flow control and congestion control. It explains that TCP provides reliable, in-order delivery using connection-oriented transport, while UDP provides unreliable delivery using connectionless transport. The key aspects of multiplexing, demultiplexing, and the TCP and UDP segment formats are also summarized.

Uploaded by naaz_pinu
Copyright
© Attribution Non-Commercial (BY-NC)

Transport Layer: TCP/UDP

Chapter 24, 16

Transport Layer
• Purpose of transport layer services:
– multiplexing/demultiplexing
– reliable data transfer
– flow control
– congestion control
• Connection-less transport: UDP
• Connection-oriented transport: TCP
– reliable transfer
– flow control
– connection management

1
Transport services and protocols
• provide logical communication
between application processes
running on different hosts
• transport protocols run in
end systems via software
• transport vs network layer
services:
– network layer: data transfer
between end systems
– transport layer: data transfer
between processes
• relies on, enhances, network
layer services

[Figure: two end hosts (application, transport, network, data link, physical layers) connected across routers; logical end-end transport runs between the hosts’ transport layers]

Transport-layer protocols
Internet transport services:
• reliable, in-order unicast
delivery (TCP)
– congestion control
– flow control
– connection setup
• unreliable (“best-effort”),
unordered unicast or
multicast delivery: UDP
• services not available:
– real-time
– bandwidth guarantees
– reliable multicast

[Figure: same logical end-end transport diagram as on the previous slide]
2
Multiplexing/demultiplexing
Recall: segment - unit of data
exchanged between
transport layer entities
– aka TPDU: transport
protocol data unit or
“packet”
Demultiplexing: delivering
received segments to
correct app layer processes

[Figure: application-layer data M plus segment header Ht form a segment, carried in a datagram with network header Hn; at the receiving host, segments are delivered up to the correct processes P1–P4]

Multiplexing/demultiplexing
Multiplexing:
gathering data from multiple
app processes, enveloping
data with header (later used
for demultiplexing)
multiplexing/demultiplexing:
• based on sender, receiver port
numbers, IP addresses
– source, dest port #s in each
segment
– recall: well-known port numbers
for specific applications

[Figure: TCP/UDP segment format, 32 bits wide: source port # and dest port #, other header fields, application data (message)]

3
Multiplexing/demultiplexing: examples

port use: simple telnet app
– host A to host B: source port x, dest. port 23
– host B to host A: source port 23, dest. port x

port use: Web server
– Web client host A to Web server B: source port x, dest. port 80 (Source IP: A, Dest IP: B)
– Web client host C to server B: source ports x and y, dest. port 80 (Source IP: C, Dest IP: B)
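The port-based delivery in these examples can be seen directly with the sockets API. A minimal loopback sketch (ephemeral ports stand in for the well-known 23 and 80; the message strings are made up):

```python
import socket

# Two receiver sockets stand in for two server processes.
telnet_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
telnet_sock.bind(("127.0.0.1", 0))
web_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
web_sock.bind(("127.0.0.1", 0))

client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"to telnet", telnet_sock.getsockname())
client.sendto(b"to web", web_sock.getsockname())

# The OS demultiplexes each datagram to the socket bound to its dest port #.
telnet_msg = telnet_sock.recvfrom(1024)[0]
web_msg = web_sock.recvfrom(1024)[0]
print(telnet_msg, web_msg)
```

Each datagram arrives only at the socket whose bound port matches the destination port number in the segment header.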

UDP: User Datagram Protocol [RFC 768]
• “no frills,” “bare bones”
Internet transport protocol
• “best effort” service, UDP
segments may be:
– lost
– delivered out of order to
app
• connectionless:
– no handshaking between
UDP sender, receiver
– each UDP segment handled
independently of others

Why is there a UDP?
• no connection establishment
(which can add delay)
• simple: no connection state
at sender, receiver
• small segment header
• no congestion control: UDP
can blast away as fast as
desired
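The “no handshaking” point shows up in code: a UDP sender can transmit immediately, with no connection setup. A minimal loopback sketch (over a real network these datagrams could be lost or reordered; loopback just happens to deliver them in order):

```python
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # ephemeral port
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# No connection establishment: each sendto() ships an independent datagram.
for i in range(3):
    sender.sendto(b"segment %d" % i, addr)

# UDP itself promises nothing about delivery or ordering.
received = [receiver.recvfrom(1024)[0] for _ in range(3)]
print(received)
```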

4
UDP: more
• often used for streaming
multimedia apps
– Controversial: no
congestion control
• other UDP uses
(why?):
– DNS
– SNMP
• reliable transfer over
UDP: add reliability at
application layer
– application-specific
error recovery!

[Figure: UDP segment format, 32 bits wide: source port # and dest port #, then length (of UDP segment in bytes, including header) and checksum, then application data (message)]

UDP checksum
Goal: detect “errors” (e.g., flipped bits) in transmitted
segment
Sender:
• treat segment contents as
sequence of 16-bit integers
• checksum: addition (1’s
complement sum) of
segment contents
• sender puts checksum value
into UDP checksum field
Receiver:
• compute checksum of received
segment
• check if computed checksum
equals checksum field value:
– NO - error detected
• Toss data OR
• Pass to app with warning
– YES - no error detected.
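The sender and receiver steps above can be sketched as follows (a simplified sketch: real UDP also sums a pseudo-header of IP addresses, protocol, and length, which is omitted here):

```python
def ones_complement_sum16(data: bytes) -> int:
    """1's complement sum of data viewed as big-endian 16-bit integers."""
    if len(data) % 2:
        data += b"\x00"                           # pad odd-length data
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold carry back in
    return total

def udp_checksum(segment: bytes) -> int:
    # Sender: complement of the 1's complement sum goes in the checksum field.
    return ~ones_complement_sum16(segment) & 0xFFFF

seg = b"hello"
ck = udp_checksum(seg)
# Receiver: sum of contents plus checksum field must come out all ones.
ok = ones_complement_sum16(seg) + ck == 0xFFFF
print(hex(ck), ok)
```

A flipped bit in the received segment changes the sum, so the all-ones check fails and the error is detected.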

5
Connection Oriented Transport
Protocol Mechanisms
• Properties of connection-oriented Transport
Protocols:
– Logical connection
– Establishment
– Maintenance, termination
– Reliable
– e.g. TCP

Connection-Oriented Transport
via Reliable Network Layer
• Transport Layer Services like TCP are complicated – to
start, let’s first assume we are working with a reliable
network layer service
– e.g. reliable packet switched network using X.25
– e.g. frame relay using LAPF control protocol
– e.g. IEEE 802.3 using connection oriented LLC service
– NOT IP! IP is unreliable
• Assume arbitrary length message
• Transport service is end to end protocol between two
systems on same network

6
Issues in a Simple Transport
Protocol
• If we have a reliable network layer, then the
transport layer must consider:
– Addressing
– Multiplexing
– Flow Control
– Connection establishment and termination

Addressing
• Target user specified by:
– User identification
• Usually host, port
– Called a socket in TCP/UDP
• Port represents a particular transport service (TS), e.g. HTTPD
– Transport protocol identification
• Generally only one per host
• If more than one, then usually one of each type
– Specify transport protocol (TCP, UDP)
– Host address
• An attached network device
• In an internet, a global internet address (IP Address)
• A well-known address or lookup via name server

7
Multiplexing
• Multiple users employ same transport protocol
• User identified by port number or service access
point (SAP)
• Described previously

Flow Control
• Can be more difficult than flow control at the data link layer –
data is likely traveling across many networks, not just one
network. Some potential problems:
– Longer transmission delay between transport entities compared
with actual transmission time
• Delay in communication of flow control info
– Variable transmission delay
• Difficult to use timeouts
• Flow may be controlled because:
– The receiving user cannot keep up
– The receiving transport entity cannot keep up
– If either happens, the result is a buffer that can fill up and
eventually lose data

8
Model of Frame Transmission
Diagram for Frame/Packet Transmission

We’ll use this model to discuss flow control issues

Coping with Flow Control Requirements (1)
• Do nothing
– Segments that overflow are discarded
– Sending transport entity will fail to get ACK and will
retransmit
• Thus further adding to incoming data and could exacerbate
the flow control problem
• Refuse further segments from network layer
– Clumsy
– Multiplexed connections are controlled on aggregate
flow

9
Coping with Flow Control
Requirements (2)
• One protocol: Stop-and-Wait
– Sender must wait for recipient to send ACK before
sending the next packet
• Not very efficient usage of the network, only one
outstanding message can be in transit at a time
– Works well on reliable network
• Failure to receive ACK is taken as flow control indication
– Does not work well on unreliable network
• Cannot distinguish between lost segment and flow control

Coping with Flow Control


• Credit-Based Scheme
– Credit = How much data sender can transmit
• Sliding window idea, sender can send a number of frames up to the
window size
• Receiver sends single ACK that acknowledges all previous frames
• Window size varies based on credit available
• Receiver can control credit of the sender
– In acknowledgement, receiver could change the window size
– Advantages
• Better network usage: allows more outstanding messages to be
in transit than Stop-and-Wait
• More effective on unreliable network
– Decouples flow control from ACK
• May ACK without granting credit and vice versa
– Each octet has sequence number
– Each transport segment has a sequence number,
acknowledgement number and window size in header

10
Sliding Window Enhancements
• Receiver can acknowledge frames without permitting
further transmission (Receive Not Ready)
• Must send a normal acknowledge to resume
• If full duplex two-way communications, we need two
windows: one for transmit and one for receive
– Piggybacking – if sending data and acknowledgement frame,
combine together

• More efficient than stop-and-wait since many frames may
be in the pipeline

Example Sliding Window

[Figure: sliding window exchange with a fixed window size; RR N = Receive Ready on N]

11
Use of Header Fields
• For credit-based window size
– When sending, Sequence Number is that of first octet
in segment
– ACK includes AN=i (Acknowledgement Number),
W=j (Window Size)
– All octets through SN=i-1 acknowledged
• Next expected octet is i
– Permission to send additional window of W=j octets
• i.e. octets through i+j-1
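The header-field arithmetic above can be checked with a small helper (the function name and example numbers are illustrative, not from the slides):

```python
def permitted_octets(an: int, w: int) -> range:
    """Octets a sender may transmit after receiving AN=i, W=j.

    AN=i acknowledges all octets through i-1 (next expected octet is i);
    a credit of W=j octets permits sending octets i through i+j-1.
    """
    return range(an, an + w)

window = permitted_octets(an=1001, w=400)
print(window.start, window.stop - 1)   # first and last permitted octet
```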

Credit Allocation Example

[Figure: credit allocation exchange; A and B initially “synched”, 200 octets per segment]

12
Establishment and Termination
• Even with a reliable network service, both ends
need to “set up” the connection:
– Allow each end to know the other exists and is
listening
– Negotiation of optional parameters
• Maximum Segment Size
• Maximum Window Size
– Triggers allocation of transport entity resources
• Buffer space allocated
• Entry in connection tables

Connection State Diagram – Reliable Network Service

[Figure: connection state diagram; start state marked; SYN = Sync, FIN = Finish]

13
Connection Establishment

[Figure: setting up the connection]
• What if a SYN received while not in the Listen
state?
– Reject with RST (Reset)
– Queue request until matching open issued
– Signal TS user to notify of pending request

14
Termination
• Connection can be terminated by sending FIN
• Graceful termination
– CLOSE_WAIT state and FIN_WAIT must accept
incoming data until FIN received
– Ensures both sides have received all outstanding data
and that both sides agree to connection termination
before actual termination

Unreliable Network Service
• Now let’s look at the more general case if we are building
our transport service on top of an unreliable network
layer
• An unreliable network service makes the transport layer
much more complicated if we want to ensure reliability
• Examples of unreliable network services:
– Internet using IP,
– Frame Relay using LAPF
– IEEE 802.3 using unacknowledged connectionless LLC
• Segments may get lost
• Segments may arrive out of order

15
Problems
• Ordered Delivery
• Retransmission strategy
• Duplication detection
• Flow control
• Connection establishment
• Connection termination
• Crash recovery

Ordered Delivery
• Segments may arrive out of order
• Number segments sequentially
• TCP numbers each octet sequentially
• Segments are numbered by the first octet number in
the segment

• TCP actually numbers segments starting at a random value!
– Minimizes possibility that a segment still in the network
from an earlier, terminated connection between the same
hosts (which would also have to happen to use the same
port numbers) is mistaken for a valid segment in a later
connection

16
Retransmission Strategy
• Need to re-transmit when
– Segment damaged in transit
– Segment fails to arrive
• Receiver must acknowledge successful receipt
• Use cumulative acknowledgement
• Time out waiting for ACK triggers
re-transmission

• How long to wait until re-transmitting?
– Too short: duplicate data
– Too long: Unnecessary delay delivering data

Timer Value
• Fixed timer
– Based on understanding of network behavior
– Can not adapt to changing network conditions
– Too small leads to unnecessary re-transmissions
– Too large and response to lost segments is slow
– Should be a bit longer than Round Trip Time (RTT)
• Adaptive scheme
– E.g. set timer to average of previous ACKs
– Problems:
• Sender may not ACK immediately
• Cannot distinguish between ACK of original segment and re-
transmitted segment
• Conditions may change suddenly

17
Duplication Detection
• If ACK lost, segment is re-transmitted
• Receiver must recognize duplicates
• Duplicate received prior to closing connection
– Receiver assumes ACK lost and ACKs duplicate
– Sender must not get confused with multiple ACKs
– Sequence number space large enough to not cycle within
maximum life of segment

• Also possible to receive a duplicate after closing the
connection!

Incorrect Duplicate Detection

[Figure: example of incorrect duplicate detection; note the cycle back to SN=1. Illustrates the need for a sequence number space large enough not to cycle within the maximum possible segment lifetime.]

18
Flow Control
• Can use credit allocation described earlier
• Generally little harm if a single ACK/Credit segment is
lost, will resynchronize the next time
• Problem if B sends AN=i, W=0 closing window
• Later, B sends AN=i, W=j to reopen, but this is lost
• Sender thinks window is closed, receiver thinks it is open
• Solution: use window timer
• If timer expires, send something to break the deadlock
– Could be re-transmission of previous segment

Connection Establishment
• Two way handshake
– A sends SYN, B replies with SYN
– Lost SYN handled by re-transmission
• Can lead to duplicate SYNs
– Ignore duplicate SYNs once connected
• Lost or delayed data segments can cause
connection problems
– Segment from old connections

19
Two Way Handshake: Obsolete Data Segment

[Figure: an obsolete data segment from an old connection arrives and is wrongly accepted]

Two Way Handshake: Obsolete SYN Segment

[Figure: A wants a new connection and picks SN k, but because of an obsolete SYN, B expects SN j]
20
Connection Establishment –
Three Way Handshake
• Solution: Explicitly acknowledge each other’s
SYN and sequence number
– Use SYN i
– Need ACK to include i

• Called the Three Way Handshake

Three Way Handshake: Examples

[Figure: example three-way handshake exchanges]

21
Three Way Handshake: State Diagram

[Figure: three-way handshake state diagram]

Connection Termination
• Same problems we had with connection
establishment can also occur with connection
termination
– Lost or obsolete FIN segment
– Can lose last data segment if FIN arrives before last
data segment
• Solution: associate sequence number with FIN
• Receiver waits for all segments before FIN
sequence number
• Must explicitly ACK FIN

22
Graceful Close
• Send FIN i and receive AN i
• Receive FIN j and send AN j
• Wait twice maximum expected segment lifetime

Crash Recovery
• If the transport service crashes and restarts, after restart
all state info is lost
• Connection is half open
– Side that did not crash still thinks it is connected
• Close connection using persistence timer
– Wait for ACK for (time out) * (number of retries)
– When expired, close connection and inform user
• Send RST i in response to any segment i arriving
• User must decide whether to reconnect
– Problems with lost or duplicate data

23
TCP: Overview  RFCs: 793, 1122, 1323, 2018, 2581
• point-to-point:
– one sender, one receiver
• reliable, in-order byte
stream:
– no “message boundaries”
• pipelined:
– TCP congestion and flow
control set window size
• send & receive buffers
• full duplex data:
– bi-directional data flow in
same connection
– MSS: maximum segment
size
• connection-oriented:
– handshaking (exchange of
control msgs) init’s sender,
receiver state before data
exchange
• flow controlled:
– sender will not overwhelm
receiver

[Figure: the sending application writes data through its socket door into TCP’s send buffer; segments carry it to the receiver’s TCP receive buffer, from which the receiving application reads through its socket door]

TCP Properties
• stream orientation. stream of OCTETS (bytes) passed
between send/ recv
• byte stream is full duplex
– think of it as two independent streams joined with a
piggybacking mechanism
– piggybacking - one data stream has control info for the other
data stream (going the other way)
• unstructured stream
– TCP doesn’t show packet boundaries to applications
– But you can still structure your message if you want
– Recall usage with sockets:
• One write() call to send data
• May require multiple read() calls

24
TCP segment structure

[Figure: TCP segment format, 32 bits wide:
– source port # | dest port #
– sequence number (counting by bytes of data, not segments!)
– acknowledgement number
– head len | not used | U A P R S F | rcvr window size (# bytes rcvr willing to accept)
– checksum (Internet checksum, as in UDP) | ptr urgent data
– Options (variable length)
– application data (variable length)
Flags: URG = urgent data (generally not used); ACK = ACK # valid; PSH = push data now (generally not used); RST, SYN, FIN = connection estab (setup, teardown commands)]

TCP Fields
• Source, Destination Port: 16 bits each
• Sequence Number: 32 bits
– Sequence # of first data octet in the segment, initialized
randomly as described earlier
• ACK Number: 32 bits
– Piggybacked ACK, contains sequence number of the next data
octet the receiver expects
• Header Len: 4 bits
– Number of 32 bit words in the header
• Not Used: 6 bits for future use

25
TCP Fields
• Flags – 6 bits
– URG – Urgent Pointer field significant
– ACK – Ack field significant
– PSH – Push (flush or “push” buffer now, send data to app)
– RST – Reset connection
– SYN – Synchronize sequence numbers
– FIN – No more data
• Window – 16 bits
– Flow control credit allocation
• Checksum – 16 bits
– One’s complement sum as in UDP
• Urgent Pointer – 16 bits
– Last octet in a seq of “urgent” data. Sometimes not interpreted. Urgent
data should be processed now, even before any data sitting in the buffer
(e.g. send control-c to terminate)
• Options – Variable
– Support for timestamping, negotiating MSS
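These fields can be unpacked straight from the 20-byte fixed header. A sketch (the input is a hand-built SYN segment, not a real capture):

```python
import struct

def parse_tcp_header(data: bytes) -> dict:
    # Fixed 20-byte portion of the TCP header, fields in network byte order.
    src, dst, seq, ack, off_flags, window, checksum, urgent = struct.unpack(
        "!HHIIHHHH", data[:20])
    return {
        "src_port": src,
        "dst_port": dst,
        "seq": seq,
        "ack": ack,
        "data_offset": off_flags >> 12,   # header length in 32-bit words
        "flags": off_flags & 0x3F,        # URG ACK PSH RST SYN FIN bits
        "window": window,
        "checksum": checksum,
        "urgent_ptr": urgent,
    }

# Hypothetical SYN segment: port 1234 -> 80, seq=1000, data offset 5, SYN set.
hdr = struct.pack("!HHIIHHHH", 1234, 80, 1000, 0, (5 << 12) | 0x02, 65535, 0, 0)
fields = parse_tcp_header(hdr)
print(fields)
```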

TCP seq. #’s and ACKs
Seq. #’s:
– byte stream “number”
of first byte in
segment’s data
ACKs:
– seq # of next byte
expected from other
side
– cumulative ACK
Q: How does the receiver
handle out-of-order
segments?
– A: TCP spec doesn’t
say - up to
implementer

[Figure: simple telnet scenario between Host A and Host B. User types ‘C’: A sends Seq=42, ACK=79, data = ‘C’. Host B ACKs receipt of ‘C’ and echoes back ‘C’: Seq=79, ACK=43, data = ‘C’. Host A ACKs receipt of the echoed ‘C’: Seq=43, ACK=80. Note piggybacking!]

26
TCP: retransmission scenarios

[Figure: lost ACK scenario. Host A sends Seq=92, 8 bytes data; Host B’s ACK=100 is lost; A times out and retransmits Seq=92, 8 bytes data; B replies ACK=100 again.]

[Figure: premature timeout, cumulative ACKs. Host A sends Seq=92, 8 bytes data, then Seq=100, 20 bytes data; B replies ACK=100 and ACK=120, but A’s timer for the first segment expires too early and A retransmits Seq=92, 8 bytes data; B’s cumulative ACK=120 already covers both segments.]

Sender must be smart enough to ignore duplicate ACKs

TCP Flow Control
flow control: sender won’t overrun receiver’s buffers by
transmitting too much, too fast

receiver: explicitly informs sender of (dynamically
changing) amount of free buffer space
– RcvWindow field in TCP segment
sender: keeps the amount of transmitted, unACKed data
less than most recently received RcvWindow

[Figure: receiver buffering. RcvBuffer = size of TCP Receive Buffer; RcvWindow = amount of spare room in Buffer]

27
TCP Round Trip Time and Timeout
Q: how to set TCP timeout value?
• longer than RTT
– note: RTT will vary
• too short: premature timeout
– unnecessary retransmissions
• too long: slow reaction to segment loss

Q: how to estimate RTT?
• SampleRTT: measured time from segment transmission until
ACK receipt
– ignore retransmissions, cumulatively ACKed segments
• SampleRTT will vary, want estimated RTT “smoother”
– use several recent measurements, not just current SampleRTT

TCP Round Trip Time and Timeout

EstimatedRTT = (1-x)*EstimatedRTT + x*SampleRTT

• Exponential weighted moving average
• influence of given sample decreases exponentially fast
• typical value of x: 0.1

Setting the timeout
• EstimatedRTT plus “safety margin”
• large variation in EstimatedRTT -> larger safety margin

Timeout = EstimatedRTT + 4*Deviation
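The two formulas above can be sketched together. Deviation is taken here as an EWMA of |SampleRTT - EstimatedRTT| (the usual companion estimate; the slide does not define it), and the sample values are made up:

```python
def rtt_estimator(samples, x=0.1, initial_rtt=1.0):
    """EstimatedRTT = (1-x)*EstimatedRTT + x*SampleRTT, plus a
    deviation EWMA used for Timeout = EstimatedRTT + 4*Deviation."""
    est, dev = initial_rtt, 0.0
    for s in samples:
        dev = (1 - x) * dev + x * abs(s - est)   # smoothed deviation
        est = (1 - x) * est + x * s              # smoothed RTT
    return est, est + 4 * dev                    # (EstimatedRTT, Timeout)

est, timeout = rtt_estimator([0.8, 1.2, 1.0, 2.0])
print(round(est, 3), round(timeout, 3))
```

Note how the late 2.0 s sample widens the safety margin: the timeout grows much more than the smoothed RTT estimate does.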

28
TCP Connection Management

Three way handshake:
Step 1: client end system sends TCP SYN control segment
to server
– specifies initial seq #
Step 2: server end system receives SYN, replies with
SYNACK control segment
– ACKs received SYN
– allocates buffers
– specifies server-> receiver initial seq. #

TCP Connection Management (cont.)

Closing a connection:
Step 1: client end system sends TCP FIN control segment
to server
Step 2: server receives FIN, replies with ACK. Closes
connection, sends FIN.
Step 3: client receives FIN, replies with ACK.
– Enters “timed wait” - will respond with ACK to
received FINs
Step 4: server receives ACK. Connection closed.

[Figure: client sends FIN after close; server replies ACK, then closes and sends its own FIN; client replies ACK and enters timed wait before its side closes]
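Through the sockets API, both the handshake and this teardown happen inside the kernel; connect(), accept(), and close() are what trigger them. A minimal loopback sketch:

```python
import socket
import threading

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)

def serve():
    conn, _ = server.accept()        # handshake completes (SYN/SYNACK/ACK)
    conn.sendall(b"hi")
    conn.close()                     # server side sends its FIN

t = threading.Thread(target=serve)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())  # sends SYN, waits for SYNACK
data = client.recv(16)
client.close()                        # FIN/ACK exchange; client enters timed wait
t.join()
server.close()
print(data)
```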

29
Principles of Congestion Control
Congestion:
• informally: “too many sources sending too much data too
fast for network to handle”
• different from flow control!
• manifestations:
– lost packets (buffer overflow at routers)
– long delays (queueing in router buffers)
• A top-10 problem!

Causes/costs of congestion: scenario 1
• two senders, two receivers
• one router, infinite buffers
• no retransmission

• large delays when congested
• maximum achievable throughput

30
Causes/costs of congestion: scenario 2
• one router, finite buffers
• sender retransmission of lost packet

[Figure: senders push “offered load” into a shared router with finite buffers]

Causes/costs of congestion: scenario 2
• if: λin = λout (goodput)
• retransmission only when loss: λ’in > λout
• Even worse: retransmission of delayed (not lost) packet makes
λ’in larger than the previous case for the same λout
“costs” of congestion:
• more work (retrans) for given “goodput”
• unneeded retransmissions: link carries multiple copies of pkt

31
Causes/costs of congestion: scenario 3
• four senders
• multihop paths
• timeout/retransmit
Q: what happens as λin and λ’in increase?

Causes/costs of congestion: scenario 3

Another “cost” of congestion:
• when packet dropped, any upstream transmission capacity used for
that packet was wasted!
• Throughput goes to 0 as offered load grows very large
• In everyone’s best interest to “back off” on transmission

32
Approaches towards congestion control

Two broad approaches towards congestion control:

End-end congestion control:
• no explicit feedback from network
• congestion inferred from end-system observed loss, delay
• approach taken by TCP

Network-assisted congestion control:
• routers provide feedback to end systems
– single bit indicating congestion (SNA, DECbit,
TCP/IP ECN, ATM)
– explicit rate sender should send at

TCP Congestion Control

• end-end control (no network assistance)
• transmission rate limited by congestion window size, Congwin,
over segments
• w segments, each with MSS bytes sent in one RTT:

throughput = w * MSS / RTT  Bytes/sec
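Plugging representative numbers into the formula (the 1460-byte MSS and 100 ms RTT are illustrative, not from the slides):

```python
def tcp_throughput(w: int, mss: int, rtt: float) -> float:
    """throughput = w * MSS / RTT, in bytes/sec."""
    return w * mss / rtt

# window of 10 segments, 1460-byte MSS, 100 ms RTT:
rate = tcp_throughput(w=10, mss=1460, rtt=0.1)
print(rate)   # 146000.0 bytes/sec
```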

33
TCP congestion control:
• “probing” for usable bandwidth:
– ideally: transmit as fast as
possible (Congwin as
large as possible) without
loss
– Reality:
– increase Congwin until
loss (congestion)
– loss: decrease Congwin,
then begin probing
(increasing) again
• two “phases”
– slow start
– congestion avoidance
• important variables:
– Congwin
– threshold: defines boundary
between the slow start phase
and the congestion avoidance
phase

TCP Slowstart

Slowstart algorithm
initialize: Congwin = 1
for (each segment ACKed)
Congwin++
until (loss event OR
CongWin > threshold)

• exponential increase (per RTT) in
window size (not so slow!)
• loss event: timeout (Tahoe TCP)
and/or three duplicate ACKs
(Reno TCP)
• (What causes duplicate ACKs?)

[Figure: Host A sends one segment, then two segments, then four segments - the window doubles every RTT]

34
TCP Congestion Avoidance

Congestion avoidance
/* slowstart is over */
/* Congwin > threshold */
Until (loss event) {
every w segments ACKed:
Congwin++
}
threshold = Congwin/2
Congwin = 1
perform slowstart
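Putting the two phases together, Congwin's per-RTT evolution can be traced with a Tahoe-style sketch (the threshold and loss timing are made-up inputs):

```python
def congwin_trace(threshold: int, loss_at_rtt: int, rtts: int):
    """Per-RTT Congwin trace: slow start doubles the window until it
    reaches threshold, congestion avoidance then adds 1 per RTT; on a
    loss event, threshold = Congwin/2 and slow start restarts at 1."""
    congwin, trace = 1, []
    for rtt in range(rtts):
        trace.append(congwin)
        if rtt == loss_at_rtt:
            threshold = max(congwin // 2, 1)   # halve the threshold
            congwin = 1                        # back to slow start
        elif congwin < threshold:
            congwin *= 2                       # slow start: exponential
        else:
            congwin += 1                       # avoidance: additive
    return trace

print(congwin_trace(threshold=8, loss_at_rtt=6, rtts=10))
# [1, 2, 4, 8, 9, 10, 11, 1, 2, 4]
```

The trace shows the characteristic sawtooth: exponential growth to the threshold, linear probing, then collapse to 1 on loss.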

TCP Fairness

TCP congestion avoidance: AIMD
• AIMD: additive increase,
multiplicative decrease
– increase window by
1 per RTT
– decrease window by
factor of 2 on loss
event

Fairness goal: if N TCP
sessions share same
bottleneck link, each
should get 1/N of link
capacity

[Figure: TCP connection 1 and TCP connection 2 share a bottleneck router with link capacity C]

35
Why is TCP fair?
Two competing sessions:
• Additive increase gives slope of 1, as throughput increases
• multiplicative decrease decreases throughput proportionally

[Figure: phase plot of Connection 1 throughput vs Connection 2 throughput, both axes from 0 to the full bandwidth C; the trajectory alternates congestion avoidance (additive increase) with loss (decrease window by factor of 2), moving toward the equal bandwidth share line]

Eventually the two connections fluctuate along equal bandwidth line
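That convergence can be checked numerically with an idealized sketch: both flows add one unit per round, and both halve whenever their combined rate exceeds a made-up link capacity:

```python
def aimd(x1: float, x2: float, capacity: float, rounds: int):
    """Two AIMD flows sharing one bottleneck: additive increase each
    round; multiplicative decrease for both when the link saturates."""
    for _ in range(rounds):
        x1 += 1.0
        x2 += 1.0
        if x1 + x2 > capacity:    # loss event hits both flows
            x1 /= 2.0
            x2 /= 2.0
    return x1, x2

# Start very unequal; the shares converge toward equality.
a, b = aimd(1.0, 40.0, capacity=50.0, rounds=200)
print(round(a, 2), round(b, 2))
```

Additive increase preserves the gap between the two rates while each halving halves it, so the gap shrinks geometrically toward the equal-share line.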

TCP vs. UDP

• When to use TCP
– Need reliable network service
– Want flow, congestion control
• When to use UDP
– Don’t want overhead of TCP
– Don’t want congestion control! I.e. we don’t want to be
“fair”
• Multimedia apps
• Don’t want data rate throttled, but ironically this can lead to unfair
transmission rate and could actually bring all traffic to a halt
• Could also be unfair using TCP by opening multiple parallel
connections (often done with web data)

36
