
Transport Layer

Module 2
Introduction and Transport Layer Services
- Provides logical communication between application-processes running on different hosts.
- Implemented in the end-systems but not in network-routers.
On the sender, the transport-layer
→ receives messages from an application-process
→ converts the messages into segments and
→ passes the segments to the network-layer.
On the receiver, the transport-layer
→ receives segments from the network-layer
→ converts the segments into messages and
→ passes the messages to the application-process.
Relationship between Transport and Network Layers

A transport-layer protocol provides logical communication between processes running on different hosts.
A network-layer protocol provides logical communication between hosts.
Within an end-system, a transport protocol
→ moves messages from application-processes to the network-layer and vice versa
→ but says nothing about how the messages are moved within the network-core.
- The routers do not examine or act on any information appended to the messages by the transport-layer.
Overview of the Transport Layer in the Internet

UDP (User Datagram Protocol)


– UDP provides a connectionless service to the invoking application.
– The UDP provides following 2 services:
» Process-to-process data delivery and
» Error checking.
– UDP is an unreliable service, i.e. it does not guarantee that data will arrive at the destination-process.
TCP (Transmission Control Protocol)
– TCP provides a connection-oriented service to the invoking application.
– The TCP provides following 3 services:
• Reliable data transfer i.e. guarantees data will arrive to
destination-process correctly.
• Congestion control and
• Error checking.
Multiplexing and Demultiplexing

Multiplexing
At the sender, the transport-layer
→ gathers data-chunks at the source-host from different sockets
→ encapsulates data-chunk with header to create segments and
→ passes the segments to the network-layer.
The job of combining the data-chunks from different sockets to create a
segment is called multiplexing.
Demultiplexing
At the receiver, the transport-layer
→ examines the fields in the segment to identify the receiving-socket and
→ directs the segment to the receiving-socket.
The job of delivering the data in a segment to the correct socket is called demultiplexing.
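A toy sketch (not from the slides) of the demultiplexing idea: the transport-layer keeps a table from destination port-numbers to sockets and uses the segment's destination-port field to pick the right one. The port-numbers and the queue-per-socket representation are illustrative assumptions.

# Toy demultiplexer: map destination port-numbers to per-socket delivery queues.
from queue import Queue

socket_table = {19157: Queue(), 46428: Queue()}   # hypothetical bound ports

def demultiplex(segment):
    """segment = (src_port, dst_port, data); deliver data to the socket bound to dst_port."""
    src_port, dst_port, data = segment
    sock_queue = socket_table.get(dst_port)
    if sock_queue is None:
        return False                              # no socket bound to this port: drop it
    sock_queue.put((data, src_port))              # hand the data to the receiving socket
    return True

demultiplex((19157, 46428, b"hello"))             # delivered to the socket bound to port 46428
print(socket_table[46428].get())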
Endpoint Identification

 Each socket must have a unique identifier.


 Each segment must include 2 header-fields to identify
the socket
Source-port-number field (16-bit number: 0
to 65535)
The port-numbers ranging from 0 to 1023 are called well-known port-numbers and are restricted.
ex: HTTP uses port-no 80, FTP uses port-no 21
When we develop a new application, we must assign it a port-number; port-numbers in the range 49152–65535 are known as ephemeral ports.
Destination-port-number field.
Connectionless Multiplexing and Demultiplexing
clientSocket = socket(AF_INET, SOCK_DGRAM)
clientSocket.bind(('', 19157))
At the sender A, the transport-layer
→ creates a segment containing source-port 19157,
destination-port 46428 & data
→ then passes the resulting segment to the network-layer.
At the receiver B, the transport-layer
→ examines the destination-port field in the segment
→ delivers the segment to the socket identified by
port 46428.
A UDP socket is identified by a two-tuple:
– Destination IP address &
– Destination-port-no.
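A minimal sketch of the two sockets described above: host A sends from port 19157 to port 46428 on host B, and B's UDP layer delivers the datagram to whichever socket is bound to port 46428, regardless of which host sent it. Running both ends in one process over the loopback address is an assumption made purely for illustration.

from socket import socket, AF_INET, SOCK_DGRAM

# Host B (receiver): a UDP socket identified by its destination port, 46428.
serverSocket = socket(AF_INET, SOCK_DGRAM)
serverSocket.bind(('', 46428))

# Host A (sender): source port 19157, destination port 46428.
clientSocket = socket(AF_INET, SOCK_DGRAM)
clientSocket.bind(('', 19157))
clientSocket.sendto(b'hello', ('127.0.0.1', 46428))   # loopback address used for illustration

data, addr = serverSocket.recvfrom(2048)   # addr = (source IP, 19157), used only to reply
print(data, addr)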
Connection Oriented Multiplexing and
Demultiplexing
• TCP socket is identified by a four-tuple:
–Source IP address
–Source-port-no
–Destination IP address &
–Destination-port-no.
• clientSocket = socket(AF_INET, SOCK_STREAM)
• clientSocket.connect((serverName, 12000))
• connectionSocket, addr = serverSocket.accept()
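A minimal sketch (single process over the loopback address, port 12000 as in the snippet above) showing the four-tuple at work: two client sockets connect to the same server port, and accept() returns a distinct connection socket for each, distinguished by the client's source IP address and source port.

from socket import socket, AF_INET, SOCK_STREAM

serverSocket = socket(AF_INET, SOCK_STREAM)
serverSocket.bind(('', 12000))
serverSocket.listen(2)

# Two clients connect to the same destination (IP, port 12000)...
clientA = socket(AF_INET, SOCK_STREAM); clientA.connect(('127.0.0.1', 12000))
clientB = socket(AF_INET, SOCK_STREAM); clientB.connect(('127.0.0.1', 12000))

# ...but the server gets a separate connection socket for each, demultiplexed by the
# source (IP address, port) half of the four-tuple.
connectionSocket1, addr1 = serverSocket.accept()
connectionSocket2, addr2 = serverSocket.accept()
print(addr1, addr2)    # same client IP, different ephemeral source ports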
Web Servers and TCP
-A host running a Web-server (ex: Apache) on port
80.
-When clients (ex: browsers) send segments to the
server, all segments will have destination-port 80.
The server distinguishes the segments from the
different clients using two-tuple:
Source IP addresses
Source-port-nos.
The server can use either
Persistent HTTP
Throughout the duration of the persistent
connection the client and server exchange HTTP
messages via the same server socket.
Non-persistent HTTP
A new TCP connection (new socket) is created and closed for every request/response.
This can severely impact the performance of a
busy Web-server.
Connectionless Transport: UDP

• UDP is an unreliable, connectionless protocol.


• It provides following 2 services:
Process-to-process data delivery and
Error checking.
It does not provide flow control, congestion control, or error recovery.
At the sender, UDP
→ takes messages from the application-process
→ attaches source- & destination-port-nos and
→ passes the resulting segment to the network-layer.
At the receiver, UDP
→ examines the destination-port-no in the segment
and
→ delivers the segment to the correct application-
process.
It is suitable for applications that
→ need to send short messages &
→ cannot afford retransmission.
• UDP is suitable for many applications for the
following reasons:
 Finer Application Level Control over what Data
is Sent, and when.
 No Connection Establishment
 No Connection State.
 Small Packet Header Overhead.
Popular Internet applications and their
underlying transport protocols
Application              Application-Layer Protocol   Underlying Transport Protocol
Electronic mail          SMTP                         TCP
Remote terminal access   Telnet                       TCP
Web                      HTTP                         TCP
File transfer            FTP                          TCP
Remote file server       NFS                          Typically UDP
Streaming multimedia     Typically proprietary        UDP or TCP
Internet telephony       Typically proprietary        UDP or TCP
Network management       SNMP                         Typically UDP
Routing protocol         RIP                          Typically UDP
Name translation         DNS                          Typically UDP


UDP Segment Structure
UDP Checksum
• The checksum is used for error-detection.
• The checksum is used to determine whether bits
within the segment have been altered.
• How to calculate the checksum at the sender:
 All the 16-bit words in the segment are added to get a sum (any overflow carry is wrapped around and added back in).
 Then, the 1's complement of the sum is taken as the result.
 Finally, the result is placed in the checksum field of the segment.
Internet checksum: example
– Suppose that we have the following three 16-bit words:
  0110011001100000
  0101010101010101
  1000111100001100
– The sum of the first two 16-bit words is:
  0110011001100000
  + 0101010101010101
  = 1011101110110101
– Adding the third word to the above sum (with the overflow carry wrapped around) gives:
  1011101110110101   → sum of first two 16-bit words
  + 1000111100001100   → third 16-bit word
  = 0100101011000010   → sum of all three 16-bit words
– Taking the 1's complement of the final sum gives the checksum:
  1011010100111101
On the receiver
– All four 16-bit words are added, including the checksum.
• If no errors were introduced into the packet, then the sum will be 1111111111111111.
• If any of the bits is a 0, then errors have been introduced into the packet.
  0110011001100000
  0101010101010101
  1000111100001100
  + 1011010100111101   (checksum)
  = 1111111111111111
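The calculation above maps directly to a few lines of code. Below is a minimal sketch (not from the slides) of the one's-complement sum and checksum in Python, using the three example words:

# One's-complement (Internet) checksum over 16-bit words.
def ones_complement_sum16(words):
    """Add 16-bit words with end-around carry (one's-complement addition)."""
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)   # wrap any overflow carry back in
    return total

def udp_checksum(words):
    """Checksum = 1's complement of the one's-complement sum."""
    return ~ones_complement_sum16(words) & 0xFFFF

words = [0b0110011001100000, 0b0101010101010101, 0b1000111100001100]
cksum = udp_checksum(words)
print(format(cksum, '016b'))                                      # 1011010100111101, as above

# Receiver check: the sum of all words plus the checksum must be all ones.
print(format(ones_complement_sum16(words + [cksum]), '016b'))     # 1111111111111111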
Principles of Reliable Data Transfer
Reliable Data Transfer over a Perfectly Reliable Channel: rdt1.0
• In FSM, following notations are used:
 The arrows indicate the transition of the protocol from one state to
another.
 The event causing the transition is shown above the horizontal line labelling
the transition.
 The action taken when the event occurs is shown below the horizontal line.
 The dashed arrow indicates the initial state.
Reliable Data Transfer over a Channel with Bit
Errors: rdt2.0
The message dictation protocol uses both
→ positive acknowledgements (ACK) and
→ negative acknowledgements (NAK).
The receiver uses these control messages to inform the sender about
→ what has been received correctly and
→ what has been received in error and thus requires retransmission.
Reliable data transfer protocols based on retransmission are known as ARQ (Automatic Repeat reQuest) protocols.
Three additional protocol capabilities are required in ARQ protocols:
1. Error Detection
2. Receiver Feedback
3. Retransmission
Rdt 2.0
Sender Handles Garbled ACK/NAKs: rdt2.1

Problem with rdt2.0:


- If an ACK or NAK is corrupted, the sender cannot know
whether the receiver has correctly received the data or
not.
Solution: The sender resends the current data packet when it receives a garbled ACK or NAK packet.
– Problem: This approach introduces duplicate packets into the channel.
– Solution: Add a sequence-number field to the data packet.
– The receiver then has to check only the sequence-number to determine whether the received packet is a retransmission or a new packet.
A 1-bit sequence-number is sufficient: a retransmission carries the same sequence-number as the previous packet, while a new packet carries the toggled (alternated) sequence-number.
This protocol is rdt2.1.
rdt 2.1 sender
rdt 2.1 receiver
A NAK-free protocol: rdt2.2
• Protocol rdt2.2 achieves the same effect as rdt2.1 using only positive acknowledgments (no NAKs).
• When a corrupted or out-of-order packet is received, the receiver resends an ACK for the last correctly received packet, including that packet's sequence-number.
• A duplicate ACK therefore tells the sender that the packet following the acknowledged one was not received correctly and must be retransmitted.
rdt 2.2 sender
rdt 2.2 receiver
Reliable Data Transfer over a Lossy Channel with Bit Errors:
rdt3.0

Consider data transfer over an unreliable channel in which packet loss may occur.
Two problems must be solved by rdt3.0:
 How to detect packet loss?
 What to do when packet loss occurs?
Solution:
The sender
→ sends one packet & starts a countdown timer and
→ waits for an ACK from the receiver (okay to go ahead).
If the timer expires before the ACK arrives, the sender retransmits the packet.
• How long should the sender wait? At least as long as
 • a round-trip delay between the sender and receiver, plus
 • the amount of time needed to process a packet at the receiver.
• Implementing a time-based retransmission
mechanism requires a countdown timer.
• The timer must interrupt the sender after a given
amount of time has expired.
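A rough sketch (not the textbook FSM) of how such a timer-driven, alternating-bit sender might look on top of a UDP socket, using settimeout() as the countdown timer. The receiver address and the 1-byte sequence-number header are illustrative assumptions.

from socket import socket, AF_INET, SOCK_DGRAM, timeout

RECEIVER = ('receiver.example.com', 46428)   # assumed receiver address
TIMEOUT = 1.0                                # retransmission timer, in seconds

def rdt3_send(messages):
    sock = socket(AF_INET, SOCK_DGRAM)
    sock.settimeout(TIMEOUT)                 # countdown timer: recvfrom raises timeout on expiry
    seq = 0                                  # alternating-bit sequence number
    for msg in messages:
        pkt = bytes([seq]) + msg             # 1-byte header carrying the sequence number
        while True:
            sock.sendto(pkt, RECEIVER)       # send the packet and (implicitly) start the timer
            try:
                ack, _ = sock.recvfrom(2048)
                if ack and ack[0] == seq:    # ACK for the current sequence number
                    break                    # stop the timer; move on to the next message
                # garbled or duplicate ACK: loop retransmits the current packet
            except timeout:
                pass                         # timer expired: loop retransmits the packet
        seq ^= 1                             # toggle the alternating bit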
rdt 3.0 sender
Operation of rdt3.0, the alternating-bit protocol
Pipelined Reliable Data Transfer Protocols

RTT = 30 ms
Transmission rate R = 1 Gbps = 10^9 bits/sec
Packet size L = 1000 bytes = 8000 bits (header + data)
Dtrans = L/R = (8000 bits/packet) / (10^9 bits/sec) = 8 microseconds
The packet is fully received at t = RTT/2 + L/R = 15.008 ms
The ACK is received back at the sender at t = RTT + L/R = 30.008 ms
Usender = (L/R) / (RTT + L/R) = 0.008/30.008 = 0.00027
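The same arithmetic as a quick Python check (values taken from the slide above):

# Stop-and-wait sender utilisation for the numbers above.
RTT = 0.030          # seconds
R = 1e9              # bits/sec
L = 8000             # bits
d_trans = L / R                        # 8e-06 s = 8 microseconds
U_sender = d_trans / (RTT + d_trans)   # ≈ 0.00027
print(d_trans, U_sender)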
Pipelined Reliable Data Transfer Protocols
Continued..
• The sender is allowed to send multiple packets
without waiting for acknowledgments.
• Pipelining has the following consequences:
– The range of sequence-numbers must be increased.
– The sender and receiver may have to buffer more than
one packet.
• Two basic approaches toward pipelined error
recovery can be identified:
• 1) Go-Back-N and 2) Selective repeat.
Stop-and-wait and pipelined sending
Go-Back-N (GBN)

• The sender is allowed to transmit multiple packets without waiting for an acknowledgment.
• But, the sender is constrained to have at most N
unacknowledged packets in the pipeline.
• Where N = window-size which refers maximum no.
of unacknowledged packets in the pipeline
• GBN protocol is called a sliding-window protocol.
Sender’s view of sequence-numbers in Go-Back-N
Extended FSM description of GBN sender
Extended FSM description of GBN receiver
GBN Sender
» Invocation from above
– When rdt_send() is called from above, the sender first checks whether the window is full, i.e. whether there are already N outstanding, unacknowledged packets.
• If the window is not full, the sender creates
and sends a packet.
• If the window is full, the sender simply returns
the data back to the upper layer. This is an
implicit indication that the window is full.
» Receipt of an ACK.
– An acknowledgment for a packet with sequence-number n is taken to be a cumulative acknowledgment, indicating that all packets with a sequence-number up to and including n have been correctly received at the receiver.
» A Timeout Event.
– A timer will be used to recover from lost data or
acknowledgment packets.
• If a timeout occurs, the sender resends all packets that have
been previously sent but that have not yet been
acknowledged.
• If an ACK is received but there are still additional transmitted
but not yet acknowledged packets, the timer is restarted.
• If there are no outstanding unacknowledged packets, the
timer is stopped.
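A compact sketch (not the extended FSM itself) of the GBN sender bookkeeping described above; udt_send and the timer controls are trivial placeholder stubs standing in for the unreliable channel and a real timer.

def udt_send(pkt): print("send", pkt)         # placeholder for the unreliable channel
def start_timer(): pass                        # placeholder timer controls
def stop_timer(): pass

class GBNSender:
    def __init__(self, N):
        self.N = N                  # window size
        self.base = 1               # oldest unacknowledged sequence number
        self.nextseqnum = 1         # next sequence number to use
        self.sndpkt = {}            # buffered copies of sent-but-unACKed packets

    def rdt_send(self, data):
        if self.nextseqnum < self.base + self.N:        # window not full
            pkt = (self.nextseqnum, data)
            self.sndpkt[self.nextseqnum] = pkt
            udt_send(pkt)
            if self.base == self.nextseqnum:
                start_timer()                           # timer runs for the oldest packet
            self.nextseqnum += 1
            return True
        return False                                    # window full: refuse the data

    def ack_received(self, n):
        self.base = n + 1                               # cumulative ACK: everything up to n is done
        if self.base == self.nextseqnum:
            stop_timer()                                # no outstanding packets
        else:
            start_timer()                               # restart for the remaining unACKed packets

    def timeout(self):
        start_timer()
        for seq in range(self.base, self.nextseqnum):   # go back N: resend all unACKed packets
            udt_send(self.sndpkt[seq])

s = GBNSender(N=4)
for i in range(6):
    s.rdt_send(f"data{i}")    # the 5th and 6th calls are refused: window full
s.ack_received(2)             # cumulative ACK advances the window base past packets 1 and 2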
GBN Receiver
• If a packet with sequence-number n is received
correctly and is in order, the receiver
→ sends an ACK for packet n
→ delivers the packet to the upper layer.
• In all other cases, the receiver
→ discards the packet and
→ resends an ACK for the most recently received in-
order packet.
Operation of the GBN Protocol
Selective Repeat (SR)

• Problem with GBN:


– GBN suffers from performance problems.
– When the window-size and bandwidth-delay product are
both large, many packets can be in the pipeline.
– Thus, a single packet error results in retransmission of a
large number of packets.
• Solution: Use Selective Repeat (SR).
Selective-repeat (SR) sender and receiver views of
sequence-number space
SR Sender

1. Data received from above: if the next sequence-number is within the sender's window, the data is packetized and sent; otherwise it is buffered or returned to the upper layer.
2. Timeout: each packet has its own logical timer, and only the timed-out packet is retransmitted.
3. ACK received: the packet is marked as received; if its sequence-number equals the window base, the base is advanced to the smallest unacknowledged sequence-number.
SR Receiver
1. Packet with sequence-number in [rcv_base, rcv_base+N-1] is correctly received: an ACK is returned; out-of-order packets are buffered, and in-order packets (starting at rcv_base) are delivered to the upper layer and the receive window is advanced.
2. Packet with sequence-number in [rcv_base-N, rcv_base-1] is correctly received: an ACK must be generated, even though this packet has already been acknowledged.
3. Otherwise: the packet is ignored.
SR operation
Connection-Oriented Transport: TCP

• The features of TCP


 Connection Oriented
 Runs in the End Systems
 Full Duplex Service
 Point-to-Point
 Three-way Handshake
 Maximum Segment Size (MSS)
 Send & Receive Buffers
TCP Segment Structure
Sequence numbers & Acknowledge numbers

Dividing file data into TCP segments


Telnet: A Case Study for Sequence and
Acknowledgment Numbers
• Telnet is a popular application-layer protocol used
for remote-login.
• Telnet runs over TCP.
• Telnet is designed to work between any pair of
hosts.
Sequence Numbers and Acknowledgment Numbers
Round Trip Time Estimation and Timeout
• TCP maintains a weighted average of the SampleRTT values, which is referred to as EstimatedRTT.
• DevRTT is defined as "an estimate of how much SampleRTT typically deviates from EstimatedRTT."
• If the SampleRTT values have little fluctuation, then DevRTT will be small.
• Setting and Managing the Retransmission Timeout Interval
• Alpha = 0.125 = 1/8
• Beta = 0.25
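The slides quote the recommended values Alpha = 1/8 and Beta = 1/4, but the update rules themselves are not reproduced above; the standard formulas (RFC 6298 / Kurose) are:
EstimatedRTT = (1 - Alpha) * EstimatedRTT + Alpha * SampleRTT
DevRTT = (1 - Beta) * DevRTT + Beta * |SampleRTT - EstimatedRTT|
TimeoutInterval = EstimatedRTT + 4 * DevRTT
A minimal sketch of the same computation in Python (the sample values are made up for illustration):

ALPHA, BETA = 0.125, 0.25

def update_rtt(estimated_rtt, dev_rtt, sample_rtt):
    # Exponentially weighted moving averages of the RTT and its deviation.
    estimated_rtt = (1 - ALPHA) * estimated_rtt + ALPHA * sample_rtt
    dev_rtt = (1 - BETA) * dev_rtt + BETA * abs(sample_rtt - estimated_rtt)
    timeout_interval = estimated_rtt + 4 * dev_rtt
    return estimated_rtt, dev_rtt, timeout_interval

# Feed a few SampleRTT measurements (in ms) through the estimator.
est, dev = 100.0, 25.0
for sample in [106, 120, 94, 130]:
    est, dev, timeout = update_rtt(est, dev, sample)
    print(round(est, 1), round(dev, 1), round(timeout, 1))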
Reliable Data Transfer
IP is unreliable i.e. IP does not guarantee data
delivery.
TCP creates a reliable data-transfer-service on top of
IP’s unreliable-service.
At the receiver, reliable-service means
→ data-stream is uncorrupted
→ data-stream is without duplication and
→ data-stream is in sequence.
A Few Interesting Scenarios
First Scenario: Retransmission due to a lost acknowledgment
Second Scenario: Segment 100 not retransmitted
Third Scenario: A cumulative acknowledgment avoids retransmission of the first segment
Fast Retransmit

• The timeout period can be relatively long. The sender can often detect packet-loss well before the timeout occurs by noting duplicate ACKs.
• A duplicate ACK is an ACK that re-acknowledges a segment for which the sender has already received an earlier acknowledgment.
• If the sender receives three duplicate ACKs for the same data, it performs a fast retransmit: it retransmits the missing segment before that segment's timer expires.
Flow Control
• A flow-control service eliminates the possibility of the sender overflowing the receiver's buffer.
• TCP provides flow control by having the sender limit its amount of unacknowledged data to what the receiver's buffer can hold, thereby preventing data loss caused by a fast sender and a slow receiver.
• Receive window-rwnd
• LastByteRcvd – LastByteRead ≤ RcvBuffer
• rwnd = RcvBuffer – [LastByteRcvd – LastByteRead ]
• LastByteSent – LastByteAcked ≤ rwnd
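A minimal sketch of the receive-window bookkeeping above, with made-up byte counts used purely for illustration:

RcvBuffer = 4096          # receiver buffer size, in bytes

# Receiver side: advertise how much free buffer space remains.
LastByteRcvd, LastByteRead = 3000, 1500
rwnd = RcvBuffer - (LastByteRcvd - LastByteRead)         # 4096 - 1500 = 2596 bytes

# Sender side: keep unacknowledged data within the advertised window.
LastByteSent, LastByteAcked = 5000, 3000
can_send_more = (LastByteSent - LastByteAcked) <= rwnd   # 2000 <= 2596 -> True
print(rwnd, can_send_more)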
TCP Connection Management

• Connection Setup & Data Transfer


 Step 1: Client sends a connection-request segment to the
Server
 Step 2: Server sends a connection-granted segment to the
Client
 Step 3: Client sends an ACK segment to the Server
TCP three-way handshake
Connection Release
A typical sequence of TCP states visited by a
client TCP
A typical sequence of TCP states visited by a server-side TCP
Principles of Congestion Control

Scenario 1: Two Senders, a Router with Infinite Buffers

Two connections sharing a single hop with infinite buffers


Scenario 2: Two Senders and a Router with Finite Buffers
» The amount of router buffering is finite.
– Packets will be dropped when arriving at an already-full buffer.
» Each connection is reliable.
- If a packet is dropped at the router, the sender
will eventually retransmit
Two hosts (with retransmissions) and a router with finite
buffers
Scenario 3: Four Senders, Routers with Finite Buffers, and Multihop
Paths
Approaches to Congestion Control
1) End-to-end Congestion Control
2) Network-Assisted Congestion Control
Two feedback pathways for network-indicated congestion information
Network-Assisted Congestion Control Example: ATM ABR (Asynchronous Transfer Mode, Available Bit Rate) Congestion Control

Congestion-control framework for ATM ABR service


• Data-cells are transmitted from a source to a
destination through a series of intermediate
switches.
• RM-cells are placed between the data-cells (RM → Resource Management).
• The RM-cells are used to send congestion-related
information to the hosts & switches.
• When an RM-cell arrives at a destination, the cell
will be sent back to the sender
• Thus, RM-cells can be used to provide both
→ direct network feedback
→ network feedback via the receiver.
Three Methods to indicate Congestion

• ABR provides 3 mechanisms for indicating congestion-related information:
» EFCI bit (Explicit Forward Congestion Indication)
» CI and NI bits (Congestion Indication & No Increase)
» ER setting (Explicit Rate)
TCP Congestion Control

• TCP uses end-to-end congestion-control rather than network-assisted congestion-control.
• Here is how it works:
– Each sender limits the rate at which it sends traffic into
its connection as a function of perceived congestion.
– If sender perceives that there is little congestion, then
sender increases its data-rate.
– If sender perceives that there is congestion, then sender
reduces its data-rate.
• The sender keeps track of an additional variable
called the congestion-window (cwnd).
• The congestion-window imposes a constraint on the
data-rate of a sender.
• The amount of unacknowledged data at a sender will not exceed the minimum of cwnd and rwnd, that is:
  LastByteSent – LastByteAcked ≤ min{cwnd, rwnd}
• The sender’s data-rate is roughly cwnd/RTT bytes/sec.
–Congestion-control algorithm has 3 major components:

 Slow start
 Congestion avoidance
 Fast recovery.
• Slow Start

TCP slow start


FSM description of TCP congestion-control
Congestion Avoidance

» When a timeout occurs:
→ the value of ssthresh (slow-start threshold) is set to half the value of cwnd at the time of the loss, and
→ the value of cwnd is then set to 1 MSS.
» When a triple duplicate ACK occurs:
→ the value of ssthresh is set to half the value of cwnd, and
→ the value of cwnd is then halved (and fast recovery is entered).
Fast Recovery

• The value of cwnd is increased by 1 MSS for every duplicate ACK received.
• When an ACK arrives for the missing segment, the
congestion-avoidance state is entered.
• If a timeout event occurs, fast recovery transitions to the slow-start state:
→ the value of ssthresh is set to half the value of cwnd at the time of the loss, and
→ the value of cwnd is set to 1 MSS.
• There are 2 versions of TCP:
» TCP Tahoe (early version)
→ cuts the congestion-window to 1 MSS and
→ enters the slow-start phase after either a timeout-indicated or a triple-duplicate-ACK-indicated loss event.
» TCP Reno (newer version)
– TCP Reno incorporates fast recovery.
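A simplified sketch (not the full FSM from the slides) of how cwnd evolves in a Reno-style sender across slow start, congestion avoidance, and the two kinds of loss event. Counting cwnd in MSS units and the initial ssthresh of 64 MSS are illustrative assumptions.

MSS = 1  # count cwnd in MSS units for simplicity

class RenoCwnd:
    def __init__(self):
        self.cwnd = 1 * MSS
        self.ssthresh = 64 * MSS          # assumed initial threshold

    def on_new_ack(self):
        if self.cwnd < self.ssthresh:
            self.cwnd += MSS                    # slow start: +1 MSS per ACK (doubling per RTT)
        else:
            self.cwnd += MSS * MSS / self.cwnd  # congestion avoidance: roughly +1 MSS per RTT

    def on_triple_dup_ack(self):
        self.ssthresh = max(self.cwnd / 2, 2 * MSS)
        self.cwnd = self.ssthresh               # multiplicative decrease, then fast recovery

    def on_timeout(self):
        self.ssthresh = max(self.cwnd / 2, 2 * MSS)
        self.cwnd = 1 * MSS                     # back to slow start

# Example: grow through slow start, then react to a triple duplicate ACK.
c = RenoCwnd()
for _ in range(20):
    c.on_new_ack()
c.on_triple_dup_ack()
print(c.cwnd, c.ssthresh)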
TCP Congestion Control: Retrospective

TCP’s congestion-control consists of AIMD (Additive Increase, Multiplicative Decrease):
→ increasing the value of cwnd linearly (additively) by 1 MSS per RTT, and
→ halving (multiplicative decrease) the value of cwnd on a triple duplicate-ACK event.
→ Hence TCP congestion-control is often referred to as AIMD.
In other words, TCP
→ increases the congestion-window-size linearly until a triple duplicate-ACK event occurs and
→ then decreases the congestion-window-size by a factor of 2.
Fairness

• Assume the two connections have the same MSS and RTT.
• Congestion-control mechanism is fair if each
connection gets equal share of the link-bandwidth.

Two TCP connections sharing a single bottleneck link
– If TCP shares the link-bandwidth equally b/w the 2 connections, then the throughput falls along the 45-degree arrow starting from the origin.
– Fairness and UDP
• Many multimedia-applications (such as Internet phone)
often do not run over TCP.
• Instead, these applications prefer to run over UDP. This is
because
→ applications can pump their audio into the network at a
constant rate and
→ occasionally lose packets.
– Fairness and Parallel TCP Connections
• Web browsers use multiple parallel-connections to
transfer the multiple objects within a Web page.
• Thus, the application gets a larger fraction of the
bandwidth in a congested link.
• Because Web-traffic is so pervasive in the Internet, the use of multiple parallel connections is quite common.
