Introduction to Transport Layer Protocols

The document provides an overview of transport layer protocols in the TCP/IP suite, focusing on UDP, TCP, and SCTP, detailing their characteristics, services, and applications. UDP is a connectionless and unreliable protocol ideal for low-latency applications, while TCP is connection-oriented and ensures reliable data transfer with error correction. The document also discusses port numbers, data transmission methods, and mechanisms for handling congestion and flow control in TCP.

 The transport layer in the TCP/IP protocol suite includes UDP, TCP, and SCTP. These protocols provide different types of communication services.

Services Provided by Transport Layer Protocols

 UDP (User Datagram Protocol): Unreliable and connectionless transport-layer protocol.
 Used for simplicity and efficiency in applications where error control is handled at the application layer.
 TCP (Transmission Control Protocol): Reliable and connection-oriented protocol. Used in applications where data integrity and reliability are crucial.
 SCTP (Stream Control Transmission Protocol): Newer protocol that combines features of both UDP and TCP.
 Supports multi-streaming and multi-homing for improved reliability.

Port Numbers

 Port numbers are used to establish process-to-process communication.
 They provide end-to-end addressing at the transport layer, similar to how IP addresses work at the network layer.
 Port numbers enable multiplexing and demultiplexing of data.
User Datagram Protocol

 User Datagram Protocol (UDP) is a connectionless and unreliable transport protocol.
 It does not add extra services to IP except for process-to-process communication. Compared to TCP, UDP has minimal overhead and is simple.
 Best for applications where reliability is not a major concern.
 Uses: Ideal for small messages, where interaction between sender and receiver is minimal.

User Datagram

 UDP packets are called user datagrams.
 The header is fixed at 8 bytes, containing four fields (each of 2 bytes or 16 bits).
 Fields: source port number, destination port number, total length (header + data), and optional checksum.
 The total length field can define sizes up to 65,535 bytes.
 The actual usable size is less due to UDP being stored in an IP datagram.
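As a rough illustration (not part of the original slides), the Python sketch below packs and unpacks this fixed 8-byte header with its four 16-bit fields in network byte order; the port numbers and payload are made-up values.

```python
# Illustrative sketch of the fixed 8-byte UDP header: four 16-bit fields
# packed in network byte order. Ports and payload are made-up values.
import struct

def build_user_datagram(src_port: int, dst_port: int, payload: bytes, checksum: int = 0) -> bytes:
    total_length = 8 + len(payload)            # header + data; at most 65,535
    header = struct.pack("!HHHH", src_port, dst_port, total_length, checksum)
    return header + payload

def parse_user_datagram(datagram: bytes) -> dict:
    src_port, dst_port, total_length, checksum = struct.unpack("!HHHH", datagram[:8])
    return {"src_port": src_port, "dst_port": dst_port,
            "total_length": total_length, "checksum": checksum,
            "data": datagram[8:]}

print(parse_user_datagram(build_user_datagram(50000, 53, b"query")))
```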
UDP Services
 UDP provides process-to-process communication using socket
addresses.
 It allows efficient and simple communication without establishing a
connection.

UDP Checksum Calculation


 If the sender decides to include a checksum, it calculates the checksum and includes it in the user datagram.
 If the checksum field is zero, it means no checksum was included.
 If the checksum is found to be incorrect at the receiver's end, the UDP datagram is discarded.
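As an illustration of how such a checksum could be computed, the sketch below applies the 16-bit one's complement sum over an assumed IPv4 pseudo-header plus the UDP header and data; the addresses, ports, and payload are placeholders, and real implementations differ in detail.

```python
# Illustrative sketch of the optional UDP checksum: a 16-bit one's complement
# sum over an IPv4 pseudo-header (addresses, protocol 17, UDP length) plus
# the UDP header and data. Addresses, ports, and payload are placeholders.
import socket
import struct

def ones_complement_sum16(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"                        # pad odd-length data with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # wrap the carry back in
    return total

def udp_checksum(src_ip: str, dst_ip: str, udp_segment: bytes) -> int:
    pseudo_header = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip) +
                     struct.pack("!BBH", 0, 17, len(udp_segment)))
    checksum = ~ones_complement_sum16(pseudo_header + udp_segment) & 0xFFFF
    return checksum or 0xFFFF                  # a computed 0 is sent as all ones

segment = struct.pack("!HHHH", 50000, 53, 8 + 5, 0) + b"query"
print(hex(udp_checksum("192.0.2.1", "192.0.2.2", segment)))
```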
UDP Characteristics

 No Congestion Control: UDP does not have congestion control


mechanisms like TCP.
 This means UDP does not reduce its sending rate even if the network is
congested.
 Use in Real-time Applications: UDP is often used for applications requiring low latency, such as live audio/video streaming, where speed is prioritized over reliability.
 Encapsulation and Transmission: UDP encapsulates and sends messages without establishing a connection.
UDP Applications

 UDP is a simple, lightweight transport protocol.


 Unlike TCP, UDP does not guarantee reliable delivery, order, or error
correction.
 Ideal for applications where speed is more important than reliability.
Connectionless Service

 UDP is connectionless, meaning:
 Each packet (datagram) is sent independently.
 No handshaking between sender and receiver.
 No guarantee of delivery or order.
Advantage

 Low overhead, faster communication.


 Useful in scenarios where quick transmission is more important than
reliability.
Example

 Client-server model: A client sends a request, and the server responds without maintaining a long-term connection.
 If some packets are lost, they are not retransmitted.
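A minimal sketch of this connectionless request/response pattern using Python's standard socket API is shown below; the address, port, and message contents are placeholder values, and the server and client would run as separate processes.

```python
# Minimal sketch of the connectionless request/response pattern described
# above. The address, port, and messages are placeholders.
import socket

HOST, PORT = "127.0.0.1", 9999   # assumed local test address

def serve_one_request() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as srv:
        srv.bind((HOST, PORT))
        data, client_addr = srv.recvfrom(1024)     # no connection, no handshake
        srv.sendto(b"reply to " + data, client_addr)

def send_request() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as cli:
        cli.settimeout(1.0)
        cli.sendto(b"request", (HOST, PORT))       # fire a single datagram
        try:
            print(cli.recvfrom(1024)[0])
        except socket.timeout:
            print("no reply; UDP itself does not retransmit lost packets")
```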
TCP (Transmission Control Protocol)

 It is a connection-oriented, reliable protocol.


 Data transfer is reliable and delivered in order.
 Handles packet loss, duplicate packets, etc. efficiently.
 Includes error detection and correction mechanisms.
TCP Services

 Process-to-process communication
 Reliable data transfer
 Error detection & correction
 Flow control
 Congestion control
Port Numbers & Process
Communication

 TCP processes communicate using port numbers.


 Some important port numbers are listed in Table 2.4.
 Like UDP, TCP also provides process-to-process communication.
TCP vs. UDP Services

 UDP is a connectionless protocol that delivers messages without ensuring reliability.
 TCP is a connection-oriented protocol that ensures reliable delivery of data between processes.
 TCP’s Byte Stream Service: TCP does not send messages as distinct packets but as a stream of bytes. Data is delivered sequentially and without message boundaries.
 Sending and Receiving Buffers: TCP uses buffers for both sending and receiving data.
 Sending Buffer: Stores data before transmission and sends it as a continuous byte stream.
 Receiving Buffer: Holds incoming data until the receiving process reads it.
 Process Communication: Sending processes may write data in small chunks, while receiving processes may read at different rates. TCP manages these differences using buffers and flow control mechanisms.
TCP Buffers and Stream Handling

 TCP uses sending and receiving buffers to manage data flow.


 Sending Buffer: Stores data before transmission.
 Data is sent as a stream of bytes, not discrete messages.
 Receiving Buffer: Holds incoming data until it is read by the receiving process.
 Buffer locations are recycled after the bytes have been read.
TCP Segments

 TCP does not send raw streams of bytes over the network; it groups
bytes into segments .
 Each segment contains a header (for control information) and a data payload.
 Segments are then encapsulated into IP datagrams and transmitted.
Reliable Delivery

 Segments may be received out of order, lost, or corrupted.


 TCP handles these issues by retransmitting lost data and reordering
segments at the receiver.
 The receiving process remains unaware of TCP’s internal handling mechanisms.
TCP Segment Size

 Segments are not of fixed size; they can vary based on the network
conditions and buffer availability .
 Example: One segment may carry 3 bytes, while another carries 5
bytes, but in reality, they can be much larger (hundreds or thousands of
bytes).
TCP Features

To ensure reliable communication, TCP provides several important features:


1. Connection-Oriented Service
2. Full-Duplex Communication
3. Reliable Service
4. TCP Numbering System
Instead of assigning a segment number, TCP assigns:
 A byte number to each transmitted byte.
 A sequence number and an acknowledgment number to ensure ordered delivery.
Example: If the first byte of data is numbered 10,157 and 500 bytes are sent, the next byte will be numbered 10,657.
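A tiny worked version of this numbering example (illustrative Python, not from the slides):

```python
# Worked version of the numbering example above: sequence numbers count
# bytes, so a 500-byte segment whose first byte is numbered 10,157 occupies
# bytes 10,157 through 10,656, and the next byte is numbered 10,657.
first_byte = 10_157
segment_length = 500
last_byte = first_byte + segment_length - 1
next_byte = first_byte + segment_length
print(last_byte, next_byte)   # 10656 10657
```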
Segments
TCP Header Fields:

 Control Flags: URG, ACK, PSH, RST, SYN, FIN (used for different TCP operations).
 Window Size: Defines the size of the sending TCP buffer.
 Checksum: Ensures error-free transmission.
 Urgent Pointer: Used when the URG flag is set.
 Options: Additional configurations can be included.
TCP Segment Structure

 A TCP segment encapsulates data from the application layer.


 The TCP header includes important fields like source/destination ports,
sequence numbers, acknowledgment numbers, etc.
TCP Connection Establishment
(Three-way Handshake)

 Step 1: Client sends a SYN (synchronize) request to initiate a


connection.
 Step 2: Server responds with SYN-ACK (synchronize-acknowledge).
 Step 3: Client sends an ACK to complete the connection.
 Reliable Data Transmission : TCP ensures reliable data transfer by using
sequence numbers and acknowledgments.
 Retransmission occurs if packets are lost or corrupted.
TCP Three-Way Handshake

 Step 1: The client sends a SYN segment to initiate the connection.


 Step 2: The server responds with a SYN + ACK segment to acknowledge
the request .
 Step 3: The client sends an ACK segment to finalize the connection.
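The sketch below shows where this handshake happens when the standard socket API is used: connect() on the client and accept() on the server drive the SYN, SYN + ACK, and ACK exchange internally, so the application never sees the individual segments. The host and port are placeholder values.

```python
# Sketch of where the three-way handshake occurs with the socket API.
import socket

HOST, PORT = "127.0.0.1", 8080   # assumed local test address

def server() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen()
        conn, addr = srv.accept()          # returns once the handshake completes
        with conn:
            conn.sendall(b"connection established")

def client() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))          # sends SYN; returns after ACKing SYN + ACK
        print(cli.recv(1024))
```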
Sequence and Acknowledgment
Numbers

 A SYN segment consumes one sequence number.


 A SYN + ACK segment also consumes one sequence number.
 An ACK segment (without data) does not consume a sequence number.
SYN Flooding Attack

 A security vulnerability in the TCP connection establishment process.


 Attackers send multiple SYN requests without completing the
handshake, consuming server resources.
 This can lead to a Denial-of-Service (DoS) attack, making the server
unresponsive.
Data Transfer in TCP

 Once a TCP connection is established, bi-directional data transfer can


take place.
 The client and server can send data to each other simultaneously.
 Acknowledgments can be piggybacked with data segments to improve
efficiency.
Example of Data Transfer

 The client sends 2000 bytes of data in two segments.


 The acknowledgment can be included with the data to reduce
overhead.
Handling SYN Flooding Attack

 TCP implementations use techniques like delaying resource allocation


to prevent SYN Flood attacks.
 This prevents resource exhaustion caused by multiple half-open
connections.
Pushing Data with the PUSH Flag

 TCP uses a buffer to store and send data efficiently.
 The PUSH flag ensures data is sent immediately without waiting for the buffer to fill.
 Some TCP implementations allow applications to enable or disable the PUSH flag.
 Flexibility in TCP Data Transmission: TCP allows both immediate and delayed data transmission based on application needs.
 Applications can choose whether to wait for more data or send it immediately.
Urgent Data in TCP

 TCP is a stream-oriented protocol, meaning data is transmitted as a


continuous flow.
 Sometimes, an application may need to send urgent data that requires
immediate processing.
 TCP provides an Urgent Pointer (URG flag) to mark urgent data.
 The receiver prioritizes urgent data over normal data.
Handling Urgent Data

 The urgent data is placed at the beginning of the segment, and the urgent pointer marks its last byte.
 The application decides how to handle urgent data; TCP itself does not
process it differently.
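As one illustration, the Berkeley socket API exposes urgent data as "out-of-band" data. The sketch below assumes an already connected TCP socket; out-of-band handling is limited to a single byte on most platforms and its exact behaviour is implementation-dependent.

```python
# Sketch: sending and receiving urgent ("out-of-band") data through the
# socket API. `sock` is assumed to be an already connected TCP socket.
import socket

def send_with_urgent(sock: socket.socket, normal_data: bytes, urgent_byte: bytes) -> None:
    sock.sendall(normal_data)                 # ordinary in-stream bytes
    sock.send(urgent_byte, socket.MSG_OOB)    # TCP sets URG and the urgent pointer

def read_urgent(sock: socket.socket) -> bytes:
    # The receiving application decides what to do with the urgent byte.
    return sock.recv(1, socket.MSG_OOB)
```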
TCP Connection Termination
(Three-Way Handshake)

 A TCP connection is closed in an orderly way using a three-way handshake:
 The client sends a FIN (Finish) segment to initiate connection termination.
 The server responds with an ACK to acknowledge the request.
 The server sends its own FIN, which the client acknowledges.
 This ensures both parties gracefully close the connection.
Half-Close Mechanism

 A client can stop sending data but continue receiving from the server.
 This is useful when one side is done sending but still expects data.
 The connection remains open in one direction.
Connection Reset (RST Flag)

 TCP allows a connection to be abruptly terminated using the RST


(Reset) flag .
 Used when a connection is invalid or needs immediate termination.
Windows in TCP

 TCP uses two windows (send window and receive window) for each
direction of data transfer.

 Send Window: In the example, the window size is 100 bytes. The send window size is dictated by the receiver (flow control) and the congestion in the underlying network (congestion control).
Sender window

 The send window in TCP is similar to the one used with the Selective-Repeat protocol, but with some differences:
 1. One difference is the nature of entities related to the window. The window size in SR is the number of packets, but the window size in TCP is the number of bytes. Although actual transmission in TCP occurs segment by segment, the variables that control the window are expressed in bytes.
 2. The second difference is that, in some implementations, TCP can store data received from the process and send them later, but we assume that the sending TCP is capable of sending segments of data as soon as it receives them from its process.
 3. Another difference is the number of timers. The theoretical Selective-Repeat protocol may use several timers for each packet sent, but the TCP protocol uses only one timer.
Receive Window

 In the example, the receive window size is 100 bytes.


 1. The first difference is that TCP allows the receiving process to pull data at its own pace. This
means that part of the allocated buffer at the receiver may be occupied by bytes that have
been received and acknowledged, but are waiting to be pulled by the receiving process.
 The receive window size is then always smaller than or equal to the buffer size.
 rwnd = buffer size - number of waiting bytes to be pulled.
 2.The second difference is the way acknowledgments are used in the TCP protocol. The
major acknowledgment mechanism in TCP is a cumulative acknowledgment announcing the
next expected byte to receive.
 Flow Control: Flow control balances the rate a producer creates data with the rate a consumer
can use the data.
 TCP separates flow control from error control.
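A one-line restatement of the rwnd formula above as Python, with illustrative numbers:

```python
# Direct restatement of the formula above: the advertised receive window
# (rwnd) is the free part of the receiver's buffer. Numbers are illustrative.
def advertised_rwnd(buffer_size: int, bytes_waiting_to_be_pulled: int) -> int:
    return buffer_size - bytes_waiting_to_be_pulled

print(advertised_rwnd(buffer_size=100, bytes_waiting_to_be_pulled=40))   # 60
```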
Opening and Closing Windows

 The receive window closes when more bytes arrive from the sender.

 It opens when more bytes are pulled by the process.


 The opening, closing, and shrinking of the send window is controlled
by the receiver.
 The send window closes when a new acknowledgment allows it to do
so.
 The send window opens when the receive window size (rwnd)
advertised by the receiver allows it to do so.
Shrinking of windows

 The send window can shrink if the receiver defines a value for rwnd
that results in shrinking the window.
 The limitation does not allow the right wall of the send window to move
to the left.
 new ackNo + new rwnd ≥ last ackNo + last rwnd
 The left side of the inequality represents the new position of the right
wall with respect to the sequence number space.
 The right side shows the old position of the right wall.
 The relationship shows that the right wall should not move to the left.
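The inequality can be checked directly; the illustrative Python below flags an advertisement that would move the right wall of the send window to the left:

```python
# Check of the inequality above: a new advertisement is acceptable only if
# it does not move the right wall of the send window to the left.
def advertisement_allowed(new_ack: int, new_rwnd: int,
                          last_ack: int, last_rwnd: int) -> bool:
    return new_ack + new_rwnd >= last_ack + last_rwnd

# Illustrative numbers: the second advertisement would pull the right wall
# back past its old position, so it violates the rule.
print(advertisement_allowed(new_ack=208, new_rwnd=8, last_ack=201, last_rwnd=12))  # True
print(advertisement_allowed(new_ack=205, new_rwnd=5, last_ack=201, last_rwnd=12))  # False
```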
Window Shutdown

 The receiver can temporarily shut down the window by sending a rwnd
of 0.
 If the receiver does not want to receive any data from the sender for a
while.
 In this case, the sender does not actually shrink the size of the window,
but stops sending data until a new advertisement has arrived.
 Even when the window is shut down by an order from the receiver, the
sender can always send a segment with 1 byte of data.
This is called probing and is used to prevent a deadlock.
Silly Window Syndrome

 A serious problem can arise in the sliding window operation when either
the sending application program creates data slowly or the receiving
application program consumes data slowly, or both.
 Any of these situations results in the sending of data in very small
segments, which reduces the efficiency of the operation.
 The inefficiency is even worse after accounting for the data-link layer
and physical-layer overhead. This problem is called the silly window
syndrome.
Syndrome created by the sender

 The sending TCP may create a silly window syndrome if it is serving an application
program that creates data slowly.
 Nagle found an elegant solution.
 Nagle’s algorithm is simple:
 1. The sending TCP sends the first piece of data it receives from the sending application program even if it is only 1 byte.
 2. After sending the first segment, the sending TCP accumulates data in the output buffer and waits until either the receiving TCP sends an acknowledgment or until enough data have accumulated to fill a maximum-size segment. At this time, the sending TCP can send the segment.
 3. Step 2 is repeated for the rest of the transmission. Segment 3 is sent immediately if an acknowledgment is received for segment 2, or if enough data have accumulated to fill a maximum-size segment.
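A simplified sketch of this sender-side decision (not a real TCP implementation) is shown below; the MSS value is an assumption, and the comment shows the standard socket option applications can use to disable Nagle's algorithm when they cannot tolerate the buffering delay.

```python
# Simplified sketch of the Nagle decision described above.
MSS = 1460   # assumed maximum segment size in bytes

def nagle_may_send_now(buffered_bytes: int, unacked_data_outstanding: bool) -> bool:
    if buffered_bytes >= MSS:              # enough data for a maximum-size segment
        return True
    return not unacked_data_outstanding    # nothing in flight: send even 1 byte

# Applications with interactive traffic can disable Nagle's algorithm on a
# connected socket:
#   sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
print(nagle_may_send_now(1, unacked_data_outstanding=False))   # True: first piece of data
print(nagle_may_send_now(200, unacked_data_outstanding=True))  # False: wait for an ACK or a full segment
```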
Syndrome created by the receiver

 The receiving TCP may create a silly window syndrome if it is serving an application program that consumes data slowly.
 Two solutions have been proposed to prevent the silly window syndrome created by an
application program that consumes data more slowly than they arrive.
 Clark’s solution is to send an acknowledgment as soon as the data arrive, but to
announce a window size of zero until either there is enough space to accommodate a
segment of maximum size or until at least half of the receive buffer is empty.
 The second solution is to delay sending the acknowledgment. This means that when a
segment arrives, it is not acknowledged immediately. The receiver waits until there is a
decent amount of space in its incoming buffer before acknowledging the arrived
segments.
 The delayed acknowledgment prevents the sending TCP from sliding its window. After
the sending TCP has sent the data in the window, it stops. This kills the syndrome.
Error Control

 TCP is a reliable transport-layer protocol. This means that an application program that delivers a stream of data to TCP relies on TCP to deliver the entire stream to the application program on the other end in order, without error, and without any part lost or duplicated.
 TCP provides reliability using error control.
 Error control includes mechanisms for detecting and resending corrupted segments, resending lost segments, storing out-of-order segments until missing segments arrive, and detecting and discarding duplicated segments.
 Error control in TCP is achieved through the use of three simple tools:
checksum, acknowledgment, and time-out.
Checksum

 Each segment includes a checksum field, which is used to check for a


corrupted segment.
 If a segment is corrupted, as detected by an invalid checksum, the
segment is discarded by the destination TCP and is considered as lost.

 TCP uses a 16-bit checksum that is mandatory in every segment.


Acknowledgment

 TCP uses acknowledgments to confirm the receipt of data segments.


Control segments that carry no data, but consume a sequence number,
are also acknowledged.
 ACK segments are never acknowledged.
Acknowledgment Types

 Cumulative Acknowledgment (ACK): TCP was originally designed to


acknowledge receipt of segments cumulatively.
 The receiver advertises the next byte it expects to receive, ignoring all
segments received and stored out of order.
 This is sometimes referred to as positive cumulative acknowledgment, or
ACK.
 The word positive indicates that no feedback is provided for discarded,
lost, or duplicate segments.
 The 32-bit ACK field in the TCP header is used for cumulative
acknowledgments, and its value is valid only when the ACK flag bit is set
to 1.
Selective Acknowledgment (SACK)

 More and more implementations are adding another type of


acknowledgment called selective acknowledgment, or SACK.
 A SACK does not replace an ACK, but reports additional information to
the sender.
 A SACK reports a block of bytes that is out of order, and also a block of
bytes that is duplicated, i.e., received more than once.
 However, since there is no provision in the TCP header for adding this
type of information, SACK is implemented as an option at the end of the
TCP header.
Retransmission:

 The heart of the error control mechanism is the retransmission of


segments. When a segment is sent, it is stored in a queue until it is
acknowledged.
 When the retransmission timer expires or when the sender receives
three duplicate ACKs for the first segment in the queue, that segment is
retransmitted.
Retransmission after RTO

 The sending TCP maintains one retransmission time-out (RTO) for each
connection.
 When the timer matures, i.e. times out, TCP resends the segment in the
front of the queue (the segment with the smallest sequence number)
and restarts the timer.
 The value of RTO is dynamic in TCP and is updated based on the round-
trip time (RTT) of segments.
 RTT is the time needed for a segment to reach a destination and for an
acknowledgment to be received.
Retransmission after Three Duplicate
ACK Segments

 The previous rule about retransmission of a segment is sufficient if the


value of RTO is not large.
 To expedite service throughout the Internet by allowing senders to retransmit without waiting for a time-out, most implementations today follow the three duplicate-ACKs rule and retransmit the missing segment immediately.
 This feature is called fast retransmission.
 In this version, if three duplicate acknowledgments arrive for a
segment, the next segment is retransmitted without waiting for the
time-out.
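A toy sketch of this trigger in Python (illustrative only): count repeated ACKs for the same byte number and flag a fast retransmission on the third duplicate.

```python
# Toy sketch of the fast-retransmission trigger described above.
def fast_retransmit_points(ack_numbers):
    last_ack, dup_count, retransmit = None, 0, []
    for ack in ack_numbers:
        if ack == last_ack:
            dup_count += 1
            if dup_count == 3:
                retransmit.append(ack)   # resend the segment starting at this byte
        else:
            last_ack, dup_count = ack, 0
    return retransmit

# Four ACKs for byte 2000 = one original + three duplicates -> retransmit at 2000.
print(fast_retransmit_points([1000, 2000, 2000, 2000, 2000, 3000]))   # [2000]
```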
Out-of-Order Segments

 TCP implementations today do not discard out-of-order segments.


 They store them temporarily and flag them as out-of-order segments
until the missing segments arrive.
TCP Congestion Control

 1. What is Congestion?
 Occurs when too many packets are sent through a network, causing
packet loss and delays.
 2. Why is Congestion Control Needed?
 Prevents network overload.
 Ensures fair bandwidth usage among users.
 Improves overall network efficiency and performance.
TCP Congestion Control Mechanisms

 a) Slow Start
 Initially, TCP increases the Congestion Window (CWND) exponentially.
 Helps probe the available bandwidth safely.
 b) Congestion Avoidance
 After reaching a threshold, TCP increases CWND linearly.
 Prevents sudden congestion by gradually increasing traffic.
 c) Fast Retransmit
 If 3 duplicate ACKs are received, TCP assumes packet loss and retransmits it immediately.
 d) Fast Recovery
 Instead of restarting from scratch, TCP reduces CWND but avoids going back to the slow start phase.
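The toy Python simulation below sketches how these four mechanisms interact: exponential growth in slow start, linear growth in congestion avoidance, a halved window after three duplicate ACKs, and a reset after a timeout. Window sizes are in MSS units and the event sequence is made up; real implementations track CWND in bytes and per-ACK.

```python
# Toy simulation (illustrative only) of the congestion-control reactions above.
def next_cwnd(cwnd: int, ssthresh: int, event: str):
    if event == "timeout":                 # severe loss: restart from slow start
        return 1, max(cwnd // 2, 2)
    if event == "3dupacks":                # fast retransmit + fast recovery
        half = max(cwnd // 2, 2)
        return half, half                  # halve CWND, skip slow start
    if cwnd < ssthresh:                    # slow start: exponential growth per RTT
        return min(cwnd * 2, ssthresh), ssthresh
    return cwnd + 1, ssthresh              # congestion avoidance: linear growth

cwnd, ssthresh = 1, 16
for event in ["ack"] * 5 + ["3dupacks", "ack", "ack", "timeout", "ack"]:
    cwnd, ssthresh = next_cwnd(cwnd, ssthresh, event)
    print(f"{event:9s} cwnd={cwnd:2d} ssthresh={ssthresh}")
```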
Advantages of TCP Congestion
Control

✔ Prevents packet loss and network collapse.


✔ Ensures fairness among users.
✔ Optimizes network bandwidth usage.
TCP Variants for Congestion Control

 1. TCP Tahoe
Key Features:
Uses Slow Start, Congestion Avoidance, and Fast Retransmit. If packet loss is detected (via timeout or 3 duplicate ACKs), it reduces CWND to 1 and enters Slow Start again.
This drastic reduction slows down recovery but prevents congestion collapse.
Drawback:
Resetting CWND to 1 after every packet loss is inefficient, especially in high-bandwidth networks.
 2. TCP Reno
Key Features:
Builds on Tahoe but introduces Fast Recovery to improve performance.
If 3 duplicate ACKs are received: halves CWND instead of resetting it to 1, and enters Fast Recovery to keep sending packets, avoiding Slow Start.
If packet loss is detected via timeout, it behaves like Tahoe (CWND = 1).
Advantage: Recovers faster from packet loss than Tahoe, improving throughput.
Drawback: Not efficient when multiple packet losses occur in one window of data.
 3. TCP New Reno
Key Features: Improves Reno by handling multiple packet losses in one congestion window.
Uses partial acknowledgments (partial ACKs) to detect lost packets and retransmit without leaving Fast Recovery.
Helps recover from multiple losses without reducing CWND too aggressively.
Advantage: Better performance in networks with high packet loss.
Drawback: Still relies on duplicate ACKs for detecting losses, which may not work well in all cases.
TCP Timers

 TCP uses timers to ensure reliable communication and efficient


congestion control.
 The main timers are:

 1. Retransmission Timer
 Used to detect packet loss and trigger retransmission .
 Set based on the Round Trip Time (RTT) between sender and receiver.
 If no ACK is received within the timeout period, TCP retransmits the
packet.
2. Persistence Timer

 Prevents deadlocks due to zero-window size.


 If the receiver’s buffer is full, it sends a zero-window update .
 The sender waits and periodically probes the receiver using this timer.
3. Keepalive Timer

 Detects idle connections to determine if the other side is still active.


 If no data is exchanged for a long time, TCP sends keepalive probes .
 If no response, the connection is terminated.
4. Time-Wait Timer

 Ensures the last ACK in a connection termination process is received .


 Prevents issues like delayed duplicate packets affecting new
connections .
 The connection stays in the TIME-WAIT state for 2 × Maximum Segment
Lifetime (MSL) before closing.
Karn's Algorithm

 Used in TCP Retransmission Timeout (RTO) Calculation .


 Helps handle ambiguity in measuring Round Trip Time (RTT) when
packet loss occurs.
 Rule: If a packet is retransmitted, do not update RTT using its ACK.
 Instead, wait for a successful, non-retransmitted ACK to update RTT.
 Prevents incorrect RTO estimations and improves TCP performance.
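The sketch below combines Karn's rule with a commonly used smoothed RTT estimator (SRTT/RTTVAR updates in the style of RFC 6298); the constants and sample values are illustrative, and real implementations additionally back off the RTO after a retransmission timeout.

```python
# Sketch of RTT estimation with Karn's rule: samples from retransmitted
# segments are discarded because their ACKs are ambiguous.
class RtoEstimator:
    def __init__(self):
        self.srtt = None
        self.rttvar = None
        self.rto = 1.0                      # assumed initial RTO in seconds

    def on_ack(self, measured_rtt: float, was_retransmitted: bool) -> None:
        if was_retransmitted:
            return                          # Karn's rule: ignore ambiguous samples
        if self.srtt is None:               # first valid measurement
            self.srtt, self.rttvar = measured_rtt, measured_rtt / 2
        else:
            self.rttvar = 0.75 * self.rttvar + 0.25 * abs(self.srtt - measured_rtt)
            self.srtt = 0.875 * self.srtt + 0.125 * measured_rtt
        self.rto = self.srtt + 4 * self.rttvar

est = RtoEstimator()
est.on_ack(0.20, was_retransmitted=False)
est.on_ack(0.30, was_retransmitted=True)    # retransmitted: RTT sample discarded
print(round(est.rto, 3))                    # 0.6
```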
