
Unit 4

Transport Layer

The Transport Layer is the second layer in the TCP/IP model and the fourth layer in the OSI
model. It is termed an end-to-end layer because it provides a logical point-to-point
connection between the source host and the destination host, rather than hop-by-hop
delivery, and delivers its services reliably. The unit of data encapsulation at the
Transport Layer is a segment.

Working of Transport Layer


The transport layer provides services to the Application layer and uses the services of
the Network layer.
The transport layer ensures the reliable transmission of data between systems.
At the sender’s side: The transport layer receives the message from the Application layer,
performs segmentation (dividing the message into segments), adds the source and
destination port numbers to each segment’s header, and passes the segments to the
Network layer.
At the receiver’s side: The transport layer receives segments from the Network layer, reads
each header to identify the port number, reassembles the segmented data, and forwards the
message to the appropriate process in the Application layer.
Responsibilities of a Transport Layer
 The Process to Process Delivery
 End-to-End Connection between Hosts
 Multiplexing and Demultiplexing
 Congestion Control
 Data integrity and Error correction
 Flow control
1. The Process to Process Delivery
While the Data Link Layer requires the MAC address (a 48-bit address contained in the
Network Interface Card of every host machine) of the source and destination hosts to
correctly deliver a frame, and the Network Layer requires the IP address for appropriate
routing of packets, the Transport Layer requires a port number to deliver the segments of
data to the correct process among the multiple processes running on a particular host. A
port number is a 16-bit address used to uniquely identify any client-server program.
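The role of the port number can be sketched in a few lines of Python; the port-to-process table below is hypothetical, chosen only to illustrate delivery by destination port:

```python
# A port number is a 16-bit value, so valid ports range from 0 to 65535.
MAX_PORT = 2**16 - 1

# Hypothetical table mapping destination ports to processes on this host.
port_table = {80: "web-server", 25: "mail-server", 53: "dns-server"}

def deliver(dst_port):
    """Hand a segment to the process listening on its destination port."""
    if not 0 <= dst_port <= MAX_PORT:
        raise ValueError("port numbers are 16-bit values")
    return port_table.get(dst_port, "no process listening")

print(MAX_PORT)        # 65535
print(deliver(80))     # web-server
```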
Process to Process Delivery
2. End-to-end Connection between Hosts
The transport layer is also responsible for creating the end-to-end connection between hosts,
for which it mainly uses TCP and UDP. TCP is a reliable, connection-oriented protocol that
uses a handshake to establish a robust connection between two end hosts. TCP ensures the
ordered delivery of messages and is used in a wide range of applications. UDP, on the other
hand, is a stateless, unreliable protocol that provides best-effort delivery. It is suitable
for applications that have little need for flow or error control and that send bulk data,
such as video conferencing. It is often used in multicasting protocols.

End to End Connection.


3. Multiplexing and Demultiplexing
Multiplexing (many-to-one) is when data from several processes on the sender is collected,
given headers, and sent over the network as a single stream. Multiplexing allows the
simultaneous use of the network by different processes running on a host; the processes are
differentiated by their port numbers. Similarly, demultiplexing (one-to-many) is required at
the receiver side, where the incoming data is distributed to the different processes. The
transport layer receives segments from the network layer and delivers each one to the
appropriate process running on the receiver’s machine.

Multiplexing and Demultiplexing


4. Congestion Control
Congestion is a situation in which too many sources on a network attempt to send data and
the router buffers start overflowing, causing packet loss. The resulting retransmissions
from the sources increase the congestion further. The Transport layer provides congestion
control in two broad ways: open-loop congestion control to prevent congestion, and
closed-loop congestion control to remove congestion once it has occurred. TCP uses AIMD
(Additive Increase, Multiplicative Decrease), and techniques such as the leaky bucket
algorithm are used to shape traffic and control congestion.
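A minimal sketch of the leaky bucket idea, with parameters chosen purely for illustration: bursty arrivals are smoothed into a bounded output rate, and packets that overflow the bucket are dropped:

```python
from collections import deque

def leaky_bucket(arrivals, bucket_size, leak_rate):
    """Simulate a leaky bucket: packets arriving in each tick enter the
    bucket (dropped if it is full) and leak out at a fixed rate per tick."""
    queue = deque()
    sent, dropped = [], 0
    for n in arrivals:
        for _ in range(n):
            if len(queue) < bucket_size:
                queue.append(1)
            else:
                dropped += 1          # bucket overflow: packet lost
        out = min(leak_rate, len(queue))
        for _ in range(out):
            queue.popleft()
        sent.append(out)              # output never exceeds leak_rate
    return sent, dropped

# A burst of 5 packets leaves the bucket at a steady 2 packets per tick.
print(leaky_bucket([5, 0, 0], bucket_size=10, leak_rate=2))  # ([2, 2, 1], 0)
```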

Leaky Bucket Congestion Control Technique


5. Data integrity and Error Correction
The transport layer checks for errors in the messages coming from the application layer by
using error-detection codes. By computing checksums, it verifies that the received data is
not corrupted, and it uses ACK and NACK services to inform the sender whether the data has
arrived intact.
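The checksum idea can be sketched with the 16-bit one's-complement Internet checksum (RFC 1071), which is what the TCP and UDP header checksum fields carry:

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement Internet checksum (RFC 1071), as used in
    the TCP and UDP header checksum fields."""
    if len(data) % 2:
        data += b"\x00"               # pad odd-length data with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

# The sender appends the checksum; the receiver recomputes over data plus
# checksum and accepts the segment only if the result is 0.
data = b"\x12\x34"
csum = internet_checksum(data)
check = internet_checksum(data + csum.to_bytes(2, "big"))
print(hex(csum), check)   # 0xedcb 0
```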
Error Correction using Checksum
6. Flow Control
The transport layer provides a flow-control mechanism between the two end hosts. TCP
prevents data loss due to a fast sender and a slow receiver by imposing flow-control
techniques. It uses the sliding window protocol, in which the receiver sends a window back
to the sender advertising the amount of data it can receive.
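The sliding-window rule described above reduces to a single check on the sender side; the byte counts here are illustrative:

```python
def can_send(last_byte_sent, last_byte_acked, advertised_window):
    """Sliding-window flow control: the sender keeps the amount of
    unacknowledged data in flight below the receiver's advertised window."""
    in_flight = last_byte_sent - last_byte_acked
    return in_flight < advertised_window

print(can_send(last_byte_sent=900, last_byte_acked=0, advertised_window=1000))   # True
print(can_send(last_byte_sent=1000, last_byte_acked=0, advertised_window=1000))  # False
```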
Protocols of Transport Layer
 Transmission Control Protocol (TCP)
 User Datagram Protocol (UDP)
 Stream Control Transmission Protocol (SCTP)
 Datagram Congestion Control Protocol (DCCP)
 AppleTalk Transaction Protocol (ATP)
 Fibre Channel Protocol (FCP)
 Reliable Data Protocol (RDP)
 Reliable User Data Protocol (RUDP)
 Structured Stream Transport (SST)
 Sequenced Packet Exchange (SPX)

Process to Process Delivery:

The data link layer is responsible for delivery of frames between two neighboring nodes over
a link. This is called node-to-node delivery. The network layer is responsible for delivery of
datagrams between two hosts. This is called host-to-host delivery. Real communication takes
place between two processes (application programs). We need process-to-process delivery.
The transport layer is responsible for process-to-process delivery: the delivery of a
packet, part of a message, from one process to another. Figure 4.1 shows these three types
of deliveries and their domains.
1. Client/Server Paradigm

Although there are several ways to achieve process-to-process communication, the most
common one is through the client/server paradigm. A process on the local host, called a
client, needs services from a process usually on the remote host, called a server. Both
processes (client and server) have the same name. For example, to get the day and time from
a remote machine, we need a Daytime client process running on the local host and a Daytime
server process running on a remote machine. For communication, we must define the
following:

1. Local host
2. Local process
3. Remote host
4. Remote process

2. Addressing

Whenever we need to deliver something to one specific destination among many, we need an
address. At the data link layer, we need a MAC address to choose one node among several
nodes if the connection is not point-to-point. A frame in the data link layer needs a
Destination MAC address for delivery and a source address for the next node's reply.

Figure 4.2 shows this concept.


The IP addresses and port numbers play different roles in selecting the final destination of
data. The destination IP address defines the host among the different hosts in the world. After
the host has been selected, the port number defines one of the processes on this particular
host (see Figure 4.3).

3. IANA Ranges

The IANA (Internet Assigned Numbers Authority) has divided the port numbers into three
ranges: well known, registered, and dynamic (or private), as shown in Figure 4.4.
· Well-known ports. The ports ranging from 0 to 1023 are assigned and controlled by IANA.
These are the well-known ports.

· Registered ports. The ports ranging from 1024 to 49,151 are not assigned or controlled by
IANA. They can only be registered with IANA to prevent duplication.

· Dynamic ports. The ports ranging from 49,152 to 65,535 are neither controlled nor
registered. They can be used by any process. These are the ephemeral ports.
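The three IANA ranges translate directly into a small classifier:

```python
def iana_range(port: int) -> str:
    """Classify a port number into the three IANA ranges listed above."""
    if not 0 <= port <= 65535:
        raise ValueError("port numbers are 16-bit values")
    if port <= 1023:
        return "well-known"
    if port <= 49151:
        return "registered"
    return "dynamic"

print(iana_range(80), iana_range(8080), iana_range(52100))
# well-known registered dynamic
```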

4. Socket Addresses

Process-to-process delivery needs two identifiers, IP address and the port number, at each end
to make a connection. The combination of an IP address and a port number is called a socket
address. The client socket address defines the client process uniquely just as the server socket
address defines the server process uniquely (see Figure 4.5).
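Python's socket API represents a socket address exactly as described, as an (IP address, port number) pair; a minimal sketch on the loopback interface:

```python
import socket

# Binding to port 0 asks the OS for a free ephemeral port; getsockname()
# then returns the resulting socket address: an (IP address, port) pair.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 0))
ip, port = sock.getsockname()
print(ip, 0 < port <= 65535)   # 127.0.0.1 True
sock.close()
```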

The UDP or TCP header contains the port numbers.

5. Multiplexing and Demultiplexing

The addressing mechanism allows multiplexing and demultiplexing by the transport layer, as
shown in Figure 4.6.
Multiplexing

At the sender site, there may be several processes that need to send packets. However, there is
only one transport layer protocol at any time. This is a many-to-one relationship and requires
multiplexing.

Demultiplexing

At the receiver site, the relationship is one-to-many and requires demultiplexing. The
transport layer receives datagrams from the network layer. After error checking and dropping
of the header, the transport layer delivers each message to the appropriate process based on
the port number.

6. Connectionless Versus Connection-Oriented Service


A transport layer protocol can either be connectionless or connection-oriented.

Connectionless Service

In a connectionless service, the packets are sent from one party to another with no need for
connection establishment or connection release. The packets are not numbered; they may be
delayed or lost or may arrive out of sequence. There is no acknowledgment either.
Connection-Oriented Service

In a connection-oriented service, a connection is first established between the sender and the
receiver. Data are transferred. At the end, the connection is released.

7. Reliable Versus Unreliable

The transport layer service can be reliable or unreliable. If the application layer program
needs reliability, we use a reliable transport layer protocol by implementing flow and error
control at the transport layer. This means a slower and more complex service.

In the Internet, there are three common different transport layer protocols. UDP is
connectionless and unreliable; TCP and SCTP are connection oriented and reliable. These
three can respond to the demands of the application layer programs.

Since the network layer in the Internet is unreliable (best-effort delivery), we need to
implement reliability at the transport layer. To understand that error control at the data
link layer does not guarantee error control at the transport layer, let us look at
Figure 4.7.

8. Three Protocols

The original TCP/IP protocol suite specifies two protocols for the transport layer: UDP and
TCP. We first focus on UDP, the simpler of the two, before discussing TCP. A new transport
layer protocol, SCTP, has been designed. Figure 4.8 shows the position of these protocols in
the TCP/IP protocol suite.
The Transport Layer in the network architecture is responsible for end-to-end communication
between applications. In this layer, TCP (Transmission Control Protocol) and UDP (User
Datagram Protocol) are the two main protocols that handle the responsibility of moving data
between applications. TCP focuses on reliable, ordered delivery of information, ideal for
tasks where accuracy is important. On the other hand, UDP is simpler and faster, often
used for activities like live streaming or online gaming, where speed matters more than
absolute reliability.
Together, TCP and UDP provide a flexible foundation for various network applications and
diverse communication needs. Understanding the basics of TCP and UDP helps us choose the
right approach for efficient and effective data delivery.
What is Transmission Control Protocol (TCP)?
TCP is a core layer-4 communication protocol of the Internet protocol suite that ensures
reliable, ordered, and error-checked delivery of data between devices. When two devices
establish a TCP connection, they perform a three-way handshake to confirm each other’s
presence and agree on parameters for data exchange. TCP breaks information into packets,
sends them, and then ensures all packets arrive in the correct order. If any packets are lost or
damaged during transmission, TCP automatically requests them to be re-sent. This approach
makes TCP ideal for applications where data accuracy is more important than speed, such as
browsing web pages, sending emails, and downloading files because users receive complete
and correctly sequenced information.
These features make TCP more reliable than UDP, but they also add overhead. Application
protocols such as HTTP and FTP use TCP.
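TCP's connection-oriented, reliable byte stream can be seen in a minimal loopback echo sketch: the client's connect() and the server's accept() together complete the three-way handshake, after which both sides share an ordered byte stream:

```python
import socket
import threading

# Minimal loopback sketch of TCP's connection-oriented service.
state = {"port": None, "ready": threading.Event()}

def echo_server():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))        # port 0: the OS picks a free port
    srv.listen(1)
    state["port"] = srv.getsockname()[1]
    state["ready"].set()
    conn, _ = srv.accept()            # three-way handshake completes here
    conn.sendall(conn.recv(1024))     # echo the data back, in order
    conn.close()
    srv.close()

t = threading.Thread(target=echo_server)
t.start()
state["ready"].wait()

cli = socket.create_connection(("127.0.0.1", state["port"]))
cli.sendall(b"hello")
reply = cli.recv(1024)
cli.close()
t.join()
print(reply)   # b'hello'
```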
Transmission Control Protocol
Use Cases of TCP Protocol
TCP (Transmission Control Protocol) is one of the main parts of the internet which provides
reliable and ordered data delivery. Here are some of its key use cases:
1. Web Browsing:
 When you type a URL into your browser, your computer uses TCP to establish a
connection with the web server.
 TCP ensures that the HTML, CSS, and JavaScript files that make up the webpage are
delivered accurately and in the correct order.
2. Email:
 Protocols like SMTP (Simple Mail Transfer Protocol) and IMAP (Internet Message
Access Protocol) rely on TCP for sending and receiving emails.
 TCP guarantees that your emails are delivered completely and in the correct sequence.
3. File Transfer:
 Protocols like FTP (File Transfer Protocol) and SFTP (Secure File Transfer Protocol)
utilize TCP for transferring files between computers.
 TCP’s reliability ensures that files are transferred accurately and without data
corruption.
4. Remote Access:
 Protocols like Telnet and SSH (Secure Shell) use TCP for remote access to other
computers.
 TCP ensures that commands and data are transmitted reliably, allowing you to interact
with remote systems securely.
5. Online Banking and Financial Transactions:
 TCP’s reliability and security are crucial for online banking and financial transactions.
 It ensures that sensitive data is transmitted securely and accurately, preventing data
loss or corruption.
What is User Datagram Protocol (UDP)?
User Datagram Protocol (UDP) is a layer-4 communication protocol in the Internet’s
transport layer. Unlike TCP, it sends data as independent packets called datagrams without
first establishing a dedicated connection. This means UDP does not guarantee delivery,
order, or error correction; it simply sends data and hopes it
arrives. Because it skips these checks, UDP has very low overhead and latency which makes
it ideal for applications where speed is more important than perfect reliability. Examples
include live video streaming, online gaming, and voice calls, where a few missed packets are
often less noticeable than the delay that comes from waiting for perfect delivery.
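By contrast with TCP, a minimal loopback UDP sketch needs no handshake at all; each sendto() is an independent datagram:

```python
import socket

# Minimal loopback sketch of UDP's connectionless service: no handshake,
# no acknowledgements, and each sendto() is an independent datagram.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))          # port 0: the OS picks a free port
addr = recv_sock.getsockname()

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"frame-1", addr)        # sent immediately, best effort

data, src = recv_sock.recvfrom(1024)
print(data)   # b'frame-1'
send_sock.close()
recv_sock.close()
```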

User Datagram Protocol


Use Cases of the UDP Protocol
UDP (User Datagram Protocol) is a connectionless protocol that prioritizes speed and
efficiency over reliability. Here are some key use cases of the UDP:
1. Real-time Applications:
 Voice over IP (VoIP): Services like Skype, Zoom, and Google Meet often utilize
UDP for real-time voice and video communication. While some packet loss is
acceptable, minimizing latency is crucial for a smooth conversation.
 Online Gaming: Many online games rely on UDP for fast, low-latency
communication between players and game servers. This ensures responsiveness and
prevents gameplay delays.
 Video Conferencing: Similar to VoIP, UDP is used for real-time video conferencing
applications where timely delivery of video and audio streams is essential.
2. Streaming Media:
 Live Streaming: Live-streaming services such as Twitch and YouTube Live often use
UDP-based protocols for streaming audio and video content. While some packet loss is
acceptable, UDP’s speed and efficiency are crucial for delivering a smooth streaming
experience.
3. Network Management Protocols:
 DNS (Domain Name System): UDP is commonly used for DNS lookups, where
quick responses are essential for resolving domain names into IP addresses.
 SNMP (Simple Network Management Protocol): This protocol is used for
monitoring and managing network devices. UDP’s speed and efficiency make it
suitable for collecting performance data from network devices.
 DHCP (Dynamic Host Configuration Protocol): UDP is used for dynamically
assigning IP addresses to devices on a network.
4. Broadcast and Multicast:
 Broadcast Applications: UDP is well-suited for broadcast applications where a
single message needs to be sent to multiple recipients simultaneously, such as network
discovery protocols.
 Multicast Applications: UDP is used for multicast applications where a message
needs to be sent to a specific group of recipients efficiently.
What is TCP vs UDP?
Session Multiplexing:
 A single host with a single IP address is able to communicate with multiple servers.
With TCP, a connection must first be established between the sender and the receiver,
and the connection is closed when the transfer is completed. TCP also maintains
reliability while the transfer is taking place.
 UDP, on the other hand, sends no acknowledgement of received packets, and therefore
provides no reliability.
Segmentation:
 Information sent is first broken into smaller packets for transmission.
 The Maximum Transmission Unit (MTU) of Fast Ethernet is 1500 bytes, whereas the
theoretical maximum TCP segment size is 65,495 bytes, so data has to be broken into
smaller packets before being sent to the lower layers. The Maximum Segment Size (MSS)
should be set small enough to avoid fragmentation. TCP supports MSS and Path MTU
discovery, with which the sender and the receiver can automatically determine the
maximum transmission capability.
 UDP doesn’t support this, therefore it depends on the higher layer protocols for data
segmentation.
Flow Control:
 If the sender sends data faster than the receiver can process it, the receiver will
drop the data and request retransmission, wasting time and resources. TCP provides
end-to-end flow control, realized using a sliding window: the receiver sends back an
acknowledgement advertising the amount of data it can receive at a time.
 UDP doesn’t implement flow control and depends on the higher layer protocols for
the same.

TCP vs. UDP


Connection Oriented:
 TCP is connection oriented, i.e., it creates a connection for the transmission to take
place, and once the transfer is over that connection is terminated.
 UDP on the other hand is connectionless just like IP (Internet Protocol).
Reliability:
 TCP sends an acknowledgement when it receives a packet. It requests a
retransmission in case a packet is lost.
 UDP relies on the higher layer protocols for the same.
Headers:
 The size of the TCP header is 20 bytes without options (16 bits for the source port,
16 bits for the destination port, 32 bits for the sequence number, 32 bits for the
acknowledgement number, 4 bits for the header length, plus further flag and control
fields).
 The size of the UDP header is 8 bytes (16 bits for the source port, 16 bits for the
destination port, 16 bits for the length, 16 bits for the checksum); it is significantly
smaller than the TCP header.
 Both the UDP and TCP headers begin with a 16-bit source port field (identifying the
port number of the source) and a 16-bit destination port field (specifying the target
application).
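The 8-byte UDP header described above can be packed directly with Python's struct module; the port numbers here are illustrative:

```python
import struct

def udp_header(src_port, dst_port, payload_len, checksum=0):
    """Pack the four 16-bit big-endian UDP header fields:
    source port, destination port, length (header + payload), checksum."""
    return struct.pack("!HHHH", src_port, dst_port, 8 + payload_len, checksum)

hdr = udp_header(52100, 53, payload_len=12)
print(len(hdr), struct.unpack("!HHHH", hdr))   # 8 (52100, 53, 20, 0)
```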
For more details refer to: Difference Between TCP and UDP.
How to Choose Between TCP and UDP?
On the Basis of Reliability vs. Speed:
 TCP provides reliable, ordered, and error-checked delivery of data which makes it
ideal for applications where accuracy matters more than speed (e.g., web pages,
emails, file transfers).
 UDP offers faster, connectionless transmission without guaranteeing delivery or
order, suitable for real-time applications like video streaming, online gaming, or voice
calls, where speed and low latency are more important than perfect accuracy.
On the Basis of Connection Overhead:
 TCP establishes and maintains a connection between sender and receiver, adding
overhead but ensuring stable data flow and retransmissions if packets are lost.
 UDP does not establish a dedicated connection, reducing latency and overhead, but
leaving error-handling to the application if needed.
On the Basis of Use Case :
 TCP is best when correctness is demanded, such as in web browsing, file downloads, and
secure data transfers.
 UDP is preferred when you need minimal delay and can tolerate some data loss such
as live broadcasts, online gaming, and VoIP.

Multiplexing-
Multiplexing and Demultiplexing services are provided in almost every protocol architecture
ever designed. UDP and TCP perform the demultiplexing and multiplexing jobs by including
two special fields in the segment headers: the source port number field and the destination
port number field.
Multiplexing –
Gathering data from multiple application processes of the sender, enveloping that data with
headers, and sending it to the intended receiver is called multiplexing.
Demultiplexing –
Delivering received segments at the receiver side to the correct app layer processes is called
demultiplexing.
Figure – Abstract view of multiplexing and demultiplexing
Multiplexing and demultiplexing are the services facilitated by the transport layer of the OSI
model.
Figure – Transport layer- junction for multiplexing and demultiplexing
There are two types of multiplexing and Demultiplexing:

1. Connectionless Multiplexing and Demultiplexing


2. Connection-Oriented Multiplexing and Demultiplexing
How Multiplexing and Demultiplexing is done –
For sending data from an application on the sender side to an application on the
destination side, the sender must know the IP address of the destination and the port
number of the application (on the destination side) to which it wants to transfer the
data. A block diagram is shown below:
Figure – Transfer of packet between applications of sender and receiver
Let us consider two messaging apps that are widely used nowadays viz. Hike and WhatsApp.
Suppose A is the sender and B is the receiver. Both sender and receiver have these
applications installed in their system (say smartphone). Suppose A wants to send messages to
B in WhatsApp and hike both. In order to do so, A must mention the IP address of B and
destination port number of the WhatsApp while sending the message through the WhatsApp
application. Similarly, for the latter case, A must mention the IP address of B and the
destination port number of the hike while sending the message.
Now the messages from both apps will be wrapped up with appropriate headers (viz. source IP
address, destination IP address, source port number, destination port number) and sent to
the receiver. This process is called multiplexing. At the destination, the received
segments are unwrapped and the constituent messages (viz. the messages from the Hike and
WhatsApp applications) are handed to the appropriate application by looking at the
destination port number. This process is called demultiplexing. Similarly, B can also
transfer messages to A.
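The demultiplexing step in this example can be sketched as a lookup on the destination port; the port-to-app assignments below are hypothetical:

```python
# Hypothetical destination-port assignments for the two messaging apps.
apps = {5222: "WhatsApp", 5223: "Hike"}

def demultiplex(segments):
    """Deliver each (dst_port, payload) segment to the right application."""
    delivered = {}
    for dst_port, payload in segments:
        delivered.setdefault(apps[dst_port], []).append(payload)
    return delivered

print(demultiplex([(5222, "hi"), (5223, "yo"), (5222, "there")]))
# {'WhatsApp': ['hi', 'there'], 'Hike': ['yo']}
```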

Connection management-
TCP is a connection-oriented protocol and every connection-oriented protocol needs to
establish a connection in order to reserve resources at both the communicating ends.
Connection Establishment –
TCP connection establishment involves a three-way handshake to ensure reliable
communication between devices. Understanding each step of this handshake process is
critical for networking professionals.
1. Sender starts the process with the following:
• Sequence number (Seq=521): contains the random initial sequence number generated at the
sender side.
• Syn flag (Syn=1): request the receiver to synchronize its sequence number with the above-
provided sequence number.
• Maximum segment size (MSS=1460 B): the sender announces its maximum segment size so that
the receiver sends datagrams that won’t require any fragmentation. The MSS field is carried
inside the Options field of the TCP header.
• Window size (window=14600 B): the sender announces its buffer capacity for storing
messages from the receiver.
2. TCP is a full-duplex protocol so both sender and receiver require a window for receiving
messages from one another.
• Sequence number (Seq=2000): contains the random initial sequence number generated at
the receiver side.
• Syn flag (Syn=1): request the sender to synchronize its sequence number with the above-
provided sequence number.
• Maximum segment size (MSS=500 B): the receiver announces its maximum segment size so that
the sender sends datagrams that won’t require any fragmentation. The MSS field is carried
inside the Options field of the TCP header.
Since MSS(receiver) < MSS(sender), both parties agree on the minimum MSS, i.e., 500 B, to
avoid fragmentation of packets at both ends.
Therefore, the receiver can send a maximum of 14600/500 = 29 packets; this is the
receiver's sending window size.
• Window size (window=10000 B): the receiver announces its buffer capacity for storing
messages from the sender.
Therefore, the sender can send a maximum of 10000/500 = 20 packets; this is the sender's
sending window size.
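The window arithmetic above, restated with the numbers used in this example:

```python
# Window-size arithmetic from the handshake example (14600 B and 10000 B
# windows, MSS values 1460 B and 500 B, as given in the text).
mss = min(1460, 500)                  # both sides agree on the smaller MSS
receiver_window_pkts = 14600 // mss   # receiver's sending window, in packets
sender_window_pkts = 10000 // mss     # sender's sending window, in packets
print(mss, receiver_window_pkts, sender_window_pkts)   # 500 29 20
```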
• Acknowledgement Number (Ack no.=522): since sequence number 521 has been received by the
receiver, it requests the next sequence number with Ack no.=522, the next byte expected,
because the SYN flag consumes one sequence number.
• ACK flag (ACK=1): indicates that the acknowledgement number field contains the next
sequence number expected by the receiver.
3. Sender makes the final reply for connection establishment in the following way:
• Sequence number (Seq=522): since sequence number = 521 in 1 st step and SYN flag
consumes one sequence number hence, the next sequence number will be 522.
• Acknowledgement Number (Ack no.=2001): since the sender is acknowledging SYN=1
packet from the receiver with sequence number 2000 so, the next sequence number expected
is 2001.
• ACK flag (ACK=1): tells that the acknowledgement number field contains the next
sequence expected by the sender.
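The sequence- and acknowledgement-number bookkeeping in the three steps above, using the example initial sequence numbers 521 and 2000 (a SYN consumes one sequence number):

```python
client_isn, server_isn = 521, 2000

syn     = {"seq": client_isn, "SYN": 1}                      # step 1: SYN
syn_ack = {"seq": server_isn, "ack": syn["seq"] + 1,         # step 2: SYN+ACK
           "SYN": 1, "ACK": 1}
ack     = {"seq": client_isn + 1, "ack": server_isn + 1,     # step 3: ACK
           "ACK": 1}

print(syn_ack["ack"], ack["seq"], ack["ack"])   # 522 522 2001
```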
The connection is established in TCP using the three-way handshake discussed above. One
side, say the server, passively waits for an incoming connection by executing the LISTEN
and ACCEPT primitives, either specifying a particular peer or nobody in particular.
The other side executes a CONNECT primitive, specifying the IP address and port to which it
wants to connect, the maximum TCP segment size it will accept, and, optionally, some user
data (for example, a password).
The CONNECT primitive transmits a TCP segment with the SYN bit on and the ACK bit off
and waits for a response.
The sequence of TCP segments sent in the typical case is shown in the figure below.
When the segment sent by Host-1 reaches the destination, Host-2, the receiving server checks
to see whether a process has done a LISTEN on the port given in the destination port field.
If not, it sends a reply with the RST bit on to refuse the connection. Otherwise, it directs
the TCP segment to the listening process, which can accept or refuse the connection (for
example, if the client does not look legitimate).
Call Collision
If two hosts try to establish a connection simultaneously between the same two sockets, the
sequence of events is as demonstrated in the figure. Under such circumstances only one
connection is established, because connections are identified by their endpoints. If the
first setup results in a connection identified by (x, y) and the second setup does the
same, only one table entry is made, i.e., for (x, y). For the initial sequence number, a
clock-based scheme is used, with a clock tick every 4 microseconds. For additional safety,
when a host crashes it may not reboot for the maximum packet lifetime, to make sure that no
packets from previous connections are still roaming around the network.

TCP Connection (A 3-way handshake)


A handshake is the process of establishing a communication link between the client and the
server. Before it starts sending data, TCP needs a three-way handshake to set up the
connection. Reliable communication in TCP is termed PAR (Positive Acknowledgement with
Re-transmission).
When a sender sends the data to the receiver, it requires a positive acknowledgement from the
receiver confirming the arrival of data. If the acknowledgement has not reached the sender, it
needs to resend that data. The positive acknowledgement from the receiver establishes a
successful connection.
Here, the client is the sender and the server is the receiver. The diagram above shows the
three steps for a successful connection. A 3-way handshake is commonly known as
SYN-SYN-ACK and requires both the client’s and the server’s responses to exchange data. SYN
stands for Synchronize Sequence Number and ACK for Acknowledgement. Each step is a type of
handshake between the sender and the receiver.
The diagram of a successful TCP connection showing the three handshakes is shown below:

The three handshakes are discussed in the below steps:


Step 1: SYN
SYN is a segment sent by the client to the server. It acts as a connection request and
informs the server that the client wants to establish a connection. It also carries the
client’s initial sequence number, which the two devices use to synchronize the sequence
numbers exchanged between them.
Step 2: SYN-ACK
It is a SYN-ACK (SYN + ACK) segment sent by the server. The ACK part informs the client
that the server has received the connection request and is ready to build the connection.
The SYN part carries the sequence number with which the server will start its segments.
Step 3: ACK
ACK (Acknowledgement) is the last step in establishing a successful TCP connection between
the client and server. The ACK segment is sent by the client in response to the SYN-ACK
received from the server. It results in the establishment of a reliable data connection.
After these three steps, the client and server are ready for the data communication process.
TCP connection and termination are full-duplex, which means that the data can travel in both
the directions simultaneously.
TCP Termination (A 4-way handshake)
A device must have an established connection before termination can take place. TCP
requires a 3-way handshake to establish a connection between the client and server before
sending data; similarly, to terminate or stop the data transmission, it requires a 4-way
handshake. The segments used for TCP termination are the same as those used to build a TCP
connection (ACK and SYN), with the addition of the FIN segment. The FIN segment carries a
termination request sent by one device to the other.
The client is the data transmitter and the server is a receiver in a data transmission process
between the sender and receiver. Consider the below TCP termination diagram that shows the
exchange of segments between the client and server.
The diagram of a successful TCP termination showing the four handshakes is shown below:

[Diagram: FIN, ACK, FIN, ACK exchange between client and server]
Let's discuss the TCP termination process with the help of six steps that include the sent
requests and the waiting states. The steps are as follows:
Step 1: FIN
FIN refers to the termination request sent by the client to the server. This first FIN marks the
start of the termination process between the client and server.
Step 2: FIN_WAIT_1
The client waits for the ACK of its FIN termination request from the server. This is a waiting
state for the client.
Step 3: ACK
The server sends the ACK (Acknowledgement) segment when it receives the FIN termination
request. It depicts that the server is ready to close and terminate the connection.
Step 4: FIN_WAIT_2
The client waits for the FIN segment from the server. It is a type of approved signal sent by
the server that shows that the server is ready to terminate the connection.
Step 5: FIN
The FIN segment is now sent by the server to the client. It is a confirmation signal that the
server sends to the client. It depicts the successful approval for the termination.
Step 6: ACK
The client now sends the ACK (Acknowledgement) segment to the server, confirming that it
has received the server's FIN signal to terminate the connection. As soon as the server
receives this ACK segment, it terminates the connection.
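Because termination closes each direction independently, a socket can "half-close": shut down its sending side (one FIN) while still receiving from the other side. A minimal sketch (loopback port 9191 is an arbitrary choice):

```python
import socket
import threading

# Sketch of independent half-closes: each side closing its send direction
# corresponds to one FIN of the 4-way handshake.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 9191))
srv.listen(1)

def serve():
    conn, _ = srv.accept()
    while conn.recv(1024):             # read until the client's FIN (recv returns b"")
        pass
    conn.sendall(b"bye")               # the server may still send after the client's FIN
    conn.close()                       # server's FIN
    srv.close()

t = threading.Thread(target=serve)
t.start()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", 9191))
cli.sendall(b"done")
cli.shutdown(socket.SHUT_WR)           # client's FIN: done sending in this direction
reply = cli.recv(3)                    # ...but the reverse direction stays open
cli.close()
t.join()
print(reply.decode())                  # prints "bye"
```

The `shutdown(SHUT_WR)` call sends the first FIN while leaving the receive direction open, which is exactly why termination needs two FIN/ACK pairs rather than one.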
Flow control and retransmission-
In the transport layer, flow control manages data flow between the sender and receiver,
preventing a faster sender from overwhelming a slower receiver. Retransmission, on the other
hand, ensures reliable data delivery by re-sending lost or damaged data segments.
Flow Control:
 Purpose:
To prevent a sender from sending more data than a receiver can handle or process.
 Mechanism:
The receiver uses techniques like the sliding window protocol to inform the sender about how
much data it can currently accept.
 Example:
If a receiver's buffer fills up, it may send a "stop sending" signal to the sender, pausing
further transmission until the receiver is ready.
 Benefits:
Ensures efficient use of network resources, prevents buffer overflows, and improves network
stability.
Advantages of Flow Control
 Prevents buffer overflow: Flow control prevents buffer overflow by regulating the
rate at which data is sent from the sender to the receiver.
 Helps in handling different data rates: Flow control helps in handling different data
rates by regulating the flow of data to match the capacity of the receiving device.
 Efficient use of network resources: Flow control helps in the efficient use of network
resources by avoiding packet loss and reducing the need for retransmissions.
Disadvantages of Flow Control
 May cause delays: Flow control may cause delays in data transmission as it regulates
the rate of data flow.
 May not be effective in congested networks: Flow control may not be effective in
congested networks where the congestion is caused by multiple sources.
 May require additional hardware or software: Flow control may require additional
hardware or software to implement the flow control mechanism.

Retransmission:
 Purpose:
To ensure reliable data delivery by re-sending lost or corrupted data segments.
 Mechanism:
The sender uses techniques like acknowledgements (ACKs) and sequence numbers to detect
missing or damaged segments.
 Example:
If the receiver doesn't receive an expected segment or receives a corrupted one, it may send a
negative acknowledgement (NAK) or a duplicate ACK, prompting the sender to retransmit
the lost or damaged segment.
 Benefits:
Guarantees that data reaches the destination accurately and completely, even in the presence
of network errors or congestion.
Relationship between Flow Control and Retransmission:
Flow control and retransmission work together to ensure reliable and efficient data
transfer. Flow control helps prevent data loss due to a fast sender, while retransmission
ensures that any lost or damaged data is recovered and delivered. Together, they provide a
robust and reliable mechanism for data transmission in the transport layer.
Key Differences:
 Purpose – Flow Control: prevent overwhelming the receiver; Retransmission: ensure
reliable delivery by resending lost data.
 Mechanism – Flow Control: the receiver informs the sender of its buffer capacity;
Retransmission: the sender waits for an ACK and resends if it is not received.
 Focus – Flow Control: rate of transmission; Retransmission: data loss and error correction.
 Example – Flow Control: TCP's sliding window protocol; Retransmission: TCP's
retransmission timer.
Window Management-
"Window management" primarily refers to TCP windowing, a technique used by the
Transmission Control Protocol (TCP) to control the flow of data packets between two
computers. It is a mechanism to ensure reliable and efficient data delivery. Additionally,
"window management" can also refer to the management of application windows within a
graphical user interface, though that is a distinct concept related to operating systems and
windowing systems, not to network protocols.
TCP Windowing:
 Purpose:
TCP windowing is a flow control mechanism that prevents a sender from sending data faster
than the receiver can process it.
 How it works:
The receiver (TCP receiver) advertises its "window size" (RWIN) to the sender (TCP sender),
indicating how much data it's willing to accept before sending an acknowledgment
(ACK). The sender uses this RWIN to determine how many packets it can send before
waiting for an ACK.
 Benefits:
This technique helps prevent buffer overflows at the receiver's end, ensures data delivery in
the correct order, and allows for efficient utilization of network resources.
 Dynamic Adjustment:
The TCP sender also maintains a "congestion window" (CWIN), which dynamically adjusts
based on network conditions (e.g., packet loss, congestion). This further helps manage data
flow based on real-time network status.
Window Management (Graphical User Interface):
 Purpose:
This refers to the system software that controls the placement, size, and appearance of
windows within a windowing system in a GUI.
 Example:
A window manager is responsible for arranging multiple application windows on the screen,
allowing users to resize, move, and close them.
Relationship to networks:
While not directly related to network protocols, window management is crucial for how users
interact with applications on a networked system, particularly when working with remote
desktops or applications running on servers.

TCP windowing is a flow control mechanism used by the sender and receiver to manage the
amount of data that can be transmitted at any given time.
The basic idea of windowing is that the sender can only transmit data up to a certain point,
and the receiver will acknowledge receipt of that data.
The sender will then send more data, and the process continues until the entire data stream
has been transmitted.
TCP Windowing Mechanisms
TCP windowing is implemented using a sliding window algorithm, which is a method for
managing data flow between two endpoints. The sliding window algorithm is used to manage
both the sending and receiving window sizes.
Sliding Window Algorithm
The sliding window algorithm works by dividing the data stream into smaller segments, or
packets. Each packet is assigned a sequence number, which is used to ensure that packets are
received in the correct order. The sender maintains a sliding window, which is a range of
sequence numbers that can be transmitted at any given time. The size of the window is
determined by the receiver's buffer size and the network conditions.
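The sliding-window idea can be sketched as a toy simulation (not real TCP; the in-order, one-ACK-per-segment arrival is an assumption of this simplified model):

```python
# Toy sliding-window sender: at most `window` unacknowledged segments
# may be in flight at any time.
def sliding_window_send(segments, window):
    base = 0                # oldest unacknowledged segment
    next_seq = 0            # next segment to transmit
    timeline = []
    while base < len(segments):
        # transmit until the window is full
        while next_seq < len(segments) and next_seq < base + window:
            timeline.append(("send", next_seq))
            next_seq += 1
        # assume the in-order ACK for `base` arrives, sliding the window forward
        timeline.append(("ack", base))
        base += 1
    return timeline

events = sliding_window_send(list(range(4)), window=2)
print(events)
```

With a window of 2, the sender transmits segments 0 and 1, must then wait for the ACK of segment 0 before transmitting segment 2, and so on; the window "slides" one position per acknowledgment.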
Sending and Receiving Window Sizes
The sending window size is the amount of data that the sender can transmit before receiving
an acknowledgment from the receiver. The receiving window size is the amount of data that
the receiver can buffer before sending an acknowledgment to the sender. The sender and
receiver negotiate the window sizes during the TCP handshake process.
Window Scaling
Window scaling is a mechanism used to increase the maximum window size beyond the
default limit of 64KB. Window scaling is negotiated during the TCP handshake process when
the receiver sends a Window Scale option to the sender.
Benefits and Limitations of Window Scaling
Window scaling can improve network performance by allowing more data to be transmitted
at once. However, it can also increase the risk of congestion and packet loss if not managed
properly.
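The scaling arithmetic itself is simple: the 16-bit Window field is multiplied by 2 raised to the negotiated shift count. A sketch with a hypothetical shift of 7 (the option allows shifts up to 14):

```python
# Window scaling arithmetic: the advertised 16-bit window is shifted left
# by the scale factor carried in the Window Scale option.
advertised = 65535          # maximum raw 16-bit window (64 KB limit)
shift = 7                   # hypothetical negotiated scale factor
effective = advertised << shift
print(effective)            # prints 8388480 (about 8 MB)
```

This is how a connection can keep far more than 64 KB of data in flight, which matters on high-bandwidth, high-latency paths.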
Congestion Control and TCP Windowing
TCP windowing is closely tied to congestion control, which is a mechanism used to prevent
network congestion and packet loss. There are two major congestion control algorithms used
by TCP: Slow Start and Congestion Avoidance.
Slow Start Algorithm
The Slow Start algorithm is used to initially establish the transmission rate of a data stream.
The sender gradually increases the window size until it reaches a point where packet loss is
detected.
Impact of Slow Start on TCP Windowing
Slow Start can impact TCP windowing by limiting the amount of data that can be transmitted
initially. As the sender gradually increases the window size, the receiver's buffer may become
full, resulting in packet loss and decreased network performance.
Congestion Avoidance Algorithm
The Congestion Avoidance algorithm is used to maintain an optimal transmission rate while
avoiding network congestion. The sender gradually increases the window size until packet
loss is detected, and then reduces the window size to alleviate congestion.
Effects of Congestion Avoidance on TCP Windowing
Congestion Avoidance can impact TCP windowing by reducing the window size when
congestion is detected. This can result in decreased network performance, but it is necessary
to prevent network congestion and packet loss.
Enhancing TCP Windowing
There are several mechanisms used to enhance TCP windowing, including Selective
Acknowledgment (SACK) and Explicit Congestion Notification (ECN).
Selective Acknowledgment (SACK)
Selective Acknowledgment (SACK) is a mechanism used to improve data transmission
efficiency by allowing the receiver to acknowledge receipt of non-contiguous data segments.
This allows the sender to retransmit only the missing data segments, rather than the entire
window.
Improving Data Transmission Efficiency with SACK
SACK can improve network performance by reducing the amount of data that needs to be
retransmitted in the event of packet loss.
Explicit Congestion Notification (ECN)
Explicit Congestion Notification (ECN) is a mechanism used to notify the sender of network
congestion before packet loss occurs. ECN is implemented using a bit in the TCP header,
which is set by the router when congestion is detected.
ECN and its Role in TCP Windowing
ECN can impact TCP windowing by allowing the sender to reduce the window size before
congestion occurs, preventing packet loss and improving network performance.
TCP Windowing Strategies
There are several strategies used to optimize TCP windowing, including the use of larger
window sizes and dynamic vs fixed window sizes.
Advantages and Disadvantages of Larger Window Sizes
Larger window sizes can improve network performance by allowing more data to be
transmitted at once. However, they can also increase the risk of congestion and packet loss if
not managed properly.
Dynamic vs Fixed Window Sizes
Dynamic window sizes allow for more flexibility in managing data flow, but they require
more processing overhead. Fixed window sizes are easier to manage but may not be optimal
for all network conditions.
Choosing the Right Window Size for Your Network
Choosing the right window size for your network requires careful consideration of network
conditions, traffic patterns, and hardware capabilities.
TCP Windowing Best Practices
There are several best practices for optimizing TCP windowing, including fine-tuning TCP
windowing parameters and troubleshooting TCP windowing errors.
Fine-tuning TCP Windowing Parameters
Fine-tuning TCP windowing parameters requires a thorough understanding of network
conditions, traffic patterns, and hardware capabilities. Important parameters to consider
include buffer sizes and round-trip time (RTT).
Importance of Buffer Sizes and RTT in TCP Windowing
Buffer sizes and RTT can impact TCP windowing by affecting the amount of data that can be
transmitted and the speed at which data is transmitted.
Common TCP Windowing Issues and Solutions
Common TCP windowing issues include slow performance, packet loss, and network
congestion. Solutions to these issues may include adjusting window sizes, implementing
congestion control algorithms, and optimizing network hardware.
Troubleshooting TCP Windowing Errors
Troubleshooting TCP windowing errors requires a systematic approach that involves testing
network conditions, analyzing performance metrics, and identifying potential hardware or
software issues.

TCP Congestion control-

TCP congestion control is a method used by the TCP protocol to manage data flow over a
network and prevent congestion. TCP uses a congestion window and congestion policy that
avoids congestion. Previously, we assumed that only the receiver could dictate the sender’s
window size. We ignored another entity here, the network. If the network cannot deliver the
data as fast as it is created by the sender, it must tell the sender to slow down. In other words,
in addition to the receiver, the network is a second entity that determines the size of the
sender’s window.

Congestion Policy in TCP
Slow Start Phase: starts slowly; the congestion window increases exponentially up to the
threshold.
Congestion Avoidance Phase: after reaching the threshold, the increment is 1 per RTT.
Congestion Detection Phase: on detecting congestion, the sender goes back to the Slow Start
phase or the Congestion Avoidance phase.
Slow Start Phase
Exponential Increment: In this phase, the congestion window size doubles after every RTT.

Example: If the initial congestion window size is 1 segment, and the first segment is
successfully acknowledged, the congestion window size becomes 2 segments. If the next
transmission is also acknowledged, the congestion window size doubles to 4 segments. This
exponential growth continues as long as all segments are successfully acknowledged.

Initially cwnd = 1
After 1 RTT, cwnd = 2^(1) = 2
2 RTT, cwnd = 2^(2) = 4
3 RTT, cwnd = 2^(3) = 8
Congestion Avoidance Phase
Additive Increment: This phase starts after the threshold value, also denoted as ssthresh, is
reached. The size of cwnd (Congestion Window) increases additively: after each RTT, cwnd = cwnd + 1.

For example: if the congestion window size is 20 segments and all 20 segments are
successfully acknowledged within an RTT, the congestion window size would be increased
to 21 segments in the next RTT. If all 21 segments are again successfully acknowledged, the
congestion window size will be increased to 22 segments, and so on.

Initially cwnd = i
After 1 RTT, cwnd = i+1
2 RTT, cwnd = i+2
3 RTT, cwnd = i+3
Congestion Detection Phase
Multiplicative Decrement: If congestion occurs, the congestion window size is decreased.
The only way a sender can guess that congestion has happened is the need to retransmit a
segment. Retransmission is needed to recover a missing packet that is assumed to have been
dropped by a router due to congestion. Retransmission can occur in one of two cases: when
the RTO timer times out or when three duplicate ACKs are received.

Case 1: Retransmission due to Timeout – In this case, the congestion possibility is high.

(a) ssthresh is reduced to half of the current window size.
(b) set cwnd = 1
(c) start with the slow start phase again.
Case 2: Retransmission due to 3 Duplicate Acknowledgements – In this case, the possibility
of congestion is lower.

(a) ssthresh is reduced to half of the current window size.
(b) set cwnd = ssthresh
(c) start with the congestion avoidance phase.
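The two cases above, together with slow start and congestion avoidance, can be sketched as a small update function (a simplified model of the policy described here; real TCP additionally performs fast recovery and works in bytes, not segments):

```python
# Sketch of the congestion policy: exponential growth below ssthresh,
# additive growth above it, and the two reduction cases on congestion.
def next_cwnd(cwnd, ssthresh, event):
    if event == "timeout":              # Case 1: congestion possibility is high
        return 1, max(cwnd // 2, 2)     # cwnd back to 1, ssthresh halved
    if event == "3dupack":              # Case 2: congestion possibility is lower
        ssthresh = max(cwnd // 2, 2)
        return ssthresh, ssthresh       # cwnd restarts at the new ssthresh
    if cwnd < ssthresh:                 # slow start: double per RTT
        return cwnd * 2, ssthresh
    return cwnd + 1, ssthresh           # congestion avoidance: +1 per RTT

cwnd, ssthresh = 1, 8
history = []
for event in ["ack", "ack", "ack", "ack", "ack", "3dupack", "ack"]:
    cwnd, ssthresh = next_cwnd(cwnd, ssthresh, event)
    history.append(cwnd)
print(history)                          # prints [2, 4, 8, 9, 10, 5, 6]
```

The trace shows the window doubling (2, 4, 8) until ssthresh, then growing by 1 (9, 10), then halving to 5 on three duplicate ACKs and resuming additive growth.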
Example
Assume a TCP connection exhibiting slow-start behavior. At the 5th transmission round, with
a threshold (ssthresh) value of 32, it enters the congestion avoidance phase and continues
until the 10th transmission round. At the 10th transmission round, 3 duplicate ACKs are
received by the sender, so ssthresh is halved and the connection re-enters additive increase
mode. A timeout occurs at the 16th transmission round. Plot the transmission round (time) vs
the congestion window size of the TCP segments.
[Diagram: congestion window size vs transmission round, showing slow start, congestion
avoidance, and the congestion detection reactions]

Quality of service-
Quality-of-service (QoS) refers to traffic control mechanisms that seek to differentiate
performance based on application or network-operator requirements, or to provide predictable
or guaranteed performance to applications, sessions, or traffic aggregates. QoS is fundamentally
characterized in terms of packet delay and losses of various kinds.
QoS Specification
 Delay
 Delay Variation (Jitter)
 Throughput
 Error Rate
Types of Quality of Service
 Stateless Solutions – Routers maintain no fine-grained state about traffic; a positive
aspect of this is that it is scalable and robust. But it offers only weak services, as there is
no guarantee about the delay or performance a particular application will experience.
 Stateful Solutions – Routers maintain per-flow state, since flow awareness enables
powerful services such as guaranteed delivery and high resource utilization, and provides
protection; however, stateful solutions are much less scalable and robust.
QoS Parameters
 Packet loss: This occurs when network connections get congested, and routers
and switches begin losing packets.
 Jitter: This is the result of network congestion, time drift, and routing changes. Too
much jitter can reduce the quality of voice and video communication.
 Latency: This is how long it takes a packet to travel from its source to its destination. The
latency should be as near to zero as possible.
 Bandwidth: This is a network communication link's capacity to transmit data from one
point to another in a specific amount of time.
 Mean opinion score: This is a metric for rating voice quality that uses a five-point scale,
with five representing the highest quality.
How does QoS Work?
Quality of Service (QoS) ensures the performance of critical applications within limited network
capacity.
 Packet Marking: QoS marks packets to identify their service types. For example, it
distinguishes between voice, video, and data traffic.
 Virtual Queues: Routers create separate virtual queues for each application based on
priority. Critical apps get reserved bandwidth.
 Handling Allocation: QoS assigns the order in which packets are processed, ensuring
appropriate bandwidth for each application
Benefits of QoS
 Improved Performance for Critical Applications
 Enhanced User Experience
 Efficient Bandwidth Utilization
 Increased Network Reliability
 Compliance with Service Level Agreements (SLAs)
 Reduced Network Costs
 Improved Security
 Better Scalability
Why is QoS Important?
 Video and audio conferencing require a bounded delay and loss rate.
 Video and audio streaming requires a bounded packet loss rate; it may not be as sensitive
to delay.
 Time-critical applications (real-time control) in which bounded delay is considered to be
an important factor.
 Valuable applications should provide better services than less valuable applications.
Implementing QoS
 Planning: The organization should develop an awareness of each department’s service
needs and requirements, select an appropriate model, and build stakeholder support.
 Design: The organization should then keep track of all key software and hardware
changes and modify the chosen QoS model to the characteristics of its network
infrastructure.
 Testing: The organization should test QoS settings and policies in a secure, controlled
testing environment where faults can be identified.
 Deployment: Policies should be implemented in phases. An organization can choose to
deploy rules by network segment or by QoS function (what each policy performs).
 Monitoring and analyzing: Policies should be modified to increase performance based
on performance data.
Models to Implement QoS
1. Integrated Services(IntServ)
 An architecture for providing QoS guarantees in IP networks for individual application
sessions.
 Relies on resource reservation, and routers need to maintain state information of allocated
resources and respond to new call setup requests.
 Network decides whether to admit or deny a new call setup request.
2. IntServ QoS Components
 Resource reservation: call setup signaling, traffic, QoS declaration, per-element
admission control.
 QoS-sensitive scheduling e.g WFQ queue discipline.
 QoS-sensitive routing algorithm(QSPF)
 QoS-sensitive packet discard strategy.
3. RSVP-Internet Signaling
RSVP creates and maintains distributed reservation state. Reservations are initiated by the
receiver and scale for multicast. State is "soft": it must be refreshed periodically or the
reservation times out. Paths are discovered through "PATH" messages (forward direction) and
used by RESV messages (reverse direction).
4. Call Admission
 A session must first declare its QoS requirements and characterize the traffic it will send
through the network.
 R-specification: defines the QoS being requested, i.e. what kind of bound we want on the
delay, what kind of packet loss is acceptable, etc.
 T-specification: defines the traffic characteristics, such as burstiness in the traffic.
 A signaling protocol is needed to carry the R-spec and T-spec to the routers where
reservation is required.
 Routers will admit calls based on their R-spec, T-spec and based on the current resource
allocated at the routers to other calls.
5. Diff-Serv
Differentiated Services is a reduced-state solution in which each flow does not require its own
state. It maintains state only for coarser-grained traffic aggregates rather than for end-to-end
flows, trying to achieve the best of both worlds. It is intended to address the following
difficulties with IntServ and RSVP:
 Flexible Service Models: IntServ has only two classes; Diff-Serv aims to provide more
qualitative service classes with 'relative' service distinctions.
 Simpler signaling: Many applications and users may only want to specify a more
qualitative notion of service.
QoS Tools
 Traffic Classification and Marking
 Traffic Shaping and Policing
 Queue Management and Scheduling
 Resource Reservation
 Congestion Management

TCP Header-The Transmission Control Protocol is the most common transport layer protocol. It
works together with IP and provides a reliable transport service between processes using the
network layer service provided by the IP protocol.
The various services provided by the TCP to the application layer are as follows:

1. Process-to-Process Communication –
TCP provides process-to-process communication, i.e., the transfer of data between individual
processes executing on end systems. This is done using port numbers or port addresses. Port
numbers are 16 bits long and help identify which process is sending or receiving data on a
host.

2. Stream oriented –
This means that the data is sent and received as a stream of bytes (unlike UDP or IP, which
divide the bits into datagrams or packets). However, the network layer, which provides
service to TCP, sends packets of information, not streams of bytes. Hence, TCP groups a
number of bytes together into a segment, adds a header to each segment, and then delivers
these segments to the network layer. At the network layer, each segment is encapsulated in an
IP packet for transmission. The TCP header carries information required for control purposes,
which will be discussed along with the segment structure.

3. Full-duplex service –
This means that the communication can take place in both directions at the same time.

4. Connection-oriented service –
Unlike UDP, TCP provides a connection-oriented service. It defines 3 different phases:
• Connection establishment
• Data transfer
• Connection termination
5. Reliability –
TCP is reliable: it uses checksums for error detection, and attempts to recover lost or
corrupted packets through retransmission, an acknowledgement policy, and timers. It uses
byte numbers, sequence numbers, and acknowledgement numbers to ensure reliability. It also
uses congestion control mechanisms.

6. Multiplexing –
TCP does multiplexing and de-multiplexing at the sender and receiver ends respectively as a
number of logical connections can be established between port numbers over a physical
connection.

Byte number, Sequence number and Acknowledgement number:


All the data bytes that are to be transmitted are numbered and the beginning of this numbering is
arbitrary. Sequence numbers are given to the segments so as to reassemble the bytes at the
receiver end even if they arrive in a different order. The sequence number of a segment is the byte
number of the first byte that is being sent. The acknowledgement number is required since TCP
provides full-duplex service. The acknowledgement number is the next byte number that the
receiver expects to receive which also provides acknowledgement for receiving the previous
bytes.
Example:

In this example we see that A sends acknowledgement number 1001, which means that it has
received data bytes up to byte number 1000 and expects to receive byte 1001 next; hence B
sends data bytes starting from 1001. Similarly, since B has received data bytes up to byte
number 13001 after the first data transfer from A to B, B sends acknowledgement number
13002, the byte number that it expects to receive from A next.
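The arithmetic behind the example can be checked directly: the acknowledgement number is the byte number of the last byte received, plus one.

```python
# Checking the byte-numbering rule from the example above.
seq = 1001               # first byte of the transfer
payload_len = 12001      # bytes 1001 through 13001 inclusive
ack = seq + payload_len  # next expected byte = last byte received + 1
print(ack)               # prints 13002
```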
TCP Segment structure –
A TCP segment consists of data bytes to be sent and a header that is added to the data by TCP as
shown:
The header of a TCP segment can range from 20 to 60 bytes; up to 40 bytes are for options. If
there are no options, the header is 20 bytes; otherwise it can be at most 60 bytes.
Header fields:

• Source Port Address –


A 16-bit field that holds the port address of the application that is sending the data segment.

• Destination Port Address –


A 16-bit field that holds the port address of the application in the host that is receiving the data
segment.

• Sequence Number –
A 32-bit field that holds the sequence number, i.e, the byte number of the first byte that is sent in
that particular segment. It is used to reassemble the message at the receiving end of the segments
that are received out of order.

• Acknowledgement Number –
A 32-bit field that holds the acknowledgement number, i.e, the byte number that the receiver
expects to receive next. It is an acknowledgement for the previous bytes being received
successfully.
• Header Length (HLEN) –
This is a 4-bit field that indicates the length of the TCP header as a number of 4-byte words.
If the header is 20 bytes (the minimum TCP header length), this field holds 5 (because 5 x 4 =
20); at the maximum length of 60 bytes, it holds 15 (because 15 x 4 = 60). Hence, the value of
this field is always between 5 and 15.

• Control flags –
These are 6 1-bit control bits that control connection establishment, connection termination,
connection abortion, flow control, mode of transfer etc. Their function is:
o URG: Urgent pointer is valid
o ACK: Acknowledgement number is valid (used in case of cumulative acknowledgement)
o PSH: Request for push
o RST: Reset the connection
o SYN: Synchronize sequence numbers
o FIN: Terminate the connection
• Window size –
This field tells the window size of the sending TCP in bytes.

• Checksum –
This field holds the checksum for error control. It is mandatory in TCP as opposed to UDP.

• Urgent pointer –
This field (valid only if the URG control flag is set) points to urgent data that needs to reach
the receiving process at the earliest. Its value is added to the sequence number to get the byte
number of the last urgent byte.
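The fixed 20-byte layout described above can be packed and unpacked with Python's struct module. All field values below are made up for illustration:

```python
import struct

# Pack a 20-byte TCP header: ports (16 bits each), sequence and ack
# numbers (32 bits each), data offset + flags, window, checksum, urgent ptr.
header = struct.pack(
    "!HHIIHHHH",
    443,                 # source port
    51514,               # destination port
    1001,                # sequence number
    13002,               # acknowledgement number
    (5 << 12) | 0x010,   # HLEN=5 words (no options) in the top 4 bits; ACK flag set
    65535,               # window size
    0,                   # checksum (left 0 in this sketch; real TCP computes it)
    0,                   # urgent pointer
)
src, dst, seq, ack, off_flags, win, csum, urg = struct.unpack("!HHIIHHHH", header)
hlen_words = off_flags >> 12
print(len(header), hlen_words * 4)   # prints "20 20"
```

Note how the 4-bit HLEN shares a 16-bit field with the reserved bits and the six control flags, which is why it is extracted with a 12-bit shift.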

TCP Connection –
TCP is connection-oriented. A TCP connection is established by a 3-way handshake.

UDP Header-

User Datagram Protocol (UDP) is a Transport Layer protocol. UDP is a part of the Internet
Protocol suite, referred to as UDP/IP suite. Unlike TCP, it is an unreliable and
connectionless protocol. So, there is no need to establish a connection before data
transfer. The UDP helps to establish low-latency and loss-tolerating connections over the
network. The UDP enables process-to-process communication.
What is User Datagram Protocol?
User Datagram Protocol (UDP) is one of the core protocols of the Internet Protocol (IP) suite.
It is a communication protocol used across the internet for time-sensitive transmissions such
as video playback or DNS lookups . Unlike Transmission Control Protocol (TCP), UDP is
connectionless and does not guarantee delivery, order, or error checking, making it a
lightweight and efficient option for certain types of data transmission.
UDP Header
The UDP header is a simple, fixed 8-byte header, while the TCP header may vary from 20
to 60 bytes. The first 8 bytes contain all necessary header information and the remaining part
consists of data. UDP port number fields are each 16 bits long, therefore the range for port
numbers is defined from 0 to 65535; port number 0 is reserved. Port numbers help to
distinguish different user requests or processes.

UDP Header
 Source Port: Source Port is a 2 Byte long field used to identify the port number of
the source.
 Destination Port: It is a 2 Byte long field, used to identify the port of the destined
packet.
 Length: Length is the length of the UDP datagram, including the header and the data. It is a 16-bit
field.
 Checksum: Checksum is 2 Bytes long field. It is the 16-bit one’s complement of the
one’s complement sum of the UDP header, the pseudo-header of information from the
IP header, and the data, padded with zero octets at the end (if necessary) to make a
multiple of two octets.
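Since the header is just four 16-bit fields, it can be built and decoded in a few lines. A minimal sketch (the port numbers and payload are made-up example values, and the checksum is left at 0, which in IPv4 marks it as unused):

```python
import struct

payload = b"hello"
# Four 16-bit fields: source port, destination port, length, checksum.
header = struct.pack("!HHHH", 40000, 53, 8 + len(payload), 0)
datagram = header + payload

# Decoding on the receiving side:
src_port, dst_port, length, checksum = struct.unpack("!HHHH", datagram[:8])
data = datagram[8:length]
print(src_port, dst_port, length, data)  # 40000 53 13 b'hello'
```

The length field counts the header plus the data, so the payload runs from byte 8 up to the length value.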
Notes – Unlike TCP, checksum calculation is not mandatory in UDP. UDP provides no error control or
flow control; hence it depends on IP and ICMP for error reporting. UDP does provide port numbers, so that it can differentiate between user requests.
Applications of UDP
 Used for simple request-response communication when the amount of data is small and
there is therefore less concern about flow and error control.
 It is a suitable protocol for multicasting, as UDP supports broadcast and multicast transmission.
 UDP is used for some routing update protocols like RIP(Routing Information
Protocol).
 Normally used for real-time applications which cannot tolerate uneven delays
between sections of a received message.
 VoIP (Voice over Internet Protocol) services, such as Skype and WhatsApp, use UDP
for real-time voice communication. The delay in voice communication can be
noticeable if packets are delayed due to congestion control, so UDP is used to ensure
fast and efficient data transmission.
 DNS (Domain Name System) also uses UDP for its query/response messages. DNS
queries are typically small and require a quick response time, making UDP a suitable
protocol for this application.
 DHCP (Dynamic Host Configuration Protocol) uses UDP to dynamically assign IP
addresses to devices on a network. DHCP messages are typically small, and the delay
caused by packet loss or retransmission is generally not critical for this application.
 The following protocols use UDP as a transport layer protocol:
o NTP (Network Time Protocol)
o DNS (Domain Name System)
o BOOTP, DHCP
o NNTP (Network News Transfer Protocol)
o Quote of the Day protocol
o TFTP, RTSP, RIP
 The application layer can perform some tasks through UDP:
o Trace Route
o Record Route
o Timestamp
 On the receiving side, UDP takes a datagram from the network layer, removes its header,
and delivers the data to the user; on the sending side, it merely attaches its 8-byte header.
This minimal processing is why it works fast.
Advantages of UDP
 Speed: UDP is faster than TCP because it does not have the overhead of establishing
a connection and ensuring reliable data delivery.
 Lower latency: Since there is no connection establishment, there is lower latency and
faster response time.
 Simplicity: UDP has a simpler protocol design than TCP, making it easier to
implement and manage.
 Broadcast support: UDP supports broadcasting to multiple recipients, making it
useful for applications such as video streaming and online gaming.
 Smaller header size: UDP's 8-byte header is much smaller than TCP's 20-60-byte header,
which reduces per-packet overhead and can improve overall network performance.
 User Datagram Protocol (UDP) is more efficient in terms of both latency and
bandwidth.
Disadvantages of UDP
 No reliability: UDP does not guarantee delivery of packets or order of delivery,
which can lead to missing or duplicate data.
 No congestion control: UDP does not have congestion control, which means that it
can send packets at a rate that can cause network congestion.
 Vulnerable to attacks: UDP is vulnerable to denial-of-service attacks, where an
attacker can flood a network with UDP packets, overwhelming the network and
causing it to crash.
 Limited use cases: UDP is not suitable for applications that require reliable data
delivery, such as email or file transfers, and is better suited for applications that can
tolerate some data loss, such as video streaming or online gaming.
How is UDP used in DDoS attacks?
A UDP flood attack is a type of Distributed Denial of Service (DDoS) attack where an
attacker sends a large number of User Datagram Protocol (UDP) packets to a target port.
 UDP Protocol: Unlike TCP, UDP is connectionless and doesn't require a handshake
before data transfer. When a UDP packet arrives at a server, it checks the specified
port for listening applications. If no app is found, the server sends
an ICMP “destination unreachable” packet to the supposed sender (usually a
random bystander due to spoofed IP addresses).
 Attack Process:
o The attacker sends UDP packets with spoofed IP sender addresses to random
ports on the target system.
o The server checks each incoming packet’s port for a listening application
(usually not found due to random port selection).
o The server sends ICMP “destination unreachable” packets to the spoofed
sender (random bystanders).
o The attacker floods the victim with UDP data packets, overwhelming its
resources.
 Mitigation: To protect against UDP flood attacks, monitoring network traffic for
sudden spikes and implementing security measures are crucial. Organizations often
use specialized tools and services to detect and mitigate such attacks effectively.
UDP Pseudo Header
 The purpose of using a pseudo-header is to verify that the UDP packet has reached its
correct destination.
 The correct destination consists of a specific machine and a specific protocol port
number within that machine.

UDP pseudo header


UDP Pseudo Header Details
 The UDP header itself specifies only the protocol port numbers. Thus, to verify the
destination, UDP on the sending machine computes a checksum that covers the
destination IP address as well as the UDP packet.
 At the ultimate destination, UDP software verifies the checksum using the destination
IP address obtained from the header of the IP packet that carried the UDP message.
 If the checksum agrees, then it must be true that the packet has reached the intended
destination host as well as the correct protocol port within that host.
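The checksum computation described above can be sketched in Python. The pseudo-header (source IP, destination IP, a zero byte, protocol number 17, UDP length) is prepended only for the calculation and is never transmitted; the IP addresses and ports below are made-up example values. Note the self-verifying property: recomputing the checksum over a segment whose checksum field has been filled in yields 0.

```python
import socket
import struct

def ones_complement_sum(data: bytes) -> int:
    """16-bit one's-complement sum, padding with a zero octet if needed."""
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
    while total >> 16:                      # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return total

def udp_checksum(src_ip: str, dst_ip: str, udp_segment: bytes) -> int:
    # Pseudo-header: source IP, destination IP, zero, protocol (17 = UDP), UDP length.
    pseudo = struct.pack("!4s4sBBH",
                         socket.inet_aton(src_ip), socket.inet_aton(dst_ip),
                         0, 17, len(udp_segment))
    return ~ones_complement_sum(pseudo + udp_segment) & 0xFFFF

# Checksum is computed with the checksum field set to 0 ...
header = struct.pack("!HHHH", 40000, 53, 13, 0)
csum = udp_checksum("192.0.2.1", "192.0.2.2", header + b"hello")

# ... and re-verification at the receiver yields 0 once the field is filled in.
header_tx = struct.pack("!HHHH", 40000, 53, 13, csum)
verify = udp_checksum("192.0.2.1", "192.0.2.2", header_tx + b"hello")
print(verify)  # 0
```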
User Interface
A user interface should allow the creation of new receive ports; receive operations on those
ports that return the data octets along with an indication of the source port and source address;
and an operation that allows a datagram to be sent, specifying the data and the source and
destination ports and addresses.
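Python's datagram socket API matches this interface fairly directly: bind() creates a receive port, recvfrom() returns the data together with the source address and port, and sendto() sends a datagram to a named destination. A loopback sketch (ports chosen by the OS; the payload is an arbitrary example):

```python
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))   # port 0: let the OS assign a free receive port
receiver.settimeout(2)            # avoid blocking forever if the datagram is lost
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"ping", addr)      # no connection setup: just send the datagram

data, (src_host, src_port) = receiver.recvfrom(1024)
print(data, src_host)             # b'ping' 127.0.0.1

sender.close()
receiver.close()
```

Note there is no accept() or connect() step, reflecting UDP's connectionless design.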
IP Interface
 The UDP module must be able to determine the source and destination internet
address and the protocol field from internet header
 One possible UDP/IP interface would return the whole internet datagram including
the entire internet header in response to a receive operation
 Such an interface would also allow UDP to pass a full internet datagram, complete
with header, to IP to send. IP would verify certain fields for consistency and
compute the internet header checksum.
 The IP interface allows the UDP module to interact with the network layer of the
protocol stack, which is responsible for routing and delivering data across the
network.
 The IP interface provides a mechanism for the UDP module to communicate with
other hosts on the network by providing access to the underlying IP protocol.
 The IP interface can be used by the UDP module to send and receive data packets
over the network, with the help of IP routing and addressing mechanisms.

Here are the main differences between TCP and UDP:

• Connection type: TCP requires an established connection before transmitting data; UDP needs no connection to start or end a data transfer.
• Data sequence: TCP can sequence data (send it in a specific order); UDP cannot sequence or arrange data.
• Data retransmission: TCP can retransmit data if packets fail to arrive; UDP does not retransmit, so lost data can't be retrieved.
• Delivery: TCP delivery is guaranteed; UDP delivery is not guaranteed.
• Check for errors: TCP's thorough error-checking guarantees data arrives in its intended state; UDP's minimal error-checking covers the basics but may not prevent all errors.
• Broadcasting: not supported by TCP; supported by UDP.
• Speed: TCP is slow but delivers complete data; UDP is fast but at risk of incomplete data delivery.

Differences between TCP and UDP


The following table highlights the major differences between TCP and UDP.
• Definition: TCP is a communications protocol with which data is transmitted between systems over the network in the form of packets; it includes error checking, guarantees delivery, and preserves the order of the data packets. UDP is similar, except that it does not guarantee error checking or data recovery; with UDP, data is sent continuously, irrespective of issues at the receiving end.
• Design: TCP is a connection-oriented protocol; UDP is a connectionless protocol.
• Reliability: TCP is more reliable, as it provides error-checking support and guarantees delivery of data to the destination. UDP provides only basic error checking using a checksum, so delivery of data to the destination cannot be guaranteed as it can with TCP.
• Data transmission: In TCP, data is transmitted in a particular sequence, which means packets arrive in order at the receiver. There is no sequencing of data in UDP; ordering, if needed, has to be managed by the application layer.
• Performance: TCP is slower and less efficient than UDP, and is heavyweight by comparison. UDP is faster and more efficient than TCP.
• Retransmission: Retransmission of data packets is possible in TCP if a packet is lost or needs to be resent; retransmission is not possible in UDP.
• Sequencing: TCP has a function that allows data to be sequenced, which implies that packets arrive at the recipient in the sequence they were sent. In UDP there is no data sequencing; the application layer must control the order if it is necessary.
• Header size: TCP uses a variable-length (20-60 byte) header; UDP has a fixed-length header of 8 bytes.
• Handshake: TCP uses handshakes such as SYN, ACK, and SYN-ACK; UDP is connectionless, which means it doesn't require a handshake.
• Broadcasting: Broadcasting is not supported by TCP; it is supported by UDP.
• Examples: HTTP, HTTPS, FTP, SMTP, and Telnet use TCP; DNS, DHCP, TFTP, SNMP, RIP, and VoIP use UDP.
