Unit 4 CN
Transport Layer
The Transport Layer is the second layer in the TCP/IP model and the fourth layer in the OSI
model. It is an end-to-end layer used to deliver messages to a host. It is termed an end-to-end
layer because it provides a point-to-point connection, rather than hop-to-hop, between the
source host and the destination host to deliver services reliably. The unit of data
encapsulation in the Transport Layer is a segment.
The data link layer is responsible for delivery of frames between two neighboring nodes over
a link. This is called node-to-node delivery. The network layer is responsible for delivery of
datagrams between two hosts. This is called host-to-host delivery. Real communication takes
place between two processes (application programs). We need process-to-process delivery.
The transport layer is responsible for process-to-process delivery: the delivery of a packet,
part of a message, from one process to another. Figure 4.1 shows these three types of
delivery and their domains.
1. Client/Server Paradigm
Although there are several ways to achieve process-to-process communication, the most
common one is through the client/server paradigm. A process on the local host, called a
client, needs services from a process usually on the remote host, called a server. Both
processes (client and server) have the same name. For example, to get the day and time from
a remote machine, we need a Daytime client process running on the local host and a Daytime
server process running on a remote machine. For communication, we must define the
following:
1. Local host
2. Local process
3. Remote host
4. Remote process
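To make the Daytime example above concrete, here is a minimal client sketch in Python. Port 13 is the well-known IANA assignment for the Daytime service; the host name is a placeholder, since few public Daytime servers remain.

```python
import socket

DAYTIME_PORT = 13  # well-known port assigned by IANA to the Daytime service

def daytime_client(host: str) -> str:
    """Connect to a Daytime server and return the text it sends back."""
    with socket.create_connection((host, DAYTIME_PORT), timeout=5) as sock:
        # The Daytime protocol (RFC 867) simply sends the current date and
        # time as a line of text, then closes the connection.
        return sock.recv(1024).decode("ascii", errors="replace")

if __name__ == "__main__":
    print(daytime_client("remote.example.com"))  # hypothetical host
```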
2. Addressing
Whenever we need to deliver something to one specific destination among many, we need an
address. At the data link layer, we need a MAC address to choose one node among several
nodes if the connection is not point-to-point. A frame in the data link layer needs a
destination MAC address for delivery and a source address for the next node's reply.
3. IANA Ranges
The IANA (Internet Assigned Numbers Authority) has divided the port numbers into three
ranges: well known, registered, and dynamic (or private), as shown in Figure 4.4.
· Well-known ports. The ports ranging from 0 to 1023 are assigned and controlled by IANA.
These are the well-known ports.
· Registered ports. The ports ranging from 1024 to 49,151 are not assigned or controlled by
IANA. They can only be registered with IANA to prevent duplication.
· Dynamic ports. The ports ranging from 49,152 to 65,535 are neither controlled nor
registered. They can be used by any process. These are the ephemeral ports.
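As a small illustration, the well-known assignments can be looked up through the system's services database (itself derived from the IANA registry) via Python's standard library:

```python
import socket

for service in ("http", "smtp", "domain", "ftp"):
    port = socket.getservbyname(service, "tcp")  # consults the services database
    print(f"{service}: well-known TCP port {port}")
# On a typical system this prints http: 80, smtp: 25, domain: 53, ftp: 21
```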
4. Socket Addresses
Process-to-process delivery needs two identifiers, IP address and the port number, at each end
to make a connection. The combination of an IP address and a port number is called a socket
address. The client socket address defines the client process uniquely just as the server socket
address defines the server process uniquely (see Figure 4.5).
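A minimal sketch of this idea in Python, where an IPv4 socket address is literally the (IP address, port number) pair; the address values here are arbitrary:

```python
import socket

server_socket_address = ("127.0.0.1", 5000)   # IP address + port number

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(server_socket_address)              # claim this socket address
print("bound to", sock.getsockname())         # -> ('127.0.0.1', 5000)
sock.close()
```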
The addressing mechanism allows multiplexing and demultiplexing by the transport layer, as
shown in Figure 4.6.
Multiplexing
At the sender site, there may be several processes that need to send packets. However, there is
only one transport layer protocol at any time. This is a many-to-one relationship and requires
multiplexing.
Demultiplexing
At the receiver site, the relationship is one-to-many and requires demultiplexing. The
transport layer receives datagrams from the network layer. After error checking and dropping
of the header, the transport layer delivers each message to the appropriate process based on
the port number.
Connectionless Service
In a connectionless service, the packets are sent from one party to another with no need for
connection establishment or connection release. The packets are not numbered; they may be
delayed or lost or may arrive out of sequence. There is no acknowledgment either.
Connection-Oriented Service
In a connection-oriented service, a connection is first established between the sender and the
receiver. Data are transferred. At the end, the connection is released.
The transport layer service can be reliable or unreliable. If the application layer program
needs reliability, we use a reliable transport layer protocol by implementing flow and error
control at the transport layer. This means a slower and more complex service.
In the Internet, there are three common transport layer protocols. UDP is
connectionless and unreliable; TCP and SCTP are connection oriented and reliable. These
three can respond to the demands of the application layer programs.
Because the network layer in the Internet is unreliable (best-effort delivery), we need to
implement reliability at the transport layer. To see why error control at the data link layer
does not guarantee error control at the transport layer, let us look at Figure 4.7.
8. Three Protocols
The original TCP/IP protocol suite specifies two protocols for the transport layer: UDP and
TCP. We first focus on UDP, the simpler of the two, before discussing TCP. A new transport
layer protocol, SCTP, has been designed. Figure 4.8 shows the position of these protocols in
the TCP/IP protocol suite.
The Transport Layer in the network architecture is responsible for end-to-end communication
between applications. In this layer, TCP (Transmission Control Protocol) and UDP (User
Datagram Protocol) are the two main protocols that handle the responsibility of moving data
between applications. TCP focuses on reliable, ordered delivery of information, perfect for
tasks where accuracy is important. On the other hand, UDP is simpler and faster, often
used for activities like live streaming or online gaming, where speed matters more than
absolute reliability.
Together, TCP and UDP provide a flexible foundation for various network applications and
diverse communication needs. Understanding the basics of TCP and UDP helps us choose the
right approach for efficient and effective data delivery.
What is Transmission Control Protocol (TCP)?
TCP is a core layer 4 communication protocol within the Internet protocol suite which ensures
reliable, ordered, and error-checked delivery of data between devices. When two devices
establish a TCP connection, they perform a three-way handshake to confirm each other’s
presence and agree on parameters for data exchange. TCP breaks information into packets,
sends them, and then ensures all packets arrive in the correct order. If any packets are lost or
damaged during transmission, TCP automatically requests them to be re-sent. This approach
makes TCP ideal for applications where data accuracy is more important than speed, such as
browsing web pages, sending emails, and downloading files because users receive complete
and correctly sequenced information.
These features make TCP more dependable than UDP, but they also add overhead. Application
protocols such as HTTP and FTP run over TCP.
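As an illustrative sketch (not part of the original text), the following Python snippet shows TCP's connection-oriented, reliable byte-stream service; the loopback address and port 9000 are arbitrary choices.

```python
import socket
import threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 9000))
srv.listen()                                  # passively wait for a connection

def serve_once() -> None:
    conn, _addr = srv.accept()                # handshake completes here
    with conn:
        conn.sendall(conn.recv(1024))         # echo the bytes back, in order

threading.Thread(target=serve_once, daemon=True).start()

# connect() performs the SYN, SYN-ACK, ACK three-way handshake
with socket.create_connection(("127.0.0.1", 9000)) as cli:
    cli.sendall(b"hello over TCP")
    print(cli.recv(1024))                     # b'hello over TCP'
srv.close()
```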
Figure – Transmission Control Protocol
Use Cases of TCP Protocol
TCP (Transmission Control Protocol) is one of the main parts of the internet which provides
reliable and ordered data delivery. Here are some of its key use cases:
1. Web Browsing:
When you type a URL into your browser, your computer uses TCP to establish a
connection with the web server.
TCP ensures that the HTML, CSS, and JavaScript files that make up the webpage are
delivered accurately and in the correct order.
2. Email:
Protocols like SMTP (Simple Mail Transfer Protocol) and IMAP (Internet Message
Access Protocol) rely on TCP for sending and receiving emails.
TCP guarantees that your emails are delivered completely and in the correct sequence.
3. File Transfer:
Protocols like FTP (File Transfer Protocol) and SFTP (Secure File Transfer Protocol)
utilize TCP for transferring files between computers.
TCP’s reliability ensures that files are transferred accurately and without data
corruption.
4. Remote Access:
Protocols like Telnet and SSH (Secure Shell) use TCP for remote access to other
computers.
TCP ensures that commands and data are transmitted reliably, allowing you to interact
with remote systems securely.
5. Online Banking and Financial Transactions:
TCP’s reliability and security are crucial for online banking and financial transactions.
It ensures that sensitive data is transmitted securely and accurately, preventing data
loss or corruption.
What is User Datagram Protocol (UDP)?
User Datagram Protocol (UDP) is a layer 4 (transport layer) communication protocol in the
Internet protocol suite. Unlike TCP, it sends data as independent
packets called datagrams without first establishing a dedicated connection. This means UDP
does not guarantee delivery, order, or error correction; it simply sends data and hopes it
arrives. Because it skips these checks, UDP has very low overhead and latency which makes
it ideal for applications where speed is more important than perfect reliability. Examples
include live video streaming, online gaming, and voice calls, where a few missed packets are
often less noticeable than the delay that comes from waiting for perfect delivery.
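A minimal sketch of UDP's connectionless style in Python: no handshake precedes the datagram, and the loopback port 9999 is arbitrary.

```python
import socket

recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv.bind(("127.0.0.1", 9999))

send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send.sendto(b"datagram, fire and forget", ("127.0.0.1", 9999))  # no connection setup

data, addr = recv.recvfrom(1024)   # each datagram arrives whole, or not at all
print(data, "from", addr)
recv.close(); send.close()
```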
Multiplexing and Demultiplexing-
Multiplexing and Demultiplexing services are provided in almost every protocol architecture
ever designed. UDP and TCP perform the demultiplexing and multiplexing jobs by including
two special fields in the segment headers: the source port number field and the destination
port number field.
Multiplexing –
Gathering data from multiple application processes of the sender, enveloping that data with a
header, and sending them as a whole to the intended receiver is called multiplexing.
Demultiplexing –
Delivering received segments at the receiver side to the correct app layer processes is called
demultiplexing.
Figure – Abstract view of multiplexing and demultiplexing
Multiplexing and demultiplexing are the services facilitated by the transport layer of the OSI
model.
Figure – Transport layer- junction for multiplexing and demultiplexing
There are two types of multiplexing and demultiplexing: connectionless (as in UDP) and
connection-oriented (as in TCP).
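The sketch below illustrates port-based demultiplexing: two UDP sockets stand in for two application processes, and each arriving datagram is handed to the socket that owns the destination port. Ports 7001 and 7002 are arbitrary.

```python
import selectors
import socket

sel = selectors.DefaultSelector()
for port, app in ((7001, "app-A"), (7002, "app-B")):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("127.0.0.1", port))
    sel.register(s, selectors.EVENT_READ, data=app)

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"for A", ("127.0.0.1", 7001))   # destination port selects app-A
tx.sendto(b"for B", ("127.0.0.1", 7002))   # destination port selects app-B

received = 0
while received < 2:
    for key, _mask in sel.select(timeout=1):
        msg, _ = key.fileobj.recvfrom(1024)
        print(key.data, "received", msg)   # demultiplexed by port number
        received += 1
```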
Connection management-
TCP is a connection-oriented protocol and every connection-oriented protocol needs to
establish a connection in order to reserve resources at both the communicating ends.
Connection Establishment –
TCP connection establishment involves a three-way handshake to ensure reliable
communication between devices. Understanding each step of this handshake process is
critical for networking professionals.
1. Sender starts the process with the following:
• Sequence number (Seq=521): contains the random initial sequence number generated at the
sender side.
• Syn flag (Syn=1): request the receiver to synchronize its sequence number with the above-
provided sequence number.
• Maximum segment size (MSS=1460 B): sender tells its maximum segment size, so that
receiver sends datagram which won’t require any fragmentation. MSS field is present inside
Option field in TCP header.
• Window size (window=14600 B): sender tells its buffer capacity, in which it has to store
messages from the receiver.
2. The receiver replies with the following (TCP is a full-duplex protocol, so both sender and
receiver require a window for receiving messages from one another):
• Sequence number (Seq=2000): contains the random initial sequence number generated at
the receiver side.
• Syn flag (Syn=1): request the sender to synchronize its sequence number with the above-
provided sequence number.
• Maximum segment size (MSS=500 B): receiver tells its maximum segment size, so that
sender sends datagram which won’t require any fragmentation. MSS field is present inside
Option field in TCP header.
Since MSS(receiver) < MSS(sender), both parties agree on the minimum MSS, i.e., 500 B, to
avoid fragmentation of packets at both ends.
Therefore, receiver can send maximum of 14600/500 = 29 packets.
This is the receiver's sending window size.
• Window size (window=10000 B): receiver tells its buffer capacity, in which it has to store
messages from the sender.
Therefore, sender can send a maximum of 10000/500 = 20 packets.
This is the sender's sending window size.
• Acknowledgement Number (Ack no.=522): Since sequence number 521 was received by the
receiver, it requests the next sequence number with Ack no.=522, which is the next byte
expected by the receiver (the SYN flag consumes one sequence number).
• ACK flag (ACK=1): tells that the acknowledgement number field contains the next
sequence expected by the receiver.
3. Sender makes the final reply for connection establishment in the following way:
• Sequence number (Seq=522): since the sequence number was 521 in the 1st step and the SYN
flag consumes one sequence number, the next sequence number will be 522.
• Acknowledgement Number (Ack no.=2001): since the sender is acknowledging SYN=1
packet from the receiver with sequence number 2000 so, the next sequence number expected
is 2001.
• ACK flag (ACK=1): tells that the acknowledgement number field contains the next
sequence expected by the sender.
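A tiny sketch reproducing the arithmetic of this handshake example (the numbers come from the text above and are illustrative, not fixed by TCP itself):

```python
sender_mss, receiver_mss = 1460, 500        # bytes, from the SYN options
agreed_mss = min(sender_mss, receiver_mss)  # both sides use the minimum: 500 B

sender_window, receiver_window = 14600, 10000  # advertised buffer sizes in bytes
print("receiver may send", sender_window // agreed_mss, "segments")   # 29
print("sender may send", receiver_window // agreed_mss, "segments")   # 20
```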
The connection is established in TCP using the three-way handshake as discussed earlier.
One side, say the server, passively waits for an incoming connection by executing the
LISTEN and ACCEPT primitives, either specifying a particular source or nobody in particular.
The other side, say the client, executes a CONNECT primitive, specifying the IP address and
port to which it wants to connect, the maximum TCP segment size it is willing to accept, and
optionally some user data (for example, a password).
The CONNECT primitive transmits a TCP segment with the SYN bit on and the ACK bit off
and waits for a response.
The sequence of TCP segments sent in the typical case is shown in the figure below.
When the segment sent by Host-1 reaches the destination, i.e., Host-2, the receiving server
checks to see if there is a process that has done a LISTEN on the port given in the destination
port field. If not, it sends a reply with the RST bit on to reject the connection. Otherwise,
it passes the TCP segment to the listening process, which can accept or reject the connection
(for example, if it does not like the look of the client).
Call Collision
If two hosts try to establish a connection simultaneously between the same two sockets, the
sequence of events is as demonstrated in the figure. Under such circumstances, only one
connection is established, because connections are identified by their endpoints.
If the first setup results in a connection identified by (x, y) and the second setup does too,
only one table entry is made, i.e., for (x, y). For the initial sequence number, a clock-based
scheme is used, with a clock pulse coming every 4 microseconds. For additional safety, when
a host crashes, it may not reboot for the maximum packet lifetime. This ensures that no
packets from previous connections are still roaming around the network.
Let's discuss the TCP termination process with the help of six steps that include the sent
requests and the waiting states. The steps are as follows:
Step 1: FIN
FIN refers to the termination request sent by the client to the server. The first FIN
termination request is sent by the client to the server. It depicts the start of the termination
process between the client and server.
Step 2: FIN_WAIT_1
The client waits for the ACK of the FIN termination request from the server. It is a waiting
state for the client.
Step 3: ACK
The server sends the ACK (Acknowledgement) segment when it receives the FIN termination
request. It depicts that the server is ready to close and terminate the connection.
Step 4: FIN_WAIT_2
The client waits for the FIN segment from the server. It is a type of approved signal sent by
the server that shows that the server is ready to terminate the connection.
Step 5: FIN
The FIN segment is now sent by the server to the client. It is a confirmation signal that the
server sends to the client. It depicts the successful approval for the termination.
Step 6: ACK
The client now sends the ACK (Acknowledgement) segment to the server that it has received
the FIN signal, which is a signal from the server to terminate the connection. As soon as the
server receives the ACK segment, it terminates the connection.
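In the sockets API, each side's close or shutdown call triggers its FIN. A minimal sketch of a graceful close, assuming an already connected socket:

```python
import socket

def finish(conn: socket.socket) -> None:
    conn.shutdown(socket.SHUT_WR)   # send our FIN: "no more data from us"
    # keep reading until the peer's FIN arrives (recv then returns b"")
    while conn.recv(1024):
        pass
    conn.close()                    # release the socket after both FINs
```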
Retransmission:
Purpose:
To ensure reliable data delivery by re-sending lost or corrupted data segments.
Mechanism:
The sender uses techniques like acknowledgements (ACKs) and sequence numbers to detect
missing or damaged segments.
Example:
If the receiver doesn't receive an expected segment or receives a corrupted one, it may send a
negative acknowledgement (NAK) or a duplicate ACK, prompting the sender to retransmit
the lost or damaged segment.
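A toy sketch of this retransmission idea over UDP: send, wait for an ACK, and resend on timeout. This is a simple stop-and-wait scheme for illustration, not TCP's actual algorithm; the timeout and retry count are arbitrary.

```python
import socket

def send_reliably(sock: socket.socket, data: bytes, dest, retries: int = 5) -> bool:
    """Send `data` to `dest`, retransmitting until an ACK arrives."""
    sock.settimeout(0.5)                  # retransmission timer (RTO stand-in)
    for _attempt in range(retries):
        sock.sendto(data, dest)           # (re)transmit the segment
        try:
            ack, _ = sock.recvfrom(16)
            if ack == b"ACK":
                return True               # delivered and acknowledged
        except socket.timeout:
            pass                          # lost or delayed: retransmit
    return False                          # give up after `retries` attempts
```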
Benefits:
Guarantees that data reaches the destination accurately and completely, even in the presence
of network errors or congestion.
Relationship between Flow Control and Retransmission:
Flow control and retransmission work together to ensure reliable and efficient data
transfer. Flow control helps prevent data loss due to a fast sender, while retransmission
ensures that any lost or damaged data is recovered and delivered. Together, they provide a
robust and reliable mechanism for data transmission in the transport layer.
Key Differences:

Feature   | Flow Control                                | Retransmission
Purpose   | Prevent overwhelming the receiver           | Ensure reliable delivery by resending lost data
Mechanism | Receiver informs sender of buffer capacity  | Sender waits for ACK, resends if not received
TCP windowing is a flow control mechanism used by the sender and receiver to manage the
amount of data that can be transmitted at any given time.
The basic idea of windowing is that the sender can only transmit data up to a certain point,
and the receiver will acknowledge receipt of that data.
The sender will then send more data, and the process continues until the entire data stream
has been transmitted.
TCP Windowing Mechanisms
TCP windowing is implemented using a sliding window algorithm, which is a method for
managing data flow between two endpoints. The sliding window algorithm is used to manage
both the sending and receiving window sizes.
Sliding Window Algorithm
The sliding window algorithm works by dividing the data stream into smaller segments, or
packets. Each packet is assigned a sequence number, which is used to ensure that packets are
received in the correct order. The sender maintains a sliding window, which is a range of
sequence numbers that can be transmitted at any given time. The size of the window is
determined by the receiver's buffer size and the network conditions.
Sending and Receiving Window Sizes
The sending window size is the amount of data that the sender can transmit before receiving
an acknowledgment from the receiver. The receiving window size is the amount of data that
the receiver can buffer before sending an acknowledgment to the sender. The sender and
receiver negotiate the window sizes during the TCP handshake process.
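A toy simulation of the sliding-window idea: the sender keeps at most `window` unacknowledged segments outstanding and slides forward as cumulative ACKs arrive. Real TCP windows are measured in bytes, not segments; this is only a sketch.

```python
from collections import deque

def sliding_window_send(segments: list[str], window: int = 4) -> None:
    base = 0                      # oldest unacknowledged segment
    next_seq = 0                  # next segment to transmit
    in_flight: deque[int] = deque()
    while base < len(segments):
        # transmit until the window is full
        while next_seq < len(segments) and next_seq < base + window:
            print(f"send seq={next_seq}: {segments[next_seq]}")
            in_flight.append(next_seq)
            next_seq += 1
        # simulate a cumulative ACK for the oldest outstanding segment
        acked = in_flight.popleft()
        base = acked + 1
        print(f"  ACK {base} received; window slides to [{base}, {base + window})")

sliding_window_send([f"seg{i}" for i in range(8)])
```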
Window Scaling
Window scaling is a mechanism used to increase the maximum window size beyond the
default limit of 64 KB. Window scaling is negotiated during the TCP handshake, when each
side includes a Window Scale option in its SYN segment.
Benefits and Limitations of Window Scaling
Window scaling can improve network performance by allowing more data to be transmitted
at once. However, it can also increase the risk of congestion and packet loss if not managed
properly.
Congestion Control and TCP Windowing
TCP windowing is closely tied to congestion control, which is a mechanism used to prevent
network congestion and packet loss. There are two major congestion control algorithms used
by TCP: Slow Start and Congestion Avoidance.
Slow Start Algorithm
The Slow Start algorithm is used to initially establish the transmission rate of a data stream.
The sender gradually increases the window size until it reaches a point where packet loss is
detected.
Impact of Slow Start on TCP Windowing
Slow Start can impact TCP windowing by limiting the amount of data that can be transmitted
initially. As the sender gradually increases the window size, the receiver's buffer may become
full, resulting in packet loss and decreased network performance.
Congestion Avoidance Algorithm
The Congestion Avoidance algorithm is used to maintain an optimal transmission rate while
avoiding network congestion. The sender gradually increases the window size until packet
loss is detected, and then reduces the window size to alleviate congestion.
Effects of Congestion Avoidance on TCP Windowing
Congestion Avoidance can impact TCP windowing by reducing the window size when
congestion is detected. This can result in decreased network performance, but it is necessary
to prevent network congestion and packet loss.
Enhancing TCP Windowing
There are several mechanisms used to enhance TCP windowing, including Selective
Acknowledgment (SACK) and Explicit Congestion Notification (ECN).
Selective Acknowledgment (SACK)
Selective Acknowledgment (SACK) is a mechanism used to improve data transmission
efficiency by allowing the receiver to acknowledge receipt of non-contiguous data segments.
This allows the sender to retransmit only the missing data segments, rather than the entire
window.
Improving Data Transmission Efficiency with SACK
SACK can improve network performance by reducing the amount of data that needs to be
retransmitted in the event of packet loss.
Explicit Congestion Notification (ECN)
Explicit Congestion Notification (ECN) is a mechanism used to notify the sender of network
congestion before packet loss occurs. ECN is signaled using bits in the IP header, which
routers set when congestion is detected; the TCP receiver then echoes this indication back to
the sender using flags in the TCP header.
ECN and its Role in TCP Windowing
ECN can impact TCP windowing by allowing the sender to reduce the window size before
congestion occurs, preventing packet loss and improving network performance.
TCP Windowing Strategies
There are several strategies used to optimize TCP windowing, including the use of larger
window sizes and dynamic vs fixed window sizes.
Advantages and Disadvantages of Larger Window Sizes
Larger window sizes can improve network performance by allowing more data to be
transmitted at once. However, they can also increase the risk of congestion and packet loss if
not managed properly.
Dynamic vs Fixed Window Sizes
Dynamic window sizes allow for more flexibility in managing data flow, but they require
more processing overhead. Fixed window sizes are easier to manage but may not be optimal
for all network conditions.
Choosing the Right Window Size for Your Network
Choosing the right window size for your network requires careful consideration of network
conditions, traffic patterns, and hardware capabilities.
TCP Windowing Best Practices
There are several best practices for optimizing TCP windowing, including fine-tuning TCP
windowing parameters and troubleshooting TCP windowing errors.
Fine-tuning TCP Windowing Parameters
Fine-tuning TCP windowing parameters requires a thorough understanding of network
conditions, traffic patterns, and hardware capabilities. Important parameters to consider
include buffer sizes and round-trip time (RTT).
Importance of Buffer Sizes and RTT in TCP Windowing
Buffer sizes and RTT can impact TCP windowing by affecting the amount of data that can be
transmitted and the speed at which data is transmitted.
Common TCP Windowing Issues and Solutions
Common TCP windowing issues include slow performance, packet loss, and network
congestion. Solutions to these issues may include adjusting window sizes, implementing
congestion control algorithms, and optimizing network hardware.
Troubleshooting TCP Windowing Errors
Troubleshooting TCP windowing errors requires a systematic approach that involves testing
network conditions, analyzing performance metrics, and identifying potential hardware or
software issues.
TCP congestion control is a method used by the TCP protocol to manage data flow over a
network and prevent congestion. TCP uses a congestion window and congestion policy that
avoids congestion. Previously, we assumed that only the receiver could dictate the sender’s
window size. We ignored another entity here, the network. If the network cannot deliver the
data as fast as it is created by the sender, it must tell the sender to slow down. In other words,
in addition to the receiver, the network is a second entity that determines the size of the
sender’s window.
Slow Start Phase
Exponential Increment: In this phase, the congestion window (cwnd) starts at one segment and
doubles after every round-trip time (RTT).
Example: If the initial congestion window size is 1 segment, and the first segment is
successfully acknowledged, the congestion window size becomes 2 segments. If the next
transmission is also acknowledged, the congestion window size doubles to 4 segments. This
exponential growth continues as long as all segments are successfully acknowledged.
Initially cwnd = 1
After 1 RTT, cwnd = 2^(1) = 2
2 RTT, cwnd = 2^(2) = 4
3 RTT, cwnd = 2^(3) = 8
Congestion Avoidance Phase
Additive Increment: This phase starts after cwnd reaches the threshold value, denoted
ssthresh. The size of cwnd (the congestion window) now increases additively: after each
RTT, cwnd = cwnd + 1.
For example: if the congestion window size is 20 segments and all 20 segments are
successfully acknowledged within an RTT, the congestion window size would be increased
to 21 segments in the next RTT. If all 21 segments are again successfully acknowledged, the
congestion window size will be increased to 22 segments, and so on.
Initially cwnd = i
After 1 RTT, cwnd = i+1
2 RTT, cwnd = i+2
3 RTT, cwnd = i+3
Congestion Detection Phase
Multiplicative Decrement: If congestion occurs, the congestion window size is decreased.
The only way a sender can guess that congestion has happened is the need to retransmit a
segment. Retransmission is needed to recover a missing packet that is assumed to have been
dropped by a router due to congestion. Retransmission can occur in one of two cases: when
the RTO timer times out or when three duplicate ACKs are received.
Case 1: Retransmission due to timeout – In this case, the probability of congestion is high.
The sender sets ssthresh to half of the current window size, resets cwnd to one segment, and
starts over in the slow start phase.
Case 2: Retransmission due to three duplicate ACKs – In this case, the probability of
congestion is lower, since some segments are still getting through. The sender sets ssthresh
to half of the current window size, sets cwnd to ssthresh, and continues in the congestion
avoidance phase (fast recovery).
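The following toy trace (illustrative values only) puts the three phases together: exponential slow start up to ssthresh, additive increase afterwards, and a multiplicative decrease on timeout.

```python
def cwnd_trace(ssthresh: int = 8, timeout_at_rtt: int = 10, rtts: int = 14) -> None:
    cwnd = 1
    for rtt in range(1, rtts + 1):
        if rtt == timeout_at_rtt:              # congestion detected via timeout
            ssthresh = max(cwnd // 2, 1)       # halve the threshold
            cwnd = 1                           # restart slow start
            print(f"RTT {rtt}: timeout! ssthresh={ssthresh}, cwnd={cwnd}")
            continue
        if cwnd < ssthresh:
            cwnd = min(cwnd * 2, ssthresh)     # slow start: exponential growth
        else:
            cwnd += 1                          # congestion avoidance: additive
        print(f"RTT {rtt}: cwnd={cwnd}")

cwnd_trace()
```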
Quality of service-
Quality-of-service (QoS) refers to traffic control mechanisms that seek to differentiate
performance based on application or network-operator requirements or provide predictable or
guaranteed performance to applications, sessions, or traffic aggregates. QoS is basically
characterized in terms of packet delay and losses of various kinds.
QoS Specification
Delay
Delay Variation(Jitter)
Throughput
Error Rate
Types of Quality of Service
Stateless Solutions – Routers maintain no fine-grained state about traffic. One positive
factor is that this is scalable and robust, but the services are weak, since there is no
guarantee about the kind of delay or performance a particular application will encounter.
Stateful Solutions – Routers maintain per-flow state, since flow awareness is what enables
powerful services such as guaranteed delay, high resource utilization, and protection. These
solutions, however, are much less scalable and robust.
QoS Parameters
Packet loss: This occurs when network connections get congested, and routers
and switches begin losing packets.
Jitter: This is the result of network congestion, time drift, and routing changes. Too
much jitter can reduce the quality of voice and video communication.
Latency: This is how long it takes a packet to travel from its source to its destination. The
latency should be as near to zero as possible.
Bandwidth: This is a network communications link's capacity to transmit the maximum
amount of data from one point to another in a given amount of time.
Mean opinion score: This is a metric for rating voice quality that uses a five-point scale,
with five representing the highest quality.
How does QoS Work?
Quality of Service (QoS) ensures the performance of critical applications within limited network
capacity.
Packet Marking: QoS marks packets to identify their service types. For example, it
distinguishes between voice, video, and data traffic.
Virtual Queues: Routers create separate virtual queues for each application based on
priority. Critical apps get reserved bandwidth.
Handling Allocation: QoS assigns the order in which packets are processed, ensuring
appropriate bandwidth for each application
Benefits of QoS
Improved Performance for Critical Applications
Enhanced User Experience
Efficient Bandwidth Utilization
Increased Network Reliability
Compliance with Service Level Agreements (SLAs)
Reduced Network Costs
Improved Security
Better Scalability
Why is QoS Important?
Video and audio conferencing require a bounded delay and loss rate.
Video and audio streaming requires a bounded packet loss rate; it may not be so sensitive
to delay.
Time-critical applications (real-time control) in which bounded delay is considered to be
an important factor.
Valuable applications should provide better services than less valuable applications.
Implementing QoS
Planning: The organization should develop an awareness of each department’s service
needs and requirements, select an appropriate model, and build stakeholder support.
Design: The organization should then keep track of all key software and hardware
changes and modify the chosen QoS model to the characteristics of its network
infrastructure.
Testing: The organization should test QoS settings and policies in a secure, controlled
testing environment where faults can be identified.
Deployment: Policies should be implemented in phases. An organization can choose to
deploy rules by network segment or by QoS function (what each policy performs).
Monitoring and analyzing: Policies should be modified to increase performance based
on performance data.
Models to Implement QoS
1. Integrated Services(IntServ)
An architecture for providing QoS guarantees in IP networks for individual application
sessions.
Relies on resource reservation, and routers need to maintain state information of allocated
resources and respond to new call setup requests.
Network decides whether to admit or deny a new call setup request.
2. IntServ QoS Components
Resource reservation: call setup signaling, traffic and QoS declaration, per-element
admission control.
QoS-sensitive scheduling, e.g., the WFQ (weighted fair queuing) queue discipline.
QoS-sensitive routing algorithm (QOSPF).
QoS-sensitive packet discard strategy.
3. RSVP-Internet Signaling
It creates and maintains a distributed reservation state. Reservations are initiated by the
receiver, which lets the protocol scale for multicast. The state is soft: it must be refreshed
periodically, otherwise the reservation times out. The latest paths are discovered through
PATH messages (forward direction) and used by RESV messages (reverse direction).
4. Call Admission
A session must first declare its QoS requirements and characterize the traffic it will send
through the network.
R-specification: defines the QoS being requested, i.e., what kind of bound we want on the
delay, what kind of packet loss is acceptable, etc.
T-specification: defines the traffic characteristics, like burstiness in the traffic.
A signaling protocol is needed to carry the R-spec and T-spec to the routers where
reservation is required.
Routers will admit calls based on their R-spec, T-spec and based on the current resource
allocated at the routers to other calls.
5. Diff-Serv
Differentiated Services is a reduced-state solution: each flow does not require its own state.
Routers maintain state only for coarser-grained, aggregated flows rather than for individual
end-to-end flows, trying to achieve the best of both worlds. It is intended to address the
following difficulties with IntServ and RSVP:
Flexible Service Models: IntServ has only two classes; DiffServ aims to provide more
qualitative service classes with 'relative' service distinctions.
Simpler signaling: Many applications and users may only want to specify a more
qualitative notion of service.
QoS Tools
Traffic Classification and Marking
Traffic Shaping and Policing
Queue Management and Scheduling
Resource Reservation
Congestion Management
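As one concrete illustration of traffic shaping from the list above, here is a minimal token-bucket sketch: a packet may leave only while enough tokens are available, which caps the average rate. The rate and burst parameters are arbitrary.

```python
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate                    # token refill rate (bytes/second)
        self.capacity = capacity            # maximum burst size (bytes)
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, packet_size: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_size:
            self.tokens -= packet_size      # spend tokens: packet conforms
            return True
        return False                        # out of profile: delay or drop

bucket = TokenBucket(rate=1_000_000, capacity=15_000)   # ~1 MB/s, 15 KB bursts
print(bucket.allow(1500))   # True: within the burst allowance
```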
TCP Header-
The Transmission Control Protocol is the most common transport layer protocol. It
works together with IP and provides a reliable transport service between processes using the
network layer service provided by the IP protocol.
The various services provided by the TCP to the application layer are as follows:
1. Process-to-Process Communication –
TCP provides process-to-process communication, i.e., the transfer of data between individual
processes executing on end systems. This is done using port numbers or port addresses. Port
numbers are 16 bits long and help identify which process is sending or receiving data on a
host.
2. Stream oriented –
This means that data is sent and received as a stream of bytes (unlike UDP or IP, which
divide the data into datagrams or packets). However, the network layer, which provides
service to TCP, sends packets of information, not streams of bytes. Hence, TCP groups a number of bytes
together into a segment and adds a header to each of these segments and then delivers these
segments to the network layer. At the network layer, each of these segments is encapsulated in an
IP packet for transmission. The TCP header has information that is required for control purposes
which will be discussed along with the segment structure.
3. Full-duplex service –
This means that the communication can take place in both directions at the same time.
4. Connection-oriented service –
Unlike UDP, TCP provides a connection-oriented service. It defines 3 different phases:
• Connection establishment
• Data transfer
• Connection termination
5. Reliability –
TCP is reliable, as it uses checksums for error detection and attempts to recover lost or
corrupted packets by retransmission, an acknowledgement policy, and timers. It uses features
like byte numbering, sequence numbers, and acknowledgement numbers to ensure reliability. It
also uses congestion control mechanisms.
6. Multiplexing –
TCP does multiplexing and de-multiplexing at the sender and receiver ends respectively as a
number of logical connections can be established between port numbers over a physical
connection.
In this example we see that A sends acknowledgement number 1001, which means that it has
received data bytes till byte number 1000 and expects to receive 1001 next, hence B next sends
data bytes starting from 1001. Similarly, since B has received data bytes till byte number 13001
after the first data transfer from A to B, therefore B sends acknowledgement number 13002, the
byte number that it expects to receive from A next.
TCP Segment structure –
A TCP segment consists of data bytes to be sent and a header that is added to the data by TCP as
shown:
The header of a TCP segment can range from 20 to 60 bytes, with up to 40 bytes for options.
If there are no options, the header is 20 bytes; otherwise it can be at most 60 bytes.
Header fields:
• Sequence Number –
A 32-bit field that holds the sequence number, i.e., the byte number of the first byte that is
sent in that particular segment. It is used to reassemble the message at the receiving end when
segments are received out of order.
• Acknowledgement Number –
A 32-bit field that holds the acknowledgement number, i.e., the byte number that the receiver
expects to receive next. It is an acknowledgement for the previous bytes being received
successfully.
• Header Length (HLEN) –
This is a 4-bit field that indicates the length of the TCP header as a number of 4-byte words.
If the header is 20 bytes (the minimum length of a TCP header), this field holds 5 (because
5 x 4 = 20); for the maximum length of 60 bytes, it holds 15 (because 15 x 4 = 60). Hence,
the value of this field is always between 5 and 15.
• Control flags –
These are 6 1-bit control bits that control connection establishment, connection termination,
connection abortion, flow control, mode of transfer etc. Their function is:
o URG: Urgent pointer is valid
o ACK: Acknowledgement number is valid (used in case of cumulative acknowledgement)
o PSH: Request for push
o RST: Reset the connection
o SYN: Synchronize sequence numbers
o FIN: Terminate the connection
• Window size –
This field tells the window size of the sending TCP in bytes.
• Checksum –
This field holds the checksum for error control. It is mandatory in TCP as opposed to UDP.
• Urgent pointer –
This field (valid only if the URG control flag is set) points to urgent data that must reach
the receiving process at the earliest. The value of this field is added to the sequence number
to get the byte number of the last urgent byte.
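A short sketch that unpacks the fixed 20-byte part of a TCP header according to the field layout above, using Python's struct module; the sample segment is hand-made for illustration.

```python
import struct

def parse_tcp_header(raw: bytes) -> dict:
    (src_port, dst_port, seq, ack,
     offset_flags, window, checksum, urg_ptr) = struct.unpack("!HHIIHHHH", raw[:20])
    return {
        "src_port": src_port,
        "dst_port": dst_port,
        "seq": seq,
        "ack": ack,
        "hlen_bytes": (offset_flags >> 12) * 4,   # HLEN counts 4-byte words
        "flags": offset_flags & 0x3F,             # URG/ACK/PSH/RST/SYN/FIN bits
        "window": window,
        "checksum": checksum,
        "urgent_pointer": urg_ptr,
    }

# A hand-made SYN segment: HLEN=5 words (20 bytes), flags=0x02 (SYN)
sample = struct.pack("!HHIIHHHH", 12345, 80, 521, 0, (5 << 12) | 0x02, 14600, 0, 0)
print(parse_tcp_header(sample))
```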
TCP Connection –
TCP is connection-oriented. A TCP connection is established by a 3-way handshake.
UDP Header-
User Datagram Protocol (UDP) is a Transport Layer protocol. UDP is a part of the Internet
Protocol suite, referred to as UDP/IP suite. Unlike TCP, it is an unreliable and
connectionless protocol. So, there is no need to establish a connection before data
transfer. UDP helps establish low-latency and loss-tolerating connections over the network
and enables process-to-process communication.
What is User Datagram Protocol?
User Datagram Protocol (UDP) is one of the core protocols of the Internet Protocol (IP) suite.
It is a communication protocol used across the internet for time-sensitive transmissions such
as video playback or DNS lookups. Unlike Transmission Control Protocol (TCP), UDP is
connectionless and does not guarantee delivery, order, or error checking, making it a
lightweight and efficient option for certain types of data transmission.
UDP Header
UDP header is an 8-byte fixed and simple header, while for TCP it may vary from 20 bytes
to 60 bytes. The first 8 Bytes contain all necessary header information and the remaining part
consists of data. UDP port number fields are each 16 bits long, therefore the range for port
numbers is defined from 0 to 65535; port number 0 is reserved. Port numbers help to
distinguish different user requests or processes.
Figure – UDP Header
Source Port: Source Port is a 2 Byte long field used to identify the port number of
the source.
Destination Port: It is a 2 Byte long field, used to identify the port of the destined
packet.
Length: This is the length of the UDP datagram, including the header and the data. It
is a 16-bit field.
Checksum: Checksum is 2 Bytes long field. It is the 16-bit one’s complement of the
one’s complement sum of the UDP header, the pseudo-header of information from the
IP header, and the data, padded with zero octets at the end (if necessary) to make a
multiple of two octets.
Notes – Unlike TCP, the checksum calculation is not mandatory in UDP. No error control or
flow control is provided by UDP; hence, UDP depends on IP and ICMP for error reporting.
UDP does, however, provide port numbers, so that it can differentiate between users'
requests.
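A small sketch of the 8-byte UDP header built and decoded with Python's struct module; the field order matches the description above, and the sample values are arbitrary (a checksum of 0 means "not computed" in UDP over IPv4).

```python
import struct

src_port, dst_port, payload = 50000, 53, b"example payload"
length = 8 + len(payload)                      # header plus data, in bytes
header = struct.pack("!HHHH", src_port, dst_port, length, 0)  # checksum 0 = unused

print(struct.unpack("!HHHH", header))          # -> (50000, 53, 23, 0)
```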
Applications of UDP
Used for simple request-response communication when the amount of data is small and
there is less concern about flow and error control.
It is a suitable protocol for multicasting, as UDP is connectionless and keeps no per-receiver state.
UDP is used for some routing update protocols like RIP(Routing Information
Protocol).
Normally used for real-time applications which cannot tolerate uneven delays
between sections of a received message.
VoIP (Voice over Internet Protocol) services, such as Skype and WhatsApp, use UDP
for real-time voice communication. The delay in voice communication can be
noticeable if packets are delayed due to congestion control, so UDP is used to ensure
fast and efficient data transmission.
DNS (Domain Name System) also uses UDP for its query/response messages. DNS
queries are typically small and require a quick response time, making UDP a suitable
protocol for this application.
DHCP (Dynamic Host Configuration Protocol) uses UDP to dynamically assign IP
addresses to devices on a network. DHCP messages are typically small, and the delay
caused by packet loss or retransmission is generally not critical for this application.
Following implementations uses UDP as a transport layer protocol:
o NTP (Network Time Protocol)
o DNS (Domain Name Service)
o BOOTP, DHCP.
o NNP (Network News Protocol)
o Quote of the day protocol
o TFTP, RTSP, RIP.
The application layer can do some of the tasks through UDP-
o Trace Route
o Record Route
o Timestamp
On the sending side, UDP takes a message from the application, attaches its header, and
hands the datagram to the network layer; with no connection setup or state to maintain, it
works fast.
Advantages of UDP
Speed: UDP is faster than TCP because it does not have the overhead of establishing
a connection and ensuring reliable data delivery.
Lower latency: Since there is no connection establishment, there is lower latency and
faster response time.
Simplicity: UDP has a simpler protocol design than TCP, making it easier to
implement and manage.
Broadcast support: UDP supports broadcasting to multiple recipients, making it
useful for applications such as video streaming and online gaming.
Smaller header size: UDP's 8-byte header is smaller than TCP's 20 to 60 byte header, which
can reduce network congestion and improve overall network performance.
User Datagram Protocol (UDP) is more efficient in terms of both latency and
bandwidth.
Disadvantages of UDP
No reliability: UDP does not guarantee delivery of packets or order of delivery,
which can lead to missing or duplicate data.
No congestion control: UDP does not have congestion control, which means that it
can send packets at a rate that can cause network congestion.
Vulnerable to attacks: UDP is vulnerable to denial-of-service attacks, where an
attacker can flood a network with UDP packets, overwhelming the network and
causing it to crash.
Limited use cases: UDP is not suitable for applications that require reliable data
delivery, such as email or file transfers, and is better suited for applications that can
tolerate some data loss, such as video streaming or online gaming.
How is UDP used in DDoS attacks?
A UDP flood attack is a type of Distributed Denial of Service (DDoS) attack where an
attacker sends a large number of User Datagram Protocol (UDP) packets to a target port.
UDP Protocol: Unlike TCP, UDP is connectionless and doesn't require a handshake
before data transfer. When a UDP packet arrives at a server, it checks the specified
port for listening applications. If no app is found, the server sends
an ICMP "destination unreachable" packet to the supposed sender (usually a
random bystander due to spoofed IP addresses).
Attack Process:
o The attacker sends UDP packets with spoofed IP sender addresses to random
ports on the target system.
o The server checks each incoming packet’s port for a listening application
(usually not found due to random port selection).
o The server sends ICMP “destination unreachable” packets to the spoofed
sender (random bystanders).
o The attacker floods the victim with UDP data packets, overwhelming its
resources.
Mitigation: To protect against UDP flood attacks, monitoring network traffic for
sudden spikes and implementing security measures are crucial. Organizations often
use specialized tools and services to detect and mitigate such attacks effectively.
UDP Pseudo Header
The purpose of using a pseudo-header is to verify that the UDP packet has reached its
correct destination. The correct destination consists of a specific machine and a specific
protocol port number within that machine.
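A sketch of the checksum computation over the pseudo-header (source IP, destination IP, a zero byte, protocol 17, and UDP length) followed by the UDP header and data, as described above; the addresses are illustrative.

```python
import socket
import struct

def ones_complement_sum16(data: bytes) -> int:
    if len(data) % 2:                       # pad with a zero octet if needed
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # end-around carry
    return total

def udp_checksum(src_ip: str, dst_ip: str, udp_segment: bytes) -> int:
    pseudo = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip)
              + struct.pack("!BBH", 0, 17, len(udp_segment)))  # zero, proto, length
    return ~ones_complement_sum16(pseudo + udp_segment) & 0xFFFF

# UDP header with checksum field set to 0, plus a 4-byte payload
segment = struct.pack("!HHHH", 50000, 53, 12, 0) + b"ping"
print(hex(udp_checksum("192.0.2.1", "192.0.2.2", segment)))
```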
Basis      | TCP                                                                 | UDP
Definition | It is a communications protocol, using which data is transmitted between systems over the network. The data is transmitted in the form of packets. It includes error checking, guarantees delivery, and preserves the order of the data packets. | It is the same as the TCP protocol, except that it does not guarantee error checking and data recovery. If you use this protocol, the data will be sent continuously, irrespective of issues at the receiving end.