Unit IV (CN)
TRANSPORT LAYER
Transport layer:
o The Transport Layer is the fourth layer in the OSI model and sits directly above
the network (internet) layer in the TCP/IP model. It is an end-to-end layer used to
deliver messages to a host: it is termed an end-to-end layer because it provides a
logical point-to-point connection between the source host and destination host,
rather than hop-to-hop delivery, so that services are delivered reliably. The unit of
data encapsulation in the Transport Layer is a segment.
o Working of Transport Layer:
o The transport layer takes services from the Network layer and provides services to
the Application layer.
o At the sender’s side: The transport layer receives data (a message) from the Application
layer, performs segmentation by dividing the message into segments, adds the source
and destination port numbers to the header of each segment, and passes the segments
to the Network layer.
o At the receiver’s side: The transport layer receives data from the Network layer,
reassembles the segmented data, reads its header, identifies the port number, and
forwards the message to the appropriate port in the Application layer.
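The sender-side segmentation and receiver-side reassembly described above can be sketched in Python; the 8-byte segment size and the port numbers below are arbitrary illustrative values, not real protocol constants:

```python
import struct

MSS = 8  # hypothetical maximum segment size, kept tiny for demonstration

def segment(message: bytes, src_port: int, dst_port: int):
    """Divide a message into segments, each prefixed with a 4-byte
    header carrying the 16-bit source and destination port numbers."""
    segments = []
    for i in range(0, len(message), MSS):
        header = struct.pack("!HH", src_port, dst_port)
        segments.append(header + message[i:i + MSS])
    return segments

def reassemble(segments):
    """Receiver side: strip each 4-byte header and rejoin the payloads."""
    return b"".join(seg[4:] for seg in segments)

segs = segment(b"hello transport layer", 5000, 80)
# Reassembling the segments yields the original message.
```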
◈ Congestion Control:
◈ Congestion is a situation in which too many sources on a network attempt to send
data at once and the router buffers start overflowing, causing packets to be lost. The
resulting retransmissions from the sources then increase the congestion further. The
Transport layer provides congestion control in two ways: open-loop congestion
control to prevent congestion, and closed-loop congestion control to remove
congestion from the network once it has occurred. TCP uses AIMD (additive
increase, multiplicative decrease) for congestion control; traffic-shaping techniques
such as the leaky bucket are also used.
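The AIMD behaviour can be shown with a toy simulation (the round count and loss pattern below are invented for illustration; real TCP also includes slow start, timeouts, and other mechanisms):

```python
def aimd(rounds, loss_rounds, cwnd=1.0):
    """Grow the congestion window by one segment per round (additive
    increase); halve it when a loss is detected (multiplicative decrease)."""
    history = []
    for r in range(rounds):
        if r in loss_rounds:
            cwnd = max(1.0, cwnd / 2)  # multiplicative decrease on loss
        else:
            cwnd += 1.0                # additive increase
        history.append(cwnd)
    return history

h = aimd(rounds=10, loss_rounds={5})
# The window climbs to 6, halves to 3 at the loss, then climbs again.
```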
Flow control:
◈ The transport layer provides a flow-control mechanism between the sending and
receiving end hosts. TCP prevents data loss due to a fast sender and a slow receiver
by imposing flow control: it uses the sliding window protocol, in which the receiver
sends a window advertisement back to the sender informing it of the amount of
data it can receive.
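A minimal model of the receiver-advertised window (the buffer sizes and byte counts here are illustrative; real TCP carries the window in every segment header):

```python
class Receiver:
    """Tracks buffer occupancy and advertises the space still free."""
    def __init__(self, buffer_size):
        self.buffer_size = buffer_size
        self.buffered = 0

    def advertised_window(self):
        return self.buffer_size - self.buffered

    def receive(self, nbytes):
        assert nbytes <= self.advertised_window()
        self.buffered += nbytes

    def consume(self, nbytes):
        # The application reads data, freeing buffer space.
        self.buffered -= nbytes

rcv = Receiver(buffer_size=1000)
send = min(600, rcv.advertised_window())  # sender never exceeds the window
rcv.receive(send)                         # window shrinks to 400
rcv.consume(500)                          # window reopens to 900
```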
UDP:
o UDP stands for User Datagram Protocol.
o UDP is a simple protocol and it provides nonsequenced transport functionality.
o UDP is a connectionless protocol.
o This type of protocol is used when reliability and security are less important than speed
and size.
o UDP is an end-to-end transport level protocol that adds transport-level addresses,
checksum error control, and length information to the data from the upper layer.
o The packet produced by the UDP protocol is known as a user datagram.
o Source port address: It defines the address of the application process that has
delivered the message. It is a 16-bit field.
o Destination port address: It defines the address of the application process that
will receive the message. It is a 16-bit field.
o Total length: It defines the total length of the user datagram in bytes. It is a 16-
bit field.
o Checksum: The checksum is a 16-bit field which is used in error detection.
o UDP provides basic functions needed for the end-to-end delivery of a transmission.
o It does not provide any sequencing or reordering functions and does not specify the
damaged packet when reporting an error.
o UDP can discover that an error has occurred, but it does not specify which packet has
been lost as it does not contain an ID or sequencing number of a particular data
segment.
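The four 16-bit header fields listed above can be packed with Python's struct module; this sketch leaves the checksum at 0 (which in IPv4 means no checksum was computed) rather than implementing the pseudo-header checksum:

```python
import struct

def build_udp_datagram(src_port, dst_port, payload, checksum=0):
    """Prepend the 8-byte UDP header: source port, destination port,
    total length (header + data), and checksum, each 16 bits wide."""
    total_length = 8 + len(payload)
    header = struct.pack("!HHHH", src_port, dst_port, total_length, checksum)
    return header + payload

dgram = build_udp_datagram(5000, 53, b"query")
src, dst, length, csum = struct.unpack("!HHHH", dgram[:8])
# length is 13: the 8-byte header plus the 5-byte payload.
```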
TCP:
o Reliability: TCP assigns a sequence number to each byte transmitted and expects a
positive acknowledgement from the receiving TCP. If the ACK is not received within a
timeout interval, the data is retransmitted to the destination. The receiving TCP uses
the sequence numbers to reassemble segments that arrive out of order and to
eliminate duplicate segments.
o Flow Control: The receiving TCP sends an acknowledgement back to the sender
indicating the number of bytes it can receive without overflowing its internal buffer.
The number of bytes is sent in the ACK in the form of the highest sequence number that
it can receive without any problem. This mechanism is also referred to as a window
mechanism.
o Source port address: It is used to define the address of the application program
in a source computer. It is a 16-bit field.
o Destination port address: It is used to define the address of the application
program in a destination computer. It is a 16-bit field.
o Sequence number: A stream of data is divided into two or more TCP segments.
The 32-bit sequence number field represents the position of the data in an
original data stream.
o Acknowledgement number: This 32-bit field acknowledges data received from
the other communicating device. If the ACK bit is set to 1, this field specifies the
sequence number that the receiver is expecting to receive next.
o Header Length (HLEN): It specifies the size of the TCP header in 32-bit words.
The minimum size of the header is 5 words, and the maximum size of the header
is 15 words. Therefore, the maximum size of the TCP header is 60 bytes, and the
minimum size of the TCP header is 20 bytes.
o Reserved: It is a six-bit field which is reserved for future use.
o Control bits: Each bit of a control field functions individually and independently.
A control bit defines the use of a segment or serves as a validity check for other
fields.
o RST: The reset bit is used to reset the TCP connection when confusion arises in
the sequence numbers.
o SYN: The SYN field is used to synchronize the sequence numbers in three types of
segments: connection request, connection confirmation (with the ACK bit set), and
confirmation acknowledgement.
o FIN: The FIN field is used to inform the receiving TCP module that the sender has
finished sending data. It is used in connection termination in three types of segments:
termination request, termination confirmation, and acknowledgement of termination
confirmation.
o Window Size: The window is a 16-bit field that defines the size of the window.
o Checksum: The checksum is a 16-bit field used in error detection.
o Urgent pointer: If the URG flag is set to 1, this 16-bit field is an offset from the
sequence number pointing to the last urgent data byte.
o Options and padding: It defines the optional fields that convey the additional
information to the receiver.
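A sketch of unpacking the fixed 20-byte portion of a TCP header into the fields described above (the sample segment bytes below are fabricated for illustration):

```python
import struct

def parse_tcp_header(data):
    """Unpack the fixed TCP header fields; layout follows RFC 793."""
    src, dst, seq, ack, off_flags, window, checksum, urgent = \
        struct.unpack("!HHIIHHHH", data[:20])
    return {
        "src": src, "dst": dst, "seq": seq, "ack": ack,
        "hlen_bytes": (off_flags >> 12) * 4,  # HLEN counts 32-bit words
        "fin": bool(off_flags & 0x01),        # control bits: FIN=0x01,
        "syn": bool(off_flags & 0x02),        # SYN=0x02, RST=0x04,
        "ackbit": bool(off_flags & 0x10),     # PSH=0x08, ACK=0x10, URG=0x20
        "window": window,
    }

# A fabricated SYN segment: header length 5 words (20 bytes), SYN bit set.
raw = struct.pack("!HHIIHHHH", 4000, 80, 100, 0, (5 << 12) | 0x02, 65535, 0, 0)
parsed = parse_tcp_header(raw)
```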
Features of SCTP:
Packets
o In TCP, a segment carries data and control information. Data is carried as a collection of
bytes; control information is defined by six control flags in the header. The design of SCTP
is totally different: data is carried as data chunks; control information is carried as control
chunks.
Flow Control
o Like TCP, SCTP implements flow control to avoid overwhelming the receiver.
Error Control
o Like TCP, SCTP implements error control to provide reliability. TSN numbers
and acknowledgement numbers are used for error control.
Congestion Control
o Like TCP, SCTP implements congestion control to determine how many data chunks can
be injected into the network.
QoS (Quality of Service)
o Quality of Service (QoS) is an important concept, particularly when working with multimedia
applications. Multimedia applications, such as video conferencing, streaming services, and VoIP
(Voice over IP), require certain bandwidth, latency, jitter, and packet loss parameters. QoS methods
help ensure that these requirements are satisfied, allowing for seamless and reliable communication.
o What is Quality of Service?
o Quality of Service (QoS) refers to traffic-control mechanisms that seek to differentiate performance
based on application or network-operator requirements, or to provide predictable or guaranteed
performance to applications, sessions, or traffic aggregates. QoS is basically characterized in terms of
packet delay and losses of various kinds.
QoS Specification
Delay
Delay Variation (Jitter)
Throughput
Error Rate
Types of Quality of Service
Stateless Solutions – Routers maintain no fine-grained per-flow state about traffic. This makes the approach
scalable and robust, but the service is weak: there is no guarantee about the kind of delay or performance a
particular application will encounter.
Stateful Solutions – Routers maintain a per-flow state. Because flow information is available, powerful services
such as guaranteed service and high resource utilization can be provided, along with protection, but the
approach is much less scalable and robust.
QoS Parameters
Packet loss: This occurs when network connections get congested, and routers and switches begin losing
packets.
Jitter: This is the result of network congestion, time drift, and routing changes. Too much jitter can reduce the
quality of voice and video communication.
Latency: This is how long it takes a packet to travel from its source to its destination. The latency should be as
near to zero as possible.
Bandwidth: This is a network communications link’s capacity to transmit the maximum amount of data from
one point to another in a specific amount of time.
Mean opinion score: This is a metric for rating voice quality that uses a five-point scale, with five representing
the highest quality.
Benefits of QoS
o Improved Performance for Critical Applications
o Enhanced User Experience
o Efficient Bandwidth Utilization
o Increased Network Reliability
o Compliance with Service-Level Agreements (SLAs)
o Reduced Network Costs
o Improved Security
o Better Scalability
Leaky Bucket Algorithm
◈ The Leaky Bucket Algorithm controls the total amount and the rate of the traffic
sent to the network.
◈ Step 1 − Imagine a bucket with a small hole at the bottom. The rate at which
water is poured into the bucket is not constant and can vary, but the water leaks
from the bucket at a constant rate.
◈ Step 2 − So, as long as water is present in the bucket, the rate at which the water
leaks does not depend on the rate at which the water is poured into the bucket.
◈ Step 3 − If the bucket is full, additional water entering the bucket spills over the
sides and is lost.
◈ Step 4 − The same concept is applied to packets in the network. Consider that data
is coming from the source at variable speeds. Suppose that a source sends data at 12
Mbps for 4 seconds. Then there is no data for 3 seconds. The source again transmits
data at a rate of 10 Mbps for 2 seconds. Thus, in a time span of 9 seconds, 68 Mb of
data has been transmitted.
◈ If a leaky bucket algorithm is used, the data flow would be a steady 8 Mbps for
those 9 seconds, so a constant flow is maintained.
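A per-second toy simulation of a leaky bucket, using an arrival pattern totalling 68 Mb (12 Mbps for 4 s, silence for 3 s, 10 Mbps for 2 s) and a constant 8 Mbps drain; an extra idle second is appended so the bucket can finish draining:

```python
def leaky_bucket(arrivals_mb, leak_rate_mb):
    """Buffer variable-rate arrivals and release them at a constant rate."""
    level = 0.0   # megabits currently held in the bucket
    output = []
    for arriving in arrivals_mb:          # one entry per second
        level += arriving
        sent = min(level, leak_rate_mb)   # leak at a constant rate
        level -= sent
        output.append(sent)
    return output

arrivals = [12, 12, 12, 12, 0, 0, 0, 10, 10, 0]  # trailing 0 drains residue
out = leaky_bucket(arrivals, leak_rate_mb=8)
# The output never exceeds 8 Mbps, and all 68 Mb eventually get through.
```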
◈ The leaky bucket algorithm enforces output patterns at the average rate, no matter
how busy the traffic is. So, to deal with the more traffic, we need a flexible algorithm
so that the data is not lost. One such approach is the token bucket algorithm.
◈ Let us understand this algorithm step-wise as given below −
• Step 1 − At regular intervals, tokens are added to the bucket.
• Step 2 − The bucket has a maximum capacity; if it is full, newly arriving
tokens are discarded.
• Step 3 − If a packet is ready, a token is removed from the bucket, and the
packet is sent.
• Step 4 − If there is no token in the bucket, the packet cannot be sent and
must wait.
Working of Token Bucket Algorithm
It allows bursty traffic at a regulated maximum rate. It allows idle hosts to accumulate credit for the
future in the form of tokens. The system removes one token for every cell of data sent. For each tick of
the clock, the system adds n tokens to the bucket. If n is 100 and the host is idle for 100 ticks, the bucket
collects 10,000 tokens. The host can then consume these tokens all at once in a burst of 10,000 cells, or
spread them out, for example 10 cells per tick for 1,000 ticks.
The token bucket can be easily implemented with a counter. The counter is initialized to zero. Each time
a token is added, the counter is incremented by 1. Each time a unit of data is sent, the counter is
decremented by 1. When the counter is zero, the host cannot send data.
[Figure: process depicting how the token bucket algorithm works]
Steps Involved in Token Bucket Algorithm
Step 1: Creation of Bucket: An imaginative bucket is assigned a fixed capacity, known as "rate limit". It
can hold up to a certain number of tokens.
Step 2: Refill the Bucket: The bucket is dynamic; it gets periodically filled with tokens. Tokens are
added to the bucket at a fixed rate.
Step 3: Incoming Requests: Upon receiving a request, we verify the presence of tokens in the bucket.
Step 4: Consume Tokens: If there are tokens in the bucket, we pick one token from it. This means the
request is allowed to proceed. The time of token consumption is also recorded.
Step 5: Empty Bucket: If the bucket is depleted, meaning there are no tokens remaining, the request is
denied. This precautionary measure prevents server or system overload, ensuring operation stays within
predefined limits.
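The steps above can be sketched with the counter-and-timestamp approach described earlier (the rate and capacity values are illustrative; timestamps are passed in explicitly to keep the example deterministic):

```python
class TokenBucket:
    """Tokens accrue at a fixed rate up to a maximum capacity;
    each allowed request consumes one token."""
    def __init__(self, rate, capacity):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # the "rate limit": max tokens held
        self.tokens = capacity    # start with a full bucket
        self.last = 0.0           # time of the last refill

    def allow(self, now):
        # Refill based on elapsed time, never exceeding capacity.
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1      # consume a token: request allowed
            return True
        return False              # empty bucket: request denied

tb = TokenBucket(rate=1, capacity=3)
burst = [tb.allow(0.0) for _ in range(4)]  # burst of 4 requests at t=0
# Only the 3 buffered tokens are available, so the 4th request is denied;
# one second later a new token has been refilled.
later = tb.allow(1.0)
```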
Advantage of Token Bucket over Leaky Bucket
If the bucket is full of tokens, the excess tokens are discarded, not the packets; in the leaky bucket
algorithm, packets are discarded.
Token bucket can send large bursts at a faster rate while leaky bucket always sends packets at
constant rate.
Token bucket ensures predictable traffic shaping as it allows for setting token arrival rate and
maximum token count. In leaky bucket, such control may not be present.
Premium Quality of Service (QoS) is provided by prioritizing different traffic types through distinct
token arrival rates. Such flexibility in prioritization is not provided by leaky bucket.
Token bucket is suitable for high-speed data transfer or streaming video content as it
allows transmission of large bursts of data. As leaky bucket operates at a constant rate, it can lead to
less efficient bandwidth utilization.
Token Bucket provides more granular control as administrators can adjust token arrival rate and
maximum token count based on network requirements. Leaky Bucket has limited granularity in
controlling traffic compared to Token Bucket.
Disadvantages of Token Bucket Algorithm
Token Bucket has the tendency to generate tokens at a fixed rate, even when the network traffic is not
present. This is leads of accumulation of unused tokens during times when there is no traffic, hence
leading to wastage.
Due to token accumulation, delays can introduced in the packet delivery. If the token bucket happens
to be empty, packets will have to wait for new tokens, leading to increased latency and potential
packet loss.
Token Bucket happens to be less flexible than leaky bucket when it comes to network traffic shaping.
The fixed token generation rate cannot be easily altered to meet changing network requirements,
unlike the adaptable nature of leaky bucket.
The implementation of a token bucket can be more complex, especially when different token
generation rates are used for different traffic types. Configuration and management might be more
difficult as a result.
Usage of large bursts of data may lead to inefficient use of bandwidth, and may cause congestion.
Leaky bucket algorithm, on the other hand helps prevent congestion by limiting the amount of data
sent at any given time, promoting more efficient bandwidth utilization.