
UNIT-IV

TRANSPORT LAYER

Transport layer:

o The Transport Layer is the fourth layer in the OSI model and, in the TCP/IP model, sits directly below the Application layer. It is an end-to-end layer used to deliver messages to a host. It is termed an end-to-end layer because it provides a point-to-point connection between the source host and destination host, rather than hop-to-hop, to deliver its services reliably. The unit of data encapsulation in the Transport Layer is a segment.
o Working of Transport Layer:
o The transport layer takes services from the Network layer and provides services to
the Application layer
o At the sender’s side: The transport layer receives data (message) from the Application
layer and then performs Segmentation, divides the actual message into segments,
adds source and destination’s port numbers into the header of the segment, and
transfers the message to the Network layer.
o At the receiver’s side: The transport layer receives data from the Network layer,
reassembles the segmented data, reads its header, identifies the port number, and
forwards the message to the appropriate port in the Application layer.
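The segmentation step at the sender can be sketched in a few lines of Python. This is only an illustration: the message, MSS value, and port numbers below are made up, and real stacks attach a full header rather than a bare port pair.

```python
# Hypothetical sketch of segmentation: split an application message into
# fixed-size pieces and tag each with source and destination port numbers,
# as the transport layer does before handing segments to the network layer.
def segment(message, mss, src_port, dst_port):
    return [((src_port, dst_port), message[i:i + mss])
            for i in range(0, len(message), mss)]

segs = segment(b"hello transport layer", mss=8, src_port=5000, dst_port=80)
assert len(segs) == 3
# The receiver's reassembly step is the reverse: concatenate the payloads.
assert b"".join(data for _, data in segs) == b"hello transport layer"
```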

o The transport layer is represented by two protocols: TCP and UDP.


o The IP protocol in the network layer delivers a datagram from a source host to
the destination host.
o Nowadays, operating systems support multiuser and multiprocessing environments; an executing program is called a process. When a host sends a message to another host, a source process is actually sending a message to a destination process. The transport layer protocols define connections to individual ports known as protocol ports.
o IP is a host-to-host protocol used to deliver a packet from the source host to the destination host, while transport layer protocols are port-to-port protocols that work on top of IP to carry the packet from the originating port to the destination port.
o Each port is identified by a positive integer address, which is 16 bits long.
Transport Layer: Process to Process Communication:

Process to process delivery:


◈ While the Data Link Layer requires the MAC address (a 48-bit address contained inside the Network Interface Card of every host machine) of the source and destination hosts to correctly deliver a frame, and the Network layer requires the IP address for appropriate routing of packets, in a similar way the Transport Layer requires a port number to correctly deliver the segments of data to the correct process amongst the multiple processes running on a particular host. A port number is a 16-bit address used to uniquely identify any client-server program.

End-to-end Connection between hosts:


◈ The transport layer is also responsible for creating the end-to-end connection between hosts, for which it mainly uses TCP and UDP. TCP is a reliable, connection-oriented protocol that uses a handshake to establish a robust connection between two end hosts. TCP ensures reliable delivery of messages and is used in many applications. UDP, on the other hand, is a stateless, unreliable protocol that provides best-effort delivery. It is suitable for applications that have little need for flow or error control and that send bulk data, such as video conferencing. It is often used in multicasting protocols.

Multiplexing and Demultiplexing:


◈ Multiplexing allows simultaneous use of different applications over a network that is
running on a host. The transport layer provides this mechanism which enables us to
send packet streams from various applications simultaneously over a network. The
transport layer accepts these packets from different processes differentiated by their
port numbers and passes them to the network layer after adding proper headers.
Similarly, demultiplexing is required at the receiver side to obtain the data coming from various processes. The transport layer receives the segments of data from the network layer and delivers them to the appropriate process running on the receiver's machine.
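Demultiplexing by port number can be pictured as a lookup table from destination port to the process bound to that port. The ports and handlers below are hypothetical, standing in for real application processes.

```python
# Hypothetical sketch: the transport layer's demultiplexing step as a lookup
# from destination port number to the handler (process) bound to that port.
handlers = {
    53: lambda data: f"DNS got {data!r}",
    80: lambda data: f"HTTP got {data!r}",
}

def demultiplex(dest_port, payload):
    """Deliver payload to the handler listening on dest_port, if any."""
    handler = handlers.get(dest_port)
    if handler is None:
        return None   # no process bound to this port: segment is discarded
    return handler(payload)

result = demultiplex(80, b"GET /")   # delivered to the port-80 handler
```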

◈ Congestion Control:
◈ Congestion is a situation in which too many sources on a network attempt to send data and the router buffers start overflowing, causing packets to be lost. The resulting retransmissions from the sources increase the congestion further. In this situation, the Transport layer provides congestion control in different ways: it uses open-loop congestion control to prevent congestion and closed-loop congestion control to remove congestion once it has occurred. TCP uses AIMD (additive increase, multiplicative decrease) for congestion control; the leaky bucket technique is a related traffic-shaping mechanism.
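A minimal sketch of AIMD, assuming the textbook-simplified parameters of one segment of growth per loss-free round trip and a halving of the window on loss:

```python
# Simplified AIMD sketch (assumed parameters): grow the congestion window
# additively while no loss is seen, halve it multiplicatively on loss.
def aimd_step(cwnd, loss_detected, increase=1.0):
    if loss_detected:
        return max(cwnd / 2, 1.0)   # multiplicative decrease, floor of 1 MSS
    return cwnd + increase          # additive increase

cwnd = 8.0
cwnd = aimd_step(cwnd, loss_detected=False)   # grows to 9.0
cwnd = aimd_step(cwnd, loss_detected=True)    # halves to 4.5
```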

Data integrity and Error correction:


◈ The transport layer checks for errors in the messages coming from the application layer by using error detection codes: it computes checksums to verify that the received data is not corrupted, and it uses ACK and NACK services to inform the sender whether or not the data has arrived, thus checking the integrity of the data.

Flow control:
◈ The transport layer provides a flow control mechanism between the adjacent layers of the TCP/IP model. TCP also prevents data loss due to a fast sender and a slow receiver by imposing flow control techniques. It uses the sliding window protocol, in which the receiver sends a window back to the sender informing it of the size of data it can receive.

Transport Layer protocols:

The transport layer is represented by two protocols: TCP and UDP.

◈ UDP (User Datagram Protocol):
o UDP stands for User Datagram Protocol.
o UDP is a simple protocol and it provides nonsequenced transport functionality.
o UDP is a connectionless protocol.
o This type of protocol is used when reliability and security are less important than speed
and size.
o UDP is an end-to-end transport level protocol that adds transport-level addresses,
checksum error control, and length information to the data from the upper layer.
o The packet produced by the UDP protocol is known as a user datagram.

User Datagram Format:

The user datagram has an 8-byte header consisting of four 16-bit fields, as shown below:


Where,

o Source port address: It defines the address of the application process that has delivered the message. It is a 16-bit field.
o Destination port address: It defines the address of the application process that will receive the message. It is a 16-bit field.
o Total length: It defines the total length of the user datagram (header plus data) in bytes. It is a 16-bit field.
o Checksum: The checksum is a 16-bit field used in error detection.
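The four-field header above can be built with Python's struct module. This sketch follows the field order of RFC 768 but leaves the checksum at zero (which, for UDP over IPv4, means no checksum was computed); the port numbers are made up.

```python
import struct

# Build a UDP header: source port, destination port, total length, checksum,
# each a 16-bit big-endian ("network order") field, per RFC 768.
def udp_header(src_port, dst_port, payload):
    length = 8 + len(payload)      # total length = 8-byte header + data
    return struct.pack("!HHHH", src_port, dst_port, length, 0)

hdr = udp_header(5000, 53, b"query")
assert len(hdr) == 8               # the UDP header is always 8 bytes
src, dst, length, checksum = struct.unpack("!HHHH", hdr)
```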

Disadvantages of UDP protocol

o UDP provides only the basic functions needed for end-to-end delivery of a transmission.
o It does not provide any sequencing or reordering functions and does not specify the
damaged packet when reporting an error.
o UDP can discover that an error has occurred, but it does not specify which packet has
been lost as it does not contain an ID or sequencing number of a particular data
segment.

TCP: Transmission Control Protocol


o TCP stands for Transmission Control Protocol.
o It provides full transport layer services to applications.
o It is a connection-oriented protocol, which means a connection is established between both ends of the transmission. To create the connection, TCP generates a virtual circuit between sender and receiver for the duration of the transmission.
Features Of TCP protocol:
o Stream data transfer: TCP transfers data as a contiguous stream of bytes. TCP groups the bytes into TCP segments and then passes them to the IP layer for transmission to the destination. TCP itself segments the data and forwards it to IP.

o Reliability: TCP assigns a sequence number to each byte transmitted and expects a positive acknowledgement from the receiving TCP. If an ACK is not received within a timeout interval, the data is retransmitted to the destination. The receiving TCP uses the sequence numbers to reassemble the segments if they arrive out of order and to eliminate duplicate segments.
o Flow Control: The receiving TCP sends an acknowledgement back to the sender indicating the number of bytes it can receive without overflowing its internal buffer. The number of bytes is sent in the ACK as the highest sequence number that it can receive without any problem. This mechanism is also referred to as a window mechanism.
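The window mechanism can be sketched as follows; the buffer size and byte counts are invented for illustration, and real TCP expresses the window as a sequence-number range rather than a simple count.

```python
# Hypothetical sketch of a receiver-advertised window: the receiver reports
# its free buffer space, and the sender keeps its unacknowledged bytes in
# flight within that limit.
BUFFER_SIZE = 4096

def advertised_window(buffered_bytes):
    """Bytes the receiver can still accept without overflowing its buffer."""
    return BUFFER_SIZE - buffered_bytes

def can_send(bytes_in_flight, segment_len, window):
    return bytes_in_flight + segment_len <= window

window = advertised_window(buffered_bytes=1024)            # 3072 bytes free
ok = can_send(bytes_in_flight=2000, segment_len=1000, window=window)
blocked = can_send(bytes_in_flight=3000, segment_len=500, window=window)
```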

o Multiplexing: Multiplexing is the process of accepting data from different applications and forwarding it to applications on different computers. At the receiving end, the data is forwarded to the correct application; this process is known as demultiplexing. TCP delivers the packet to the correct application by using logical channels known as ports.
o Logical Connections: The combination of sockets, sequence numbers, and window
sizes, is called a logical connection. Each connection is identified by the pair of sockets
used by sending and receiving processes.
o Full Duplex: TCP provides full-duplex service, i.e., data flows in both directions at the same time. To achieve this, each TCP endpoint must have sending and receiving buffers so that segments can flow in both directions. TCP is a connection-oriented protocol. Suppose process A wants to send and receive data from process B. The following steps occur:
o A connection is established between the two TCPs.
o Data is exchanged in both directions.
o The connection is terminated.

TCP Segment Format:


Where,

o Source port address: It is used to define the address of the application program
in a source computer. It is a 16-bit field.
o Destination port address: It is used to define the address of the application
program in a destination computer. It is a 16-bit field.
o Sequence number: A stream of data is divided into two or more TCP segments. The 32-bit sequence number field gives the position of the segment's data in the original data stream.
o Acknowledgement number: The 32-bit acknowledgement number field acknowledges data from the other communicating device. If the ACK flag is set to 1, this field specifies the sequence number that the receiver is expecting to receive next.
o Header Length (HLEN): It specifies the size of the TCP header in 32-bit words.
The minimum size of the header is 5 words, and the maximum size of the header
is 15 words. Therefore, the maximum size of the TCP header is 60 bytes, and the
minimum size of the TCP header is 20 bytes.
o Reserved: It is a six-bit field which is reserved for future use.
o Control bits: Each bit of a control field functions individually and independently.
A control bit defines the use of a segment or serves as a validity check for other
fields.

There are six flags in the control field:


o URG: The URG field indicates that the data in a segment is urgent.
o ACK: When ACK field is set, then it validates the acknowledgement number.
o PSH: The PSH (push) field asks the receiving TCP to deliver the data to the application as soon as possible rather than waiting for more data to buffer.

o RST: The reset bit is used to reset the TCP connection when any confusion occurs in the sequence numbers.
o SYN: The SYN field is used to synchronize the sequence numbers in three types of
segments: connection request, connection confirmation ( with the ACK bit set ), and
confirmation acknowledgement.
o FIN: The FIN field is used to inform the receiving TCP module that the sender has
finished sending data. It is used in connection termination in three types of segments:
termination request, termination confirmation, and acknowledgement of termination
confirmation.
o Window Size: The window is a 16-bit field that defines the size of the window.
o Checksum: The checksum is a 16-bit field used in error detection.
o Urgent pointer: If the URG flag is set to 1, then this 16-bit field is an offset from the sequence number indicating the last urgent data byte.
o Options and padding: It defines the optional fields that convey the additional
information to the receiver.
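The fixed 20-byte part of the header described above can be unpacked with Python's struct module; the example segment's field values are invented, and only the data-offset and six classic flag bits are decoded here.

```python
import struct

# Unpack the fixed 20-byte TCP header: two 16-bit ports, 32-bit sequence and
# acknowledgement numbers, a 16-bit offset/reserved/flags word, then window,
# checksum, and urgent pointer (all big-endian).
def parse_tcp_header(data):
    (src, dst, seq, ack, off_flags,
     window, checksum, urg_ptr) = struct.unpack("!HHIIHHHH", data[:20])
    hlen_words = off_flags >> 12        # HLEN: header length in 32-bit words
    flags = off_flags & 0x3F            # low six bits: URG ACK PSH RST SYN FIN
    return {"src": src, "dst": dst, "seq": seq, "ack": ack,
            "hlen_bytes": hlen_words * 4, "flags": flags, "window": window}

# A made-up SYN segment: HLEN = 5 words (20 bytes), SYN flag = 0x02.
raw = struct.pack("!HHIIHHHH", 40000, 80, 1000, 0, (5 << 12) | 0x02,
                  65535, 0, 0)
info = parse_tcp_header(raw)
assert info["hlen_bytes"] == 20
```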

SCTP Congestion Control:

o SCTP stands for Stream Control Transmission Protocol.


o It is a connection-oriented protocol that provides a full-duplex association, i.e., it can transmit multiple streams of data between two endpoints that have established a connection, in both directions at the same time. It is sometimes referred to as next-generation TCP, or TCPng. SCTP makes it easier to support telephone conversations on the Internet: a telephone conversation requires transmitting voice along with other data at the same time on both ends, and SCTP makes it easier to establish a reliable connection for this.
o SCTP is also intended to make it easier to establish connections over wireless networks and to manage the transmission of multimedia data. SCTP is a standard protocol (RFC 2960) developed by the Internet Engineering Task Force (IETF).

Features of SCTP:

o There are various features of SCTP, which are as follows −


o Transmission Sequence Number
o The unit of data in TCP is a byte. Data transfer in TCP is controlled by numbering bytes by
using a sequence number. On the other hand, the unit of data in SCTP is a DATA chunk
that may or may not have a one-to-one relationship with the message coming from the
process because of fragmentation.
o Stream Identifier
o In TCP, there is only one stream in each connection. In SCTP, there may be several streams in each association. Each stream in SCTP is identified by a stream identifier (SI). Each data chunk must carry the SI in its header so that, when it arrives at the destination, it can be properly placed in its stream. The SI is a 16-bit number starting from 0.
o Stream Sequence Number
o When a data chunk arrives at the destination SCTP, it is delivered to the appropriate stream
and in the proper order. This means that, in addition to an SI, SCTP defines each data
chunk in each stream with a stream sequence number (SSN).
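The three numbering schemes can be illustrated with a small hypothetical data-chunk type: the TSN is global to the association, while the SI selects a stream and the SSN orders chunks within that stream.

```python
from dataclasses import dataclass

# Hypothetical model of SCTP numbering: every DATA chunk carries a global
# TSN plus a stream identifier (SI) and a per-stream sequence number (SSN).
@dataclass
class DataChunk:
    tsn: int   # transmission sequence number, global per association
    si: int    # stream identifier, a 16-bit number starting from 0
    ssn: int   # stream sequence number within stream si

chunks = [DataChunk(tsn=1, si=0, ssn=0),
          DataChunk(tsn=2, si=1, ssn=0),
          DataChunk(tsn=3, si=0, ssn=1)]

# Demultiplex by stream: stream 0 receives its chunks in SSN order.
stream0 = sorted((c for c in chunks if c.si == 0), key=lambda c: c.ssn)
```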

Packets
o In TCP, a segment carries data and control information. Data is carried as a collection of
bytes; control information is defined by six control flags in the header. The design of SCTP
is totally different: data is carried as data chunks; control information is carried as control
chunks.

Flow Control
o Like TCP, SCTP implements flow control to avoid overwhelming the receiver.
o Error Control
o Like TCP, SCTP implements error control to provide reliability. TSN numbers
and acknowledgement numbers are used for error control.

Congestion Control
o Like TCP, SCTP implements congestion control to determine how many data chunks can
be injected into the network.

QoS (Quality of Service)

o Quality of Service (QoS) is an important concept, particularly when working with multimedia
applications. Multimedia applications, such as video conferencing, streaming services, and VoIP
(Voice over IP), require certain bandwidth, latency, jitter, and packet loss parameters. QoS methods
help ensure that these requirements are satisfied, allowing for seamless and reliable communication.
o What is Quality of Service?
o Quality-of-service (QoS) refers to traffic control mechanisms that seek to differentiate performance
based on application or network-operator requirements or provide predictable or guaranteed
performance to applications, sessions, or traffic aggregates. The basic phenomenon for QoS is in terms
of packet delay and losses of various kinds.
QoS Specification
 Delay
 Delay Variation(Jitter)
 Throughput
 Error Rate
Types of Quality of Service
 Stateless Solutions – Routers maintain no fine-grained state about traffic. The positive side is that this approach is scalable and robust; the weakness is that it offers only weak services, since there is no guarantee about the delay or performance a particular application will encounter.
 Stateful Solutions – Routers maintain a per-flow state. Because flow awareness matters for Quality of Service, this enables powerful services such as guaranteed delivery and high resource utilization and provides protection, but it is much less scalable and robust.
QoS Parameters
 Packet loss: This occurs when network connections get congested, and routers and switches begin losing
packets.
 Jitter: This is the result of network congestion, time drift, and routing changes. Too much jitter can reduce the
quality of voice and video communication.
 Latency: This is how long it takes a packet to travel from its source to its destination. The latency should be as
near to zero as possible.
 Bandwidth: This is a network communications link's capacity to transmit the maximum amount of data from one point to another in a specific amount of time.
 Mean opinion score: This is a metric for rating voice quality that uses a five-point scale, with five representing
the highest quality
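Jitter, for example, is often estimated as the variation between consecutive packet delays. The sketch below uses made-up delay samples and a simple mean-absolute-difference estimate; real protocols such as RTP use a smoothed variant of this idea.

```python
# Hypothetical one-way delay samples in milliseconds for successive packets.
delays_ms = [20.0, 22.0, 21.0, 30.0, 20.0]

# Jitter estimated as the mean absolute difference of consecutive delays.
diffs = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
jitter = sum(diffs) / len(diffs)   # (2 + 1 + 9 + 10) / 4 = 5.5 ms
```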
Benefits of QoS
o Improved Performance for Critical Applications
o Enhanced User Experience
o Efficient Bandwidth Utilization
o Increased Network Reliability
o Compliance with Service-Level Agreements (SLAs)
o Reduced Network Costs
o Improved Security
o Better Scalability

QoS improving techniques: Leaky Bucket:

◈ Leaky Bucket Algorithm:


◈ Let us see the working of the Leaky Bucket Algorithm −

◈ Leaky Bucket Algorithm mainly controls the total amount and the rate of the traffic
sent to the network.
◈ Step 1 − Imagine a bucket with a small hole at the bottom. The rate at which water is poured into the bucket is not constant and can vary, but the water leaks out of the bucket at a constant rate.
◈ Step 2 − So (as long as water is present in the bucket), the rate at which the water leaks does not depend on the rate at which water is poured into the bucket.
◈ Step 3 − If the bucket is full, additional water entering the bucket spills over the sides and is lost.
◈ Step 4 − The same concept is applied to packets in the network. Consider that data is coming from the source at variable speeds. Suppose a source sends data at 10 Mbps for 4 seconds (40 Mb), then sends no data for 3 seconds, and then transmits at 8 Mbps for 2 seconds (16 Mb). Thus, in a time span of 9 seconds, 56 Mb of data has been transmitted.
◈ If a leaky bucket algorithm is used, the data flows out at a constant 8 Mbps, so the same 56 Mb leaves the bucket over 7 seconds of transmission. Thus, a constant flow is maintained.
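The leaky bucket can be simulated with one-second ticks (an assumed granularity). The input pattern below is 10 Mbps for 4 seconds, idle for 3 seconds, then 8 Mbps for 2 seconds, and the bucket leaks at a constant 8 Mb per second; the totals are computed by the code itself.

```python
# Leaky bucket simulation: variable-rate arrivals, constant-rate departures.
arrivals = [10, 10, 10, 10, 0, 0, 0, 8, 8]   # Mb arriving in each second
LEAK_RATE = 8                                 # Mb leaked (sent) per second

bucket = 0
sent = 0
for mb_in in arrivals:
    bucket += mb_in                 # pour this second's traffic into the bucket
    out = min(bucket, LEAK_RATE)    # leak at most 8 Mb this second
    bucket -= out
    sent += out
while bucket > 0:                   # drain whatever is left after arrivals stop
    out = min(bucket, LEAK_RATE)
    bucket -= out
    sent += out

assert sent == 56                   # 40 Mb + 16 Mb leave at a constant rate
```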

Token bucket algorithm:

◈ The token bucket algorithm is one of the techniques for congestion control. When too many packets are present in the network, packet delay and packet loss occur, which degrades the performance of the system. This situation is called congestion.
◈ The network layer and transport layer share the responsibility for handling congestion. One of the most effective ways to control congestion is to reduce the load that the transport layer places on the network. To achieve this, the network and transport layers have to work together.

◈ The leaky bucket algorithm enforces output patterns at the average rate, no matter
how busy the traffic is. So, to deal with the more traffic, we need a flexible algorithm
so that the data is not lost. One such approach is the token bucket algorithm.
◈ Let us understand this algorithm step by step, as given below −
• Step 1 − At regular intervals, tokens are added to the bucket.
• Step 2 − The bucket has a maximum capacity.
• Step 3 − If a packet is ready, a token is removed from the bucket and the packet is sent.
• Step 4 − If there is no token in the bucket, the packet cannot be sent and must wait.

Working of Token Bucket Algorithm
It allows bursty traffic at a regulated maximum rate. It allows idle hosts to accumulate credit for the future in the form of tokens. The system removes one token for every cell of data sent. For each tick of the clock, the system adds n tokens to the bucket. If n is 100 and the host is idle for 100 ticks, the bucket collects 10,000 tokens. The host can now consume these tokens, for example at 10 cells per tick for 1,000 ticks, or in a much larger burst over fewer ticks.
A token bucket can be easily implemented with a counter. The counter is initialized to zero. Each time a token is added, the counter is incremented by 1. Each time a unit of data is sent, the counter is decremented by 1. When the counter is zero, the host cannot send data.
Steps Involved in Token Bucket Algorithm
Step 1: Creation of Bucket: An imaginative bucket is assigned a fixed capacity, known as "rate limit". It
can hold up to a certain number of tokens.
Step 2: Refill the Bucket: The bucket is dynamic; it gets periodically filled with tokens. Tokens are
added to the bucket at a fixed rate.
Step 3: Incoming Requests: Upon receiving a request, we verify the presence of tokens in the bucket.
Step 4: Consume Tokens: If there are tokens in the bucket, we pick one token from it. This means the
request is allowed to proceed. The time of token consumption is also recorded.
Step 5: Empty Bucket: If the bucket is depleted, meaning there are no tokens remaining, the request is
denied. This precautionary measure prevents server or system overload, ensuring operation stays within
predefined limits.
Advantage of Token Bucket over Leaky Bucket
 If a bucket is full in tokens, then tokens are discarded and not the packets. While in leaky bucket
algorithm, packets are discarded.
 Token bucket can send large bursts at a faster rate while leaky bucket always sends packets at
constant rate.
 Token bucket ensures predictable traffic shaping as it allows for setting token arrival rate and
maximum token count. In leaky bucket, such control may not be present.
 Premium Quality of Service(QoS) is provided by prioritizing different traffic types through distinct
token arrival rates. Such flexibility in prioritization is not provided by leaky bucket.
 Token bucket is suitable for high-speed data transfer or streaming video content as it
allows transmission of large bursts of data. As leaky bucket operates at a constant rate, it can lead to
less efficient bandwidth utilization.
 Token Bucket provides more granular control as administrators can adjust token arrival rate and
maximum token count based on network requirements. Leaky Bucket has limited granularity in
controlling traffic compared to Token Bucket.
Disadvantages of Token Bucket Algorithm
 Token Bucket generates tokens at a fixed rate even when there is no network traffic. This leads to an accumulation of unused tokens during idle periods, and hence to wastage.
 Due to token accumulation, delays can be introduced in packet delivery. If the token bucket happens to be empty, packets have to wait for new tokens, leading to increased latency and potential packet loss.
 Token Bucket happens to be less flexible than leaky bucket when it comes to network traffic shaping.
The fixed token generation rate cannot be easily altered to meet changing network requirements,
unlike the adaptable nature of leaky bucket.
 The implementation involved in token bucket can be more complex, especially due to the fact that
different token generation rates are used for different traffic types. Configuration and management
might be more difficult due to this.
 Usage of large bursts of data may lead to inefficient use of bandwidth, and may cause congestion.
Leaky bucket algorithm, on the other hand helps prevent congestion by limiting the amount of data
sent at any given time, promoting more efficient bandwidth utilization.
