Unit IV

The transport layer, the fourth layer of the OSI model, facilitates communication services between application processes on different hosts, utilizing protocols like TCP and UDP. It ensures process-to-process delivery through multiplexing and demultiplexing, while also providing reliable or unreliable services based on application needs. Key features of transport layer protocols include connection-oriented and connectionless services, with TCP offering reliability and flow control, whereas UDP prioritizes speed and simplicity.


Transport Layer

o The transport layer is the 4th layer from the top.


o The main role of the transport layer is to provide the communication services directly to the
application processes running on different hosts.
o The transport layer provides a logical communication between application processes running
on different hosts. Although the application processes on different hosts are not physically
connected, application processes use the logical communication provided by the transport
layer to send the messages to each other.
o The transport layer protocols are implemented in the end systems but not in the network
routers.
o A computer network provides more than one protocol to the network applications. For
example, TCP and UDP are two transport layer protocols that provide different sets of
services to the application layer.
o All transport layer protocols provide a multiplexing/demultiplexing service. A protocol may also
provide other services such as reliable data transfer, bandwidth guarantees, and delay guarantees.

23.1 PROCESS-TO-PROCESS DELIVERY
The data link layer is responsible for delivery of frames between two
neighboring nodes over a link. This is called node-to-node delivery. The
network layer is responsible for delivery of datagrams between two hosts.
This is called host-to-host delivery. Communication on the Internet is not
defined as the exchange of data between two nodes or between two
hosts. Real communication takes place between two processes
(application programs). We need process-to-process delivery. However, at
any moment, several processes may be running on the source host and
several on the destination host. To complete the delivery, we need a
mechanism to deliver data from one of these processes running on the
source host to the corresponding process running on the destination host.
The transport layer is responsible for process-to-process delivery: the
delivery of a packet, part of a message, from one process to another. Two
processes communicate in a client/server relationship, as we will see later.
Figure 23.1 shows these three types of deliveries and their domains.

The transport layer is responsible for process-to-process delivery.


Addressing
Whenever we need to deliver something to one specific destination
among many, we need an address. At the data link layer, we need a MAC
address to choose one node among several nodes if the connection is not
point-to-point. A frame in the data link layer needs a destination MAC
address for delivery and a source address for the next node's reply.
At the network layer, we need an IP address to choose one host
among millions. A datagram in the network layer needs a destination IP
address for delivery and a source IP address for the destination's reply.
At the transport layer, we need a transport layer address, called a port
number, to choose among multiple processes running on the destination
host. The destination port number is needed for delivery; the source port
number is needed for the reply.
In the Internet model, the port numbers are 16-bit integers between 0
and 65,535. The client program defines itself with a port number, chosen
randomly by the transport layer software running on the client host. This
is the ephemeral port number.

Socket Addresses
Process-to-process delivery needs two identifiers, IP address and the port
number, at each end to make a connection. The combination of an IP
address and a port number is called a socket address. The client socket
address defines the client process uniquely just as the server socket
address defines the server process uniquely (see Figure 23.5).
A transport layer protocol needs a pair of socket addresses: the client
socket address and the server socket address. These four pieces of
information are part of the IP header and the transport layer protocol
header. The IP header contains the IP addresses; the UDP or TCP header
contains the port numbers.

Figure 23.5 Socket address (an IP address combined with a port number)
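A minimal sketch of these ideas in Python, assuming a local UDP exchange on the loopback address (port 5000 is an arbitrary choice): the server binds a well-known socket address, while the client's ephemeral port is chosen by the transport layer software and revealed by getsockname().

import socket

# Server socket address: a fixed (IP address, port number) pair.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 5000))

# The client never binds explicitly; sending implicitly assigns it an
# ephemeral port, completing its own socket address.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"hello", ("127.0.0.1", 5000))
print("client socket address:", client.getsockname())

# The server learns the client's socket address from the datagram and
# uses it (IP address + source port) to address the reply.
data, client_addr = server.recvfrom(1024)
server.sendto(data.upper(), client_addr)
print(client.recvfrom(1024))   # (b'HELLO', ('127.0.0.1', 5000))
server.close()
client.close()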
Multiplexing and Demultiplexing
The addressing mechanism allows multiplexing and demultiplexing by the
transport layer, as shown in Figure 23.6.

Figure 23.6 Multiplexing and demultiplexing

Multiplexing
At the sender site, there may be several processes that need to send packets.
However, there is only one transport layer protocol at any time. This is a
many-to-one relationship and requires multiplexing. The protocol accepts
messages from different processes, differentiated by their assigned port
numbers. After adding the header, the transport layer passes the packet to the
network layer.

Demultiplexing
At the receiver site, the relationship is one-to-many and requires
demultiplexing. The transport layer receives datagrams from the network
layer. After error checking and dropping of the header, the transport layer
delivers each message to the appropriate process based on the port number.
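The following Python sketch illustrates demultiplexing on a single host, with two locally bound UDP ports (5007 and 5008, arbitrary choices) standing in for two receiving processes; each arriving datagram is handed to the handler registered for its destination port.

import selectors
import socket

sel = selectors.DefaultSelector()

def make_service(port, handler):
    # One socket per "process", distinguished only by its port number.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("127.0.0.1", port))
    sel.register(s, selectors.EVENT_READ, handler)
    return s

make_service(5007, lambda d, a: print("service A got", d, "from", a))
make_service(5008, lambda d, a: print("service B got", d, "from", a))

probe = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
probe.sendto(b"now?", ("127.0.0.1", 5007))
probe.sendto(b"ping", ("127.0.0.1", 5008))

handled = 0
while handled < 2:
    for key, _ in sel.select(timeout=1):
        data, addr = key.fileobj.recvfrom(1024)
        key.data(data, addr)   # deliver to the process behind this port
        handled += 1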

Connectionless Versus Connection-Oriented Service


A transport layer protocol can either be connectionless or connection-oriented.

Connectionless Service
In a connectionless service, the packets are sent from one party to another
with no need for connection establishment or connection release. The packets
are not numbered; they may be delayed or lost or may arrive out of
sequence. There is no acknowledgment either. We will see shortly that one of
the transport layer protocols in the Internet model, UDP, is connectionless.

Connection-Oriented Service
In a connection-oriented service, a connection is first established between the
sender and the receiver. Data are transferred. At the end, the connection is
released. We will see shortly that TCP and SCTP are connection-oriented
protocols.

Reliable Versus Unreliable


The transport layer service can be reliable or unreliable. If the application
layer program needs reliability, we use a reliable transport layer protocol by
implementing flow and error control at the transport layer. This means a
slower and more complex service. On the other hand, if the application
program does not need reliability because it uses its own flow and error
control mechanism, needs fast service, or the nature of the service does
not demand flow and error control (real-time applications), then an
unreliable protocol can be used.
In the Internet, there are three common transport layer protocols, as we
have already mentioned. UDP is connectionless and unreliable; TCP and
SCTP are connection-oriented and reliable. These three can respond to
the demands of the application layer programs.

Transport Layer Protocols

o The transport layer is represented by two protocols: TCP and UDP.

UDP

o UDP stands for User Datagram Protocol.


o UDP is a simple protocol that provides nonsequenced transport functionality.
o UDP is a connectionless protocol.
o This type of protocol is used when reliability and security are less important than speed and
size.
o UDP is an end-to-end transport level protocol that adds transport-level addresses, checksum
error control, and length information to the data from the upper layer.
o The packet produced by the UDP protocol is known as a user datagram.

User Datagram Format

The user datagram has an 8-byte header, as shown below:


Where,

o Source port address: It defines the address (port) of the application process that has delivered the
message. It is a 16-bit field.
o Destination port address: It defines the address (port) of the application process that will receive
the message. It is a 16-bit field.
o Total length: It defines the total length of the user datagram in bytes. It is a 16-bit field.
o Checksum: The checksum is a 16-bit field which is used in error detection.
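As an illustration, the following Python sketch packs and unpacks the four 16-bit fields of the header with the struct module. The checksum here is the Internet one's-complement sum computed over the header and payload only; real UDP also covers a pseudo-header of IP addresses, which is omitted for brevity.

import struct

def ones_complement_sum(data: bytes) -> int:
    # 16-bit one's-complement sum used by the Internet checksum.
    if len(data) % 2:
        data += b"\x00"   # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carries back in
    return ~total & 0xFFFF

src_port, dst_port, payload = 49152, 53, b"hello"
length = 8 + len(payload)   # total length: 8-byte header plus data
header = struct.pack("!HHHH", src_port, dst_port, length, 0)
checksum = ones_complement_sum(header + payload)
header = struct.pack("!HHHH", src_port, dst_port, length, checksum)

# Unpacking recovers the four fields described above.
print(struct.unpack("!HHHH", header))   # (49152, 53, 13, <checksum>)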

TCP

o TCP stands for Transmission Control Protocol.


o It provides full transport layer services to applications.
o It is a connection-oriented protocol, meaning a connection is established between both ends
of the transmission. To create the connection, TCP generates a virtual circuit between
sender and receiver that lasts for the duration of the transmission.

Features Of TCP protocol

o Stream data transfer: TCP transfers data as a contiguous stream of
bytes. TCP itself groups the bytes into TCP segments and then passes them to the IP layer for
transmission to the destination.
o Reliability: TCP assigns a sequence number to each byte transmitted and expects a positive
acknowledgement from the receiving TCP. If ACK is not received within a timeout interval,
then the data is retransmitted to the destination.
The receiving TCP uses the sequence number to reassemble the segments if they arrive out of
order or to eliminate the duplicate segments.
o Flow Control: The receiving TCP sends an acknowledgement back to the sender indicating
the number of bytes it can receive without overflowing its internal buffer. The number of
bytes is sent in the ACK in the form of the highest sequence number that it can receive without
any problem. This mechanism is also referred to as a window mechanism.
o Multiplexing: Multiplexing is the process of accepting data from different applications and
forwarding it to applications on different computers. At the receiving end, the data
is forwarded to the correct application; this process is known as demultiplexing. TCP
delivers each packet to the correct application by using logical channels known as ports.
o Logical Connections: The combination of sockets, sequence numbers, and window sizes, is
called a logical connection. Each connection is identified by the pair of sockets used by
sending and receiving processes.
o Full Duplex: TCP provides full-duplex service, i.e., data flows in both directions at
the same time. To achieve full-duplex service, each TCP endpoint must have sending and receiving
buffers so that segments can flow in both directions. TCP is a connection-oriented
protocol. Suppose process A wants to send data to and receive data from process B. The
following steps occur (a minimal socket sketch follows the list):
o Establish a connection between the two TCPs.
o Data is exchanged in both directions.
o The connection is terminated.
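A minimal sketch of the three steps above using Python's socket API (port 6000 is an arbitrary choice): connect() establishes the connection, send/recv exchange data in both directions, and closing each socket releases the connection.

import socket
import threading

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("127.0.0.1", 6000))
        srv.listen(1)
        conn, addr = srv.accept()        # step 1: connection established
        with conn:
            data = conn.recv(1024)       # step 2: data in one direction
            conn.sendall(data[::-1])     # step 2: data in the other
        # step 3: leaving the with-block closes the connection

t = threading.Thread(target=server)
t.start()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", 6000))     # three-way handshake
    cli.sendall(b"stream of bytes")
    print(cli.recv(1024))                # b'setyb fo maerts'
t.join()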

TCP Segment Format

The TCP segment header contains the following fields:

o Source port address: It is used to define the address of the application program in a source
computer. It is a 16-bit field.
o Destination port address: It is used to define the address of the application program in a
destination computer. It is a 16-bit field.
o Sequence number: A stream of data is divided into two or more TCP segments. The 32-bit
sequence number field represents the position of the segment's data in the original data stream.
o Acknowledgement number: This 32-bit field acknowledges data received from the other
communicating device. If the ACK flag is set to 1, it specifies the sequence
number that the receiver is expecting to receive.
o Header Length (HLEN): It specifies the size of the TCP header in 32-bit words. The
minimum size of the header is 5 words, and the maximum size of the header is 15 words.
Therefore, the maximum size of the TCP header is 60 bytes, and the minimum size of the
TCP header is 20 bytes.
o Reserved: It is a six-bit field which is reserved for future use.
o Control bits: Each bit of a control field functions individually and independently. A control
bit defines the use of a segment or serves as a validity check for other fields.

There are six flags in total in the control field:


o URG: The URG field indicates that the data in a segment is urgent.
o ACK: When ACK field is set, then it validates the acknowledgement number.
o PSH: The PSH (push) field asks that the data be delivered to the receiving application as
soon as possible rather than being buffered.
o RST: The reset bit is used to reset the TCP connection when confusion occurs in
the sequence numbers.
o SYN: The SYN field is used to synchronize the sequence numbers in three types of segments:
connection request, connection confirmation (with the ACK bit set), and confirmation
acknowledgement.
o FIN: The FIN field is used to inform the receiving TCP module that the sender has finished
sending data. It is used in connection termination in three types of segments: termination
request, termination confirmation, and acknowledgement of termination confirmation.
o Window Size: This 16-bit field defines the size of the receive window in bytes.
o Checksum: The checksum is a 16-bit field used in error detection.
o Urgent pointer: If the URG flag is set to 1, then this 16-bit field is an offset from the
sequence number pointing to the last urgent data byte.
o Options and padding: These optional fields convey additional
information to the receiver (a header-parsing sketch follows this list).
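A sketch of parsing the fixed 20-byte part of a TCP header in Python, decoding HLEN and the six control bits described above; the SYN segment built at the end is hand-made for illustration.

import struct

def parse_tcp_header(segment: bytes) -> dict:
    (src, dst, seq, ack, off_flags,
     window, checksum, urgent) = struct.unpack("!HHIIHHHH", segment[:20])
    hlen_words = off_flags >> 12          # header length in 32-bit words
    flags = off_flags & 0x3F              # URG ACK PSH RST SYN FIN
    return {
        "src_port": src, "dst_port": dst, "seq": seq, "ack": ack,
        "header_bytes": hlen_words * 4,   # 20 to 60 bytes
        "URG": bool(flags & 0x20), "ACK": bool(flags & 0x10),
        "PSH": bool(flags & 0x08), "RST": bool(flags & 0x04),
        "SYN": bool(flags & 0x02), "FIN": bool(flags & 0x01),
        "window": window, "checksum": checksum, "urgent_ptr": urgent,
    }

# A hand-built connection-request segment: HLEN = 5 words, SYN bit set.
syn = struct.pack("!HHIIHHHH", 49152, 80, 1000, 0,
                  (5 << 12) | 0x02, 65535, 0, 0)
print(parse_tcp_header(syn))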

Stream Control Transmission Protocol (SCTP)


SCTP is a connection-oriented protocol in computer networks which provides a full-duplex association,
i.e., it can transmit multiple streams of data at the same time between two endpoints that have
established a connection in the network. It is sometimes referred to as next-generation TCP or TCPng.
SCTP makes it easier to support telephonic conversations on the Internet. A telephonic conversation
requires transmitting voice along with other data at the same time on both ends; the SCTP protocol
makes it easier to establish a reliable connection.
SCTP is also intended to make it easier to establish connections over wireless networks and to
manage the transmission of multimedia data. SCTP is a standard protocol (RFC 2960)
developed by the Internet Engineering Task Force (IETF).

Characteristics of SCTP:
1. Unicast with Multiple properties –
It is a point-to-point protocol which can use different paths to reach the end host.
2. Reliable Transmission –
It uses SACK and checksums to detect damaged, corrupted, discarded, duplicate and reordered
data. It is similar to TCP but SCTP is more efficient when it comes to reordering of data.
3. Message oriented –
Each message can be framed, so we can keep track of the order and structure of the data stream.
In TCP, a separate layer of abstraction is needed for this.
4. Multi-homing –
It can establish multiple connection paths between two end points and does not need to rely on
IP layer for resilience.

Advantages of SCTP:
1. It is a full-duplex connection, i.e., users can send and receive data simultaneously.
2. It allows half-closed connections.
3. Message boundaries are maintained, so the application doesn't have to split messages.
4. It has properties of both the TCP and UDP protocols.
5. It doesn't rely on the IP layer for resilience of paths.

Disadvantages of SCTP:
1. One of the key challenges is that it requires changes in the transport stack on the node.
2. Applications need to be modified to use SCTP instead of TCP/UDP.
3. Applications need to be modified to handle multiple simultaneous streams.
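Python exposes SCTP only where the operating system supports it (for example, Linux with the SCTP kernel module loaded); the hedged sketch below creates a one-to-one style SCTP socket and reports failure elsewhere, which itself illustrates the transport-stack requirement listed above.

import socket

try:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM,
                      socket.IPPROTO_SCTP)   # raises if SCTP is unavailable
    print("SCTP socket created:", s)
    s.close()
except (AttributeError, OSError) as exc:
    print("SCTP unavailable on this platform:", exc)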

CONGESTION

An important issue in a packet-switched network is congestion. Congestion in a network


may occur if the load on the network (the number of packets sent to the network) is greater
than the capacity of the network (the number of packets a network can handle). Congestion
control refers to the mechanisms and techniques used to control congestion and keep the
load below the capacity.
We may ask why there is congestion in a network. Congestion happens in any system
that involves waiting. For example, congestion happens on a freeway because any anomaly
in the flow, such as an accident during rush hour, creates blockage.

Congestion in a network or internetwork occurs because routers and switches have queues:
buffers that hold the packets before and after processing.

CONGESTION CONTROL
Congestion control refers to techniques and mechanisms that can either prevent congestion, before it
happens, or remove congestion, after it has happened. In general, we can divide congestion control
mechanisms into two broad categories: open-loop congestion control (prevention) and closed-loop
congestion control (removal), as shown in Figure 24.5.

Figure 24.5 Congestion control categories: open-loop (retransmission, window, acknowledgment,
discarding, and admission policies) and closed-loop (backpressure, choke packet, implicit
signaling, and explicit signaling).
Open-Loop Congestion Control
In open-loop congestion control, policies are applied to prevent congestion before it happens. In
these mechanisms, congestion control is handled by either the source or the destination. We give a
brief list of policies that can prevent congestion.

Retransmission Policy
Retransmission is sometimes unavoidable. If the sender feels that a sent packet is lost or
corrupted, the packet needs to be retransmitted. Retransmission in general may increase
congestion in the network. However, a good retransmission policy can prevent congestion.
Window Policy
The type of window at the sender may also affect congestion. The Selective Repeat window is
better than the Go-Back-N window for congestion control. In the Go-Back-N window, when the
timer for a packet times out, several packets may be resent, although some may have arrived safe
and sound at the receiver.
Acknowledgment Policy
The acknowledgment policy imposed by the receiver may also affect congestion. If the receiver
does not acknowledge every packet it receives, it may slow down the sender and help prevent
congestion.

Discarding Policy
A good discarding policy by the routers may prevent congestion and at the same time may not
harm the integrity of the transmission. For example, in audio transmission, if the policy is to
discard less sensitive packets when congestion is likely to happen, the quality of sound is still
preserved and congestion is prevented or alleviated.

Admission Policy
An admission policy, which is a quality-of-service mechanism, can also prevent congestion in
virtual-circuit networks. Switches in a flow first check the resource requirement of a flow before
admitting it to the network.
Closed-Loop Congestion Control
Closed-loop congestion control mechanisms try to alleviate congestion after it happens. Several
mechanisms have been used by different protocols. We describe a few of them here.
Backpressure
The technique of backpressure refers to a congestion control mechanism in which a congested node
stops receiving data from the immediate upstream node or nodes. This may cause the upstream node
or nodes to become congested, and they, in turn, reject data from their upstream node or nodes, and
so on. Backpressure is a node-to-node congestion control that starts with a node and propagates, in
the opposite direction of data flow, to the source. Figure 24.6 shows the idea of backpressure.

Figure 24.6 Backpressure method for alleviating congestion: backpressure travels from the
congested node back toward the source, opposite to the direction of data flow.

Node III in the figure has more input data than it can handle. It drops some packets in its input buffer
and informs node II to slow down. Node II, in turn, may be congested because it is slowing down the
output flow of data. If node II is congested, it informs node I to slow down, which in turn may create
congestion. If so, node I informs the source of data to slow down. This, in time, alleviates the
congestion. Note that the pressure on node III is moved backward to the source to remove the
congestion.
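A toy Python simulation of this idea, under the assumption that each node is a bounded queue: when a downstream queue is full, the upstream node must hold its packet, and the pressure propagates from node III back through node II and node I to the source.

from queue import Queue, Full

node1, node2, node3 = Queue(maxsize=2), Queue(maxsize=2), Queue(maxsize=2)

def try_forward(name, src, dst):
    if not src.empty():
        pkt = src.queue[0]            # peek at the head-of-line packet
        try:
            dst.put_nowait(pkt)       # forward downstream if space exists
            src.get_nowait()
        except Full:                  # downstream congested: hold the packet
            print(f"{name}: downstream full, holding {pkt}")

for i in range(8):                    # the source offers 8 packets
    try:
        node1.put_nowait(f"pkt{i}")
    except Full:                      # pressure has reached the source
        print(f"source: node I full, slowing down at pkt{i}")
    try_forward("node I", node1, node2)
    try_forward("node II", node2, node3)
    # node III never drains here, so its queue fills and the pressure
    # moves backward: III -> II -> I -> source, as in Figure 24.6.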

Choke Packet
A choke packet is a packet sent by a node to the source to inform it of congestion. Note the difference
between the backpressure and choke packet methods. In backpressure, the warning is from one node
to its upstream node, although the warning may eventually reach the source station. In the choke
packet method, the warning is from the router, which has encountered congestion, to the source
station directly.
Figure 24.7 Choke packet: the congested node sends a choke packet directly back to the source,
while data flows from source to destination.
Implicit Signaling
In implicit signaling, there is no communication between the congested node or nodes and the
source. The source guesses that there is congestion somewhere in the network from other
symptoms. For example, when a source sends several packets and there is no acknowledgment for
a while, one assumption is that the network is congested. The delay in receiving an
acknowledgment is interpreted as congestion in the network; the source should slow down. We
will see this type of signaling when we discuss TCP congestion control later in the chapter.
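A small sketch of the sender side of implicit signaling: a missing acknowledgment (simulated with a random outcome) is read as a congestion symptom, and the sender slows down by doubling its retransmission timeout, broadly in the spirit of TCP's exponential backoff.

import random

rto = 1.0                                  # initial retransmission timeout, seconds
for attempt in range(1, 6):
    ack_received = random.random() < 0.3   # simulated, mostly lossy network
    if ack_received:
        print(f"attempt {attempt}: ACK received, rto stays {rto:.1f}s")
        break
    rto *= 2                               # no ACK: assume congestion, back off
    print(f"attempt {attempt}: timeout, congestion assumed, rto now {rto:.1f}s")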

Explicit Signaling
The node that experiences congestion can explicitly send a signal to the source or destination.
The explicit signaling method, however, is different from the choke packet method. In the choke
packet method, a separate packet is used for this purpose; in the explicit signaling method, the
signal is included in the packets that carry data. Explicit signaling, as we will see in Frame Relay
congestion control, can occur in either the forward or the backward direction.
Backward Signaling A bit can be set in a packet moving in the direction opposite to the
congestion. This bit can warn the source that there is congestion and that it needs to slow down to
avoid the discarding of packets.
Forward Signaling A bit can be set in a packet moving in the direction of the congestion. This
bit can warn the destination that there is congestion.

Quality of Service
QoS is an overall performance measure of the computer network.
Quality-of-Service (QoS) refers to traffic control mechanisms that seek to either differentiate
performance based on application or network-operator requirements or provide predictable or
guaranteed performance to applications, sessions, or traffic aggregates. QoS is basically
expressed in terms of packet delay and losses of various kinds.
Need for QoS –
Video and audio conferencing require bounded delay and loss rate.
Video and audio streaming requires a bounded packet loss rate; it may not be so sensitive to delay.
Time-critical applications (real-time control) consider bounded delay to be an
important factor.
Valuable applications should be provided better services than less valuable applications.
Important flow characteristics of the QoS are given below:

1. Reliability
If a packet gets lost or an acknowledgement is not received (at the sender), re-transmission of the
data will be needed. This decreases the reliability.
The importance of the reliability can differ according to the application.
For example:
E-mail and file transfer need reliable transmission more than audio
conferencing does.

2. Delay
The delay of a message from source to destination is a very important characteristic. However, delay
can be tolerated differently by different applications.
For example:
Time delay cannot be tolerated in audio conferencing (it needs a minimum time delay), while
time delay in e-mail or file transfer has less importance.

3. Jitter
Jitter is the variation in packet delay. If the difference between delays is large, it is
called high jitter. On the contrary, if the difference between delays is small, it is known as low
jitter.
Example:
Case 1: If 3 packets are sent at times 0, 1, 2 and received at 10, 11, 12, the delay is the same (10
units) for every packet, which is acceptable for a telephonic conversation.
Case 2: If 3 packets are sent at times 0, 1, 2 and received at 31, 34, 39, the delays are 31, 33, and 37
units. In this case, the varying delay is not acceptable for a telephonic conversation.
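The two cases can be checked directly; this snippet computes the per-packet delays and the variation between consecutive delays.

send_times = [0, 1, 2]

for arrivals in ([10, 11, 12], [31, 34, 39]):
    delays = [a - s for a, s in zip(arrivals, send_times)]
    jitter = [d2 - d1 for d1, d2 in zip(delays, delays[1:])]
    print("delays:", delays, "-> variation between packets:", jitter)
# Case 1: delays [10, 10, 10], variation [0, 0]  (low jitter, acceptable)
# Case 2: delays [31, 33, 37], variation [2, 4]  (high jitter, unacceptable)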

4. Bandwidth
Different applications need different amounts of bandwidth. For example, video conferencing needs
more bandwidth than sending an e-mail.

Integrated Services and Differentiated Services

These two models are designed to provide Quality of Service (QoS) in the network.

1. Integrated Services (IntServ)

Integrated Services is a flow-based QoS model designed for IP.


In Integrated Services, the user needs to create a flow in the network from source to destination and
needs to inform all routers (every router in the system implements IntServ) of the resource
requirement.

The following steps describe how Integrated Services works.

i) Resource Reservation Protocol (RSVP)


IP is a connectionless, datagram, packet-switching protocol. To implement a flow-based model, a
signaling protocol runs over IP to provide the signaling mechanism for making
reservations (every application needs assurance of its reservation); this protocol is called
RSVP.

ii) Flow Specification


While making a reservation, the source needs to define the flow specification. The flow specification
has two parts:
a) Resource specification
It defines the resources that the flow needs to reserve. For example: Buffer, bandwidth, etc.
b) Traffic specification
It defines the traffic categorization of the flow.

iii) Admit or deny


After receiving the flow specification from an application, the router decides to admit or deny the
service. The decision is based on the previous commitments of the router and the current
availability of the resource.
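A hedged sketch of such an admit-or-deny decision in Python; FlowSpec and Router are illustrative names, and the resource specification is reduced to bandwidth and buffer for brevity.

from dataclasses import dataclass

@dataclass
class FlowSpec:
    bandwidth_kbps: int   # Rspec: bandwidth the flow asks to reserve
    buffer_kb: int        # Rspec: buffer the flow asks to reserve
    peak_rate_kbps: int   # Tspec: traffic characterization of the flow

class Router:
    def __init__(self, bandwidth_kbps, buffer_kb):
        self.free_bw, self.free_buf = bandwidth_kbps, buffer_kb

    def admit(self, f: FlowSpec) -> bool:
        # Admit only if resources remain after previous commitments.
        ok = f.bandwidth_kbps <= self.free_bw and f.buffer_kb <= self.free_buf
        if ok:
            self.free_bw -= f.bandwidth_kbps   # commit resources to the flow
            self.free_buf -= f.buffer_kb
        return ok

r = Router(bandwidth_kbps=1000, buffer_kb=512)
print(r.admit(FlowSpec(600, 256, 800)))   # True: admitted
print(r.admit(FlowSpec(600, 128, 700)))   # False: bandwidth already committed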

Classification of services

The two classes of services to define Integrated Services are:

a) Guaranteed Service Class


This service guarantees that the packets arrive within a specific delivery time and are not discarded,
provided the traffic flow stays within the traffic specification boundary.
This type of service is designed for real-time traffic, which needs a guaranteed bound on end-to-end
delay.
For example: Audio conferencing.

b) Controlled Load Service Class


This type of service is designed for applications that can accept some delay but are sensitive
to an overloaded network and to the possibility of losing packets.
For example: E-mail or file transfer.
Problems with Integrated Services

The two problems with the Integrated services are:

i) Scalability
In Integrated Services, it is necessary for each router to keep information about each flow. But this is
not always possible as the network grows.

ii) Service-Type Limitation


The Integrated Services model provides only two types of services: guaranteed and controlled-load.
