Module 4
TRANSPORT LAYER
• The transport layer, layer 4 of the OSI model, manages network traffic between
hosts and end systems to ensure complete data transfer.
• It sits between the network layer and the session layer in the OSI model. The
network layer is responsible for delivering the data packets to the appropriate machine.
• The transport layer then receives those packets, puts them in order, and checks them for errors.
• It then passes them up to the session layer of the appropriate application program.
• The session layer, in turn, uses the correctly assembled packets to hold the data
for the application.
Working of Transport Layer
• The transport layer makes communication between end systems reliable and efficient.
• Besides regulating flow and supporting many applications at the same time, it
delivers data in a way that ensures accuracy and minimises errors.
• It accomplishes this by using a set of mechanisms and protocols that provide
data transport.
• The primary function of the transport layer is to provide communication services
directly to application processes running on different hosts.
• The transport layer provides logical communication between application processes
running on separate hosts.
• Application processes use this logical communication to send messages to one
another even when they are running on different hosts that are not physically connected.
• The network routers do not implement the transport layer protocols; only the end
systems do.
• For instance, TCP and UDP are two transport layer protocols that offer distinct
services to the application layer while using the services of the network layer.
• Protocols at the transport layer offer multiplexing and demultiplexing capabilities.
• Every application at the application layer can send a message via either
TCP or UDP; the application may use whichever of the two protocols suits it.
Both TCP and UDP then communicate with the Internet Protocol (IP) at the
internet layer.
• Applications can read from and write to the transport layer.
Process-to-process Delivery : UDP, TCP, SCTP
• The transport layer is responsible for process-to-process delivery: the delivery of a
packet, part of a message, from one process to another.
• Two processes communicate in a client/server relationship.
UDP (User Datagram Protocol)
• The User Datagram Protocol (UDP) is called a connectionless, unreliable transport
protocol.
• It does not add anything to the services of IP except to provide process-to-process
communication instead of host-to-host communication.
• Also, it performs very limited error checking.
• UDP is a very simple protocol using a minimum of overhead.
• If a process wants to send a small message and does not care much about reliability,
it can use UDP.
• Sending a small message by using UDP takes much less interaction between the
sender and receiver than using TCP or SCTP.
User Datagram
• UDP packets, called user datagrams, have a fixed-size header of 8 bytes consisting
of four 2-byte fields: source port number, destination port number, total length, and checksum.
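To make the 8-byte header concrete, the short Python sketch below packs its four 2-byte fields; the port numbers and payload are illustrative values only, not part of any real exchange.

```python
import struct

# Illustrative sketch: pack the four 2-byte fields of the fixed 8-byte UDP header.
# The port numbers and payload are made-up example values.
source_port = 50000          # port of the sending process
destination_port = 53        # port of the receiving process
payload = b"hello"
length = 8 + len(payload)    # header (8 bytes) + data
checksum = 0                 # 0 means "no checksum computed" for UDP over IPv4

header = struct.pack("!HHHH", source_port, destination_port, length, checksum)
user_datagram = header + payload
print(len(header))           # 8
```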
TCP (Transmission Control Protocol)
• TCP, like UDP, is a process-to-process (program-to-program) protocol. TCP,
therefore, like UDP, uses port numbers.
• Unlike UDP, TCP is a connection-oriented protocol; it creates a virtual connection
between two TCPs to send data.
• In addition, TCP uses flow and error control mechanisms at the transport level.
• In brief, TCP is called a connection-oriented, reliable transport protocol. It adds
connection-oriented and reliability features to the services of IP.
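As a hedged illustration of this connection-oriented, reliable service, the sketch below uses Python's standard socket module; the host address and port number are arbitrary example values.

```python
import socket

HOST, PORT = "127.0.0.1", 9000   # example address and port, not mandated by TCP

def run_server():
    # Passive open: wait for a connection, then receive bytes reliably and in order.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, addr = srv.accept()          # connection established with the client
        with conn:
            print("received:", conn.recv(1024))

def run_client():
    # Active open: create a virtual connection to the server, then send data over it.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(b"hello over TCP")
```

Running run_server() in one process and run_client() in another shows the two TCPs setting up a connection before any data moves, unlike UDP.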
SCTP (Stream Control Transmission Protocol)
• Stream Control Transmission Protocol (SCTP) is a new reliable, message-oriented
transport layer protocol.
• SCTP is a reliable transport protocol that supports data transfer across the
network even when an endpoint has one or more IP addresses (multihoming).
• SCTP is designed mainly for the newer Internet applications, such as telephony
signalling, that need a richer service than TCP provides.
Congestion Control and Quality of Service
• The main focus of congestion control and quality of service is data traffic.
• In congestion control we try to avoid traffic congestion.
• In quality of service, we try to create an appropriate environment for the traffic.
Congestion
• An important issue in a packet-switched network is congestion.
• Congestion in a network may occur if the load on the network (the number of packets
sent to the network) is greater than the capacity of the network (the number of packets
the network can handle).
• Congestion control refers to the mechanisms and techniques to control the
congestion and keep the load below the capacity.
• Congestion in a network or internetwork occurs because routers and switches have
queues (buffers) that hold the packets before and after processing.
Network Performance
• Congestion control involves two factors that measure the performance of a network:
delay and throughput
Congestion Control Algorithm
• Congestion Control is a mechanism that controls the entry of data packets into the
network, enabling a better use of a shared network infrastructure and avoiding
congestive collapse.
• Congestion-avoidance algorithms (CAA) are implemented at the TCP layer as the
mechanism to avoid congestive collapse in a network.
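As a simplified illustration (not a real TCP implementation), the sketch below shows the additive-increase/multiplicative-decrease idea behind TCP congestion avoidance: the congestion window grows slowly while packets get through and is halved when loss is detected; the round count and loss points are invented for the example.

```python
def aimd(rounds, loss_rounds, cwnd=1.0):
    # Toy additive-increase/multiplicative-decrease congestion window.
    history = []
    for r in range(rounds):
        if r in loss_rounds:
            cwnd = max(1.0, cwnd / 2)   # multiplicative decrease on detected loss
        else:
            cwnd += 1.0                 # additive increase per round-trip time
        history.append(cwnd)
    return history

print(aimd(10, loss_rounds={4, 8}))     # window climbs, halves at rounds 4 and 8
```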
QUALITY OF SERVICE
Quality of service is something a flow seeks to attain
• Flow Characteristics
• Traditionally, four types of characteristics are attributed to a flow: reliability, delay,
jitter, and bandwidth
Reliability
• Reliability is a characteristic that a flow needs. Lack of reliability means losing a
packet or acknowledgment, which entails retransmission.
Delay
• Source-to-destination delay is another flow characteristic. Again, applications can
tolerate delay to different degrees.
Jitter
• Jitter is the variation in delay for packets belonging to the same flow. High jitter
means the difference between delays is large; low jitter means the variation is small.
If the jitter is high, some action is needed in order to use the received data.
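A small illustrative calculation (with made-up delay values) shows what high and low jitter look like for one flow:

```python
# Per-packet source-to-destination delays for one flow, in milliseconds (example values).
delays_ms = [40, 42, 39, 80, 41]

# Jitter is the variation in delay between consecutive packets of the same flow.
variation = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
print(variation)   # [2, 3, 41, 1] -> the 41 ms jump indicates high jitter
```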
Bandwidth
• Different applications need different bandwidths.
Leaky Bucket Algorithm
The leaky bucket algorithm is used for network traffic shaping or rate limiting.
• Traffic shaping is most often implemented with either a leaky bucket or a token bucket.
• This algorithm is used to control the rate at which traffic is sent to the network and
to shape bursty traffic into a steady traffic stream.
• A disadvantage of the leaky bucket algorithm is its inefficient use of available
network resources: resources such as bandwidth may go unused even when they are available.
• To understand this, imagine a bucket with a small hole in the bottom. No matter at
what rate water enters the bucket, the outflow is at a constant rate; when the bucket is
full, any additional water spills over the sides and is lost. Similarly, each network
interface contains a leaky bucket, and the following steps are involved in the leaky
bucket algorithm (a short sketch follows the list):
• When a host wants to send a packet, the packet is thrown into the bucket.
• The bucket leaks at a constant rate, meaning the network interface transmits packets
at a constant rate.
• Bursty traffic is converted into uniform traffic by the leaky bucket.
• In practice, the bucket is a finite queue that outputs at a finite rate; if the bucket is
full, additional packets are discarded.
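The sketch below is one minimal way to express the steps above in Python; the bucket capacity and leak rate are arbitrary example values.

```python
from collections import deque

class LeakyBucket:
    def __init__(self, capacity=5, rate=1):
        self.capacity = capacity       # maximum packets the bucket (queue) can hold
        self.rate = rate               # packets transmitted per clock tick
        self.queue = deque()

    def arrive(self, packet):
        if len(self.queue) < self.capacity:
            self.queue.append(packet)  # packet is thrown into the bucket
            return True
        return False                   # bucket full: packet spills over and is lost

    def tick(self):
        # The bucket leaks at a constant rate, turning bursty arrivals into steady output.
        return [self.queue.popleft() for _ in range(min(self.rate, len(self.queue)))]
```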
Token Bucket Algorithm
• The leaky bucket algorithm enforces a rigid output pattern at the average rate,
independent of how bursty the traffic is.
• In some applications, when large bursts arrive, the output should be allowed to speed up.
This calls for a more flexible algorithm, preferably one that never loses information.
The token bucket algorithm therefore also finds use in network traffic shaping or
rate limiting.
• It is a control algorithm that indicates when traffic may be sent, based on the
presence of tokens in the bucket.
• The bucket holds tokens, each of which represents permission to send a packet of a
predetermined size; a token is removed from the bucket for each packet sent.
• When tokens are present in the bucket, a flow is allowed to transmit traffic.
• If there are no tokens, the flow cannot send its packets. Hence, a flow can transmit
bursts up to its peak burst rate as long as enough tokens have accumulated in the bucket.
• The token bucket can easily be implemented with a counter.
• The counter is initialized to zero.
• Each time a token is added, the counter is incremented by 1.
• Each time a unit of data is sent, the counter is decremented by 1.
• When the counter is zero, the host cannot send data.
Need of Token Bucket Algorithm
• The leaky bucket algorithm enforces the output pattern at the average rate, no matter
how bursty the traffic is. So, in order to deal with bursty traffic, we need a flexible
algorithm so that data is not lost. One such algorithm is the token bucket algorithm.
• Steps of this algorithm can be described as follows:
• At regular intervals, tokens are thrown into the bucket.
• The bucket has a maximum capacity.
• If there is a ready packet, a token is removed from the bucket, and the packet is sent.
• If there is no token in the bucket, the packet cannot be sent.
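Following the steps above, a counter-based sketch of the token bucket might look like this; the bucket capacity is an example value.

```python
class TokenBucket:
    def __init__(self, capacity=8):
        self.capacity = capacity
        self.tokens = 0               # counter initialized to zero

    def add_token(self):
        # Called at regular intervals; the bucket has a maximum capacity.
        if self.tokens < self.capacity:
            self.tokens += 1

    def send(self, ready_packets):
        # One token is removed per packet sent; saved-up tokens allow a burst.
        sent = min(ready_packets, self.tokens)
        self.tokens -= sent
        return sent                   # packets without tokens must wait
```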
Application Layer
• The application layer enables the user, whether human or software, to access the
network.
• It provides user interfaces and support for services such as electronic mail, file
access and transfer, access to system resources, surfing the world wide web, and
network management
Functions of Application layer
• The application layer, being the topmost layer in the OSI model, performs the
functions required by any kind of application or communication process.
Working of Application Layer
• First, the client sends a command to the server; when the server receives that command,
it allocates a port number to the client.
• The client then sends a connection initiation request to the server, and when the server
receives the request, it returns an acknowledgement (ACK) to the client, which means the
client has successfully established a connection with the server.
• The client now has access to the server and may either ask the server to send files or
other documents, or upload files or documents to the server itself.
Domain Name System (DNS)
• The Domain Name System (DNS) is a supporting program that is used by other
programs such as e-mail.
• For example, a DNS client/server program can support an e-mail program in finding
the IP address of an e-mail recipient. A user of an e-mail program may know the
e-mail address of the recipient; however, the IP protocol needs the IP address. The
DNS client program sends a request to a DNS server to map the e-mail address to the
corresponding IP address.
• To identify an entity, TCP/IP protocols use the IP address, which uniquely identifies
the connection of a host to the Internet.
• However, people prefer to use names instead of numeric addresses. Therefore, we
need a system that can map a name to an address or an address to a name.
• When the Internet was small, mapping was done by using a host file. The host file
had only two columns: name and address.
• Today, however, it is impossible to have one single host file to relate every address
with a name and vice versa.
• A solution, the one used today, is to divide this huge amount of information into
smaller parts and store each part on a different computer. In this method, the host that
needs mapping can contact the closest computer holding the needed information.
This method is used by the Domain Name System (DNS).
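As a hedged example of this name-to-address mapping, Python's standard library hands the lookup to the local resolver, which in turn queries DNS servers; the host name below is only illustrative.

```python
import socket

# The resolver behind gethostbyname() contacts DNS on the application's behalf
# and returns the IP address mapped to the given name.
ip_address = socket.gethostbyname("www.example.com")
print(ip_address)
```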
REMOTE LOGGING
• In the Internet, users may want to run application programs at a remote site and
create results that can be transferred to their local site.
• The better solution is a general-purpose client/server program that lets a user access
any application program on a remote computer; in other words, allow the user to log
on to a remote computer. After logging on, a user can use the services available on
the remote computer and transfer the results back to the local computer.
ELECTRONIC MAIL
• One of the most popular Internet services is electronic mail (e-mail).
• The designers of the Internet probably never imagined the popularity of this
application program. Its architecture consists of several components.
• The general architecture of an e-mail system including the three main components:
user agent, message transfer agent, and message access agent.
User Agent
• The first component of an electronic mail system is the user agent (UA).
• It provides service to the user to make the process of sending and receiving a
message easier.
User Agent Types
• There are two types of user agents: command-driven and GUI-based.
• Command-Driven
• A command-driven user agent normally accepts a one-character command from the
keyboard to perform its task. For example, a user can type the character r, at the
command prompt, to reply to the sender of the message, or type the character R to
reply to the sender and all recipients.
• Some examples of command-driven user agents are mail, pine, and elm.
• GUI-Based: Modern user agents are GUI-based.
• They contain graphical-user interface (GUI) components that allow the user to
interact with the software by using both the keyboard and the mouse.
• They have graphical components such as icons, menu bars, and windows that make
the services easy to access.
• Some examples of GUI-based user agents are Eudora, Microsoft's Outlook, and
Netscape.
Message Transfer Agent: SMTP
• The actual mail transfer is done through message transfer agents.
• To send mail, a system must have the client MTA, and to receive mail, a system must
have a server MTA. The formal protocol that defines the MTA client and server in
the Internet is called the Simple Mail Transfer Protocol (SMTP). As we said before,
two pairs of MTA client/server programs are used in the most common situation
• SMTP is used two times, between the sender and the sender's mail server and
between the two mail servers
• SMTP simply defines how commands and responses must be sent back and forth.
• Each network is free to choose a software package for implementation
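A hedged sketch of the client MTA role, using Python's standard smtplib; the mail server name and the addresses are placeholders, not real accounts.

```python
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"       # placeholder sender
msg["To"] = "bob@example.com"           # placeholder recipient
msg["Subject"] = "SMTP example"
msg.set_content("Hello over SMTP")

# The client exchanges SMTP commands and responses with the server MTA on port 25.
with smtplib.SMTP("mail.example.com", 25) as client:
    client.send_message(msg)
```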
Message Access Agent: POP and IMAP
• Post Office Protocol, version 3 (POP3) is simple and limited in functionality. The
client POP3 software is installed on the recipient computer; the server POP3
software is installed on the mail server.
• Mail access starts with the client when the user needs to download e-mail from the
mailbox on the mail server. The client opens a connection to the server on TCP port
110. It then sends its user name and password to access the mailbox. The user can
then list and retrieve the mail messages, one by one.
• POP3 has two modes: the delete mode and the keep mode. In the delete mode, the
mail is deleted from the mailbox after each retrieval.
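A hedged sketch of POP3 mail access with Python's standard poplib; the server name, user name, and password are placeholders.

```python
import poplib

box = poplib.POP3("mail.example.com", 110)   # client opens a connection on TCP port 110
box.user("alice")                             # user name for the mailbox
box.pass_("secret")                           # password for the mailbox
count, size = box.stat()                      # number of messages and mailbox size
for i in range(1, count + 1):
    response, lines, octets = box.retr(i)     # list and retrieve messages one by one
box.quit()                                    # in delete mode, box.dele(i) would precede quit
```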
File Transfer Protocol (FTP)
• FTP: stands for File Transfer Protocol.
• This protocol helps to transfer different files from one device to another.
• FTP promotes sharing of files via remote computer devices with reliable, efficient
data transfer.
• FTP uses port number 21 for the control connection and port number 20 for the data connection.
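A hedged sketch of a file download with Python's standard ftplib; the host, credentials, and file name are placeholders.

```python
from ftplib import FTP

ftp = FTP("ftp.example.com")                      # control connection on port 21
ftp.login("user", "password")
with open("report.txt", "wb") as f:
    ftp.retrbinary("RETR report.txt", f.write)    # file bytes travel over the data connection
ftp.quit()
```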
Hyper Text Transfer Protocol (HTTP)
• HTTP: stands for Hyper Text Transfer Protocol.
• It is the foundation of the World Wide Web (WWW).
• HTTP works on the client server model. This protocol is used for transmitting
hypermedia documents like HTML.
• This protocol was designed particularly for the communications between the web
browsers and web servers, but this protocol can also be used for several other
purposes.
• HTTP is a stateless protocol (the client sends a request and the server sends back a
response, with each exchange handled independently), which means the server does not
keep track of the client's previous requests.
• HTTP uses port number 80.
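A hedged example of a single stateless request/response exchange, using Python's standard http.client; the host name is illustrative.

```python
import http.client

conn = http.client.HTTPConnection("www.example.com", 80)  # server's well-known port
conn.request("GET", "/")                # client request for a hypermedia document
response = conn.getresponse()           # server response; no state is kept between requests
print(response.status, response.reason)
conn.close()
```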
Simple Network Management Protocol (SNMP)
• SNMP was developed for use as a network management tool for networks and
internetworks operating TCP/IP.
• It has since been expanded for use in all types of networking environments.
• The term simple network management protocol (SNMP) is actually used to refer to a
collection of specifications for network management that include the protocol itself,
the definition of a database, and associated concepts.
• The functions performed by a network management system can be divided into five
broad categories: configuration management, fault management, performance
management, security management, and accounting management.
• SNMP uses the concept of manager and agent. That is, a manager, usually a host,
controls and monitors a set of agents, usually routers
• SNMP is an application-level protocol in which a few manager stations control a set
of agents.
• The protocol is designed at the application level so that it can monitor devices made
by different manufacturers and installed on different physical networks.
• In other words, SNMP frees management tasks from both the physical characteristics
of the managed devices and the underlying networking technology.
• It can be used in a heterogeneous internet made of different LANs and WANs
connected by routers made by different manufacturers.