DCN Unit 3
Network Performance
Transport Layer
o The main role of the transport layer is to provide the communication services
directly to the application processes running on different hosts.
o The transport layer protocols are implemented in the end systems but not in the
network routers.
o A computer network provides more than one protocol to the network applications.
For example, TCP and UDP are two transport layer protocols, each providing a
different set of services to the application layer.
o Each application in the application layer can send a message using either TCP
or UDP. Both TCP and UDP then communicate with the Internet Protocol in the
internet layer. The applications can read from and write to the transport layer.
Therefore, we can say that communication is a two-way process.
Transport Layer Primitives :
A service is specified by a set of primitives. A primitive means an operation. To access the
service, a user process calls these primitives. The primitives differ for connection-oriented
service and connectionless service.
There are five types of service primitives:
1. LISTEN : When a server is ready to accept an incoming connection, it executes the LISTEN
primitive. It blocks, waiting for an incoming connection.
2. CONNECT : The client executes CONNECT to establish a connection with the server, then
awaits the response.
3. RECEIVE : The RECEIVE call blocks the server until a message arrives.
4. SEND : The client executes the SEND primitive to transmit its request, followed by the
execution of RECEIVE to get the reply.
5. DISCONNECT : This primitive is used for terminating the connection. After this primitive, one
can't send any message. When the client sends a DISCONNECT packet, the server sends a
DISCONNECT packet back to acknowledge the client. When the server's packet is received by
the client, the connection is released.
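On Berkeley sockets, the de facto transport layer API, these primitives map onto familiar calls. Below is a minimal sketch in Python; the loopback address and port 50007 are assumptions chosen only for the demo.

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 50007  # assumed loopback address/port for the demo
ready = threading.Event()

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)                  # LISTEN: wait for an incoming connection
        ready.set()
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024)  # RECEIVE: block until the request arrives
            conn.sendall(b"reply to " + request)  # SEND: transmit the reply
        # leaving the 'with' block releases the connection (DISCONNECT)

t = threading.Thread(target=server)
t.start()
ready.wait()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))          # CONNECT: establish the connection
    cli.sendall(b"request")            # SEND: transmit the request
    reply = cli.recv(1024)             # RECEIVE: block for the reply
    # closing the socket triggers the DISCONNECT exchange
t.join()
print(reply)
```

Note how each primitive blocks until its event occurs, exactly as described above.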
Elements of Transport Protocols :
1. Addressing
2. Connection Establishment
3. Connection Release
4. Flow Control and Buffering
5. Multiplexing
6. Crash Recovery
3. Connection Release
Two styles of connection release are Asymmetric Release and Symmetric Release. Asymmetric
release is the way the telephone system works: when one party hangs up, the connection is
broken. Symmetric release treats the connection as two separate unidirectional connections and
requires each one to be released separately.
6. Crash Recovery
The sending host can employ four strategies for crash recovery:
1) Always retransmit the last TPDU.
2) Never retransmit.
3) Do not retransmit TPDUs that were sent but for which no acknowledgement was
received.
4) Retransmit TPDUs that were still waiting to be sent.
The main functionality of TCP is to take the data from the
application layer. It then divides the data into several packets,
numbers these packets, and finally transmits them to the
destination. The TCP on the other side reassembles the packets
and delivers them to the application layer. As we know, TCP is a
connection-oriented protocol, so the connection remains
established until the communication between the sender and the
receiver is completed.
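The divide-number-reassemble idea in the paragraph above can be sketched in a few lines. This is a toy illustration, not real TCP; the 4-byte segment size is an assumption made only for the demo.

```python
import random

MSS = 4  # assumed maximum segment size (bytes) for this toy demo

def segment(data: bytes, mss: int = MSS):
    """Split a byte stream into (sequence_number, payload) segments."""
    return [(seq, data[seq:seq + mss]) for seq in range(0, len(data), mss)]

def reassemble(segments):
    """Sort segments by sequence number and rejoin the payloads."""
    return b"".join(payload for _, payload in sorted(segments))

stream = b"hello, transport layer"
segs = segment(stream)
random.shuffle(segs)          # segments may arrive out of order
restored = reassemble(segs)   # numbering lets the receiver restore the order
print(restored == stream)     # -> True
```

The sequence numbers are what let the receiver rebuild the original byte stream regardless of arrival order.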
Features of TCP/IP
Some of the most prominent features of the Transmission Control Protocol are:
1. Connection Oriented
• It means the sender and receiver are connected to each other until the
completion of the process.
2. Ordered Delivery
• The order of the data is maintained, i.e., the order remains the same before
and after transmission.
3. Full Duplex
• In TCP, data can be transmitted from the receiver to the sender and vice versa
at the same time.
• It increases efficiency of data flow between sender and receiver.
4. Flow Control
• Flow control limits the rate at which a sender transfers data. This is done
to ensure reliable delivery.
• The receiver continually hints to the sender on how much data can be
received (using a sliding window)
5. Error Control
• TCP detects corrupted, lost, duplicated, and out-of-order segments using
checksums, sequence numbers, and acknowledgments, and retransmits
damaged or lost segments.
6. Congestion Control
• TCP reduces its sending rate when the network is overloaded, so that
senders do not overwhelm the network.
Advantages
• It is a reliable protocol.
• It provides an error-checking mechanism as well as one for recovery.
• It gives flow control.
• It makes sure that the data reaches the proper destination in the exact
order that it was sent.
• Open Protocol, not owned by any organization or individual.
• It assigns an IP address to each computer on the network and a domain
name to each site, thus making each device and site distinguishable
over the network.
Disadvantages
• TCP is made for Wide Area Networks, thus its size can become an issue
for small networks with low resources.
• TCP involves processing at several layers, which can slow down the speed of the network.
• It is not generic in nature. Meaning, it cannot represent any protocol stack
other than the TCP/IP suite. E.g., it cannot work with a Bluetooth
connection.
• It has not been significantly modified since its development around 30 years ago.
UDP Protocol
• UDP stands for User Datagram Protocol. David P. Reed
developed the UDP protocol in 1980. It is defined in RFC 768, and
it is a part of the TCP/IP protocol suite, so it is a standard protocol
over the internet.
• Like TCP, UDP provides a set of rules that governs how the data
should be exchanged over the internet. The UDP works by
encapsulating the data into the packet and providing its own
header information to the packet. Then, this UDP packet is
encapsulated to the IP packet and sent off to its destination.
• Both the TCP and UDP protocols send their data over the Internet
Protocol network, so they are also known as TCP/IP and UDP/IP. There
are many differences between these two protocols. UDP provides
connectionless, best-effort delivery, whereas TCP provides reliable,
connection-oriented delivery. Since UDP sends messages in the form of
datagrams, it is considered a best-effort mode of communication.
o Connectionless
UDP is a connectionless protocol, as it does not create a virtual path to transfer the
data. Because there is no virtual path, packets may follow different paths between the
sender and the receiver, which can lead to packets being lost or received out of order.
Ordered delivery of data is not guaranteed: since the datagrams are not numbered,
there is no guarantee that datagrams sent in some order will be received in that order.
o Ports
The UDP protocol uses port numbers so that the data can be sent to the correct
destination process. Port numbers range from 0 to 65535; the well-known ports are
0 to 1023.
o Faster transmission
UDP transmits data faster than TCP because it has no connection setup,
acknowledgment, or retransmission overhead.
o Acknowledgment mechanism
UDP does not have any acknowledgment mechanism, i.e., there is no handshaking
between the UDP sender and the UDP receiver. In TCP, before the message is sent, the
receiver first acknowledges that it is ready, and then the sender sends the data; this
handshaking occurs between the sender and the receiver, whereas in UDP there is no
handshaking between the sender and the receiver.
Each UDP segment is handled independently of the others, and each segment may take
a different path to reach the destination. UDP segments can be lost or delivered out of
order, as there is no connection setup between the sender and the receiver.
o Stateless
It is a stateless protocol, which means the sender does not keep the state of the packets
it has sent and does not receive acknowledgements for them.
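The connectionless, handshake-free behaviour described above is visible directly in the socket API. A minimal sketch, assuming a loopback receiver on a hypothetical port 50008:

```python
import socket

HOST, PORT = "127.0.0.1", 50008  # assumed loopback address/port for the demo

# Receiver: no LISTEN/ACCEPT and no handshake -- just bind and read datagrams.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind((HOST, PORT))
rx.settimeout(2)

# Sender: no connection setup; each sendto() is an independent datagram.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"datagram 1", (HOST, PORT))
tx.sendto(b"datagram 2", (HOST, PORT))

# No acknowledgments flow back; delivery and ordering are best-effort.
received = {rx.recvfrom(1024)[0], rx.recvfrom(1024)[0]}
print(received)

tx.close()
rx.close()
```

Contrast this with the TCP sketch earlier: there is no connect/accept step, and each datagram stands alone.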
Network Layer
o The Network Layer is the third layer of the OSI model.
o It handles the service requests from the transport layer and further forwards
the service request to the data link layer.
o The network layer translates logical addresses into physical addresses.
o It determines the route from the source to the destination and also manages
the traffic problems such as switching, routing and controls the congestion of
data packets.
o The main role of the network layer is to move the packets from sending host
to the receiving host.
o The Internet Protocol (IP) is a protocol, or set of rules, for routing and
addressing packets of data so that they can travel across networks and arrive at
the correct destination. Data traversing the Internet is divided into smaller pieces,
called packets. IP information is attached to each packet, and this information
helps routers to send packets to the right place. Every device or domain that
connects to the Internet is assigned an IP address, and as packets are directed to
the IP address attached to them, data arrives where it is needed.
o Once the packets arrive at their destination, they are handled differently
depending on which transport protocol is used in combination with IP. The most
common transport protocols are TCP and UDP.
Connection Oriented :-
A connection-oriented service is one that establishes a dedicated connection between
the communicating entities before data communication commences. It is modeled after
the telephone system. To use a connection-oriented service, the user first establishes a
connection, uses it and then releases it. In connection-oriented services, the data
streams/packets are delivered to the receiver in the same order in which they have been
sent by the sender.
Connectionless service :-
A connectionless service is one in which each message is routed independently, with no
connection set up in advance; it is modeled after the postal system. For example, an
acknowledged datagram service is like registered mail or a text message with a delivery
report: each message is delivered independently, but its delivery is confirmed.
o Whether the network layer provides datagram service or virtual circuit service,
the main job of the network layer is to provide the best route. The routing
protocol performs this job.
o The routing protocol is a routing algorithm that provides the best path from
the source to the destination. The best path is the path that has the "least-
cost path" from source to the destination.
o Routing is the process of forwarding the packets from source to the destination
but the best route to send the packets is determined by the routing algorithm.
In shortest path routing, the topology of the communication network is defined using a
directed weighted graph. The nodes in the graph represent switching components, and the
directed arcs represent the communication links between them. Each arc has a weight that
defines the cost of sending a packet between two nodes in a specific direction. This cost is
usually a positive value that can denote such factors as delay, throughput, error rate,
financial cost, etc. The labelling of arcs can be done with the mean queueing and
transmission delay for a standard test packet measured on an hourly basis, or computed as
a function of bandwidth, average traffic, communication cost, mean queue length, measured
delay, distance, or some other factors. A path between two nodes can go through various
intermediary nodes and arcs. The goal of shortest path routing is to find a path between two
nodes that has the lowest total cost, where the total cost of a path is the sum of the arc
costs along it.
Classification of a Routing algorithm
The Routing algorithm is divided into two categories: adaptive and non-adaptive.
Adaptive Routing algorithm
o This algorithm makes routing decisions based on the current topology and network
traffic.
o The main parameters related to this algorithm are hop count, distance and
estimated transit time.
Non-Adaptive Routing algorithm
o This algorithm makes routing decisions in advance, independent of the current
traffic; flooding and random walks are two examples.
o Flooding: In flooding, every incoming packet is sent out on all outgoing links
except the one on which it arrived. The disadvantage of flooding is that a node may
receive several copies of a particular packet.
o Random walks: In a random walk, a packet is sent by the node to one of its
neighbours chosen at random. An advantage of random walks is that they use
alternative routes very efficiently.
Congestion Control :-
Congestion is a state occurring in the network layer when the message traffic is so
heavy that it slows down the network response time.
Leaky Bucket Algorithm
Imagine a bucket with a small hole in the bottom. No matter at what rate water
enters the bucket, the outflow is at a constant rate. When the bucket is full,
additional water entering it spills over the sides and is lost.
1. When host wants to send packet, packet is thrown into the bucket.
2. The bucket leaks at a constant rate, meaning the network interface
transmits packets at a constant rate.
3. Bursty traffic is converted to a uniform traffic by the leaky bucket.
4. In practice the bucket is a finite queue that outputs at a finite rate.
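The four steps above can be sketched as a per-tick simulation. This is a toy model; the burst size, bucket capacity, and leak rate are arbitrary assumptions chosen to show the behaviour.

```python
def leaky_bucket(arrivals, capacity, leak_rate):
    """Simulate a leaky bucket tick by tick.
    arrivals[t] = packets arriving at tick t
    capacity    = bucket (queue) size; excess packets spill over and are lost
    leak_rate   = packets transmitted per tick (the constant outflow)
    Returns (packets sent per tick, total packets dropped)."""
    queue, sent, dropped = 0, [], 0
    for a in arrivals:
        space = capacity - queue
        accepted = min(a, space)
        dropped += a - accepted      # bucket overflow: spilled packets are lost
        queue += accepted
        out = min(queue, leak_rate)  # constant-rate outflow, step 2 above
        queue -= out
        sent.append(out)
    return sent, dropped

# A burst of 6 packets at once, then nothing: the output is smoothed to
# one packet per tick, and 2 packets are lost to overflow.
sent, dropped = leaky_bucket([6, 0, 0, 0, 0], capacity=4, leak_rate=1)
print(sent, dropped)  # -> [1, 1, 1, 1, 0] 2
```

This makes step 3 concrete: bursty input becomes uniform output, at the price of dropped packets when the finite queue overflows.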
• Token bucket Algorithm
• The leaky bucket algorithm has a rigid output design at an average rate
independent of the bursty traffic.
• In some applications, when large bursts arrive, the output is allowed to
speed up. This calls for a more flexible algorithm, preferably one that
never loses information. Therefore, a token bucket algorithm finds its
uses in network traffic shaping or rate-limiting.
• It is a control algorithm that indicates when traffic should be sent,
based on the presence of tokens in the bucket.
• The bucket contains tokens. Each token permits the sending of a packet
of a predetermined size. Tokens are removed from the bucket when a
packet is sent.
• When tokens are available in the bucket, a flow is allowed to transmit
traffic.
• No tokens means no flow can send its packets. Hence, a flow can
transmit traffic up to its peak burst rate only if there are enough
tokens in the bucket.
The leaky bucket algorithm enforces output pattern at the average rate, no
matter how bursty the traffic is. So in order to deal with the bursty traffic we
need a flexible algorithm so that the data is not lost. One such algorithm is
token bucket algorithm.
Ways in which the token bucket is superior to the leaky bucket: The leaky bucket
algorithm controls the rate at which packets are introduced into the network,
but it is very conservative in nature. Some flexibility is introduced in the token
bucket algorithm. In the token bucket algorithm, tokens are generated at each
tick (up to a certain limit). For an incoming packet to be transmitted, it must
capture a token, and then the transmission takes place at the same rate. Hence
some of the bursty packets are transmitted at the same rate if tokens are
available, which introduces some flexibility into the system.
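The contrast with the leaky bucket can be made concrete with a per-tick simulation. This is a toy model in which one token permits one packet; the numbers are arbitrary assumptions chosen to mirror the leaky bucket example.

```python
def token_bucket(arrivals, bucket_size, token_rate):
    """Simulate a token bucket tick by tick. One token = one packet.
    Tokens accumulate at token_rate per tick, up to bucket_size, so an idle
    period saves up tokens that later let a burst go out all at once."""
    tokens = bucket_size      # bucket starts full after an idle period
    backlog, sent = 0, []
    for a in arrivals:
        backlog += a
        out = min(backlog, tokens)  # each departing packet consumes a token
        tokens -= out
        backlog -= out
        sent.append(out)
        tokens = min(bucket_size, tokens + token_rate)  # replenish tokens
    return sent

# The same burst of 6 packets as in the leaky bucket example: the saved-up
# tokens let 4 packets go out immediately, instead of a rigid 1 per tick.
print(token_bucket([6, 0, 0, 0, 0], bucket_size=4, token_rate=1))  # -> [4, 1, 1, 0, 0]
```

The burst is allowed through up to the number of accumulated tokens, which is exactly the flexibility the text attributes to the token bucket.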
The sliding window is further divided into two categories, i.e., Go-Back-N and
Selective Repeat. Based on the usage, one selects the error control mechanism,
whether it is stop and wait or sliding window.
The idea behind the stop and wait protocol is that when the sender sends a
frame, it waits for the acknowledgment before sending the next frame.
Sender side
Rule 1: Send one data packet at a time.
Rule 2: Send the next packet only after receiving the
acknowledgment of the previous packet.
Therefore, the idea of stop and wait protocol in the sender's side is very
simple, i.e., send one packet at a time, and do not send another packet before
receiving the acknowledgment.
Receiver side
Rule 1: Receive and then consume the data packet.
Rule 2: Once the data packet is consumed, send the acknowledgment.
Therefore, the idea of stop and wait protocol in the receiver's side is also very
simple, i.e., consume the packet, and once the packet is consumed, the
acknowledgment is sent. This is known as a flow control mechanism.
Working of Stop and Wait protocol
The above figure shows the working of the stop and wait protocol. The
sender sends a packet, known as a data packet, and will not send the second
packet without receiving the acknowledgment of the first one. The receiver
sends an acknowledgment for each data packet it receives. Once the
acknowledgment is received, the sender sends the next packet. This process
continues until all the packets have been sent. The main advantage of this
protocol is its simplicity, but it has some disadvantages as well. For example,
if there are 1000 data packets to be sent, they cannot all be sent at a time:
in the Stop and Wait protocol, only one packet is sent at a time.
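The one-packet-at-a-time behaviour can be sketched as a small simulation. The loss probability and random seed are arbitrary assumptions; loss here stands in for a lost packet or a lost acknowledgment, either of which forces a retransmission.

```python
import random

def stop_and_wait(packets, loss_prob=0.3, seed=1):
    """Toy stop-and-wait simulation: send one packet, wait for its ACK,
    retransmit on (simulated) loss, then move on to the next packet."""
    rng = random.Random(seed)          # seeded so the run is repeatable
    delivered, transmissions = [], 0
    for pkt in packets:
        while True:
            transmissions += 1
            if rng.random() < loss_prob:   # packet or ACK lost: timeout...
                continue                   # ...so retransmit the same packet
            delivered.append(pkt)          # receiver consumes it, ACK returns
            break                          # only now may the next packet go
    return delivered, transmissions

delivered, tx = stop_and_wait(["p1", "p2", "p3"])
print(delivered, tx)
```

Every packet is eventually delivered in order, but the transmission count shows the cost: each loss stalls the whole pipeline until the retransmission succeeds.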
In this technique, each frame is assigned a sequence number. The sequence
numbers are used to detect missing data at the receiver end. The sliding
window technique also uses the sequence numbers to avoid duplicate data.
Types of Sliding Window Protocol
Sliding window protocol has two types:
1. Go-Back-N ARQ
2. Selective Repeat ARQ
Go-Back-N ARQ
Go-Back-N ARQ protocol is also known as Go-Back-N Automatic Repeat
Request. It is a data link layer protocol that uses a sliding window method. In
this, if any frame is corrupted or lost, all subsequent frames have to be sent
again.
The size of the sender window is N in this protocol. For example, in Go-Back-8
the size of the sender window is 8. The receiver window size is always 1.
If the receiver receives a corrupted frame, it discards it; the receiver does not
accept a corrupted frame. When the timer expires, the sender sends the
correct frame again. The design of the Go-Back-N ARQ protocol is shown below.
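A toy trace of the Go-Back-N retransmission behaviour: the `lost` set below is a hypothetical schedule of which (frame, attempt) transmissions fail. Losing frame 1 forces frame 2, which was already in flight, to be resent as well.

```python
def go_back_n(num_frames, window, lost):
    """Toy Go-Back-N trace: sender window of `window`, receiver window of 1.
    `lost` is a set of (frame, attempt) pairs that fail in transit; losing
    frame f means the receiver discards f and every later in-flight frame,
    so the sender must go back and resend from f."""
    base, attempts, log = 0, {}, []
    while base < num_frames:
        resend_from = None
        for f in range(base, min(base + window, num_frames)):
            n = attempts[f] = attempts.get(f, 0) + 1
            log.append(("send", f))
            if (f, n) in lost and resend_from is None:
                resend_from = f      # everything from f onwards is discarded
        base = resend_from if resend_from is not None else base + window
    return log

# Frame 1 is lost on its first attempt: frames 1 AND 2 must both be resent.
print(go_back_n(num_frames=3, window=3, lost={(1, 1)}))
# -> [('send', 0), ('send', 1), ('send', 2), ('send', 1), ('send', 2)]
```

The trace shows the protocol's defining cost: one lost frame triggers retransmission of all subsequent frames in the window.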
Selective Repeat ARQ
Selective Repeat ARQ is also known as the Selective Repeat Automatic Repeat
Request. It is a data link layer protocol that uses a sliding window method.
The Go-Back-N ARQ protocol works well if there are few errors. But if there are
a lot of errors in the frames, a lot of bandwidth is lost in sending the frames
again. So, we use the Selective Repeat ARQ protocol. In this protocol, the size
of the sender window is always equal to the size of the receiver window, and
the size of the sliding window is always greater than 1.
If the receiver receives a corrupt frame, it does not simply discard it. It sends
a negative acknowledgment to the sender, and the sender resends that frame
as soon as it receives the negative acknowledgment, without waiting for any
time-out. The design of the Selective Repeat ARQ protocol is shown below.
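The same hypothetical loss of frame 1, traced under Selective Repeat: only the NAKed frame is resent, and frames that arrived correctly are never retransmitted.

```python
def selective_repeat(num_frames, lost):
    """Toy Selective Repeat trace. `lost` is a set of (frame, attempt) pairs
    that fail in transit; the receiver NAKs a bad frame, and the sender
    resends only that frame, without waiting for a time-out."""
    log, pending, attempts = [], list(range(num_frames)), {}
    while pending:
        f = pending.pop(0)
        n = attempts[f] = attempts.get(f, 0) + 1
        log.append(("send", f))
        if (f, n) in lost:
            log.append(("nak", f))   # negative acknowledgment comes back
            pending.append(f)        # resend only this frame later
    return log

# Frame 1 is lost on its first attempt: frame 2 is NOT resent, unlike Go-Back-N.
print(selective_repeat(3, lost={(1, 1)}))
# -> [('send', 0), ('send', 1), ('nak', 1), ('send', 2), ('send', 1)]
```

Comparing this trace with the Go-Back-N one makes the bandwidth saving plain: each loss costs exactly one extra transmission instead of a whole window's worth.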
Difference between the Go-Back-N ARQ and Selective Repeat ARQ
Go-Back-N ARQ: the sender window size is N and the receiver window size is always 1.
A corrupted frame is discarded, and that frame and all subsequent frames are
retransmitted after the timer expires.
Selective Repeat ARQ: the sender and receiver window sizes are equal and greater
than 1. A corrupted frame triggers a negative acknowledgment, and only that frame
is retransmitted, without waiting for a time-out.
o Framing: The data link layer encapsulates each network layer datagram within a
link-layer frame before transmission across the link. A frame consists of a data
field, in which the network layer datagram is inserted, and a number of header
fields. A link-layer protocol specifies the structure of the frame as well as a
channel access protocol by which frames are transmitted over the link.
o Error detection: Errors can be introduced by signal attenuation and noise. Data
Link Layer protocol provides a mechanism to detect one or more errors. This
is achieved by adding error detection bits in the frame and then receiving node
can perform an error check.
o Error correction: Error correction is similar to error detection, except that the
receiving node not only detects the errors but also determines where in the
frame the errors have occurred.
o Flow control: A receiving node can receive frames at a faster rate than it
can process them. Without flow control, the receiver's buffer can overflow,
and frames can get lost. To overcome this problem, the data link layer uses
flow control to prevent the sending node on one side of the link from
overwhelming the receiving node on the other side.
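The "add error detection bits, then check at the receiver" idea above can be sketched with the simplest possible scheme, a single even-parity bit. Real link layers use stronger codes such as CRCs; this is only a minimal illustration of the principle.

```python
def add_parity(bits):
    """Sender side: append one even-parity bit so the count of 1s is even."""
    return bits + [sum(bits) % 2]

def check_parity(frame):
    """Receiver side: an odd count of 1s means at least one bit was flipped."""
    return sum(frame) % 2 == 0

frame = add_parity([1, 0, 1, 1])   # -> [1, 0, 1, 1, 1]
ok_before = check_parity(frame)    # True: the frame arrived intact
frame[2] ^= 1                      # simulate a single bit error from noise
ok_after = check_parity(frame)     # False: the error is detected
print(ok_before, ok_after)         # -> True False
```

Note that a single parity bit detects any odd number of flipped bits but cannot locate them; locating the error (error correction, as above) requires more redundancy, e.g. Hamming codes.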