CN Mod3 NOTES

MODULE 3
NETWORK LAYER

● The network layer is responsible for source-to-destination (end-to-end) delivery of a packet across multiple network links.
● It routes packets to their final destination.

Design issues - network layer
● Store-and-forward packet switching
● Services provided to transport layer
● Implementation of connectionless service
● Implementation of connection-oriented service
● Comparison of virtual-circuit and datagram networks

1. Store-and-forward packet switching
● A host with a packet to send transmits it to the nearest router, either on its own LAN or over a point-to-point link to the carrier.
● The packet is stored there until it has fully arrived and the link has finished its processing by verifying the checksum.
● Then it is forwarded to the next router along the path until it reaches the destination host, where it is delivered. This mechanism is store-and-forward packet switching.
● The fig shows the environment of the network layer protocols.
● The major components of the system are the carrier's equipment (routers connected by transmission lines) and the customers' equipment, shown outside the oval.
● Host H1 is directly connected to one of the carrier's routers, A, by a leased line. In contrast, H2 is on a LAN with a router, F, owned and operated by the customer. This router also has a leased line to the carrier's equipment.

2. Services provided to transport layer
● The services are designed with the following goals in mind:
– The services provided should be independent of the underlying technology: users of the service need not be aware of the physical implementation of the network – for all they know, their messages could be transported via carrier pigeon.
– The transport layer should be shielded from the number, type, and topology of the routers present: all the transport layer wants is a communication link; it need not know how the link is made.
– The network addresses made available to the transport layer should use a uniform addressing scheme.
● The two classes of service the network layer can provide are:
– connectionless network service
– connection-oriented network service
3. Implementation of connectionless service : Datagram approach
● If connectionless service is offered, packets are injected into the network individually and routed independently of each other through available paths.
● No advance setup of a path is needed.
● There is no relationship between packets belonging to the same message.
● In this context, the packets are frequently called datagrams and the network is called a datagram network.
● Each packet is routed based on the information contained in its header: the source and destination addresses.
● The destination address defines where the packet should go; the source address defines where it comes from.
● The router routes the packet based only on the destination address.
● The source address may be used to send an error message back to the source if the packet is discarded.
● Forwarding process in a router when used in the datagram approach:

4. Implementation of connection-oriented service : Virtual circuit approach
● A path from the source router to the destination router must be established before any data packets can be sent.
● This connection is called a VC (virtual circuit), analogous to the telephone system.
● After connection setup, all packets follow the same path.
● There is a relationship between all packets belonging to a message.
● In this type of service, not only must the packet contain the source and destination addresses, it must also contain a flow label, a virtual circuit identifier (VCI), that defines the virtual path the packet should follow.
● Each packet is forwarded based on the label in the packet.
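The label-based forwarding described above can be sketched as a small table lookup. This is a minimal illustration with made-up port numbers and VCIs, not any particular router's table format; real switches also rewrite (swap) the label on the way out, as shown here:

```python
# Hypothetical VC forwarding table for one router: packets are forwarded,
# and their labels swapped, based on (incoming port, incoming VCI).
vc_table = {
    # (in_port, in_vci): (out_port, out_vci)
    (1, 14): (3, 22),
    (2, 71): (4, 41),
}

def forward(in_port, in_vci):
    """Look up the VC entry and return the outgoing port and the new label."""
    out_port, out_vci = vc_table[(in_port, in_vci)]
    return out_port, out_vci

print(forward(1, 14))   # the packet leaves port 3 carrying VCI 22
```

Note that no destination address is consulted at all: once the circuit is set up, the (port, VCI) pair alone identifies the path.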
5. Comparison of virtual-circuit and datagram networks

Datagram Networks
● It is a connectionless service. There is no need to reserve resources, as there is no dedicated path for a connection session.
● All packets are free to use any available path. As a result, intermediate routers calculate routes on the go, using dynamically changing routing tables.
● Since every packet is free to choose any path, each packet must carry a header with proper information about the source, the destination, and the upper-layer data.
● The connectionless property means data packets may reach the destination in any order; they can be received out of order at the receiver's end.
● Datagram networks are not as reliable as virtual circuits.
● The major drawback of datagram packet switching is that a packet can only be forwarded if resources such as buffer space, CPU, and bandwidth are available; otherwise, the packet is discarded.
● However, datagram networks are easy and cost-efficient to implement, as there is no extra overhead of reserving resources and setting up a dedicated path each time an application has to communicate.
● This approach is used by IP networks, which provide data services like the Internet.

Virtual Circuits
● It is connection-oriented, meaning that resources like buffers, CPU, and bandwidth are reserved for the time during which the newly set up VC is going to be used by a data transfer session.
● The first packet sent reserves resources at each router along the path. Subsequent packets follow the same path as the first packet for the duration of the connection.
● Since all packets follow the same path, only the first packet of the connection requires a global header; the remaining packets generally do not.
● Since all packets follow a specific path, packets are received in order at the destination.
● Virtual circuit switching ensures that all packets successfully reach the destination; no packet is discarded due to unavailability of resources.
● From the above points, it can be concluded that virtual circuits are a highly reliable method of data transfer.
● The issue with virtual circuits is that each time a new connection is set up, resources and extra state have to be reserved at every router along the path, which becomes problematic if many clients try to reserve a router's resources simultaneously.
Criteria: Virtual Circuit Networks | Datagram Networks

Connection Establishment: Prior to data transmission, a connection is established between sender and receiver. | No connection setup is required.
Routing: Routing decisions are made once during connection setup and remain fixed throughout the duration of the connection. | Routing decisions are made independently for each packet and can vary based on network conditions.
Flow Control: Uses explicit flow control, where the sender adjusts its rate of transmission based on feedback from the receiver. | Uses implicit flow control, where the sender assumes a certain level of available bandwidth and sends packets accordingly.
Congestion Control: Uses end-to-end congestion control, where the sender adjusts its rate of transmission based on feedback from the network. | Uses network-assisted congestion control, where routers monitor network conditions and may drop packets or send congestion signals to the sender.
Error Control: Provides reliable delivery of packets by detecting and retransmitting lost or corrupted packets. | Provides unreliable delivery of packets and does not guarantee delivery or correctness.
Overhead: Requires less overhead per packet, because connection setup and state maintenance are done only once. | Requires more overhead per packet, because each packet carries its destination address and other routing information.
Example Protocol: ATM, Frame Relay | IP (Internet Protocol)

Router
● It is a network layer device.
● Routers are devices that connect two or more networks.
● Whenever a router encounters a packet, it must decide where to pass that packet on.
● A router may be connected to multiple other routers.
● Each router has a routing table that decides the route to be followed by each packet.
● Routers look into the routing table when a new packet arrives.
● The routing table helps network devices decide the best path for data packets as they move from source to destination.

Routing table
● A routing table consists of at least 3 fields:
1. Network ID: the destination network ID.
2. Cost: the cost or metric of the path through which the packet is to be sent.
3. Next hop: the address of the next station (gateway) to which the packet is to be sent on the way to its destination.
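A routing table with these three fields can be sketched as a list of rows. The network IDs, costs, and next-hop addresses below are made up for illustration:

```python
# Hypothetical routing table: one row per destination network, with the
# three fields described above (network ID, cost, next hop).
routing_table = [
    {"network_id": "20.0.0.0", "cost": 2, "next_hop": "10.0.0.2"},
    {"network_id": "30.0.0.0", "cost": 5, "next_hop": "10.0.0.3"},
]

def route(dest_network):
    """Return the lowest-cost row for the destination network, or None."""
    rows = [r for r in routing_table if r["network_id"] == dest_network]
    return min(rows, key=lambda r: r["cost"]) if rows else None

print(route("20.0.0.0")["next_hop"])   # forward toward 10.0.0.2
```

When several rows exist for the same destination (e.g. learned from different neighbours), the lowest-cost one wins, which is exactly the role of the Cost field.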

● If the virtual circuit approach is used, routing decisions are made only when a new virtual circuit is being set up; data packets simply follow the previously established route.

Routing Algorithms

● A router has two processes inside it:
(1) One handles each packet as it arrives, looking up the outgoing line to use for it in the routing tables. This process is forwarding.
(2) The other process is responsible for filling in and updating the routing tables. That is where the routing algorithm comes into play.

● Routing is the main function of the network layer.
● It is the process of moving packets across a network from a source machine to a destination machine.
● It is usually performed by dedicated devices called routers.
● Routing is concerned with the problem of determining feasible paths for packets to follow from each source to each destination.
● The network layer must determine the route or path taken by each packet as it flows from sender to receiver. The algorithms that calculate these paths are referred to as routing algorithms.
● A routing algorithm is responsible for deciding which output line an incoming packet should be transmitted on.
● This decision must be made each time a new packet arrives, since the best route may have changed since last time.

Properties of Routing algorithms
● Optimality – the capability of the routing algorithm to select the best route.
● Simplicity and low overhead – as the complexity of a routing algorithm increases, so does its overhead.
● Robustness – the algorithm should perform correctly in the face of unusual circumstances such as hardware failures.
● Stability – the routing algorithm should be stable under all possible circumstances.
● Flexibility – it should quickly and accurately adapt to a variety of network circumstances.
● Fairness – every node connected to the network should get a fair chance to transmit its packets.

Routing vs. Forwarding

Routing
– Routing refers to the process of determining the optimal path from source to destination for data transmission in a computer network.
– Whenever an edge device needs to transmit data packets to a device located on another network or subnet, routing is used.
– This mechanism is achieved through a router.

Forwarding
– Forwarding refers to the process of actually transmitting the data packets from one network device to the next hop in their route.
– This route has been predetermined by the previous routers in the network.
– Therefore, forwarding doesn't involve any real decision-making; it is a straightforward real-time operation that occurs when data packets arrive at a router.

Types of Routing algorithms
1. Non-adaptive algorithms
● Routing decisions are not based on measurements or estimates of the current traffic and topology.
● The choice of route is computed in advance, off-line, and downloaded to the routers when the network is booted.
● This procedure is sometimes called static routing.
2. Adaptive algorithms
● Routing decisions are based on measurements or estimates of the current traffic and topology.
● Stability is an important goal for such routing algorithms.
● This procedure is called dynamic routing.
Different Routing Algorithms
● The Optimality Principle
● Shortest Path Routing
● Flooding
● Distance Vector Routing
● Link State Routing
● Hierarchical Routing
● Broadcast Routing
● Multicast Routing
● Routing for Mobile Hosts
● Routing in Ad Hoc Networks

1. The Optimality Principle
● The optimal path from a particular router to another may be the least cost path, the least distance path, the least time path, the path with the least number of hops, or a combination of any of the above.
● The optimality principle states that if router J is on the optimal path from router I to router K, then the optimal path from J to K also falls along the same route.
● As a direct consequence of the optimality principle, the set of optimal routes from all sources to a given destination forms a tree rooted at the destination. This tree is called a sink tree.
● Here the distance metric is the number of hops.
● The goal of all routing algorithms is to discover and use the sink trees for all routers.

2. Shortest-path algorithms
● This is one of the simple static routing algorithms that are widely used for routing in a network.
● The basic idea is to build a graph with each node representing a router and each line representing a communication link.
● To choose a route between any two nodes in the graph, the algorithm simply finds the shortest path between the nodes.
● "Shortest path" means the path in which one or more metrics are minimized.
● The metric may be distance, bandwidth, average traffic, communication cost, mean queue length, measured delay, or any other factor.
● Common shortest path algorithms:
– Bellman-Ford's algorithm
– Dijkstra's algorithm
– Floyd-Warshall's algorithm

2.1. Dijkstra's Algorithm
● Dijkstra's algorithm is a single-source shortest path algorithm.
● Single-source means that only one source is given, and we have to find the shortest path from that source to all the other nodes.
● Dijkstra's algorithm is a greedy algorithm used to find the minimum distance between a single node and all other nodes in a given graph.
● First, we consider any vertex as the source vertex.
● In this algorithm, the criterion for the shortest path is distance.
● Note: a node has zero cost with respect to itself.
● Note: Dijkstra's algorithm is applicable only when all edge costs are non-negative.

Algorithm:
1. Mark the source node with a current distance of 0 and the rest with infinity.
2. Set the unvisited node with the smallest current distance as the current node.
3. For each neighbour N of the current node, add the current distance of the current node to the weight of the edge connecting them. If this sum is smaller than the current distance of N, set it as N's new current distance.
4. Mark the current node as visited.
5. Go to step 2 if any nodes remain unvisited.

Worked example: consider the graph given below.
● Here we take 0 as the source vertex; the distance to all the other vertices is initially infinity.
● At first, we do not know the distances, so we begin by finding the vertices that are directly connected to vertex 0.

● As we can observe in the graph below, two vertices are directly connected to vertex 0.
● Till now, two nodes have been selected, i.e., 0 and 1.
● Now we compare the remaining nodes, excluding nodes 0 and 1.
● Node 4 has the minimum distance, i.e., 8. Therefore, vertex 4 is selected.
● Since vertex 4 is selected, we consider all the direct paths from vertex 4: 4 to 0, 4 to 1, 4 to 8, and 4 to 5.
● Since vertices 0 and 1 have already been selected, we consider only vertices 8 and 5.
● First, we consider vertex 8 and calculate the distance between vertices 4 and 8. Consider vertex 4 as 'x' and vertex 8 as 'y'.
● Is d(x) + c(x, y) < d(y)? (8 + 7) < ∞, i.e., 15 < ∞. Since 15 is less than infinity, we update d(8) from ∞ to 15.
● Now we consider vertex 5 and calculate the distance between vertices 4 and 5. Consider vertex 4 as 'x' and vertex 5 as 'y'.
● Is d(x) + c(x, y) < d(y)? (8 + 1) < ∞, i.e., 9 < ∞. Since 9 is less than infinity, we update d(5) from ∞ to 9.
● Node 5 now has the minimum value among the unselected nodes, i.e., 9. Therefore, vertex 5 is selected.
● Since vertex 5 is selected, we consider all the direct paths from vertex 5: 5 to 8 and 5 to 6.
● First, we consider vertex 8. Consider vertex 5 as 'x' and vertex 8 as 'y'.
● Is d(x) + c(x, y) < d(y)? (9 + 15) < 15, i.e., 24 < 15? Since 24 is not less than 15, we do not update d(8).
● Now we consider vertex 6. Consider vertex 5 as 'x' and vertex 6 as 'y'.
● Is d(x) + c(x, y) < d(y)? (9 + 2) < ∞, i.e., 11 < ∞. Since 11 is less than infinity, we update d(6) from ∞ to 11.
● Till now, nodes 0, 1, 4 and 5 have been selected. Comparing the remaining nodes, node 6 has the lowest value, i.e., 11. Therefore, vertex 6 is selected.
● Since vertex 6 is selected, we consider all the direct paths from vertex 6: 6 to 2, 6 to 3, and 6 to 7.
● First, we consider vertex 2. Consider vertex 6 as 'x' and vertex 2 as 'y'.
● Is d(x) + c(x, y) < d(y)? (11 + 4) < 12, i.e., 15 < 12? Since 15 is not less than 12, we do not update d(2).
● Now we consider vertex 3. Consider vertex 6 as 'x' and vertex 3 as 'y'.
● Is d(x) + c(x, y) < d(y)? (11 + 14) < ∞, i.e., 25 < ∞. Since 25 is less than ∞, we update d(3) from ∞ to 25.
● Now we consider vertex 7. Consider vertex 6 as 'x' and vertex 7 as 'y'.
● Is d(x) + c(x, y) < d(y)? (11 + 10) < ∞, i.e., 21 < ∞. Since 21 is less than ∞, we update d(7) from ∞ to 21.
● Till now, nodes 0, 1, 4, 5, and 6 have been selected. We compare all the unvisited nodes, i.e., 2, 3, 7, and 8. Node 2 has the minimum value, i.e., 12, among them. Therefore, node 2 is selected.
● Since node 2 is selected, we consider all the direct paths from node 2: 2 to 8, 2 to 6, and 2 to 3.
● First, we consider vertex 8. Consider vertex 2 as 'x' and vertex 8 as 'y'.
● Is d(x) + c(x, y) < d(y)? (12 + 2) < 15, i.e., 14 < 15. Since 14 is less than 15, we update d(8) from 15 to 14.
● Now we consider vertex 6. Consider vertex 2 as 'x' and vertex 6 as 'y'.
● Is d(x) + c(x, y) < d(y)? (12 + 4) < 11, i.e., 16 < 11? Since 16 is not less than 11, we do not update d(6).
● Now we consider vertex 3. Consider vertex 2 as 'x' and vertex 3 as 'y'.
● Is d(x) + c(x, y) < d(y)? (12 + 7) < 25, i.e., 19 < 25. Since 19 is less than 25, we update d(3) from 25 to 19.
● Till now, nodes 0, 1, 2, 4, 5, and 6 have been selected. We compare all the unvisited nodes, i.e., 3, 7, and 8. Among them, node 8 has the minimum value, i.e., 14. Therefore, node 8 is selected.
● The nodes directly connected to node 8 are 2, 4, and 5. Since all of them have already been selected, no distances are updated.
● The unvisited nodes are now 3 and 7. Among them, node 3 has the minimum value, i.e., 19. Therefore, node 3 is selected.
● The nodes directly connected to node 3 are 2, 6, and 7. Since nodes 2 and 6 have already been selected, we consider only vertex 7. Consider vertex 3 as 'x' and vertex 7 as 'y'.
● Is d(x) + c(x, y) < d(y)? (19 + 9) < 21, i.e., 28 < 21? Since 28 is not less than 21, we do not update d(7).
● Finally, node 7 is selected, and the algorithm terminates.
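The trace above can be reproduced in a few lines of code. The sketch below assumes the edge weights implied by the trace; the 0-1 and 0-4 weights are inferred from d(1)=4 and d(4)=8, since the figure itself is not reproduced here:

```python
import heapq

# Edge weights read off the worked trace above (0-1 and 0-4 are inferred).
graph = {
    0: {1: 4, 4: 8},
    1: {0: 4, 2: 8},
    2: {1: 8, 3: 7, 6: 4, 8: 2},
    3: {2: 7, 6: 14, 7: 9},
    4: {0: 8, 5: 1, 8: 7},
    5: {4: 1, 6: 2, 8: 15},
    6: {2: 4, 3: 14, 5: 2, 7: 10},
    7: {3: 9, 6: 10},
    8: {2: 2, 4: 7, 5: 15},
}

def dijkstra(graph, source):
    """Single-source shortest paths; returns {node: least cost from source}."""
    dist = {node: float("inf") for node in graph}
    dist[source] = 0                       # step 1: source gets distance 0
    heap = [(0, source)]                   # min-heap of (distance, node)
    visited = set()
    while heap:
        d, u = heapq.heappop(heap)         # step 2: closest unvisited node
        if u in visited:
            continue
        visited.add(u)                     # step 4: mark current node visited
        for v, w in graph[u].items():      # step 3: relax each neighbour
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

print(dijkstra(graph, 0))
# {0: 0, 1: 4, 2: 12, 3: 19, 4: 8, 5: 9, 6: 11, 7: 21, 8: 14}
```

The printed distances match the final values of the trace: d(2)=12, d(3)=19, d(7)=21, d(8)=14.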

3. Flooding
● Flooding is a static routing algorithm.
● In this algorithm, every incoming packet is sent out on every outgoing line except the one on which it arrived.
● Flooding is a way to distribute routing information updates quickly to every node in a large network. It is also sometimes used for multicast packets.
● Flooding, which is similar to broadcasting, occurs when source packets (without routing data) are transmitted to all attached network nodes. Because flooding uses every path in the network, the shortest path is also used.
● The flooding algorithm is easy to implement: when a packet is received, a router sends it out on all interfaces except the one on which it was received.
● This places a heavy burden on the network, with many duplicate packets wandering through it.
● Flooding requires no network information such as topology, load conditions, or the cost of different paths.

Using the flooding technique:
● An incoming packet to A will be sent to B, C and D.
● B will send the packet to C and E.
● C will send the packet to B, D and F.
● D will send the packet to C and F.
● E will send the packet to F.
● F will send the packet to C and E.

Types of Flooding
● Uncontrolled flooding − each router unconditionally transmits the incoming data packets to all its neighbours.
● Controlled flooding − some method is used to control the transmission of packets to the neighbouring nodes. The two popular algorithms for controlled flooding are Sequence Number Controlled Flooding (SNCF) and Reverse Path Forwarding (RPF).
● Selective flooding − routers transmit incoming packets only along paths that head approximately in the right direction, instead of along every available path.
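A minimal sketch of Sequence Number Controlled Flooding over the example topology above. The adjacency is reconstructed from the links the example mentions, so it is an assumption rather than the exact figure; each node forwards a given (source, sequence number) packet only once and never back out the incoming line:

```python
from collections import deque

# Topology reconstructed from the flooding example above (undirected links).
neighbours = {
    "A": ["B", "C", "D"],
    "B": ["A", "C", "E"],
    "C": ["A", "B", "D", "F"],
    "D": ["A", "C", "F"],
    "E": ["B", "F"],
    "F": ["C", "D", "E"],
}

def sncf_flood(source, seq_no):
    """Sequence Number Controlled Flooding: each node remembers the
    (source, seq_no) pairs it has seen and re-floods a packet only once."""
    seen = set()                          # (node, source, seq_no) already handled
    transmissions = 0
    queue = deque([(source, None)])       # (current node, node it came from)
    while queue:
        node, came_from = queue.popleft()
        if (node, source, seq_no) in seen:
            continue                      # duplicate: discard instead of re-flooding
        seen.add((node, source, seq_no))
        for nxt in neighbours[node]:
            if nxt != came_from:          # never send back out the incoming line
                transmissions += 1
                queue.append((nxt, node))
    return len({n for (n, _, _) in seen}), transmissions

nodes_reached, packets_sent = sncf_flood("A", seq_no=1)
print(nodes_reached, packets_sent)
```

Every node is reached, but the transmission count exceeds the node count: that gap is exactly the duplicate traffic that the sequence numbers keep from snowballing further.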
APPLICATIONS
● Flooding is not practical in most applications, but it does have some uses:
● In military applications, the tremendous robustness of flooding is highly desirable.
● In distributed database applications, it is sometimes necessary to update all the databases concurrently, in which case flooding can be useful.
● In wireless networks, all messages transmitted by a station can be received by all other stations within its radio range, so flooding is useful.

Advantages of Flooding
● It is very simple to set up and implement, since a router needs to know only its neighbours.
● It is extremely robust. Even if a large number of routers malfunction, the packets find a way to reach the destination.
● All nodes which are directly or indirectly connected are visited, so there is no chance of any node being left out.
● The shortest path is always used by flooding.

Limitations of Flooding
● Flooding tends to create an unbounded number of duplicate data packets.
● The network may be clogged with unwanted and duplicate data packets, which may hamper the delivery of other data packets.
● It is wasteful when only a single destination needs the packet, since it delivers the packet to all nodes irrespective of the destination.

Distance Vector Routing
● The distance vector algorithm is a dynamic algorithm.
● It is also called the Bellman-Ford routing algorithm or the Ford-Fulkerson algorithm.
● Each router maintains a distance table.
● Each node receives information from one or more of its directly attached neighbours, performs a calculation, and then distributes the result back to its neighbours.
● Information is shared at regular intervals: every 30 seconds, a router sends its information to the neighbouring routers.

Distance Vector (DV) Algorithm
1. A router transmits its distance vector to each of its neighbours in a routing packet.
2. Each router receives and saves the most recently received distance vector from each of its neighbours.
3. A router recalculates its distance vector when:
– it receives a distance vector from a neighbour containing different information than before, or
– it discovers that a link to a neighbour has gone down.
● In DV routing, the least cost route between any two nodes is the route with minimum distance.
● Each node maintains a vector (table) of minimum distances to every node.
● The table at each node also guides the packets to the desired node by showing the next stop in the route (next-hop routing).
● The table for node A shows how we can reach any node from this node. For example, our least cost to reach node E is 6, and the route passes through C.
● The DV calculation is based on minimizing the cost to each destination.
● From time to time, each node sends its own distance vector estimate to its neighbours.
● When a node x receives a new DV estimate from a neighbour v, it saves v's distance vector and updates its own DV using the Bellman-Ford equation:

Dx(y) = min{ C(x,v) + Dv(y), Dx(y) } for each node y ∈ N
– where Dx(y) = estimate of the least cost from x to y,
– C(x,v) = cost from x to its neighbour v,
– Dv(y) = v's least cost to y.
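The Bellman-Ford update can be sketched directly from the equation above. The node names and costs in the example call are hypothetical:

```python
INF = float("inf")

def dv_update(dx, neighbour_costs, neighbour_vectors):
    """One Bellman-Ford update at node x:
    Dx(y) = min( Dx(y), min over neighbours v of C(x,v) + Dv(y) )."""
    new_dx = dict(dx)
    for y in dx:                                   # every destination y in N
        for v, c_xv in neighbour_costs.items():    # every neighbour v of x
            candidate = c_xv + neighbour_vectors[v].get(y, INF)
            if candidate < new_dx[y]:
                new_dx[y] = candidate
    return new_dx

# Hypothetical example: node x has neighbours a (cost 1) and b (cost 4).
dx = {"x": 0, "a": 1, "b": 4, "c": INF}
costs = {"a": 1, "b": 4}
vectors = {"a": {"x": 1, "a": 0, "b": 2, "c": 5},
           "b": {"x": 4, "a": 2, "b": 0, "c": 1}}
print(dv_update(dx, costs, vectors))
# {'x': 0, 'a': 1, 'b': 3, 'c': 5}
```

Note how x now reaches b at cost 3 (via a, 1 + 2) and learns a route to c at cost 5 (via a, 1 + 4... no: via a, 1 + 5 = 6, versus via b, 4 + 1 = 5, so via b).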
Step 1: Initialization
● At the beginning, each node knows only the distance between itself and its immediate neighbours, i.e., those directly connected to it.
– Assume that each node can send a message to its immediate neighbours and find the distance between itself and these neighbours.
● The distance for any entry that is not a neighbour is marked as infinite (unreachable).

Step 2: Updating
● When does a node send its partial routing table (only two columns) to all its immediate neighbours?
– The table is sent both periodically and when there is a change in the table.
– a. Periodic update: a node sends its routing table, normally every 30 s, in a periodic update. The period depends on the protocol that is using distance vector routing.
– b. Triggered update: a node sends its two-column routing table to its neighbours anytime there is a change in its routing table. The change can result from the following:
1) A node receives a table from a neighbour, resulting in changes in its own table after updating.
2) A node detects a failure in a neighbouring link, which results in a distance change to infinity.

Q1: Consider 3 routers X, Y and Z as shown in the figure. Each router has its own routing table, and every routing table contains the distance to the destination nodes.

Q2: Consider the following subnet. Distance vector routing is used, and the following vectors have just come in to router C: from B: (5, 0, 8, 12, 6, 2); from D: (16, 12, 6, 0, 9, 10); and from E: (7, 6, 3, 9, 0, 4). The measured delays to B, D, and E are 6, 3, and 5, respectively. What is C's new routing table? Give both the outgoing line to use and the expected delay.

Updated routing table of C:
To   Cost   Next
A    11     B
B    6      B
C    0      C
D    3      D
E    5      E
F    8      B
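C's table can be checked mechanically by applying the Bellman-Ford rule to the three incoming vectors, taking for each destination the cheapest (delay to neighbour + neighbour's advertised cost):

```python
# Recomputing C's routing table for Q2:
# cost(dest) = min over neighbours v of (measured delay to v + v's advertised cost).
dests = ["A", "B", "C", "D", "E", "F"]
vectors = {
    "B": [5, 0, 8, 12, 6, 2],
    "D": [16, 12, 6, 0, 9, 10],
    "E": [7, 6, 3, 9, 0, 4],
}
delay_to = {"B": 6, "D": 3, "E": 5}

table = {}
for i, dest in enumerate(dests):
    if dest == "C":
        table[dest] = (0, "C")            # a node is at distance 0 from itself
        continue
    via, cost = min(
        ((v, delay_to[v] + vec[i]) for v, vec in vectors.items()),
        key=lambda t: t[1],
    )
    table[dest] = (cost, via)

for dest, (cost, nxt) in table.items():
    print(dest, cost, nxt)
```

For A, for instance, the candidates are 6+5=11 via B, 3+16=19 via D, and 5+7=12 via E, so (11, B) wins, matching the table above.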
Count-to-infinity problem
● At the beginning, both nodes A and B know how to reach node X.
● Suddenly, the link between A and X fails, and node A changes its table.
● If A can send its table to B immediately, everything is fine.
● However, the system becomes unstable if B sends its routing table to A before receiving A's update.
● Node A receives the update and, assuming that B has found a way to reach X, immediately updates its routing table.
● Based on the triggered update strategy, A sends its new update to B.
● Now B thinks that something has changed around A and updates its routing table.
● The cost of reaching X increases gradually until it reaches infinity. This is known as the count-to-infinity problem.
● At this point, node A thinks that the route to X is via B, while node B thinks that the route to X is via A. Packets bounce between A and B, creating a two-node loop problem.
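The loop can be replayed with a toy simulation. It assumes a RIP-style infinity of 16 and unit link costs (both assumptions, since the scenario above gives no numbers): A reached X directly, B reached X via A at cost 2, and then A hears B's stale vector before B hears the bad news.

```python
# Toy replay of the two-node instability described above.
INF = 16                      # RIP-style "infinity" (see the solutions below)
cost_ab = 1                   # assumed cost of the A-B link

a_to_x = INF                  # A just lost its direct route to X
b_to_x = 2                    # B's stale route to X (which went via A)

rounds = 0
while a_to_x < INF or b_to_x < INF:
    a_to_x = min(INF, cost_ab + b_to_x)   # A trusts B's advertisement
    b_to_x = min(INF, cost_ab + a_to_x)   # B then trusts A's new, worse cost
    rounds += 1

print(rounds, a_to_x, b_to_x)   # both costs creep up by 2 per exchange
```

Each exchange only raises the costs by the A-B link cost, so convergence takes many rounds; with a larger "infinity" it would take proportionally longer, which is exactly why redefining infinity as 16 helps.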


Solutions for the count-to-infinity problem

Defining Infinity:
– One solution is to redefine infinity as a smaller number, such as 16.
– The system will then become stable within a few updates.

Route Poisoning:
– When a route fails, distance vector protocols spread the bad news about the route failure by poisoning the route.
– Route poisoning refers to the practice of advertising a route, but with a special metric value called infinity.
– Routers consider routes advertised with an infinite metric to have failed.
– Each distance vector routing protocol uses an actual metric value that represents infinity; RIP defines infinity as 16.

Split Horizon:
– Split horizon is a method for resolving the instability.
– In this technique, each node sends only a portion of its table over each interface, rather than flooding the entire table through all of them.
– If node B believes that the best way to reach X is through node A, then node B need not inform node A of this information, because node A provided it in the first place (A already knows).
– Taking information from node A, modifying it, and sending it back to node A is what causes the confusion.
– In our example, node B removes that entry from its table before sending it to node A. Node A then keeps its distance to X as infinity.
– Later, when node A sends its table, node B corrects its own table as well. After the first update, the system is stable: both nodes A and B know that X is unreachable.

Poison Reverse:
– The split-horizon approach has one disadvantage.
– Normally, if there is no news about a route for a certain amount of time, the protocol employs a timer and instructs the node to remove the route from its table.
– In the previous example, node A cannot tell whether node B's omission of the route to X from its advertisement is the result of the split-horizon technique (the information came from A) or of the fact that B has not recently received any news about X.
– In the poison reverse technique, B can still advertise the value for X, but if the information came from A, it substitutes infinity for the distance as a warning: "Do not use this value; you are the source of my information about this route."
Link state routing
● Link state routing is a technique in which each router shares the knowledge of its neighbourhood with every other router in the internetwork.
● As a result, each node has a complete map of the topology.
● It is used in packet-switching networks.
● The three keys to understanding the link state routing algorithm:
– Knowledge about the neighbourhood: instead of sending its entire routing table, a router sends only information about its neighbourhood. A router broadcasts the identities of, and the costs of, its directly attached links to the other routers.
– Flooding: each router sends this information to every other router on the internetwork, not just to its neighbours. This process is known as flooding: every router that receives the packet sends copies to all its neighbours, so that finally each and every router receives a copy of the same information.
– Information sharing: a router sends its information to the other routers only when a change occurs in that information.
● Each router must do the following:
– Discover its neighbours and learn their network addresses.
– Measure the distance or cost metric to each of its neighbours.
– Construct a packet telling all it has just learned.
– Send this packet to, and receive packets from, all other routers.
– Compute the shortest path to every other router.

Step 1: Learning about the neighbours
● When a router is booted, its first task is to learn who its neighbours are, by sending a special HELLO packet on each point-to-point line.
● The router on the other end is expected to send back a reply giving its unique name.

Step 2: Setting link costs
● The algorithm requires each link to have a distance or cost metric for finding shortest paths.
● The cost to reach a neighbour can be set automatically or configured by the network operator.

Step 3: Building link state packets
● Each router builds a packet containing all the data.
● The packet starts with the identity of the sender, followed by a sequence number, an age, and a list of neighbours with the cost to each.
● When to build the packets?
– One possibility is to build them periodically, at regular intervals.
– Another possibility is to build them when some significant event occurs, such as a line or neighbour going down, coming back up again, or changing its properties.

Step 4: Distributing the link state packets
● Flooding is used to distribute the link state packets to all routers.
● Routers keep track of all (source router, sequence number) pairs they see.
● When a new link state packet comes in, it is checked against the list of packets already seen.
● If it is new, it is forwarded on all lines except the one it arrived on. If it is a duplicate, it is discarded.

Step 5: Computing the new routes
● Once a router has accumulated a full set of link state packets, it can construct the entire network graph, because every link is represented.
● Dijkstra's algorithm can then be run to construct the shortest paths to all possible destinations.
● The result tells the router which link to use to reach each destination; this information is installed in the routing table.

Link state routing is used by routers to make their routing tables and update
them regularly.

Initially each router will make link state table.

Consider R1. R2 and R3 are its neighbours.R1 will send them HELLO
message.
– After sending HELLO message, R1 will get to know who it is connected with and
what is its distance
– In DV, only DV is shared with routers.
– But in LS routing algo, linkstate packet will contain more information

Step 2: Flooding

Eg: R1 will flood the packet(link state table) to all the other routers

Now, R1 will receive link state table from R2,R3,R4,R5 and R6.

With the help of Single source shortest path algorithm, ie, Dijkstra’s algorithm, R1
will find shortest distance to reach R2,R3,R4,R5,R6.
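Step 5 above can be sketched in code. The sketch below runs Dijkstra's algorithm over a hypothetical six-router topology (the R1–R6 names and the link costs are illustrative assumptions, not taken from the figure) and returns both the shortest distances and the first-hop table that would be installed as the routing table:

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths over a link state database.

    graph: {router: {neighbour: link_cost}}, as reconstructed from the
    flooded link state packets. Returns (distance, first_hop) tables;
    first_hop[d] is the outgoing link to use to reach destination d.
    """
    dist = {source: 0}
    first_hop = {}
    pq = [(0, source, None)]  # (cost so far, node, first hop used)
    while pq:
        cost, node, hop = heapq.heappop(pq)
        if cost > dist.get(node, float("inf")):
            continue  # stale queue entry, a shorter path was found later
        if hop is not None:
            first_hop[node] = hop
        for nbr, w in graph[node].items():
            new_cost = cost + w
            if new_cost < dist.get(nbr, float("inf")):
                dist[nbr] = new_cost
                # leaving the source, the first hop is the neighbour itself
                heapq.heappush(pq, (new_cost, nbr,
                                    nbr if node == source else hop))
    return dist, first_hop

# Hypothetical topology with symmetric link costs.
g = {
    "R1": {"R2": 2, "R3": 5},
    "R2": {"R1": 2, "R4": 3},
    "R3": {"R1": 5, "R4": 1, "R5": 4},
    "R4": {"R2": 3, "R3": 1, "R5": 6, "R6": 8},
    "R5": {"R3": 4, "R4": 6, "R6": 2},
    "R6": {"R4": 8, "R5": 2},
}
dist, table = dijkstra(g, "R1")
# e.g. R1 reaches R4 at cost 5 via first hop R2
```

The first-hop table, not the full path, is what goes into the routing table: for each destination the router only needs to know which directly attached link to use.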
Comparison between Distance Vector Routing and Link State Routing

| Distance Vector Routing | Link State Routing |
|---|---|
| Each router periodically shares its knowledge about the entire network with its neighbours | Each router shares the knowledge of only its neighbouring links with every other router in the network |
| Updates the full routing table | Updates only the link state |
| Bandwidth required is less, due to local sharing, small packets and no flooding | Bandwidth required is more, due to flooding and the sending of large link state packets |
| Makes use of the Bellman-Ford algorithm | Makes use of Dijkstra's algorithm |
| Simple to implement and manage | Complex; requires a trained network administrator |
| Traffic is less | Traffic is more |
| Converges slowly | Converges faster |
| Suffers from the count-to-infinity problem | No count-to-infinity problem |
| Persistent looping problem, i.e. a loop can exist forever | No persistent loops, only transient loops |
| Practical implementations are RIP and IGRP | Practical implementations are OSPF and IS-IS |

MULTICAST ROUTING
● Multicast routing is a protocol that sends one copy of data to multiple users simultaneously on a closed network. Eg: video conferencing.
● Sending a message to a group is called multicasting, and its routing algorithm is called multicast routing.
● Sending a packet to all destinations simultaneously is called broadcasting.
● In broadcast routing, packets are sent to all nodes even if they do not want them. In multicast routing, the data is sent only to the nodes that want to receive the packets.
● The router must know that there are nodes which wish to receive the multicast packets (or stream); only then should it forward them.
● Multicast routing protocols use trees, i.e. spanning trees, to avoid loops.
● To do multicast routing, each router computes a spanning tree covering all other routers.
● When a process sends a multicast packet to a group, the first router examines its spanning tree and prunes it, removing all lines that do not lead to hosts that are members of the group.
● Multicast packets are forwarded only along the appropriate pruned spanning tree.
● For example, in Fig.(a) we have two groups, 1 and 2. Some routers are attached to hosts that belong to one or both of these groups, as indicated in the figure.
● A spanning tree for the leftmost router is shown in Fig.(b).
● Fig.(c) shows the pruned spanning tree for group 1; similarly, Fig.(d) shows the pruned spanning tree for group 2.
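The pruning step can be sketched as a small recursive function: starting from the spanning-tree root, a branch is kept only if it leads to at least one group member. The tree shape and member sets below are illustrative assumptions, not the topology of the figure:

```python
def prune(tree, node, members):
    """Return the set of routers kept in the pruned spanning tree.

    tree: {router: [child routers]} describing the spanning tree.
    members: routers with attached hosts belonging to the group.
    A subtree survives only if it contains at least one member.
    """
    kept = set()
    for child in tree.get(node, []):
        kept |= prune(tree, child, members)   # keep surviving subtrees
    if kept or node in members:
        kept.add(node)                        # this node lies on a path to a member
    return kept

# Hypothetical spanning tree rooted at R1.
tree = {"R1": ["R2", "R3"], "R2": ["R4"], "R3": ["R5"]}
group1 = prune(tree, "R1", {"R4"})
# only the R1 -> R2 -> R4 branch survives; R3 and R5 are pruned away
```

Multicast packets for the group are then forwarded only along the links joining the surviving routers.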

ROUTING FOR MOBILE HOSTS
● Millions of people have portable computers nowadays, and they generally want to read their e-mail and access their normal file systems wherever in the world they may be.
● These mobile hosts introduce a new complication: to route a packet to a mobile host, the network first has to find it.
Mobile Host:
● By the term mobile host, we mean all hosts that are away from home and still want to be connected.
● All hosts are assumed to have a permanent home location that never changes.
● The routing goal in systems with mobile hosts is to make it possible to send packets to mobile hosts using their home addresses and have the packets efficiently reach them wherever they may be. The task, of course, is to find them.
Model of the World:
● Here we have a WAN consisting of routers and hosts. Connected to the WAN are LANs, MANs, and wireless cells.
● Hosts that never move are said to be stationary. Eg: computers in a lab. They are connected to the network by copper wires or fiber optics.
● The world is divided up (geographically) into small units called areas, where an area is typically a LAN or wireless cell.
● Each area has one or more foreign agents, which are processes that keep track of all mobile hosts visiting the area.
● In addition, each area has a home agent, which keeps track of hosts whose home is in that area but who are currently visiting another area.
● When a new host enters an area, either by connecting to it (e.g., plugging into the LAN) or just wandering into the cell, its computer must register itself with the foreign agent there.
Registration process:
1. Periodically, each foreign agent broadcasts a packet announcing its existence and address. A newly arrived mobile host may wait for one of these messages, but if none arrives quickly enough, the mobile host can broadcast a packet asking: Are there any foreign agents around?
2. The mobile host registers with the foreign agent, giving its home address, current data link layer address, and some security information.
3. The foreign agent contacts the mobile host's home agent and says: One of your hosts is over here. The message from the foreign agent to the home agent contains the foreign agent's network address. It also includes the security information, to convince the home agent that the mobile host is really there.
4. The home agent examines the security information, which contains a timestamp, to prove that it was generated within the past few seconds. If it is happy, it tells the foreign agent to proceed.
5. When the foreign agent gets the acknowledgement from the home agent, it makes an entry in its tables and informs the mobile host that it is now registered.

CONGESTION CONTROL
● Synonymous with a traffic jam in networks.
● When too many packets are present in the subnet, performance degrades. This situation is called congestion.
● Congestion in a network may occur when the load on the network (the number of packets sent to the network) is greater than the capacity of the network (the number of packets the network can handle).
● Congestion control refers to the mechanisms and techniques used to control congestion and keep the load below the capacity.
How Congestion Happens?
● The following factors are responsible for congestion:
– Slow network links
– Slow processors
– Insufficient memory to store arriving packets
– Shortage of buffer space
● Congestion control refers to techniques and mechanisms that can either prevent congestion before it happens, or remove congestion after it has happened.

Open-Loop Congestion Control
● In open-loop congestion control, policies are applied to prevent congestion before it happens.
● In these mechanisms, congestion control is handled by either the source or the destination.
● Open-loop control is exercised using tools such as deciding when to accept new packets, when to discard packets, which packets are to be discarded, and making the scheduling decisions at various points.
1. Retransmission Policy
● Retransmission is sometimes unavoidable. If the sender feels that a sent packet is lost or corrupted, the packet needs to be retransmitted.
● Retransmission in general may increase congestion in the network. However, a good retransmission policy can prevent congestion.
● The retransmission policy and the retransmission timers must be designed to optimize efficiency and at the same time prevent congestion.
● The retransmission policy deals with how fast a sender times out. If a sender times out early, it will retransmit all the packets, which can lead to congestion.
2. Window Policy
● The type of window at the sender may also affect congestion.
● The Selective Repeat window is better than the Go-Back-N window for congestion control.
● In the Go-Back-N window, when the timer for a packet times out, several packets may be resent, although some of them may have arrived safely at the receiver. This duplication may make the congestion worse.
● The Selective Repeat window, on the other hand, tries to send only the specific packets that have been lost or corrupted.
3. Acknowledgment Policy
● The acknowledgment policy imposed by the receiver may also affect congestion.
● If the receiver does not acknowledge every packet it receives, it may slow down the sender and help prevent congestion.
● Several approaches are used in this case. A receiver may send an acknowledgment only if it has a packet to be sent or a special timer expires. A receiver may also decide to acknowledge only N packets at a time.
● Acknowledgments are also part of the load in a network; sending fewer acknowledgments means imposing less load on the network.
● If each received packet is acknowledged immediately, the acknowledgement packets increase the traffic. If the acknowledgement is delayed (piggybacking), there is a possibility of timeout and retransmission.
● Thus, a tight flow control has to be exercised to avoid congestion.
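The "acknowledge only N packets at a time" idea can be sketched as a small receiver class. The class name, the choice of N = 4, and the cumulative-ACK return value are illustrative assumptions (a real receiver would also ACK on a timer, which is not modelled here):

```python
class DelayedAckReceiver:
    """Acknowledge only every Nth packet, so that acknowledgements
    impose less load on the network than per-packet ACKs would."""

    def __init__(self, n=4):
        self.n = n
        self.unacked = 0     # packets received since the last ACK
        self.acks_sent = 0

    def receive(self, seq):
        self.unacked += 1
        if self.unacked >= self.n:
            self.unacked = 0
            self.acks_sent += 1
            return seq       # cumulative ACK covering everything up to seq
        return None          # stay silent, reducing ACK traffic

rx = DelayedAckReceiver(n=4)
acks = [rx.receive(s) for s in range(1, 13)]
# twelve packets arrive, but only three ACKs go back onto the network
```

Compared with acknowledging every packet immediately, this cuts the ACK load by a factor of N, at the cost of the retransmission risk noted above.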

4. Discarding Policy
● A good discarding policy by the routers may prevent congestion and at the same time may not harm the integrity of the transmission.
● For example, in audio transmission, if the policy is to discard less sensitive packets when congestion is likely to happen, the quality of sound is still preserved and congestion is prevented or alleviated.
5. Admission Policy
● An admission policy, which is a quality-of-service mechanism, can also prevent congestion in virtual-circuit networks.
● Switches in a flow first check the resource requirement of a flow before admitting it to the network.
● A router can deny establishing a virtual-circuit connection if there is congestion in the network or if there is a possibility of future congestion.

What is Load Shedding in Computer Networks?
● Load shedding is one of the techniques used for congestion control.
● A network router contains a buffer. This buffer is used to store packets and then route them to their destination.
● Load shedding is defined as an approach of discarding packets when the buffer is full, according to an implemented strategy.
● The selection of packets to discard is an important task.
Selection of Packets to be Discarded
● In the process of load shedding, packets need to be discarded in order to avoid congestion. Which packet to discard is therefore a key question. Below are the approaches used to discard packets:
– 1. Random selection of packets
– 2. Selection of packets based on applications
– 3. Selection of packets based on priority
– 4. Random early detection: an approach in which packets are discarded before the buffer space becomes full, so that the congestion situation is controlled earlier.
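Random early detection can be sketched as a drop-decision function: below a minimum threshold nothing is dropped, above a maximum threshold everything is dropped, and in between the drop probability rises linearly. The threshold values and maximum probability here are illustrative assumptions:

```python
import random

def red_drop(avg_queue, min_th=5, max_th=15, max_p=0.1):
    """Random Early Detection drop decision for one arriving packet.

    avg_queue: (smoothed) average queue length at the router.
    Dropping starts probabilistically once avg_queue passes min_th,
    i.e. before the buffer is actually full.
    """
    if avg_queue < min_th:
        return False                    # queue short: no congestion expected
    if avg_queue >= max_th:
        return True                     # treat the buffer as effectively full
    # linear ramp between the two thresholds
    drop_p = max_p * (avg_queue - min_th) / (max_th - min_th)
    return random.random() < drop_p
```

Dropping a few packets early makes senders that react to loss slow down before the buffer overflows, which is why RED controls congestion earlier than a simple drop-on-full policy.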
Closed-Loop Congestion Control
● Closed-loop congestion control mechanisms try to alleviate congestion after it happens.
● Several mechanisms have been used by different protocols.
1. Backpressure
● Node 3 in the figure has more input data than it can handle. It drops some packets in its input buffer and informs node 2 to slow down.
● Node 2, in turn, may become congested because it is slowing down the output flow of data. If node 2 is congested, it informs node 1 to slow down, which in turn may create congestion in node 1.
● If so, node 1 informs the source of data to slow down. This, in time, alleviates the congestion. The pressure on node 3 is moved backward to the source to remove the congestion.
2. Choke Packet
● A choke packet is a packet sent by a node to the source to inform it of congestion.
● In backpressure, the warning is from one node to its upstream node, although the warning may eventually reach the source station. In the choke packet method, the warning is from the router which has encountered congestion to the source station directly.
● The intermediate nodes through which the packet has travelled are not warned.
3. Implicit Signaling
● In implicit signaling, there is no communication between the congested node or nodes and the source.
● The source guesses that there is congestion somewhere in the network from other symptoms, without performing any additional communication.
● For example, when a source sends several packets and there is no acknowledgment for a while, one assumption is that the network is congested. The delay in receiving an acknowledgment is interpreted as congestion in the network; the source should slow down.
4. Explicit Signaling
● The node that experiences congestion can explicitly send a signal to the source or destination.
● In the choke packet method, a separate packet is used for this purpose. In the explicit signaling method, however, a signal is included in the packets that carry data.
● Explicit signaling can occur in either the forward or the backward direction.
● Backward Signaling –
– A bit can be set in a packet moving in the direction opposite to the congestion.
– This bit can warn the source that there is congestion and that it needs to slow down to avoid the discarding of packets.
● Forward Signaling –
– A bit can be set in a packet moving in the direction of the congestion.
– This bit can warn the destination that there is congestion.

● Reliability - Reliability is a characteristic that a flow needs. Lack of reliability means losing a packet or acknowledgment, which entails retransmission.
● Delay - Source-to-destination delay is another flow characteristic.
● Jitter - Jitter is the variation in delay for packets belonging to the same flow.
– For example, if four packets depart at times 0, 1, 2, 3 and arrive at 20, 21, 22, 23, all have the same delay, 20 units of time.
– On the other hand, if the above four packets arrive at 21, 23, 21, and 28, they will have different delays: 21, 22, 19, and 25.
● Bandwidth - Different applications need different bandwidths.
– Eg: In video conferencing, we need to send millions of bits per second to refresh a color screen.
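The jitter example above is just per-packet delay arithmetic, and can be checked with a one-line helper:

```python
def delays(departures, arrivals):
    """Per-packet delay: arrival time minus departure time."""
    return [a - d for d, a in zip(departures, arrivals)]

dep = [0, 1, 2, 3]
assert delays(dep, [20, 21, 22, 23]) == [20, 20, 20, 20]  # equal delays: no jitter
assert delays(dep, [21, 23, 21, 28]) == [21, 22, 19, 25]  # varying delays: jitter
```

A flow has jitter whenever the delay list is not constant, even if the average delay is acceptable.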

TECHNIQUES FOR ACHIEVING GOOD QUALITY OF SERVICE
(1) Scheduling
(2) Traffic shaping
(3) Resource reservation
(4) Admission control

1. Scheduling
● Packets from different flows arrive at a switch or router for processing.
● A good scheduling technique treats the different flows in a fair and appropriate manner.
● Following are some scheduling techniques.
a) FIFO Queuing
● In first-in, first-out (FIFO) queuing, packets wait in a buffer (queue) until the node (router or switch) is ready to process them.
● If the average arrival rate is higher than the average processing rate, the queue will fill up and new packets will be discarded.
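FIFO queuing with drop-on-full ("drop tail") behaviour can be sketched in a few lines; the class name and capacity value are illustrative assumptions:

```python
from collections import deque

class FIFOQueue:
    """Drop-tail FIFO buffer: arrivals beyond the capacity are discarded."""

    def __init__(self, capacity):
        self.q = deque()
        self.capacity = capacity
        self.dropped = 0

    def enqueue(self, pkt):
        if len(self.q) >= self.capacity:
            self.dropped += 1          # buffer full: the new packet is lost
        else:
            self.q.append(pkt)

    def dequeue(self):
        # the node processes packets strictly in arrival order
        return self.q.popleft() if self.q else None

q = FIFOQueue(capacity=3)
for p in range(5):
    q.enqueue(p)
# packets 0, 1, 2 are queued; 3 and 4 arrive to a full buffer and are dropped
```

This is exactly the situation described above: when arrivals outpace processing, the queue fills and later packets are discarded regardless of their importance, which motivates the priority-based schemes that follow.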
b) Priority Queuing
● In priority queuing, packets are first assigned to a priority class. Each priority class has its own queue.
● The packets in the highest-priority queue are processed first; packets in the lowest-priority queue are processed last.
● A priority queue can provide better QoS than the FIFO queue because higher-priority traffic, such as multimedia, can reach the destination with less delay.
● However, if there is a continuous flow in a high-priority queue, the packets in the lower-priority queues will never have a chance to be processed. This condition is called starvation.
c) Weighted Fair Queuing
● In this technique, the packets are still assigned to different classes and admitted to different queues.
● The queues are weighted based on the priority of the queues; higher priority means a higher weight.
● The system processes the packets in each queue in a round-robin fashion, with the number of packets selected from each queue based on the corresponding weight.
● For example, if the weights are 3, 2, and 1, three packets are processed from the first queue, two from the second queue, and one from the third queue.
● If the system does not impose priority on the classes, all weights can be equal.
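One round of the weighted round-robin service just described can be sketched directly; the class queues and packet names are illustrative assumptions, and the weights 3, 2, 1 match the example above:

```python
def weighted_round_robin(queues, weights):
    """Serve one full round of weighted fair queuing.

    Takes up to `weight` packets from each class queue in turn,
    so higher-weight (higher-priority) classes get more service
    per round without starving the others.
    """
    sent = []
    for q, w in zip(queues, weights):
        for _ in range(w):
            if q:                      # a class may run out of packets
                sent.append(q.pop(0))
    return sent

classes = [["a1", "a2", "a3", "a4"], ["b1", "b2"], ["c1"]]
order = weighted_round_robin(classes, [3, 2, 1])
# one round serves: a1, a2, a3 from class 1, then b1, b2, then c1
```

Unlike strict priority queuing, every class is guaranteed some service each round, which is why weighted fair queuing avoids starvation.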

2. Traffic Shaping
● Traffic shaping is a mechanism to control the amount and the rate of the traffic sent to the network. Two techniques can shape traffic: leaky bucket and token bucket.
a) Leaky Bucket
– If a bucket has a small hole at the bottom, water leaks from the bucket at a constant rate as long as there is water in the bucket.
– The flow of water from the bucket is at a constant rate, independent of the rate at which water enters the bucket.
– If the bucket is full, any additional water entering the bucket is thrown out.
● The same technique is applied to control congestion in network traffic. In practice, the bucket is a finite queue that outputs at a fixed rate.
● Every host in the network has a buffer with a finite queue length. Packets that arrive when the buffer is full are thrown away.
● The buffer may drain onto the subnet either by some number of packets per unit time, or by some total number of bytes per unit time.
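The leaky bucket just described — a finite queue that drains at a constant rate — can be sketched as a per-tick simulation. The capacity and rate values are illustrative assumptions:

```python
def leaky_bucket(arrivals, capacity, rate):
    """Per-tick leaky bucket simulation (counting packets).

    arrivals[t]: packets arriving at tick t.
    capacity:    finite queue length; overflow packets are thrown away.
    rate:        constant number of packets drained onto the network per tick.
    Returns (packets sent per tick, total packets dropped).
    """
    queued, sent, dropped = 0, [], 0
    for a in arrivals:
        room = capacity - queued
        dropped += max(0, a - room)        # bucket full: excess is thrown out
        queued = min(capacity, queued + a)
        out = min(rate, queued)            # constant-rate drain
        sent.append(out)
        queued -= out
    return sent, dropped

# a burst of 5 packets, then a burst of 4, smoothed to at most 2 per tick
sent, dropped = leaky_bucket([5, 0, 0, 4, 0], capacity=4, rate=2)
```

Note how the bursty arrival pattern comes out as an even flow of at most `rate` packets per tick, at the cost of one dropped packet when the first burst overflows the bucket.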
● A FIFO queue holds the packets. If the traffic consists of fixed-size packets (e.g., cells in ATM networks), the process removes a fixed number of packets from the queue at each tick of the clock.
● If the traffic consists of variable-length packets, the fixed output rate must be based on the number of bytes or bits.
● This mechanism turns an uneven flow of packets from the user processes inside the host into an even flow of packets onto the network, smoothing out bursts and greatly reducing the chances of congestion.
● The Leaky Bucket algorithm can thus be implemented for packets, or for a constant number of bytes sent within each time interval.
● Conceptually, each network interface contains a leaky bucket, and the following steps are performed:
– When the host has to send a packet, the packet is thrown into the bucket.
– The bucket leaks at a constant rate, meaning the network interface transmits packets at a constant rate.
– Bursty traffic is converted to uniform traffic by the leaky bucket.
● The leaky bucket algorithm enforces a rigid output pattern at the average rate, no matter how bursty the traffic is. For many applications, it is better to allow the output to speed up somewhat when large bursts arrive. One such algorithm is the token bucket algorithm.
b) Token Bucket
● The token bucket algorithm is a variant of the leaky bucket.
● The leaky bucket is very restrictive: it does not credit an idle host. For example, if a host is not sending for a while, its bucket becomes empty. If the host then has bursty data, the leaky bucket still allows only the average rate; the time when the host was idle is not taken into account.
● The token bucket algorithm, on the other hand, allows idle hosts to accumulate credit for the future in the form of tokens.
– For each tick of the clock, the system sends n tokens to the bucket.
– The system removes one token for every cell (or byte) of data sent.
– For example, if n is 100 and the host is idle for 100 ticks, the bucket collects 10,000 tokens.
● The token bucket is thus similar to the leaky bucket, but it allows for varying output rates. This is useful when larger bursts of traffic arrive.
● In this approach, a token bucket is used to manage the queue regulator that controls the rate of packet flow into the network.
● Each token grants the ability to transmit a fixed number of bytes; if the token bucket fills, newly generated tokens are discarded.
● If the flow delivers more packets than the queue can store, the excess packets are discarded.
● Eg: the bucket holds tokens, generated by a clock at the rate of one token every T sec.
● In Fig. 4 (a), a bucket is holding three tokens, with five packets waiting to be transmitted. For a packet to be transmitted, it must capture and destroy one token. In Fig. 4 (b), three of the five packets have gotten through, but the other two are stuck waiting for two more tokens to be generated.
3) Resource Reservation
● A flow of data needs resources such as buffer space, bandwidth, CPU time, and so on.
● The quality of service is improved if these resources are reserved beforehand.
4) Admission Control
● Admission control refers to the mechanism used by a router or a switch to accept or reject a flow based on predefined parameters called flow specifications.
● Before a router accepts a flow for processing, it checks the flow specifications to see if its capacity (in terms of bandwidth, buffer size, CPU speed, etc.) and its previous commitments to other flows can handle the new flow.
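Returning to traffic shaping, the token bucket behaviour described above — n tokens added per tick, one token spent per packet, a full bucket discarding new tokens — can be sketched as a per-tick simulation (all parameter values are illustrative assumptions):

```python
def token_bucket(arrivals, n, bucket_size):
    """Per-tick token bucket simulation (counting packets).

    arrivals[t]: packets arriving at tick t.
    n:           tokens added to the bucket each tick.
    bucket_size: a full bucket discards newly generated tokens.
    Each transmitted packet captures and destroys one token.
    """
    tokens, backlog, sent = 0, 0, []
    for a in arrivals:
        tokens = min(bucket_size, tokens + n)  # idle ticks accumulate credit
        backlog += a
        out = min(backlog, tokens)             # a burst may spend saved tokens
        tokens -= out
        backlog -= out
        sent.append(out)
    return sent

# host idle for three ticks, then a burst of 7 packets: the saved tokens
# let the burst leave faster than the average rate of 2 packets per tick
sent = token_bucket([0, 0, 0, 7, 0], n=2, bucket_size=5)
```

Contrast this with the leaky bucket: a leaky bucket with rate 2 would emit at most 2 packets per tick no matter how long the host had been idle, while here the accumulated tokens allow 5 packets out in the burst tick.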
