Basics of IP Routing Explained With Examples
This tutorial explains the basic concepts of IP routing. Learn what IP routing is, how
IP routing works, and how computer networks use IP routing.
What is IP routing?
The process of IP routing starts when the host creates the IP packet for another
host. Once an IP packet is created for another host, the host takes the following
steps.
To determine the IP subnet of the destination host, the host compares the destination host's IP address with its own IP address.
If the network address is the same in both addresses, both addresses belong to the
same IP subnet. If the network address is different in both addresses, both
addresses belong to different IP subnets.
IP routing works differently in both cases. We will discuss both cases separately.
Let's start with the first case.
If the destination host belongs to the same IP subnet, the host takes the following
steps.
The host finds the MAC address of the destination host. To find the MAC address of
the destination host, it uses the ARP protocol.
Before we move to the next step, let's briefly discuss how the ARP protocol works.
If a host knows the IP address of the destination host but does not know the MAC
address of the destination host, it sends an ARP message to the broadcast address
of the local network. This ARP message contains the IP address of the destination
host. A broadcast address belongs to all hosts of the LAN network. All hosts receive
the ARP message and examine the message to find out whether the message is
intended for them. When the destination host finds its IP address in the ARP
message, it sends a reply message back to the requesting host. The reply message is a unicast message; it is addressed directly to the host that sent the ARP request, not to the broadcast address. The reply message contains the MAC address of the destination host. From the reply message, the first host learns the MAC address of the destination host.
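As a quick check, after this exchange the sender's ARP cache holds the learned mapping; on a Packet Tracer PC you can list it with the arp -a command (the addresses below are hypothetical):
C:\>arp -a
  Internet Address      Physical Address      Type
  10.0.0.3              0001.4243.5a6b        dynamic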
The host encapsulates the IP packet with the MAC address of the destination host in
a data-link frame and sends the frame to the destination host. The destination host
receives the frame and checks the frame condition.
If the frame contains no error, the destination host de-encapsulates it to extract the
IP packet.
Since the IP packet belongs to the destination host, the IP protocol running on the
destination host processes the IP packet and transfers the packet's data to the
corresponding application for further processing.
If the destination host belongs to a different IP subnet, the host takes the following
steps.
The host finds the MAC address of the default gateway. To find the MAC address of
the default gateway, it uses the ARP protocol.
The host encapsulates the IP packet in a data-link frame with the MAC address of the default gateway and sends the frame to the default gateway.
The default gateway router receives the frame and checks the frame condition. If the
frame contains no error, the gateway router de-encapsulates the frame and extracts
the IP packet.
The gateway router checks the destination address of the IP packet and makes a
routing decision. To make a routing decision, the router uses the routing table. A
routing table entry contains a network address and the name of the interface that is
connected to the network.
If the router finds no entry for the destination network, it discards the packet.
If the router finds an entry for the destination network, it transfers the packet to the
interface that is connected to the destination network.
If the interface is connected to the router (next-hop) that knows how to reach the
destination network, the interface encapsulates the packet in a data-link frame and
forwards the frame to the next-hop router.
The next-hop router receives a frame, removes the packet from inside the frame,
decides where to forward the packet, puts the packet into another frame, and sends
the frame to the next-hop router.
The last router on the path forwards the frame to the destination network.
The destination host on the destination network receives the frame and checks the
condition of the frame. If the frame contains no error, it de-encapsulates the frame
and extracts the packet from the frame. The IP protocol running on the destination
host processes the IP packet. It extracts data from the packet and transfers it to the
corresponding application for further processing.
To explain basic router show commands, I will use Packet Tracer network simulator software. You can use any network simulator software or a real Cisco router to follow this guide. There is no difference in output as long as your selected software contains the commands explained in this tutorial.
If required, you can download the latest as well as earlier versions of Packet Tracer from here. Download Packet Tracer
Use the enable command to enter privileged exec mode. Cisco IOS supports a context-sensitive help feature. We can use this feature to list all available commands and parameters associated with the show command.
Enter the show command followed by ? (question mark) to list all available options.
If the prompt returns parameters excluding <CR>, the command requires more parameters to complete.
If the prompt returns <CR> as the only option, the router does not need any additional parameters to complete the command. You can execute the command as it is.
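For example (a trimmed, representative listing; the exact set of commands depends on the IOS version):
Router#show ?
  arp              Arp table
  clock            Display the system clock
  history          Display the session command history
  interfaces       Interface status and configuration
  ip               IP information
  version          System hardware and software status
Router#show clock ?
  <cr>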
Router#show interfaces
This command shows the status and configuration of interfaces. By default, it displays all interfaces, but you can limit it to a particular interface. To view the details of a specific interface, you can use the following command.
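For example, assuming the router has a FastEthernet0/0 interface (a trimmed, representative output; your addresses and hardware type will differ):
Router#show interfaces fastEthernet 0/0
FastEthernet0/0 is up, line protocol is up (connected)
  Hardware is Lance, address is 0001.4243.5a01 (bia 0001.4243.5a01)
  Internet address is 10.0.0.1/8
  MTU 1500 bytes, BW 100000 Kbit, DLY 100 usec,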
The output of this command provides several details about the interface, including its status, encapsulation, interface type, MTU, last input and output packet, etc. The first line of the output shows the status of the interface. The first up indicates the status of the physical layer. The second up refers to the data link layer status.
Possible status combinations
Up and Down: check the data link layer for the possible reasons given.
Down and Down: check the physical layer for the possible reasons given.
The second line shows the hardware type and MAC address of the interface.
The third line shows the IP address of the interface.
MTU indicates the maximum Ethernet frame size.
The BW parameter refers to the bandwidth of the link.
Router#show version
This command displays information about the software version of the running IOS. It also provides information about configuration settings. It shows the current configuration register setting, which is used during password recovery on the router.
Router#show startup-config
Routers load the configuration from NVRAM at startup. This command displays the configuration stored in NVRAM.
Router#show running-config
The router keeps the running configuration in RAM. This command displays the configuration currently running in RAM.
Router#show clock
This command displays the current time on the router.
Router#show hosts
This command displays the names and addresses of the hosts on the network to which you can connect.
Router#show users
This command displays users currently connected to the router.
Router#show arp
This command displays the ARP cache table. The ARP table is used to resolve IP addresses to hardware (MAC) addresses.
Router#show protocols
This command shows the status of configured layer three protocols on the device.
Router#show history
The router keeps a history of used commands. This command lists the commands used at the current level.
Router#show ip route
Routers use the routing table to make packet forwarding decisions. This command displays the routing table.
Let's understand how each type of routing protocol works and how it differs from
others.
Distance is measured in terms of hops. Each instance where a packet goes through
a router is called a hop. For example, if a packet crosses four routers to reach its
destination, the number of hops is 4. The route with the least number of hops is
selected as the best route.
The vector indicates the direction that a packet uses to reach its destination.
The following figure shows an example of a network running a distance-vector protocol.
In this network, the router R1 has three routes to reach the destination network.
These routes are the following.
Since the second route has the lowest hop count, the router R1 uses this route to
forward all packets of the destination network.
Key points: -
Distance-vector protocols do not use any mechanism to know who their neighbors
are.
Distance-vector protocols learn about their neighbors by receiving their broadcasts.
Distance-vector protocols do not perform any formal handshake or hello process with
neighbors before broadcasting routing information.
Distance-vector protocols do not verify whether neighbors received routing updates
or not.
Distance-vector protocols assume that if a neighbor misses an update, it will learn
about the change in the next broadcast update.
RIPv1 and IGRP are examples of distance-vector routing protocols.
Unlike distance-vector routing protocols, link-state routing protocols do not broadcast routing and reachability information to just anyone. Routers running link-state protocols
share routing information only with neighbors. To discover neighbors, link-state
protocols use a special protocol known as the Hello protocol.
After discovering all neighbors, the link-state protocols create three separate tables.
One of these tables keeps track of directly attached neighbors, one determines the
topology of the entire internetwork, and one is used as the routing table.
From all available routes, to select the best route for each destination in the network,
the link-state protocols use an algorithm called the Shortest Path First (SPF)
algorithm.
Distance-vector protocols use local broadcasts, which are processed by every router
on the same segment, while link-state protocols use multicasts which are processed
only by the routers running the link-state protocol.
Distance-vector protocols do not verify routing broadcasts. They don't care whether
the neighboring routers received them or not. Link-state protocols verify routing
updates. A destination router running link-state protocol responds to the source
router with an acknowledgment when it receives a routing update.
Hybrid routing protocols are the combination of both distance-vector and link-state
protocols. Hybrid routing protocols are based on distance-vector routing protocols
but contain many of the features and functions of link-state routing protocols.
Hybrid routing protocols are built upon the basic principles of a distance-vector
protocol but act like a link-state routing protocol. Hybrid protocols use the Hello
protocol to discover neighbors and form neighbor relationships. Hybrid protocols also
send updates only when a change occurs.
Hybrid routing protocols reduce the CPU and memory overhead by functioning like a
distance-vector protocol when it comes to processing routing updates; but instead of
sending out periodic updates like a distance-vector protocol, hybrid routing protocols
send out incremental, reliable updates via multicast messages, providing a more
secure network environment.
The 'ip route' command is a global configuration command. This command defines
a static route for a specific destination network. To define a static route, this
command needs the following information.
We have already discussed the above information in the previous part of this tutorial.
In this part, we will learn how to specify the above information to the 'ip
route' command to create and add a static route to the routing table.
This tutorial is the third part of the tutorial "Static Routing Configuration,
Commands, and Concepts Explained". The other parts of this tutorial are the
following.
The 'ip route' command uses two syntaxes. It uses the first syntax to specify the
local interface to forward data packets to the destination network and the second
syntax to specify the IP address of the next-hop router that can send data packets to
the destination network.
If you specify the local interface, the router assumes the destination network is
directly connected to the local interface and forwards the destination network's data
packets to the destination network. If you specify the IP address of the next-hop
router, the router assumes the destination network is available on another router. In
this case, the router sends the destination network's data packets to the next-hop
router. A next-hop router is a router that knows how to reach the destination network.
To specify the name of the local interface or the IP address of the next-hop router, use the following syntax.
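Based on the parameters described below, the two forms of the command look like this (square brackets mark optional parameters; this is a reconstruction, check your IOS documentation for the exact syntax):
ip route destination_network_# [subnet_mask] interface_to_exit [administrative_distance] [permanent]
ip route destination_network_# [subnet_mask] ip_address_of_next_hop_neighbor [administrative_distance] [permanent]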
ip route
This is the main command. This command defines a static route. To define a static
route, it needs the following parameters.
destination_network_#
This is the destination network address for which you are creating a static route.
subnet_mask
This is the subnet mask of the destination network. This is an optional parameter. If you skip this parameter, the command uses the default subnet mask for the destination network. The default subnet masks for class A, B, and C are 255.0.0.0, 255.255.0.0, and 255.255.255.0, respectively. For example, if the destination network belongs to class B, the command will use the subnet mask 255.255.0.0.
interface_to_exit or ip_address_of_next_hop_neighbor
You have two options for specifying how to reach the destination network. You can
specify the name of the local interface or the IP address of the next-hop router. If you
specify the name of a local interface, the router uses the local interface to forward
the destination network's data packets. You should use this option when the
destination network is directly connected to the interface.
If you specify the IP address of the next-hop router, the router sends the destination
network's data packets to the next-hop router. You should use this option when the
destination network is connected via the next-hop router.
You can connect multiple networks to the same interface of the router. If you connect
more than one network to the same interface, the interface works as a multi-
access link. If the interface is connected to only one network, the interface works as
a point-to-point link.
On a multi-access link, you should always use the IP address of the next-hop router. On a point-to-point link, you can use either option.
If you use the name of a local interface, the router prints this recommendation on the console prompt and inserts the route as a connected route in the routing table.
administrative_distance
Administrative distance represents the trustworthiness of the route. If multiple routes for the same destination are available, the router always chooses the route with the lowest administrative distance value.
You can use this option to create multiple static routes to the same destination
network. For example, if you have two routes for a destination network, and want to
use the first route as the primary route and the second route as the backup route,
you can set the administrative distance value of the second route to higher than the
first route. If you configure two static routes in this way, the router always selects the
first route to forward data packets to the destination network. If the first route fails,
the router automatically switches to the second route.
This option is optional. If you skip this option, the router automatically sets the
administrative distance value depending on the value of the previous parameter. If
you specified the name of a local interface in the previous parameter, the router sets
the administrative distance value to 0. And if you specified the IP address of the
next-hop router, the router sets the administrative distance value to 1.
Permanent
This parameter is also optional. If you specify it, the router keeps the route in the routing table even when the route fails. If you don't use this option, the router automatically removes the route when the route fails.
Static routes are the routes you manually add to the router’s routing table. The
process of adding static routes to the routing table is known as static routing. Let’s
take a packet tracer example to understand how to use static routing to create and
add a static route to the routing table.
Create a packet tracer lab as shown in the following image or download the following
pre-created lab and load it on Packet Tracer.
Packet Tracer Lab with Initial IP Configuration
In this lab, there are two routes to reach each network. We will configure one route as the main route and the other as the backup route. If the link bandwidth of all routes is the same, we use the route that passes through the fewest routers as the main route. If the link bandwidth and the number of routers are the same, we can use either route as the main route and the other as the backup route.
If we specify two routes for the same destination, the router automatically selects the best route for the destination and adds it to the routing table. If you want to manually select the route that the router should add to the routing table, you have to set the AD value of that route lower than the other routes. For example, if you use the following commands to create two static routes for network 30.0.0.0/8, the router will place the first route in the routing table.
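These are the same two commands used in the Router0 configuration later in this example; the trailing 10 and 20 are administrative distance values.
Router(config)#ip route 30.0.0.0 255.0.0.0 20.0.0.2 10
Router(config)#ip route 30.0.0.0 255.0.0.0 40.0.0.2 20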
Routers automatically learn their connected networks. We only need to add routes
for the networks that are not available on the router’s interfaces. For example,
network 10.0.0.0/8, 20.0.0.0/8 and 40.0.0.0/8 are directly connected to Router0.
Thus, we don’t need to configure routes for these networks. Network 30.0.0.0/8 and
network 50.0.0.0/8 are not available on Router0. We have to create and add routes
only for these networks.
Let's create static routes on each router for networks that are not available on the
router.
Router0 requirements
Create two routes for network 30.0.0.0/8 and configure the first route (via -Router1)
as the main route and the second route (via-Router2) as a backup route.
Create two routes for the host 30.0.0.100/8 and configure the first route (via -
Router2) as the main route and the second route (via-Router1) as a backup route.
Create two routes for network 50.0.0.0/8 and configure the first route (via -Router2)
as the main route and the second route (via-Router1) as a backup route.
Verify the router adds only main routes to the routing table.
Router0 configuration
Access the CLI prompt of Router0 and run the following commands.
Router>enable
Router#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
Router(config)#ip route 30.0.0.0 255.0.0.0 20.0.0.2 10
Router(config)#ip route 30.0.0.0 255.0.0.0 40.0.0.2 20
Router(config)#ip route 30.0.0.100 255.255.255.255 40.0.0.2 10
Router(config)#ip route 30.0.0.100 255.255.255.255 20.0.0.2 20
Router(config)#ip route 50.0.0.0 255.0.0.0 40.0.0.2 10
Router(config)#ip route 50.0.0.0 255.0.0.0 20.0.0.2 20
Router(config)#exit
Router#show ip route static
30.0.0.0/8 is variably subnetted, 2 subnets, 2 masks
S 30.0.0.0/8 [10/0] via 20.0.0.2
S 30.0.0.100/32 [10/0] via 40.0.0.2
S 50.0.0.0/8 [10/0] via 40.0.0.2
Router#
Router1 requirements
Create two routes for network 10.0.0.0/8 and configure the first route (via-Router0) as the main route and the second route (via-Router2) as a backup route.
Create two routes for network 40.0.0.0/8 and configure the first route (via -Router0)
as the main route and the second route (via-Router2) as a backup route.
Verify the router adds only main routes to the routing table.
Router1 configuration
Router>enable
Router#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
Router(config)#ip route 10.0.0.0 255.0.0.0 20.0.0.1 10
Router(config)#ip route 10.0.0.0 255.0.0.0 50.0.0.1 20
Router(config)#ip route 40.0.0.0 255.0.0.0 20.0.0.1 10
Router(config)#ip route 40.0.0.0 255.0.0.0 50.0.0.1 20
Router(config)#exit
Router#show ip route static
S 10.0.0.0/8 [10/0] via 20.0.0.1
S 40.0.0.0/8 [10/0] via 20.0.0.1
Router#
Router2 requirements
Create static routes for network 10.0.0.0/8 and network 30.0.0.0/8 and verify the
router adds both routes to the routing table.
Router2 configuration
Router>enable
Router#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
Router(config)#ip route 10.0.0.0 255.0.0.0 40.0.0.1
Router(config)#ip route 30.0.0.0 255.0.0.0 50.0.0.2
Router(config)#exit
Router#show ip route static
S 10.0.0.0/8 [1/0] via 40.0.0.1
S 30.0.0.0/8 [1/0] via 50.0.0.2
Router#
Verifying static routing
On Router0, we configured two routes for network 30.0.0.0/8. These routes are via
Router1 and via Router2. We set the first route (via-Router1) as the main route and
the second route as the backup route. We can verify this configuration in two ways.
By sending ping requests to a PC of network 30.0.0.0/8 and tracing the path they
take to reach the network 30.0.0.0/8. For this, you can use 'tracert' command on a
PC of network 10.0.0.0/8. The 'tracert' command sends ping requests to the
destination host and tracks the path they take to reach the destination.
By listing the routing table entries on Router0. Since a router uses the routing table
to forward data packets, you can check the routing table to figure out the route the
router uses to forward data packets for each destination.
We also configured a backup route for network 30.0.0.0/8. The router must put the
backup route in the routing table and use it to forward data packets to network
30.0.0.0/8 when the main route fails. To verify this, we have to simulate the failure of
the main route.
To simulate the failure of the main route, you can delete the link between Router0
and Router1. After deleting the link, do the same testing again for the network
30.0.0.0/8.
The following link provides the configured packet tracer lab of the above example.
Use the 'show ip route static' command to print all static routes.
Note down the route you want to delete.
Use the 'no ip route' command to delete the route.
If you have a backup route, the backup route becomes the main route when you
delete the main route.
In our example, we have a backup route and a main route for the host 30.0.0.100/8.
The following image shows how to delete both routes.
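If you are working on the CLI instead of the image, the commands look like this (they match the two host routes configured on Router0 earlier):
Router#configure terminal
Router(config)#no ip route 30.0.0.100 255.255.255.255 40.0.0.2 10
Router(config)#no ip route 30.0.0.100 255.255.255.255 20.0.0.2 20
Router(config)#exit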
There are two versions of the RIP routing protocol: RIPv1 and RIPv2. RIPv1 was one of the first and most popular IP routing protocols. It was widely used until the 1990s. RIPv1 is no longer used in modern networks.
RIPv2 was developed in the 1990s as an enhancement of RIPv1. To make RIP more suitable for modern networks, RIPv2 not only improved many functions of RIPv1 but also added many new features.
To understand which functions were improved and which features were newly
added, in the following section, we will compare RIPv1 and RIPv2 routing protocols.
This tutorial is the third part of the article "How to configure RIP routing protocol explained
with features and functions of the RIP protocol ". The previous parts of this article are the
following.
This is the first part of the article. This part explains how the RIP routing protocol uses broadcast messages
to exchange network paths' information.
This is the second part of the article. This part explains the concept of distance-vector routing and how the
RIP routing protocol uses this concept.
Comparing RIPv1 and RIPv2 routing protocols
To find and select a single best route for each IP subnet of the network, both versions use the hop-count metric. The maximum hop count is limited to 15 in both versions.
RIPv1 does not support triggered updates. RIPv2 supports triggered updates. This means that when a change occurs, RIPv2 immediately propagates this information to its connected neighbors, while RIPv1 broadcasts this information to its neighbors only in the next scheduled broadcast message. Triggered updates speed up the convergence process.
RIPv1 does not support any type of authentication, while RIPv2 supports MD5 authentication. With authentication, RIPv2 allows us to restrict which routers we want to participate in RIP routing.
RIPv1 and RIPv2 both use the split horizon, route poisoning, and various timers to
perform their basic operations. In the following section, we will understand RIP
timers while we will understand the split horizon and route poisoning in the next parts
of this tutorial.
RIP timers
RIP uses four different types of timers. These timers are: the update timer, the route invalid timer, the route flush timer, and the hold-down timer.
Update timer: -
RIP uses this timer to set the interval between two continuous routing updates. The
default value of this timer is 30 seconds. After advertising a routing update, RIP waits
30 seconds before advertising the next routing update.
Route invalid timer: -
RIP learns new routes from received routing updates. After learning new routes, when RIP adds them to the routing table, it also attaches an invalid timer to each route. RIP uses this timer to get rid of routes that become invalid in the future.
In upcoming routing updates, if RIP receives the same information on a route, it
resets the invalid timer of that route in the routing table. If RIP does not receive the
same information on a route in 180 seconds, it assumes that the route is no longer
available. When this happens, RIP sends an update to all its neighbors to indicate
that the route is invalid.
Route flush timer: -
RIP uses this timer to flush invalid routes from the routing table. The value of this timer must be set higher than the value of the invalid timer. This gives RIP enough time to tell its neighbors about an invalid route before the invalid route is removed from the routing table. The default value of this timer is 240 seconds.
Hold-down timer: -
RIP uses this timer to quickly mark invalid routes. When RIP receives a routing update that contains information about an invalid route, it immediately starts the hold-down timer for that route in the routing table. The default value of the hold-down timer is 180 seconds.
RIP stops and removes this timer from an invalid route only if it receives a routing update that contains a better routing metric for the invalid route. If RIP does not receive a better metric for the invalid route before the hold-down timer expires, RIP keeps advertising the invalid route to its neighbors until the flush timer expires. Once the flush timer expires, the invalid route is removed.
Both the route invalid timer and the hold-down timer work similarly except in how they are triggered. The invalid timer of a route is triggered when RIP does not receive a routing update for that route for 180 seconds. The hold-down timer of a
route is triggered when RIP receives a routing update that indicates that the route
has become invalid.
In distance-vector routing, routers learn routing information from directly connected neighbors, and these neighbors may have learned these networks from other neighboring routers. Because of this, distance-vector routing is also known as routing by rumor.
In the following section, we will not only understand the distance-vector routing
concept in detail through an example but will also understand how the RIP routing
protocol uses this concept to learn and select the best route for each subnet of the
network.
This tutorial is the second part of the article "How to configure RIP routing protocol
explained with features and functions of the RIP protocol ". The first part of this
article is the following.
This part explains how the RIP routing protocol uses broadcast messages to
exchange network paths' information.
The following figure illustrates a simple network running the RIP routing protocol.
When we start this network, the routers only know the IP subnets that are available
on their local interfaces.
After the booting process, routers share configured routes in the network through the
broadcasts. These broadcasts are known as routing updates.
Routers also receive broadcasts (routing updates) on their active interfaces. Routers
compare their routing tables with routing updates to learn about new IP subnets.
Router R1 receives one broadcast from the router R2 and learns one new IP subnet
192.168.1.248/30.
Router R2 receives two broadcasts: one from the router R1 and another from the router R3. From these broadcasts, the router R2 learns two new IP subnets: 10.0.0.0/8 and 20.0.0.0/8.
Router R3 receives one broadcast from the router R2 and learns one new IP subnet
192.168.1.252/30.
Routers add newly learned IP subnets with their respective ports in routing tables.
The following image shows routers with their updated routing tables.
After 30 seconds (default time interval between two routing updates) all routers
broadcast their routing tables again with updated information.
The following image shows routing tables of all routers after these routing updates.
After 30 seconds, routers broadcast new routing information again. But this time, all routers know all routes of the network, so they will update nothing. This stage is known as convergence. Convergence is a term that refers to the time taken by all routers to learn the current topology of the network.
The RIP protocol broadcasts successive routing updates even after achieving the
phase of convergence. This helps the router to detect and adapt to any new changes
that occur after the convergence.
RIP uses distance to select the best route for each destination subnet. Distance is calculated in terms of hops. Each instance where a packet goes through a router
is called a hop, and the route with the least number of hops to the destination subnet
is selected as the best route for that destination subnet. The term vector indicates
the direction to the destination subnet. RIP uses the interface of the next-hop router
as the vector.
The following figure shows an example of a network running the RIP routing
protocol.
In this network, the router A has three routes to the destination network. These
routes are the following.
Since the second route has the lowest hop count, router A uses this route to forward
all packets of the destination network.
Key points: -
To explain RIP routing, I will use Packet Tracer network simulator software. You can use any network simulator software or a real Cisco router to follow this guide. There is no difference in output as long as your selected software contains the commands explained in this tutorial.
Initial IP configuration
Double-click PC0, click the Desktop menu item, and click IP Configuration. Assign the IP address 10.0.0.2/8 to PC0.
Repeat the same process for PC1 and assign the IP address 20.0.0.2/8.
Double-click Router0, click CLI, and press the Enter key to access the command prompt of Router0.
Router>enable
Router#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
Router(config)#
From global configuration mode, we can enter interface configuration mode. From there, we can configure the interface. The following commands assign an IP address to FastEthernet0/0.
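A sketch of those commands for Router0, assuming its LAN interface uses the address 10.0.0.1/8 (the gateway address implied by the route listings later in this tutorial):
Router(config)#interface fastEthernet 0/0
Router(config-if)#ip address 10.0.0.1 255.0.0.0
Router(config-if)#no shutdown
Router(config-if)#exit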
A serial interface needs two additional parameters: clock rate and bandwidth. Every serial cable has two ends: DTE and DCE. These parameters are always configured at the DCE end.
We can use the show controllers command from privileged exec mode to check which end of the cable is connected to the interface.
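For example, for serial 0/0/0 (a trimmed, representative output; the hardware line depends on your router model):
Router#show controllers serial 0/0/0
Interface Serial0/0/0
Hardware is PowerQUICC MPC860
DCE V.35, clock rate 64000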
Router#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
Router(config)#interface serial 0/0/0
Router(config-if)#ip address 192.168.1.249 255.255.255.252
Router(config-if)#clock rate 64000
Router(config-if)#bandwidth 64
Router(config-if)#no shutdown
Router(config-if)#exit
Router(config)#interface serial 0/0/1
Router(config-if)#ip address 192.168.1.254 255.255.255.252
Router(config-if)#clock rate 64000
Router(config-if)#bandwidth 64
Router(config-if)#no shutdown
Router(config-if)#exit
Router(config)#
The configure terminal command is used to enter global configuration mode. The interface serial 0/0/0 command is used to enter interface configuration mode.
Router1
Router>enable
Router#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
Router(config)#interface serial 0/0/0
Router(config-if)#ip address 192.168.1.250 255.255.255.252
Router(config-if)#no shutdown
Router(config-if)#exit
Router(config)#interface serial 0/0/1
Router(config-if)#ip address 192.168.1.246 255.255.255.252
Router(config-if)#clock rate 64000
Router(config-if)#bandwidth 64
Router(config-if)#no shutdown
Router(config-if)#exit
Use the same commands to assign IP addresses to the interfaces of Router2.
Router2
Router>enable
Router#configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
Router(config)#interface fastEthernet 0/0
Router(config-if)#ip address 20.0.0.1 255.0.0.0
Router(config-if)#no shutdown
Router(config-if)#exit
Router(config)#interface serial 0/0/0
Router(config-if)#ip address 192.168.1.245 255.255.255.252
Router(config-if)#no shutdown
Router(config-if)#exit
Router(config)#interface serial 0/0/1
Router(config-if)#ip address 192.168.1.253 255.255.255.252
Router(config-if)#no shutdown
Router(config-if)#exit
Great job, we have finished half of our journey. To be on the same page, we have uploaded our practice topology with the IP configuration. You can download it from here.
Now the routers have information about the networks available on their own interfaces. Routers will not exchange this information between themselves on their own. We need to implement the RIP routing protocol, which will make them share this information.
Configuring the RIP protocol is much easier than you think. It requires only two steps to configure RIP routing.
Router0
Router0(config)#router rip
Router0(config-router)# network 10.0.0.0
Router0(config-router)# network 192.168.1.252
Router0(config-router)# network 192.168.1.248
The router rip command tells the router to enable the RIP routing protocol. The network command specifies the networks that RIP will advertise.
That's all we need to configure RIP. Follow the same steps on the remaining routers.
Router1
Router1(config)#router rip
Router1(config-router)# network 192.168.1.244
Router1(config-router)# network 192.168.1.248
Router2
Router2(config)#router rip
Router2(config-router)# network 20.0.0.0
Router2(config-router)# network 192.168.1.252
Router2(config-router)# network 192.168.1.244
That's it. Our network is ready to take advantage of RIP routing. To verify the setup, we will use the ping command. The ping command is used to test the connectivity between two devices.
Access the command prompt of PC1 and use the ping command to test connectivity to PC0.
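A representative Packet Tracer output, assuming the addresses above (your reply times will differ):
C:\>ping 10.0.0.2

Pinging 10.0.0.2 with 32 bytes of data:

Reply from 10.0.0.2: bytes=32 time=2ms TTL=126
Reply from 10.0.0.2: bytes=32 time=1ms TTL=126
Reply from 10.0.0.2: bytes=32 time=1ms TTL=126
Reply from 10.0.0.2: bytes=32 time=1ms TTL=126

Ping statistics for 10.0.0.2:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss)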
Good going, we have successfully implemented RIP routing in our network. For cross-checking, we have uploaded a configured topology on our server. You can download and use it if you are not getting the same output.
The RIP protocol automatically manages all routes for us. If one route goes down, it automatically switches to another available route. To explain this process more clearly, we have added one more route in our network.
Currently there are two routes between PC0 and PC1.
Route 1
PC0 [Source / destination – 10.0.0.2] <==> Router0 [FastEthernet0/1 – 10.0.0.1]
<==> Router0 [Serial0/0/1 – 192.168.1.254] <==> Router2 [Serial 0/0/1 –
192.168.1.253] <==> Router2 [FastEthernet0/0 – 20.0.0.1] <==> PC1 [Destination
/source – 20.0.0.2]
Route 2
PC0 [Source / destination – 10.0.0.2] <==> Router0 [FastEthernet0/1 – 10.0.0.1]
<==> Router0 [Serial0/0/0 – 192.168.1.249] <==> Router1 [Serial 0/0/0 –
192.168.1.250] <==> Router1 [Serial 0/0/1 – 192.168.1.246] <==> Router2 [Serial
0/0/0 – 192.168.1.245] <==> Router2 [FastEthernet0/0 – 20.0.0.1] <==> PC1
[Destination /source – 20.0.0.2]
By default, RIP will use the route that has the lowest hop count between the source and the destination. In our network, Route 1 has the lower hop count, so it will be selected. We can use the tracert command to verify it.
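A representative output from PC0 (hypothetical timings), showing that traffic follows Route 1 through Router0 (10.0.0.1) and Router2 (192.168.1.253):
C:\>tracert 20.0.0.2

Tracing route to 20.0.0.2 over a maximum of 30 hops:

  1   1 ms    0 ms    0 ms    10.0.0.1
  2   2 ms    1 ms    1 ms    192.168.1.253
  3   3 ms    2 ms    2 ms    20.0.0.2

Trace complete.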
Now suppose Route 1 is down. We can simulate this situation by removing the cable attached between Router0 [s0/0/1] and Router2 [s0/0/1].
Okay, our primary route went down. What will happen now?
As long as we are running the RIP routing protocol and have another route to the destination, there is no need to worry. RIP will automatically reroute the traffic.
Use tracert command again to see the magic of dynamic routing.
Command Description
Router(config-router)#no network a.b.c.d Removes the a.b.c.d network from RIP routing advertisements.
Router(config-router)#version 1 Enables RIP routing protocol version one (default).
Router(config-router)#no auto-summary By default, RIPv2 automatically summarizes networks to their default classful boundaries. This command turns it off.
Router(config-router)#passive-interface s0/0/0 RIP will not broadcast routing updates from this interface.
Router(config-router)#timers basic 30 90 180 270 360 Allows us to set RIP timers in seconds: 30 (update timer), 90 (invalid timer), 180 (hold-down timer), 270 (flush timer), 360 (sleep timer).
Router#debug ip rip Used for troubleshooting. Allows us to view all RIP-related activity in real time.
Step1
An EIGRP router joins the network and sends Hello messages to discover potential
neighboring EIGRP routers. EIGRP neighboring routers reply to Hello messages.
Hello messages and reply messages contain the required parameters to become a
neighbor. The EIGRP router and EIGRP neighboring routers check parameters to
determine which routers should become neighbors. Neighbors that pass all
parameters build a neighbor relationship.
Step2
While building a neighbor relationship, EIGRP routers exchange full topology updates. After this, they only share partial updates as needed, based on changes to the network topology. EIGRP routers store topology information in the EIGRP topology table.
Step3
An EIGRP router chooses the lowest-metric route to reach each subnet from the
EIGRP topology table and places the route with the metric into the routing table. The
router uses the routing table to forward packets.
EIGRP Terminology
EIGRP is a complex protocol. It uses several terms to refer to its components and
functions. Let’s discuss these terms and their meanings.
EIGRP Neighbor
From an EIGRP router's perspective, an EIGRP neighbor is another router running EIGRP that is connected to the same subnet and is ready to share routing information with the first router.
Two EIGRP running routers become neighbors only if the following information
matches.
If authentication is configured, both routers must use the same type of authentication
and the same authentication key.
The AS (Autonomous System) number must be the same on both routers.
The interfaces that exchange EIGRP Hello messages must belong to the same IP
subnet.
The K values must match on both routers.
Hello packets
EIGRP uses Hello packets to discover potential EIGRP neighbors and to maintain existing EIGRP neighbors. EIGRP uses the multicast address 224.0.0.10 as the destination in Hello packets.
Hello timer
By default, EIGRP generates hello packets every 5 seconds. This time interval is
known as the hello timer. If required, you can adjust this timer.
Hold timer
The hold timer is the amount of time a router tells its neighbors to wait before declaring it dead. Once a neighbor is declared dead, EIGRP removes it from the neighbor table and recalculates all routes that depend on it. The default value of this timer is three times the hello timer (5 * 3 = 15 seconds). You can also adjust this value.
EIGRP metric
EIGRP uses a composite metric to calculate the best route. A composite metric is a metric that uses more than one component and a mathematical formula to calculate the result. EIGRP can use five components in the composite metric. These components are bandwidth, delay, load, reliability, and MTU.
K-values
K-values are the placeholder for components used in the metric calculation formula.
You can use K-values to control the components of the EIGRP metric calculation
formula. By default, EIGRP uses only bandwidth and delay in the formula. If you
want to add another component to the formula, you have to enable its K-value. In
simple words, K-values are used to enable or disable the different metric
components used in the metric calculation formula.
Update message
A router uses an update message to send its topology to another router. The router
uses this message when it builds a neighbor relationship with another router.
Query message
A router uses a query message to ask a neighboring router to validate routing
information.
Reply message
A router uses a reply message to respond to a query message.
Neighbor table
EIGRP uses the neighbor table to store a list of the EIGRP neighbors. EIGRP uses a separate neighbor table for each routed protocol.
Topology table
EIGRP uses the topology table to store a list of all destinations and paths it learned.
It uses a separate topology table for each routed protocol.
Successor
A successor route is the best path to reach a destination within the topology table. If
there is only one path to reach a destination, EIGRP selects the available path as
successor. If there is more than one path to reach a destination, EIGRP selects the
path that has the lowest metric as a successor.
Feasible successor
A feasible successor is the best backup path to reach a destination within the topology table. If there are two paths to reach a destination, the path with the lower metric is selected as the successor, and the path with the higher metric can be selected as a feasible successor, provided it satisfies the feasibility condition (its reported distance is lower than the feasible distance of the successor).
Routing table
EIGRP places all successor paths into the routing table. A router uses the routing
table to make forwarding decisions.
AS number
EIGRP routers use the concept of AS number to create a group of routers that can
share routing information. Routers share routing information only within the group.
For example, the following image shows two groups of routers. The AS number of
the first group is 20 and the AS number of the second group is 30. Routers of the
first group do not share routing information with the routers of the second group.
Key points: -
K-values
EIGRP routers use a composite metric to calculate the cost of routes. A composite
metric includes multiple components in the calculation formula. K-values are the
placeholder of these components. K-values allow an administrator to control the
components used in the metric calculation formula. To become neighbors, two
routers must use the same components in the metric calculation formula. If they use
different components in the metric calculation formula, the result of the calculation
will be different for the same path.
To learn what K-values are and how EIGRP uses them in the metric calculation
formula, you can check the following tutorial.
When an EIGRP router joins a network, it sends Hello Packets out of all EIGRP-enabled interfaces. If another EIGRP router is connected to the other end of an enabled interface, it receives the Hello Packet.
The second router checks the configuration values in the Hello Packet. If the AS
number and K-values in the packet match the router's AS number and K-values, the
router adds the first router into the Neighbor Table.
After adding the first router to the Neighbor Table, the second router responds with
an update message. The update message contains all routing information the
second router has.
The first router responds with an ACK message. This ACK message confirms to the second router that the first router received the update message.
After sending the confirmation message, the first router sends an update message to
the second router. This update message contains all routing information the first
router has.
The second router responds with an ACK message. This ACK message confirms to the first router that the second router received the update message.
At this point, both routers have converged. After this point, they do not share a full
update. If they detect any change in the routing information, they share it through
partial updates.
When an EIGRP router forms a neighborship with another EIGRP router, it creates
an entry for that router in the Neighbor Table. In simple words, the Neighbor Table
contains information about all known and active neighbors.
When a router adds an entry for a neighbor to the Neighbor Table, it also associates
a timer with the entry. This timer is called the hold timer. The router uses the hold
timer to maintain active neighbors in the Neighbor Table. The hold time is the amount of time a router considers a neighbor up without receiving a Hello Packet from that neighbor. The default hold time is 3 times the default hello interval. The default hello interval is 5 seconds.
If the router does not receive a Hello Packet from a neighbor before the expiration of
the hold time interval, the router declares the neighbor dead. After declaring the
neighbor dead, the router ends the neighborship and deletes all entries associated
with the neighbor.
Passive interface
EIGRP does not send Hello Packets out of a passive interface. To make an interface passive, you can use the passive-interface command. Once you make an interface passive, EIGRP will never send Hello Packets out of it. You can use this command to exclude an interface from all EIGRP operations.
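For example, assuming the AS number 20 used elsewhere in this tutorial and an illustrative interface name, the following makes serial 0/0/0 passive:
Router(config)#router eigrp 20
Router(config-router)#passive-interface serial 0/0/0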
Adjacency
EIGRP uses the term Adjacency to refer to the neighborship. In the log, it uses the
term New Adjacency to refer to a new neighborship. The term New
Adjacency indicates a new neighbor is found and the neighborship with the
neighbor has been established.
Convergence
Convergence is a state that indicates two EIGRP routers have learned all routing information from each other.
Full update
A full update contains all routing information the source router has and wants to
share with another router. To send a full update, EIGRP uses an update message.
Partial update
A partial update contains only the recent change in topology or a link. EIGRP
uses Query and Reply messages to learn and share partial updates.
To learn more about EIGRP message types, you can check the following tutorial.
EIGRP routing protocol calculates the cost of each route. This feature helps EIGRP
in selecting the best route for each destination. To calculate the cost of a route,
EIGRP uses a composite metric calculation formula. The formula can use five
components in the calculation. These components are Bandwidth, Delay, Load,
Reliability, and MTU.
By default, the formula uses only Bandwidth and Delay. If you want to include the
remaining components in the formula, you have to enable them. In simple words, the
formula uses only the enabled components in the calculation and by default, only the
Bandwidth and Delay are enabled.
To enable or disable components, the formula uses K-values. K-values are the
placeholder or influencer in the formula. When you enable or disable a K-value, the
formula adds or removes the component associated with the K-value in the formula.
The following table lists all K-values and their associated components.
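Based on the component headings used in the rest of this tutorial, the mapping is as follows.
K1 = Bandwidth
K2 = Load
K3 = Delay
K4 = Reliability
K5 = MTU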
Bandwidth (K1)
Bandwidth is the amount of data that can be transferred over a link within a given amount of time. It is a static value. It changes only when you make a physical (layer 1) change in the route, such as changing a cable or upgrading a link's type. To know more about bandwidth, you can check the following tutorial.
EIGRP picks the lowest bandwidth from all outgoing interfaces of the route. Let's take an example. The following image shows a network. In this network, Router0 has two routes, Route1 and Route2, to reach Router5.
From EIGRP update messages, Router0 can learn the bandwidth of each link on a
route. The bandwidths of exit interfaces on Route1 are 72Kbps, 28Kbps, and
28Kbps. EIGRP will pick 28Kbps to calculate the cost of Route1. The bandwidths of
the exit interfaces on Route2 are 56Kbps, 56Kbps, and 64Kbps. EIGRP will pick
56Kbps to calculate the cost of Route2.
You may think, why does EIGRP pick the lowest bandwidth instead of the highest
bandwidth?
If EIGRP picked the maximum bandwidth of the route, some links along the route could provide less bandwidth than the picked value. But if it picks the minimum bandwidth of the route, every link on the route is guaranteed to provide at least that much bandwidth. Picking the lowest bandwidth guarantees equivalent or higher bandwidth throughout the route.
Load (K2)
Load is the volume of traffic passing through the interface in comparison to the
maximum capacity. It is expressed on a scale of 255 where 1 represents that an
interface is empty and 255 represents that an interface is fully utilized. It is a dynamic
value that changes frequently. It is based on the packet rate and bandwidth of the
interface.
Since data flows in both directions, the router maintains two separate load counters: Txload (transmit load) and Rxload (receive load).
If K2 is enabled, the maximum Txload value will be used in the composite metric calculation formula.
Delay (K3)
EIGRP uses the total delay in the metric calculation formula. The total delay is the sum of the delay received from the neighboring router and the delay configured on the interface. For example, if EIGRP receives a delay value of 1000 from the neighboring router and the delay configured on the local interface is 2000, then the total delay will be 3000 (1000 + 2000).
To view the configured delay on an interface, you can use the 'show
interface' command. The output of the 'show interface' command displays the
value of all components used in the EIGRP metric calculation formula.
The following image shows an example output of the 'show interface' command.
Reliability (K4)
MTU (K5)
MTU stands for Maximum Transmission Unit. It is advertised with the routing update but does not actively participate in the metric calculation. EIGRP uses it when the number of equal-cost paths to the same destination exceeds the number of allowed paths. For example, suppose the maximum number of allowed paths for load balancing is 5, and EIGRP finds 6 identical-cost paths to the same destination. In this case, EIGRP ignores the path that has the lowest MTU.
EIGRP supports the load-balancing feature. The load-balancing feature allows a routing protocol to use more than one path for the same destination. EIGRP can use a maximum of 32 paths for the same destination. By default, it uses only 4 paths for a single destination. You can configure the number of allowed paths with the "maximum-paths" command.
Cisco uses the following configuration values for Bandwidth and Delay.
Bandwidth = 10^7 / least bandwidth of the route in Kbps [Lowest bandwidth from all interfaces between the source and the destination.]
Delay = cumulative delay of the route [Sum of all outgoing interfaces' delay.]
The following image shows an example EIGRP network created on Packet Tracer.
You can download this network topology from the following link.
Run 'show ip route eigrp' command on Router0 to view all routes in the routing
table added by EIGRP. The following image shows the output of this command.
As we can see in the above output, there are four routes added by EIGRP. The destination subnet of the first route is 30.0.0.0/8. The destination subnet of the second route is 40.0.0.0/8. The destination subnet of the third and fourth routes is 50.0.0.0/8. Since the third and fourth routes have the same metric, both are added for load balancing. The value after the forward slash in the square brackets is the metric of the route. For example, the metric of the first route (30.0.0.0/8) is 2681856.
The destination subnet of the first route (30.0.0.0/8) is available between Router1
and Router2. There are two exit points (interfaces) between the source and the
destination. These exit points are Router0's S0/0/0 and Router1's S0/0/1.
Since we didn't change the default values of the bandwidth and delay on both
interfaces, EIGRP will use the default values in the formula. Default bandwidth and
delay of a serial interface are 1544Kbps and 2000, respectively.
To view the value of all metric components, you can use the 'show
interface' command.
The formula uses the least bandwidth from all interfaces between the source and the
destination. Since both interfaces have equal bandwidth, the least bandwidth is
1544Kbps.
The formula uses the sum of delay configured on all interfaces between the source
and the destination. The sum of delay configured on both interfaces
is 4000(2000+2000).
The output of the "show interface" command shows the delay in tens of
microseconds. Because of this, the delay is shown as 20000 (2000x10 = 20000) in
the above output.
As we know, unless we enable Load, Reliability, and MTU, EIGRP uses the following
formula.
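With only bandwidth (K1) and delay (K3) enabled, the formula, as used in the calculation below, works out to the following.
Metric = ((10^7 / least bandwidth in Kbps) + cumulative delay) * 256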
For the above formula, we only need the least bandwidth and the cumulative delay. The least bandwidth and cumulative delay of the first route are 1544Kbps and 4000, respectively. The second route (40.0.0.0/8) also has two exit points (interfaces) till the destination subnet. Both of its exit interfaces use the default bandwidth and delay, so its least bandwidth and cumulative delay are also 1544Kbps and 4000, respectively. Let's put these values in the formula.
Metric = ((10000000/1544)+4000)*256
Metric = ((6476.6839) +4000)*256
Metric = (6476 + 4000) * 256
Metric = 10476 * 256
Metric = 2681856
Thus, the cost of both the first route (30.0.0.0/8) and the second route (40.0.0.0/8) is 2681856.
There are two routes to the subnet 50.0.0.0/8. Both routes have the equal least
bandwidth and cumulative delay. Since their least bandwidth and cumulative delay
are the same, their cost will also be the same. The equal cost routes are
automatically used in the load balancing. Because of this, EIGRP added both routes
to the routing table.
The following image shows the least bandwidth and cumulative delay of both routes.
EIGRP is a Cisco proprietary routing protocol. It is one of the most popular and
widely used routing protocols. It is a complex routing protocol, but it provides many
benefits and supports all sizes and types of networks.
To learn more about the AS number, you can check the following tutorial.
Customizing EIGRP
In EIGRP configuration mode, we use the following command to include a network in the EIGRP operation.
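Based on the examples that follow, the command takes a network address and a wildcard mask (both placeholders here):
Router(config-router)#network network_address wildcard_mask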
To calculate the wildcard mask, subtract the subnet mask from 255.255.255.255.
The following table lists some examples of this calculation.
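Two examples, consistent with the wildcard masks used in the commands below:
255.255.255.255 - 255.0.0.0 = 0.255.255.255 (for a /8 network such as 10.0.0.0)
255.255.255.255 - 255.255.255.252 = 0.0.0.3 (for a /30 network such as 192.168.1.244)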
This command adds only one network at a time. To add more networks, you can use the same command again for each network. For example, if you want to add three networks, you have to use this command three times.
Either build a packet tracer lab as shown in the following image or download the pre-
built lab from the following link.
Router(config)#router eigrp 20
Router(config-router)#network 10.0.0.0 0.255.255.255
Router(config-router)#network 192.168.1.244 0.0.0.3
Router(config-router)#network 192.168.1.0 0.0.0.3
Let’s discuss the above commands in detail.
The first command enables EIGRP with the AS number 20. Since we used the AS
number 20 on this router, this router will share routing information only with the
routers that belong to the AS number 20.
The second command adds the network 10.0.0.0 255.0.0.0 to the EIGRP operation. The subnet mask of the network 10.0.0.0/8 is 255.0.0.0. To calculate the wildcard mask for this subnet mask, we subtracted it from 255.255.255.255, which gives 0.255.255.255.
The third command adds the network 192.168.1.244 255.255.255.252 (/30) to the EIGRP operation. Since this network is configured on the Serial interface S0/0/0, this command enables the EIGRP operation on the Serial interface S0/0/0.
The fourth command adds the network 192.168.1.0 255.255.255.252 (/30) to the
EIGRP operation. This network is configured on the F0/1 interface. Thus, EIGRP
enables the EIGRP operation on the F0/1 interface.
Using the same commands in the same pattern, we can configure and enable
EIGRP on the rest of the routers.
Router1
Router(config)#router eigrp 20
Router(config-router)#network 192.168.1.244 0.0.0.3
Router(config-router)#network 192.168.1.248 0.0.0.3
Router(config-router)#
Router2
Router(config)#router eigrp 20
Router(config-router)#network 192.168.1.248 0.0.0.3
Router(config-router)#network 192.168.1.252 0.0.0.3
Router(config-router)#
Router3
Router(config)#router eigrp 20
Router(config-router)#network 192.168.1.8 0.0.0.3
Router(config-router)#network 192.168.1.4 0.0.0.3
Router(config-router)#
Router4
Router(config)#router eigrp 20
Router(config-router)#network 192.168.1.4 0.0.0.3
Router(config-router)#network 192.168.1.0 0.0.0.3
Router(config-router)#
Router5
Router(config)#router eigrp 20
Router(config-router)#network 20.0.0.0 0.255.255.255
Router(config-router)#network 192.168.1.252 0.0.0.3
Router(config-router)#network 192.168.1.8 0.0.0.3
Router(config-router)#
The following image shows how to run the above commands on routers.
Verifying EIGRP configuration
To verify the configuration, we can use the ping command to test connectivity between end devices. We can also use the tracert command. This command also sends dummy data packets to the destination device, but instead of tracking the responses, it monitors the path the data packets take to reach the destination device.
In our example network, we have two end devices: PC0 and Server0. The following
image shows how to test connectivity between both devices.
By listing EIGRP routes on the router
You can use the "show ip route eigrp" command on the router to list all routes
added in the routing table by EIGRP.
To know more about routing loops, you can check the following tutorial.
EIGRP learns all routes of the network. But it does not add the routes that make a
loop. It uses them as backup routes. If the main route fails, it immediately switches to
the backup route.
To verify this, power off the F0/0 interface of Router3. Router3’s F0/0 interface forwards data packets on the route Router0 uses to reach the network 20.0.0.0/8. If we power off this interface, the main route of Router0 to reach the network 20.0.0.0/8 will be down. In this situation, Router0 will immediately switch to the backup route (via R1 and R2) to reach the network 20.0.0.0/8.
The EIGRP routing protocol does not use TCP or UDP for communication. An EIGRP router uses the RTP (Reliable Transport Protocol) to communicate with other EIGRP-speaking routers.
RTP sends only one packet at a time. Since it sends only one packet at a time, it
does not use windowing or any congestion control mechanism. It supports multicast
and unicast methods of transmission.
How does EIGRP use RTP?
EIGRP uses RTP to communicate with other EIGRP-speaking routers on the network. In the EIGRP implementation, RTP is responsible for guaranteed and ordered delivery of EIGRP packets through the use of sequence and acknowledgment numbers.
Each EIGRP router knows who its neighbors are. When an EIGRP router sends a
multicast, it builds a list and uses it to track the neighbors who have replied. If the
EIGRP router doesn't get a reply from a neighbor via the multicast, the EIGRP router
uses unicasts to resend the same data. If it does not get a reply from the neighbor
after 16 unicast attempts, it declares the router dead. This process is known as
reliable multicast.
RTP maintains a retransmission table for each neighbor. It uses this table to track all
the reliable packets that were sent but not acknowledged within the Retransmission
Time Out (RTO). When RTP sends a reliable packet, it creates an entry for the
packet in the table. The entry contains the RTO timer. If the RTO timer expires
before an ACK packet is received, RTP will transmit the same copy of the reliable
packet again. RTP will repeat this process until the hold time of the neighbor expires
in the neighbor table. If the hold time of a neighbor expires in the neighbor table,
EIGRP removes the neighbor from the neighbor table.
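The retransmission logic described above can be pictured with a small sketch. The following Python snippet is only an illustration of the idea; the timer values, function names, and parameters are invented for the example and are not Cisco's implementation.

import time

RTO = 1.0          # retransmission timeout in seconds (illustrative value)
HOLD_TIME = 15.0   # neighbor hold time in seconds (illustrative value)

def send_reliable(packet, neighbor, send, ack_received):
    """Resend 'packet' until it is acknowledged or the hold time expires."""
    start = time.time()
    while time.time() - start < HOLD_TIME:
        send(packet, neighbor)            # (re)transmit the same copy of the packet
        time.sleep(RTO)                   # wait one retransmission timeout
        if ack_received(packet, neighbor):
            return True                   # ACK arrived; remove the entry from the table
    return False                          # hold time expired; remove the neighbor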
EIGRP uses five types of packets. These types are Update, Query, Reply, Hello, and
Acknowledgment. Let's discuss these types.
Update
An Update packet contains the routing update or route information. When two
EIGRP routers build a neighbor relation, they use update packets to exchange
routing information. After building a neighbor relation, they use update packets only
to exchange information about a change.
EIGRP always uses reliable multicast or unicast to send an update packet. EIGRP uses multicast to send the same information to all neighbors and unicast to send the information only to a specific neighbor.
Regardless of which method EIGRP uses to send an update packet, EIGRP always
requires an acknowledgment for each update packet.
Query
EIGRP uses a query packet to find an alternate path to a particular destination when it has lost the existing path to the destination. EIGRP always uses the reliable multicast method to send query packets and requires an acknowledgment for each query packet.
Reply
EIGRP uses a reply packet to respond to a query packet. EIGRP uses the reliable unicast method to send a reply packet. A reply packet either includes information about a specific route or a message indicating that there is no known route to the queried destination. EIGRP also needs an acknowledgment for a reply packet.
Hello
EIGRP uses hello packets to discover potential EIGRP neighbors. EIGRP sends
hello packets via the unreliable multicast method. It does not need an
acknowledgment for a hello packet.
Ack
EIGRP sends an ack packet in response to an update, query, or reply packet. An ack
packet confirms that the destination device received the packet. Ack packets are
always sent via unicast and never require an acknowledgment. An ack packet is
itself sent as acknowledgment. An acknowledgment for an acknowledgment packet
makes no sense.
Key points
EIGRP uses RTP to exchange packets with other EIGRP speaking routers.
EIGRP exchanges five types of packets. These packet types are Update, Query, Reply, Hello, and Acknowledgment.
Update, Query, and Reply packets require acknowledgment.
Hello and Ack packets do not require acknowledgment. Since they don't require acknowledgment, they do not have sequence numbers.
EIGRP multicast address is 224.0.0.10. A packet sent to this address is heard by all
connected EIGRP routers.
EIGRP Hello packets are sent every 5 seconds on LANs and point-to-point links. On low-speed (T1 or slower) multipoint links, they are sent every 60 seconds.
That's all for this tutorial. In this tutorial, we discussed EIGRP packet types and how
EIGRP uses the RTP protocol.
Historically, RIPv1 was the first most popular routing protocol. It was published in
1988. It broadcasts a routing update every 30 seconds from all enabled interfaces. A
routing update includes all routes from the routing table. A neighboring router uses
routing updates to update its routing table.
The formula a routing protocol uses to calculate the cost of a route is called the
routing metric. A routing protocol can use a single component or multiple
components in the routing metric.
RIPv1 uses only one component in the routing metric. It counts the number of routers in the route to calculate the best route to reach each destination. The number of routers in a route is known as the hop count. A hop is a router that the packet crosses on the path to the destination. For example, if there are two routers in the path, the hop count is 2.
If there are two routes to a destination, RIPv1 chooses the route that has the lower hop count. It can count a maximum of 15 routers. This means that if a route has more than 15 routers, RIPv1 will not use that route.
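As a simple illustration of hop-count-based selection, the Python sketch below picks the route with the fewest hops and ignores routes longer than 15 hops. The route data is made up for the example.

MAX_HOPS = 15  # RIPv1 limit

# Hypothetical routes to the same destination: (next_hop, hop_count)
routes = [("10.0.0.2", 3), ("10.0.1.2", 2), ("10.0.2.2", 18)]

usable = [r for r in routes if r[1] <= MAX_HOPS]   # drop routes beyond 15 hops
best = min(usable, key=lambda r: r[1])             # the lowest hop count wins
print(best)                                        # ('10.0.1.2', 2)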
RIPv1 was mainly developed for small networks. It has many limitations and uses
only hop count in the metric. Because of these, RIPv1 was not a good choice for mid
or large-size networks. Cisco developed IGRP to overcome the limitations of RIPv1
and provide a better routing protocol.
IGRP uses multiple components in the routing metric to select the best route for
each destination. These components include bandwidth, delay, load, reliability, and
MTU. IGRP supports a maximum hop count of 255 (default 100). It broadcasts routing updates every 90 seconds.
Like RIPv1, IGRP also had some technical limitations. It was developed to support
the technology levels of the 1980s. Till the 1980s, it was a good option. In the early
1990s, business requirements and technical factors pushed Cisco to update IGRP.
In the mid-1990s, Cisco updated IGRP to EIGRP.
EIGRP is the updated version of IGRP. EIGRP uses the same metric components.
These components are bandwidth, delay, load, reliability, and MTU. By default, only
bandwidth and delay are enabled. However, you can manually enable load,
reliability, and MTU. EIGRP uses all enabled components in the metric algorithm.
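With the default settings (only bandwidth and delay enabled), the classic EIGRP composite metric reduces to 256 × (10^7 / lowest bandwidth in kbps + sum of delays in tens of microseconds). The following Python sketch shows this default calculation; the interface values are invented for the example, and the formula is the commonly cited default form, not a Cisco command.

def eigrp_default_metric(min_bandwidth_kbps, total_delay_tens_of_usec):
    # Default K values (K1 = K3 = 1, K2 = K4 = K5 = 0) leave only
    # bandwidth and delay in the composite metric.
    bandwidth = 10**7 // min_bandwidth_kbps
    return 256 * (bandwidth + total_delay_tens_of_usec)

# Example: slowest link 1544 kbps (T1), total delay 2100 tens of microseconds.
print(eigrp_default_metric(1544, 2100))   # 2195456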
Even though EIGRP uses the same metric components, it stores them in different
size values. IGRP uses a 24-bit metric, while EIGRP uses a 32-bit metric. Another
major enhancement in EIGRP is that EIGRP supports classless subnets.
IGRP is a classful routing protocol. It does not advertise subnet information. EIGRP
is a classless routing protocol. It includes subnet information in routing updates. It
can advertise both classful and classless networks. This feature allows network
administrators to use VLSM in EIGRP networks.
EIGRP also supports a maximum hop count of 255 (default 100). Unlike RIP, EIGRP does not use hop count as a metric. EIGRP uses hop count to limit how many routers an EIGRP routing update can pass through before it is discarded.
Comparing RIPv1, IGRP and EIGRP
The following table compares RIPv1, IGRP, and EIGRP.
TCP is a feature-rich protocol. It provides guaranteed data delivery. It ensures that every bit sent from the source host reaches the destination host. To provide such a reliable service, TCP uses five functions: segmentation, connection multiplexing, the three-way handshake, sequencing and acknowledgment, and flow control through windowing.
I have already explained the first two of these functions in the previous parts of this article. In this part, I will explain the remaining three functions.
This tutorial is the last part of the article "Similarities and Differences between TCP
and UDP explained with functions" This tutorial explains following CCNA topic.
This tutorial is the first part of the article. It explains segmentation process along with TCP/UDP header in
detail.
This tutorial is the second part of the article. It explains what the connection multiplexing is and how the
TCP and UDP protocols use it to connect with the multiple applications simultaneously.
Connection oriented protocol or connection-less protocol
TCP is a connection-oriented protocol. The difference between a connection-oriented protocol and a connectionless protocol is that a connection-oriented protocol does not send any data until a proper connection is established. Connection establishment refers to the process of initializing protocol-specific features.
TCP does not send any data without establishing a proper connection. The segments that are used in the connection establishment, or three-way handshake, process contain only the header information that is used to initialize the TCP-specific features. These features are explained below.
To provide reliability, TCP assigns a sequence number to each sent segment. This number not only helps the destination host reorder any segments that arrive out of order but also helps verify that all sent segments were received.
Acknowledgment numbers are used in the opposite direction. These numbers are used to send verification of received segments, notification of lost segments, and requests for the next segments.
Upon receiving all sent segments, to get the next segments, the destination sends a segment with a number in the acknowledgment field that is one higher than the highest sequence number it has received.
Once the source and destination hosts know each other’s sequence numbers, they use them in the data exchange process. Before we take an example of this process, let’s understand one more value that is also initialized in the three-way handshake process and is used with these numbers.
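The relationship between sequence and acknowledgment numbers can be sketched in a few lines of Python. The segment numbering below is a simplified, illustrative model (segments numbered 1, 2, 3, ...), not the byte-based numbering real TCP uses.

def next_ack(received_sequence_numbers):
    """Return the acknowledgment number: one higher than the highest
    sequence number received in an unbroken run from the start."""
    expected = 1
    for seq in sorted(received_sequence_numbers):
        if seq != expected:
            break            # a segment is missing; stop here
        expected += 1
    return expected          # this is the next segment we want

print(next_ack([1, 2, 3]))   # 4 -> all three received, ask for segment 4
print(next_ack([1, 3]))      # 2 -> segment 2 is missing, ask for it again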
Windowing is the process of controlling the flow of segments. It ensures that one host doesn’t flood another host with too many segments and overflow its receiving buffer.
For example, if the window size of the receiving computer is 4, the sending computer sends only 4 segments. Once the 4 segments are sent, it waits for a confirmation from the receiving computer before sending the next 4 segments.
While sending the confirmation, the receiving computer can change the window size. For example, it can ask the sending computer to send more or fewer segments in the next round. This feature is called sliding windowing or dynamic windowing. It allows the receiver to control the flow of segments that the sending computer can send.
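A minimal sketch of this behavior, with invented segment and window values, might look like the following: the receiver returns a (possibly different) window size with each confirmation, and the sender never sends more segments in a round than that window allows.

def send_with_window(segments, confirm):
    """Send 'segments' in groups; 'confirm' acknowledges a group and
    returns the window size to use for the next group."""
    window = 4                    # initial window size (illustrative)
    i = 0
    while i < len(segments):
        group = segments[i:i + window]
        i += len(group)
        window = confirm(group)   # the receiver may grow or shrink the window

# Example receiver that always asks for 3 segments in the next round.
send_with_window(list(range(1, 11)), confirm=lambda group: 3)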
Reordering segments in the correct order and dropping extra segments
To arrange the arrived segments in the correct order, the receiving computer uses the sequence numbers of the segments. To detect and drop duplicate or extra segments, it compares the received segments with the requested segments. For example, if the receiving computer requested 3 segments by specifying a window size of 3 in the acknowledgment and received 4 segments, it assumes that one segment arrived extra.
As mentioned above, the receiving computer compares the arrived segments with the expected segments. If it finds that any segment is missing, it uses the sequence number of that segment to notify the sending computer about it. When the sending computer receives an acknowledgment for a segment that it has already sent, it assumes that the acknowledged segment was lost. While transmitting the next set of segments (segments equal to the window size), it retransmits the lost segments first and the new segments later. For example, if the window size is 3 and one segment was lost, the transmitted segments will be: the lost segment, a new segment, and a new segment.
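The receiver-side checks described above can be sketched as follows. The segment numbers are invented for the example: segments are sorted by sequence number, duplicates are dropped, and any missing sequence number is reported back so the sender can retransmit it first.

def process_received(expected, received):
    """Reorder received segments, drop duplicates, and list missing ones."""
    unique = sorted(set(received))                 # drop extra/duplicate segments
    in_order = [seq for seq in unique if seq in expected]
    missing = [seq for seq in expected if seq not in unique]
    return in_order, missing

# The receiver asked for segments 4, 5, and 6 but received 4, 6, 6.
print(process_received([4, 5, 6], [4, 6, 6]))   # ([4, 6], [5]) -> resend 5 first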
Comparing with UDP
The TCP protocol always starts a session with the three-way handshake process. It means that if an application wants to send its data through TCP, it has to wait until a proper connection is established through the three-way handshake process.
UDP doesn’t use any mechanism or process before starting the session. It means that if an application wants to send its data through UDP, it can send its data immediately, without any delay.
UDP neither sequences the segments nor cares about the order in which they are sent to the destination. It also does not take acknowledgments of the sent segments to verify that they reached the destination. It just sends the segments and forgets about them. Because of this, it is also referred to as an unreliable protocol.
UDP takes less bandwidth and uses fewer processing cycles in comparison to TCP.
Segmentation Explained with TCP and
UDP Header
This tutorial explains what segmentation is, how segmentation works in the data communication process, what the TCP and UDP headers contain, and how the header is used to build a segment.
Both the TCP (Transmission Control Protocol) and UDP (User Datagram Protocol) protocols work in the Transport layer. Both provide the same functionality in different ways. This functionality is delivering data to the correct destination. While providing this functionality, TCP focuses on accuracy while UDP pays attention to speed.
This tutorial is the first part of the article "Similarities and Differences between TCP
and UDP explained with functions" This tutorial explains following CCNA topic.
This tutorial is the second part of the article. It explains what the connection multiplexing is and how the
TCP and UDP protocols use it to connect with the multiple applications simultaneously.
This tutorial is the last part of the article. It explains how TCP provides guaranteed data delivery through its
protocol specific features.
The following five functions are used in the data delivery process:
Segmentation
Connection multiplexing
Connection oriented or connection less delivery
Reliability through acknowledgement and sequencing
Flow control through windowing
To ensure accuracy in the delivery process, TCP supports all of these functions, while UDP, in order to provide the highest possible speed, supports only the second function.
Let’s understand each function in detail and compare the way in which both
protocols provide it.
Segmentation
Segmentation is the process of dividing a large data stream into smaller pieces. This functionality allows a host to send or receive a file of any size over a network of any size. For example, if the network bandwidth is 1 Mbps and the file size is 100 Mb, the host can divide the file into 100 or more pieces. Once a piece becomes small enough for the network to carry, it can be transferred easily. The destination host, upon receiving all the pieces, joins them back together to reproduce the original file.
TCP supports segmentation while UDP does not. This means that if an application wants to use TCP to send its data, it can hand the data to TCP at its actual size. Based on several conditions, such as data size and available network bandwidth, TCP performs segmentation on its own, if required, before packing the data for transmission.
But if an application wants to use UDP to send its data, it can’t hand the data to UDP at its actual size. It has to use its own mechanism to detect whether segmentation is required. And if segmentation is required, it has to do it on its own before giving the data to UDP.
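The following Python sketch illustrates the idea of segmentation: a large data stream is cut into pieces no larger than a chosen maximum size, and the destination joins them back together. The sizes used here are arbitrary and only for illustration.

def segment(data, max_segment_size):
    """Divide a large data stream into smaller pieces."""
    return [data[i:i + max_segment_size]
            for i in range(0, len(data), max_segment_size)]

stream = b"A" * 100           # a 100-byte data stream
pieces = segment(stream, 30)  # cut it into pieces of at most 30 bytes
print([len(p) for p in pieces])   # [30, 30, 30, 10]

# Reassembly at the destination simply joins the pieces back together.
assert b"".join(pieces) == stream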
Both protocols attach a header to each data piece. A header contains two types of information:
1. The information that is required to deliver the segment to the correct destination.
2. The information that is required to support the protocol-specific features.
Both TCP and UDP add the first type of information in the same manner. Both use two fields for this information: source port and destination port. Information about the application that is sending the data goes in the source port field, and information about the application that will receive the data goes in the destination port field.
Protocols add the second type of information based on the services they offer. TCP offers several protocol-specific services such as segmentation, windowing, and flow control. To provide these services, it adds the necessary information to the header.
Field Description
Source port Used to identify the application that is sending data from the source host.
Destination port Used to identify the application that will receive the data at the destination host.
Sequence Number Used to identify lost segments and maintain the sequencing of the transmission.
Acknowledgment Number Used to send verification of received segments and to ask for the next segment.
Header Length A number that indicates where the data begins in the segment.
Code Bits Used to define control functions such as setting up and terminating the session.
Window size Used to set the number of segments that can be sent before waiting for a confirmation.
Checksum CRC (cyclic redundancy check) of the header and data piece.
Options Used to define any additional options, such as the maximum segment size.
On the other hand, UDP neither provides any protocol-specific service nor adds any additional information to the header.
Field Description
Source port Port number of the application that is transmitting data from the source computer.
Destination port Port number of the application that will receive the data at the destination.
Length Denotes the length of the UDP header and the UDP data.
Segment
Once a header is attached to the data piece (generated from segmentation in TCP or received from the application in UDP), it is referred to as a segment.
This tutorial is the second part of the article "Similarities and Differences between
TCP and UDP explained with functions" This tutorial explains following CCNA topic.
This tutorial is the first part of the article. It explains segmentation process along with TCP/UDP header in
detail.
This tutorial is the last part of the article. It explains how TCP provides guaranteed data delivery through its
protocol specific features.
In each session or connection, two port numbers are used: the source port and the destination port. The source port number is used to identify the session or connection, while the destination port number is used to identify the application that processes the data at the destination host.
To assign the source port number and the destination port number, both Transport layer protocols, TCP and UDP, use two fields in the segment header: the source port field and the destination port field.
The port number field is 16 bits long, which allows a total of 65,536 port numbers (0 to 65535). Port numbers are divided into three categories: well-known, registered, and dynamically assigned.
Transport layer, at source host, assigns a separate port number to each individual
session from the dynamically assigned port numbers. When it initiates a new
session, it picks a currently unused dynamic port number from 49152 to 65535 and
assigns it to the session. All segments which are sent through this session use the
assigned port number as the source port number. For destination port number, the
port number of destination application is used.
Transport layer at destination host, upon receiving segments from the source host,
checks the destination port field in each segment to know by which application that
segment should be processed. After processing, when the destination application
returns the data, the transport layer at destination host uses the same port numbers
in reverse. It uses the port number of application by which the segment was
processed as source port number and the port number from which the segment was
received as destination port number.
Upon receiving segments from this host, transport layer at webserver checks the
destination port number in segments headers. From destination port number, it
knows that the source wants to communicate with an application that uses HTTP
protocol. While responding to this host, it uses the destination port number 5000 and
the source port number 80.
Now suppose, host wants to access another website from same or other webserver.
So it initiates a new session. Since it’s a new session, transport layer assigns a new
port number to it. New port number allows it to keep its segments separate from the
existing session.
If there is only one host that accesses remote hosts, port numbers are sufficient to multiplex the sessions. But if there is more than one host, session multiplexing can’t be done with port numbers alone.
Let’s take an example. There are two hosts that want to access a webserver simultaneously. So they both initiate sessions. Suppose they both assign the same port number 50000 to their session. Now how will the webserver know which segment is coming from which host?
To deal with such a situation, the IP address is used with the port number in multiplexing. Since both hosts have different IP addresses, the destination host can easily differentiate their sessions even if they both use the same source and destination port numbers.
Following figure shows an example of this. In this example, two hosts 1.1.1.1 and
2.2.2.2 are accessing two webservers 10.10.10.10 and 20.20.20.20 simultaneously
with the same source port numbers.
In this way, to make a connection or session unique, or to allow a host to connect with multiple applications simultaneously, three things are used together: the Transport layer protocol, the source port number, and the destination IP address. To refer to these three things together, the technical term socket is used.
For example, in the above figure, the host 2.2.2.2 is using the socket (10.10.10.10, TCP, 50000) to connect with the web server 10.10.10.10, while to connect with the web server 20.20.20.20 it is using the socket (20.20.20.20, TCP, 50001).
In the same way, the host 1.1.1.1 is using the socket (10.10.10.10, TCP, 50000) to connect with the web server 10.10.10.10, while to connect with the web server 20.20.20.20 it is using the socket (20.20.20.20, TCP, 50001).
Both Transport layer protocols, TCP and UDP, use socket-based multiplexing to deliver the data to the correct application at the source and destination hosts.
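The following Python sketch shows how a server could keep the two sessions apart by keying them on the source IP address together with the port numbers. The addresses and ports follow the example above; the dictionary structure is only an illustration, not how a real TCP/IP stack is implemented.

# Each session is identified by (source IP, source port, destination port).
sessions = {}

def register(src_ip, src_port, dst_port, data):
    key = (src_ip, src_port, dst_port)
    sessions.setdefault(key, []).append(data)

# Two hosts use the same source port 50000 toward the same webserver (port 80),
# yet their sessions stay separate because their IP addresses differ.
register("1.1.1.1", 50000, 80, "request from host 1")
register("2.2.2.2", 50000, 80, "request from host 2")
print(len(sessions))   # 2 -> two distinct sessions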
Protocol information can be added before and after the data. If the information is
added before the data, it is known as a header. If the information is added after the
data, it is known as a trailer.
The following image explains the data encapsulation and de-encapsulation process.
The header and trailer added by a layer on the sending computer can only be
removed by the peer layer on the receiving computer. For example, the header and
trailer added by the Transport layer on the sending computer can only be removed
by the Transport layer on the receiving computer.
The encapsulation process takes place on the sending computer. The de-
encapsulation process takes place on the receiving computer. After doing the
encapsulation, each layer uses a specific name or term to represent the
encapsulated data.
The following table lists the terms used by the layers in both models to represent the
encapsulated data.
This tutorial is the last part of the article "Networking reference models explained in detail with
examples.". Other parts of this article are the following.
This tutorial is the first part of the article. It briefly explains why the OSI model was created and what the
advantages of the OSI model are.
This tutorial is the second part of the article. It explains the seven layers of the OSI model in detail.
Similarities and Differences between the OSI Model and TCP/IP Model
This tutorial is the third part of the article. It compares the OSI reference model with the TCP/IP model and
lists the similarities and differences between both.
This tutorial is the fourth part of the article. It explains the five layers of the TCP/IP model in detail.
Data
The upper layer (the Application layer in the TCP/IP model) or the layers (the
Application, Presentation, and Session layers in the OSI model) create a data stream
and transfer it to the Transport layer.
The upper layers do not attach headers and trailers to the data. But if required, the
application that initiates the connection can add a header and trailer to the data. For
example, browsers use the HTTP protocol to fetch websites from webservers. The
HTTP protocol uses a header to transfer the data.
The encapsulation process describes the headers and trailers that are added by the
layers. It does not describe application-specific headers and trailers. Since the upper
layers do not add any header or trailer to the data, the encapsulation process does
not use any particular term to refer to the encapsulated data in the upper layers.
Segment
The Transport layer receives the data stream from the upper layers. It breaks the
received data stream into smaller pieces. This process is known as segmentation.
After segmentation, it creates a header for each data piece and attaches that header
to the data piece. Headers contain the information that the remote host needs to
reassemble all data pieces. Once the header is attached, a data piece is known as
the segment. The Transport layer transfers segments to the Network layer for
further processing.
Packet
The Network layer creates a header for each received segment from the Transport
layer. This header contains the information that is required for addressing and
routing, such as the source software address and destination software address.
Once the header is attached, a segment is known as the packet. Packets are
handed down to the Data link layer.
In the original TCP/IP model, the term packet is mentioned as the term datagram.
Both terms are identical and interchangeable. A packet or a datagram contains a
network layer header and an encapsulated segment.
Frame
The Data link layer receives packets from the Network layer. Unlike the Transport layer and Network layer, which only create a header, it creates a trailer along with the header for each received packet. The header contains information that is required for switching, such as the source hardware address and destination hardware address. The trailer contains information that is required to detect and drop corrupted frames at the earliest stage of de-encapsulation. Once the header and trailer are attached, a packet is known as a frame. Frames are passed down to the Physical layer.
Bits
The Physical layer receives frames from the Data link layer and converts them into a
format that the attached media can carry. For example, if the host is connected
through a copper wire, the Physical layer converts frames into voltages. And if the
host is connected through a wireless network, the physical layer converts them into
radio signals.
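The whole encapsulation walk-through can be summarized in a few lines of Python. The header and trailer strings below are placeholders, not real protocol fields; the point is only the order in which they are attached and removed.

def encapsulate(data):
    segment = "TCP-header|" + data                     # Transport layer adds its header
    packet  = "IP-header|" + segment                   # Network layer adds its header
    frame   = "ETH-header|" + packet + "|ETH-trailer"  # Data link layer adds header and trailer
    return frame                                       # the Physical layer sends it as bits

def de_encapsulate(frame):
    packet  = frame.removeprefix("ETH-header|").removesuffix("|ETH-trailer")
    segment = packet.removeprefix("IP-header|")
    data    = segment.removeprefix("TCP-header|")
    return data

frame = encapsulate("GET /index.html")
print(frame)
print(de_encapsulate(frame))   # the original data stream comes back out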
De-encapsulation
The Physical layer picks encoded signals from the media and converts them into
frames and hands them over to the Data link layer.
The Data link layer reads the trailer of the frame and confirms that the received frame is in the correct shape. If the frame is in the correct shape, it reads the destination hardware address of the frame to determine whether the frame is intended for it.
If the frame is not intended for it, it will discard the frame. If the frame is intended for
it, it will remove the header and the trailer from the frame. Once the data link layer’s
header and trailer are removed from the frame, it becomes the packet. Packets are
handed over to the Network layer.
The Network layer checks the destination software address in the header of each
packet. If the packet is not intended for it, it will discard the packet. If the packet is
intended for it, it will remove the header. Once the network layer’s header is
removed, the packet becomes the segment. Segments are handed over to the
Transport layer.
The Transport layer receives segments from the Network layer. From the segment headers, it collects all the necessary information, and based on that information it arranges all segments back into the correct order. Next, it removes the segment header from all segments and reassembles them into the original data stream. The data stream is handed over to the upper layers.
The upper layers convert the data stream into a format that the target application can understand.
The following figure shows the encapsulation and de-encapsulation process in the
OSI model.
The following figure shows the encapsulation and de-encapsulation process in the
TCP/IP model.
OSPF Fundamental Terminology
Explained
OSPF is a complex routing protocol. It uses many terms to define its functions and operations. This tutorial explains the meaning of the terms the OSPF routing protocol uses.
Link
In OSPF, a link refers to an interface on the router. When an interface is added to the OSPF process, OSPF considers it a link.
State
Since a link is an interface, it has two states: up and down. The up state shows the
link (interface) is operational and OSPF can reach the IP subnet connected to the
link. The down state shows the link is not operational and OSPF cannot reach the IP
subnet connected to the link.
OSPF is a link-state protocol. Link-state protocols use the Shortest Path First (SPF) algorithm to calculate the best path to a destination. To run this algorithm, link-state protocols learn the complete topology of the network. In a large network, this requirement creates scalability problems. To solve this problem, OSPF uses two concepts: autonomous systems and areas.
Autonomous System
An autonomous system (AS) is a group of networks and routers that operate under a single administrative control. Within the AS, OSPF uses areas and a hierarchical design. OSPF implements a two-layer hierarchy: the backbone area and areas off the backbone. OSPF uses the backbone area to provide routing between the areas off the backbone, and it uses the areas off the backbone to control when and how much routing information is shared between routers.
An area is a group of contiguous networks. Each area uses a unique area ID. All
routers in the same area use the same area ID. The following image shows how
OSPF uses areas for hierarchical routing.
The following table lists some common terms used in the hierarchical design and the
area concept.
Term Description
Backbone area A special area to which all other areas must connect.
area A set of contiguous routers that share the same routing information.
Backbone routers Routers in the backbone area
Internal routers Routers in areas off the backbone
ABR A router that connects the area to the backbone area
Intra-area route A route within the same area
Interarea route A route between the areas
Router ID (RID)
The RID is the identity OSPF uses to recognize the router. OSPF uses the highest configured IP address as the RID. If loopback interfaces are configured, OSPF chooses the RID from them. If no loopback interface is configured, OSPF chooses the RID from all active physical interfaces.
Neighbors
OSPF neighbors are two or more routers that have an interface in the same network and share certain configuration values. These configuration values are called neighborship requirements.
Adjacency
OSPF does not share routing updates with all neighbors. An OSPF router shares
routing updates with adjacent neighbors only. An adjacency is a relationship
between two adjacent routers that permits them to directly exchange routing
updates.
OSPF uses the concept of DR and BDR in the broadcast network to minimize the
number of adjacencies formed. In a broadcast network, one router is selected as DR.
A designated router shares routing updates with all routers.
A BDR is a hot standby router for the DR. The BDR keeps the backup copy of all
databases running on DR. If DR fails, BDR immediately takes over the position of
DR.
Hello protocol
OSPF uses the hello protocol to discover OSPF routers in the network and maintain
the relationship with neighbors. Hello packets are sent to multicast address
224.0.0.5.
OSPF database
LSA is an OSPF data packet containing link-state and routing information. OSPF
routers share LSA packets only with established adjacent routers.
LSDB
An LSDB is a collection of all LSAs received by the OSPF router. Each LSA has a unique sequence number. OSPF stores an LSA in the LSDB with its sequence number. Adjacent routers maintain the same LSDB.
OSPF routers share routing information only with neighbors. OSPF uses hello packets to discover neighbors in a segment. A hello packet contains some essential configuration values that must be the same on both routers that want to build an OSPF neighborship. In this tutorial, we will explain these configuration values in detail with examples.
In order to become OSPF neighbors, the following values must match on both routers.
Area ID
Authentication
Hello and Dead Intervals
Stub Flag
MTU Size
This tutorial is the second part of our article “OSPF Routing Protocol Explained
with examples". You can read other parts of this article here.
This tutorial is the third part of this article. OSPF adjacency process goes through the seven states; OSPF
State down, OSPF State Init, OSPF State two ways, OSPF State Exstart, OSPF State Exchange, OSPF
State Loading and OSPF State full. This part explains these states with DR BDR selection process in detail
with examples.
This tutorial is the fourth part of this article. The configuration part of OSPF includes the process ID, Area ID, and wildcard mask, which make its setup a little bit harder. This part explains these parameters in detail with examples.
This tutorial is the last part of this article. In this part we will explain OSPF metric component bandwidth,
Delay, Load, Reliability and MTU with cost calculation formula in detail with examples.
Area ID
OSPF uses the area concept to scale to an enterprise-size network. I have explained OSPF areas in the first part of this article. Just for reference, OSPF areas create a logical boundary for routing information. By default, routers do not share routing information beyond the area. So, in order to become neighbors, two routers must belong to the same area. Here, one confusing fact needs to be cleared up. An area is associated with a specific interface, not with the entire router. This allows us to configure the router in multiple areas. For example, a router that has two interfaces, a Serial interface and a FastEthernet interface, can run the Serial interface in one area and the FastEthernet interface in another area. It means the link that connects two routers, including the interfaces at both ends, needs to be in the same area. Besides this, the interfaces should have the same network ID and subnet mask.
Authentication
To enhance the security of the network, OSPF allows us to configure a password for a specific area. Routers that have the same password are eligible for neighborship. If you want to use this facility, you need to configure the password on all routers that you want to include in the network. If you skip any router, it will not be able to form an OSPF neighborship.
Suppose that our network has two routers, R1 and R2. Both routers are connected with a direct link and meet all criteria mentioned in the first requirement. What if I configure a password on R1 and leave R2 as it is? Will R1 form a neighborship with R2?
Well, in this situation the neighborship will not take place. When both routers see each other’s hello packet in the segment, they try to match all configured values, including the password field. One packet has a value in the password field while the other has nothing in it. In this case, the routers will simply ignore each other’s packets.
Hello Interval
Hello packets are special OSPF packets that are used to discover neighbors in the same segment. Once a neighborship is built, the same hello packets are used to maintain the neighborship. Hello packets contain all the necessary information that is required to form a neighborship. Hello packets are generated and distributed at the hello interval via multicast. The hello interval is the length of time, in seconds, between hello packets. The default hello interval is 10 seconds.
Dead Intervals
As we already know once neighborship is built, hello packets are used to maintain
the neighborship.
So a router must see hello packets from a neighbor within a particular time interval. This time interval is known as the dead interval. The dead interval is the number of seconds that a router waits for a hello packet from a neighbor before declaring it dead.
The default dead interval is 40 seconds. If a router does not receive a hello packet from a neighbor within 40 seconds, it will declare that neighbor dead. When this happens, the router will propagate this information to the other OSPF neighboring routers via an LSA message.
The hello and dead intervals must be the same between two neighbors. If either of these intervals is different, the neighborship will not form.
Stub Flag
This value indicates whether the sending router belongs to a stub area or not. Routers that want to build an OSPF neighborship must have the same stub area flag.
Just like other areas, a stub area has a specific meaning in the OSPF hierarchical design.
Configuring a stub area reduces the size of the topology table inside that area. Thus, routers running in this area require less memory.
MTU
Consider a situation where the MTU setting between two OSPF routers does not match. If the router with the higher MTU sends a packet larger than the MTU set on the neighboring router, the neighboring router will ignore this packet. This behavior creates a serious problem for database updates. Database updates are heavy in nature. Once an update becomes larger than the configured MTU setting, it needs to be split. In the case of a mismatched MTU, a database update may lose a few bytes. Due to this, OSPF will ignore that update and the databases cannot synchronize. The neighborship will be stuck in the Exstart/Exchange state.
It is always worth spending a little extra time matching the optional values along with the compulsory values. Matching configuration values will make troubleshooting easier.
1. Down state
2. Attempt/Init state
3. Two ways state
4. Exstart state
5. Exchange state
6. Loading state
7. Full state
This tutorial is the third part of our article “OSPF Routing Protocol Explained with
examples". You can read other parts of this article here.
This tutorial is the first part of this article. In this part we explained basic terminology of OSPF such as
Feature , Advantage and Disadvantage, Autonomous System, Area concept, ABR, IR, Link, State ,LSA
and LSDB with example.
This tutorial is the second part of this article. OSPF neighborship is built between two routers only if
configuration value of Area ID, Authentication, Hello and Dead interval, Stub Area and MTU are matched.
This part explains these parameters and OSPF adjacency in detail with examples.
This tutorial is the fourth part of this article. The configuration part of OSPF includes the process ID, Area ID, and wildcard mask, which make its setup a little bit harder. This part explains these parameters in detail with examples.
This tutorial is the last part of this article. In this part we will explain OSPF metric component bandwidth,
Delay, Load, Reliability and MTU with cost calculation formula in detail with examples.
If you are an intermediate or advanced learner, start this tutorial wherever you want. But if you are a beginner or a Cisco exam candidate, I suggest you go through the entire article without skipping any section. Believe me, OSPF is the most complex routing protocol among the routing protocols that you will study in the CCNA.
Let’s understand these states with a simple example. Assume that our network has
two routers running OSPF routing protocol. Routers are connected with each other
via serial link. We just turned on both routers simultaneously.
Down state
At this point, both routers have no information about each other. R1 does not know which protocol is running on R2, and vice versa: R2 has no clue about R1. In this stage, OSPF learns about the local interfaces that are configured to run the OSPF instance.
In the down state, routers prepare themselves for the neighborship process. In this state, routers choose the RID (Router ID). The RID plays a big role in the OSPF process. Before we move to the next state, let’s understand what the RID is.
RID
The RID is a unique identifier of a router in an OSPF network. It must be unique within the autonomous system. Routers identify each other through the RID in the AS. OSPF can obtain the RID in three ways, in the following order of preference:
1. Manual configuration
2. Loopback interface IP configuration
3. Active interface IP configuration
Manual configuration
Because the RID plays a significant role in the network, OSPF allows us to configure it manually. The RID is 32 bits long. An IP address is also 32 bits in length, so we can use an IP address as the RID. This gives us more flexibility over the RID. For example, we can use a simple and sequential IP scheme such as 1.1.1.1 for R1, 1.1.1.2 for R2, 1.1.1.3 for R3, 1.1.1.4 for R4, 1.1.1.5 for R5, and so on.
Router(config)#router ospf 1
Router(config-router)#router-id ip_address
If we have assigned the RID manually, OSPF will not look at the next two options. Suppose we did not assign it through the command. In this situation, OSPF will look at the next option to find the RID.
Loopback interface IP configuration
If a loopback interface is configured, OSPF will choose its IP address as the RID. If multiple loopback interfaces are configured, the highest IP address will be chosen from all configured loopback interfaces.
If no loopback interface is configured, OSPF will look at the next and last possible place to choose the RID.
Active interface IP configuration
If no loopback interface is configured, OSPF chooses the highest IP address from all active physical interfaces as the RID. This option has several drawbacks that may force OSPF to recalculate the RID; for example, the interface whose IP address was chosen may go down, or we may enable or disable interfaces during troubleshooting.
Key points
OSPF follows the sequence of options (manual configuration => loopback interface => active interface) while selecting the RID. Once a RID is found, it does not look at the next option.
OSPF will choose an IP address only from an operational interface. Operational means the interface is listed with line up and line protocol up in the output of the show ip interface brief command.
When multiple IP addresses are available, OSPF will always pick the highest IP address for the RID.
For network stability, we should always set the RID either with the router-id command or by using a loopback interface.
By default, a router chooses the OSPF RID when OSPF initializes. Once the RID is selected, the router uses that RID until the next reboot.
OSPF will not consider any change in the RID that we make after initialization. We have two options to implement a new RID: either reboot the router or clear the OSPF process with the clear ip ospf process command.
If OSPF fails to select a RID, it will halt the OSPF process. We cannot use the OSPF process without a RID.
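The selection order in the key points above can be sketched in a few lines of Python. The interface data is invented for the example; only the order of preference matters.

def select_rid(manual_rid, loopback_ips, active_interface_ips):
    """Pick the OSPF RID: manual value first, then the highest loopback
    IP address, then the highest active physical interface IP address."""
    def value(ip):
        return tuple(int(octet) for octet in ip.split("."))
    if manual_rid:
        return manual_rid
    if loopback_ips:
        return max(loopback_ips, key=value)
    if active_interface_ips:
        return max(active_interface_ips, key=value)
    return None   # no RID available: the OSPF process cannot start

print(select_rid(None, ["1.1.1.1"], ["192.168.1.245", "10.0.0.1"]))   # 1.1.1.1
print(select_rid(None, [], ["192.168.1.245", "10.0.0.1"]))            # 192.168.1.245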
In the down state, a router does the following:
Attempt/Init state
The neighborship building process starts from this state. R1 multicasts its first hello packet so that other routers in the network can learn about the existence of R1 as an OSPF router. This hello packet contains the Router ID and some essential configuration values such as the area ID, hello interval, dead interval, stub flag, and MTU. The essential configuration values must be the same on routers that want to build an OSPF neighborship.
In the previous part of this article, I explained the essential configuration values in detail with examples. For this tutorial, I assume that these values match on both routers. If the essential configuration values match, R2 will add R1 to its neighbor table.
In the Init state, routers do the following:
R1 generates a hello packet with its RID and the essential configuration values and sends it out from all active interfaces.
The hello packets are sent to the multicast address 224.0.0.5.
R2 receives this packet.
R2 reads the RID from the packet and looks in its neighbor table for an existing entry.
If a match is found, R2 skips the neighborship building process and resets the dead interval timer for that entry.
If OSPF does not find a match in the neighbor table, it considers R1 (the sender router) a possible OSPF neighbor and starts the neighborship building process.
R2 matches its essential configuration values with the values listed in the packet.
If all necessary configuration values match, R2 adds R1 to its neighbor table.
At this moment, R1 has no idea about R2. R1 will learn about R2 when R2 responds.
Before we enter the third state, let’s have a quick look at the Attempt state.
Attempt
In non-broadcast multi-access environments such as Frame Relay and X.25, OSPF uses the Attempt state instead of the Init state. OSPF uses this state only if neighbors are statically configured with the neighbor command. In this situation, it does not have to discover them dynamically. Since it already knows the neighbors, it uses unicast instead of multicast in this state.
Once a neighborship is built, OSPF uses hello packets as keepalives. If a router does not receive a hello packet from a particular neighbor within the dead interval, it changes that neighbor's state from Full to Down. After changing the state, it makes an effort to contact the neighbor by sending hello packets. This effort is made in the Attempt state.
Basically, both the Init and Attempt states describe a similar situation where one router has sent a hello packet and is waiting for a response.
Two-way state
If the essential configuration values match, R2 adds R1 to its neighbor table and replies with its own hello packet. Since R2 knows the exact address of R1, it uses unicast for the reply. Besides the RID and configuration values, this packet also contains R2’s neighbor table data. As we know, R2 has already added R1 to its neighbor table. So when R1 sees R2’s neighbor table data, R1 will also see its own name in this data. This assures R1 that R2 has accepted its neighborship request.
At this point:-
R2 has checked all essential configuration values listed in hello packet which it
received from R1.
R2 is ready to build neighborship with these parameters.
R2 has added R1 in its neighbor table.
To continue the neighborship process, R2 has replied with its hello packet.
R1 has received a reply from neighbor, with its own RID listed in R2’s neighbor table.
Now it is R1’s turn to take action on R2’s reply. This reply is based on the hello packet R1 received from R2. As we know, this hello packet contains one additional field, the neighbor table data field, which indicates that this is not a regular neighbor discovery hello packet. This packet is a reply to R1's own request.
R1 will take the following actions:
It reads the RID from the hello packet and looks in its neighbor table for an existing entry.
If a match for the RID is found in the neighbor table, it resets the dead interval timer for that entry.
If a match is not found in the neighbor table, it reads the essential configuration values from the packet.
It matches the configuration values with its own values. If the values match, it adds R2’s RID to its neighbor table.
If the packet contains neighbor table data with R1's own RID, R1 considers that a request to enter the two-way state.
R1 replies with a hello packet that contains its own neighbor table data.
This packet is a confirmation of the two-way state.
Fine, our routers are neighbors now. They are ready to exchange routing information.
I will explain the terms adjacencies, DR, BDR and AllSPFRouters shortly.
Broadcast Networks
Broadcast networks are capable of connecting more than two devices. Ethernet and FDDI are examples of broadcast-type networks. In this type of network:
NBMA
Non-broadcast multi-access networks are also capable of connecting more than two devices, but they do not have broadcast capability. X.25 and Frame Relay are examples of NBMA-type networks. In this type of network:
As the network does not have broadcast capability, dynamic neighbor discovery is not possible.
OSPF neighbors must be defined statically.
All OSPF packets are unicast.
DR and BDR are required.
Point to multipoint
Point-to-multipoint is a special implementation of an NBMA network where the network is configured as a collection of point-to-point links. In this type of network:
So what do the DR and BDR actually do? Why do we need them in our network?
DR and BDR
In a network type that requires a DR (designated router) and a BDR (backup designated router), OSPF routers do not share routing information directly with each other. To minimize the routing information exchange, they select one router as the designated router (DR) and another router as the backup designated router (BDR). The remaining routers are known as DROTHERs.
All DROTHERs share routing information with the DR. The DR shares this information back with all DROTHERs. The BDR is a backup router. If the DR goes down, the BDR immediately takes the place of the DR, and a new BDR is elected.
The main reason behind this mechanism is that the routers have a central point for routing information exchange. Thus, they do not need to update each other individually. A DROTHER only needs to update the central point (the DR), and the other DROTHERs will receive this update from the DR.
For example following figure illustrates a simple OSPF network. In this network R4 is
selected as DR and R5 is selected as BDR. DROTHERs (R1, R2 and R3) will share
routing information with R4 (DR) and R5 (BDR), but they will not share routing
information with each other. Later DR will share this information back to all
DROTHERs.
DR and BDR Election process
OSPF uses a priority value to select the DR and BDR. The OSPF router with the highest priority becomes the DR. The router with the second-highest priority becomes the BDR. If there is a tie, the router with the highest RID is chosen.
The priority value is 8 bits in length. The default priority value is 1. We can set any value in the range 0 to 255. We can change it from interface sub-configuration mode with the ip ospf priority command.
For example, the following figure illustrates a simple OSPF network. In this network, we have five routers. We do not want R3 to become the DR or BDR, so we changed its default priority value to 0. Now let’s see how these routers select the DR and BDR.
The first condition says, “Arrange all routers in high-to-low order and pick the highest for DR and the second highest for BDR.” If we arrange our routers in high-to-low order, R3 stands last. The remaining routers have an equal priority value. So, at the end of this condition, we have a tie between four routers.
The second condition says, “If there is a tie, use the RID value to choose.” In our network, we have a tie between four routers, so our routers will use the RID to elect the DR and BDR. Arranging the routers in high-to-low order gives us the DR and BDR.
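The election rules can be sketched as follows. The router names, priorities, and RIDs are invented to mirror the example above; a router with priority 0 never takes part in the election.

# (name, priority, RID) for the five routers in the example.
routers = [
    ("R1", 1, "1.1.1.1"),
    ("R2", 1, "1.1.1.2"),
    ("R3", 0, "1.1.1.3"),   # priority 0: never becomes DR or BDR
    ("R4", 1, "1.1.1.4"),
    ("R5", 1, "1.1.1.5"),
]

def rid_value(rid):
    return tuple(int(octet) for octet in rid.split("."))

candidates = [r for r in routers if r[1] > 0]
# Highest priority wins; the highest RID breaks a tie.
candidates.sort(key=lambda r: (r[1], rid_value(r[2])), reverse=True)
dr, bdr = candidates[0][0], candidates[1][0]
print(dr, bdr)   # R5 R4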
As we know, there are two types of networks: networks that do not require a DR and BDR for the exchange process and networks that do require a DR and BDR for the exchange process.
In the first type, all routers exchange routing information with each other. In the second type, DROTHERs exchange routing information only with the DR and BDR.
Routers that exchange routing information are known as adjacent routers. The relationship between two adjacent routers is known as an adjacency. This terminology is associated with interfaces.
For example, the following figure illustrates an NBMA network running OSPF. In this network:
R3 will build an adjacency with R1, so in this relationship they are considered Adjacent.
R3 will not build an adjacency with R4, so in this relationship they are considered only DROTHERs.
In a network that doesn’t require a DR and BDR, all routers are considered Adjacent, and the relationship between them is considered an Adjacency.
Only adjacent routers will enter the next states to build the adjacency.
Exstart state
Routers that have decided to build an adjacency form a master/slave relationship. In each adjacency, the router with the higher RID becomes the master and the other becomes the slave. Do not mix up the master/slave relationship with the DR/BDR/DROTHER relationship. Both terms look similar but have different meanings. The DR/BDR/DROTHER relationship is built in a segment and has a wider meaning, while the master/slave relationship is built between two interfaces that need to exchange routing information. The master/slave relationship has a limited purpose: it is used to decide which router will start the exchange process. The master always starts the exchange process.
Once the routers settle on master/slave, they establish the initial sequence numbers that will be used in the routing information exchange process. Sequence numbers ensure that routers get the most accurate information.
Exchange state
In the Exchange state, the master and slave decide how much information needs to be exchanged. A router that has more than one interface may learn the same network information from different sources. An OSPF router is smart enough to filter updates before receiving them: it asks only for the updates it does not already have. In this state, routers filter the updates that need to be exchanged.
Before we learn how routers filter this information, let’s understand a few related terms.
LSA and LSDB are explained in the first part of this tutorial. To maintain the flow of this article, I am including a summary of these terms here again.
LSA
A link-state advertisement (LSA) is a data packet that contains link-state and routing information. OSPF uses it to share and learn network information.
LSDB
Every OSPF router maintains a link-state database (LSDB). The LSDB is the collection of all LSAs received by a router. Every LSA has a sequence number. OSPF stores an LSA in the LSDB with this sequence number.
DBDs
Database description packets (also referred to as DBDs or DDPs) contain a list of LSAs. This list includes the link-state type, the cost of the link, the ID of the advertising router, and the sequence number of the link. Make sure you understand this term correctly: a DBD is only a list of all the LSAs from the router's database. It does not include the full LSAs.
In this state, routers exchange DBDs. Through DBDs, routers can learn which LSAs the other router already has. For example, in the following network, R1 has the A1, A2, and B2 LSAs in its LSDB. So it sends a list of these LSAs to R2. This list is a DBD. R2 sends an acknowledgment of receiving the list with an LSAck. In the same way, R2 sends its DBD to R1, and R1 acknowledges it with its own LSAck.
LSR
Upon receiving a DBD, a router compares it with its own LSDB. Thus, it learns what it needs to request. For example, R1 received a checklist (DBD) of A1 and B1. When it compares this list with its own link-state database (LSDB), it learns that it already has A1, so it does not need to request this LSA again. But it does not have B1, so it needs to request this LSA. After a complete comparison, both routers prepare a list of the LSAs that they do not have in their own LSDB. This list is known as the LSR (Link State Request).
What the other router has (DBD) – What I have (LSDB) = What I need to request (LSR)
At the end of this state, both routers have a list of the LSAs that need to be exchanged.
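The comparison can be expressed as a simple set difference. In this Python sketch, the LSA names (A1, A2, B1, B2) follow the example above; the sets stand in for the LSDB and the received DBD.

lsdb_r1 = {"A1", "A2", "B2"}   # what R1 already has in its LSDB
dbd_from_r2 = {"A1", "B1"}     # the list of LSAs R2 advertised in its DBD

# What the other router has minus what I have = what I need to request (LSR).
lsr_r1 = dbd_from_r2 - lsdb_r1
print(lsr_r1)                  # {'B1'}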
Loading state
In this state, the actual routing information is exchanged. Routers exchange the LSAs from the LSR list.
Routers use LSUs (link-state updates) to exchange the LSAs. Each LSA contains routing information about a particular link. Routers also maintain a retransmission list to make sure that every sent LSA is acknowledged.
For example, the following figure illustrates the loading state of the above example. R1 sent an LSU that contained two LSAs but received an acknowledgment for only one, so it had to resend the lost LSA again.
This exchange process continues as long as a router has any unsent LSA in its LSR list.
Full state
The Full state indicates that both routers have exchanged all LSAs from the LSR list. Now they have identical LSDBs.
Adjacent routers remain in this state for the lifetime of the adjacency. This state is also referred to as adjacency. If any change occurs in the network, the routers go through this process again.
Maintaining adjacency
That’s all for this part. In the next part, I will explain the configuration part of OSPF.
To keep this tutorial simple, I used the terms neighbor and adjacency synonymously. Technically, both terms are related but have different meanings, especially in OSPF. Neighboring routers are defined in RFC 2328.
Neighboring routers are routers that have interfaces on a common network.
For demonstration, we will use the Packet Tracer network simulator. You can use real Cisco devices or any other network simulator software to follow this guide.
This tutorial is the fourth part of our article “OSPF Routing Protocol Explained with
examples". You can read other parts of this article here.
This tutorial is the first part of this article. In this part we explained basic terminology of OSPF such as
Feature , Advantage and Disadvantage, Autonomous System, Area concept, ABR, IR, Link, State ,LSA
and LSDB with example.
This tutorial is the second part of this article. OSPF neighborship is built between two routers only if
configuration value of Area ID, Authentication, Hello and Dead interval, Stub Area and MTU are matched.
This part explains these parameters and OSPF adjacency in detail with examples.
This tutorial is the last part of this article. In this part we will explain OSPF metric component bandwidth,
Delay, Load, Reliability and MTU with cost calculation formula in detail with examples.
Initial IP Configuration
Assign IP address to PC
Double-click PC0, click the Desktop menu item, and click IP Configuration. Assign the IP address 10.0.0.2/8 to PC0.
Repeat the same process for Server0 and assign it the IP address 20.0.0.2/8.
Double-click Router0, click CLI, and press the Enter key to access the command prompt of Router0.
Router>enable
Router# configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
Router(config)#
From global configuration mode, we can enter interface mode. From there, we can configure the interface. The following commands assign IP addresses to FastEthernet0/0 and FastEthernet0/1.
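The exact addresses depend on your topology. As a minimal sketch, assuming FastEthernet0/0 faces PC0's 10.0.0.0/8 LAN and FastEthernet0/1 uses a hypothetical 192.168.1.0/30 link address:
Router(config)#interface FastEthernet 0/0
Router(config-if)#ip address 10.0.0.1 255.0.0.0
Router(config-if)#no shutdown
Router(config-if)#exit
Router(config)#interface FastEthernet 0/1
Router(config-if)#ip address 192.168.1.1 255.255.255.252
Router(config-if)#no shutdown
Router(config-if)#exit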
A serial interface needs two additional parameters: clock rate and bandwidth. Every serial cable has two ends, DTE and DCE. These parameters are always configured at the DCE end.
We can use the show controllers command from privileged mode to check which end of the cable is attached to the interface.
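For example, on the router holding the DCE end of the cable, the commands might look like the following; the values 64000 and 64 are examples only and should match your lab:
Router#show controllers serial 0/0/0
! the output includes a line indicating a DCE cable or a DTE cable
Router#configure terminal
Router(config)#interface serial 0/0/0
Router(config-if)#clock rate 64000
Router(config-if)#bandwidth 64
Router(config-if)#no shutdown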
Router1
Router2
Router6
Router5
Router3
Router4
Great job, we have finished half of our journey. Now the routers have information about the networks configured on their own interfaces. Routers will not exchange this information with each other on their own. We need to implement the OSPF routing protocol, which will make them share this information.
To stay on the same track, I have uploaded my practice topology. Use it if you want to skip the above IP configuration part.
This command enables the OSPF routing protocol on the router. The process ID is a positive integer; we can use any number from 1 to 65,535. The process ID is locally significant. We can run multiple OSPF processes on the same router, and the process ID is used to differentiate between them. The process ID does not need to match on all routers.
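For reference, the general form of these commands on a Cisco router is the following; the process ID 1 is only an example:
Router(config)#router ospf 1
Router(config-router)#network <network-number> <wildcard-mask> area <area-number>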
Network number
The network number is a network ID. We can use either a specific host IP address or a network IP address. For example, we can use 192.168.1.1 (a host IP address) or 192.168.1.0 (a network IP address). When targeting a specific interface, we usually use the host IP address configured on that interface.
When targeting multiple interfaces, we use the network IP address. In that case, any interface that belongs to the specified network ID will be selected.
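For example, assuming an interface is configured with the address 192.168.1.1/24, either of the following statements puts it in area 0; the first targets that specific interface, the second targets every interface in the 192.168.1.0 network:
Router(config-router)#network 192.168.1.1 0.0.0.0 area 0
Router(config-router)#network 192.168.1.0 0.0.0.255 area 0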
Wildcard mask
A wildcard mask is used with the network ID to filter the interfaces. A wildcard mask is different from a subnet mask. A subnet mask is used to separate the network portion and the host portion of an IP address, while a wildcard mask tells OSPF which part of the network address must be matched. Wildcard masks are explained with examples in the access list tutorials of this category.
Key points
0 (decimal, octet format) in a wildcard mask indicates that the corresponding octet of the network address must match exactly.
255 (decimal, octet format) in a wildcard mask indicates that we don't care about the corresponding octet of the network address.
0 (binary, bit format) in a wildcard mask indicates that the corresponding bit of the network address must match exactly.
1 (binary, bit format) in a wildcard mask indicates that we don't care about the corresponding bit of the network address.
For example, the wildcard mask 0.0.0.255 means that the first three octets must match exactly and the last octet can be anything.
OSPF is a classless protocol. With wildcard masks, we can also filter subnetted networks. In a classless implementation, we usually use subnetted networks. For example, consider the following figure.
We have four networks, 172.168.1.0/24, 172.168.2.0/24, 172.168.3.0/24, and 172.168.4.0/24, subnetted from the single class B network 172.168.0.0/16. A classful configuration does not understand the concept of subnetting. In a classful configuration, all these networks belong to a single network. A classful configuration works only within the default boundary of the mask. The default boundary of this address is 16 bits, so a classful routing protocol will match only the first 16 bits (172.168.x.y) of the network address. A classful routing protocol such as RIP cannot distinguish between these subnetted networks.
A classless routing protocol such as OSPF goes beyond the default boundary of the mask and works well with subnetted networks. With a wildcard mask, we can easily filter subnetted networks.
With wildcards, we are no longer limited to the default boundaries. We can match subnetted networks as well as default networks.
For example, suppose we want to exclude the serial interfaces in the above configuration. We can use a wildcard mask of 0.0.0.255 to match only the /24 subnets.
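A minimal sketch of such statements for the four subnets in the figure, assuming they should all go in area 0:
Router(config)#router ospf 1
Router(config-router)#network 172.168.1.0 0.0.0.255 area 0
Router(config-router)#network 172.168.2.0 0.0.0.255 area 0
Router(config-router)#network 172.168.3.0 0.0.0.255 area 0
Router(config-router)#network 172.168.4.0 0.0.0.255 area 0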
Let's take one more example. If we use the following network command, which interfaces would be selected?
If you are unfamiliar with wildcard masks, I suggest you check our tutorials on access list configuration in this category. In those tutorials, wildcard masks are explained in detail with examples.
For this tutorial, let's move on to the third argument. The third argument that the network command accepts is the area number. This parameter tells the router to put the matched interfaces in the specified area. OSPF areas are explained in the second part of this article.
Now we know the essential commands for configuration. Let’s implement them in our
network.
OSPF configuration
Router0
Router1
Router2
Router6
Router5
Router4
Router3
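The per-router configurations above all follow the same pattern; only the network statements differ. As a hedged illustration, Router0's configuration would look like the following, with the 10.0.0.0/8 statement covering PC0's LAN and a placeholder statement standing in for its other connected networks (replace it with the networks used in your topology):
Router>enable
Router#configure terminal
Router(config)#router ospf 1
Router(config-router)#network 10.0.0.0 0.255.255.255 area 0
Router(config-router)#network <network-id> <wildcard-mask> area 0
Router(config-router)#exit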
That's it. Our network is ready to take advantage of OSPF routing. To verify the setup, we will use the ping command. The ping command is used to test the connectivity between two devices.
We have two routes between the source and the destination. The tracert command is used to find out which route is actually used to reach the destination.
Access the command prompt of PC1 and use the ping command to test connectivity with Server0. After that, use the tracert command to print the path taken.
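Assuming Server0 still has the address 20.0.0.2/8 assigned earlier, the verification from the PC's command prompt looks like this:
ping 20.0.0.2
tracert 20.0.0.2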
If you did not get the same output as explained in this tutorial, use this configured topology to cross-check your topology and find out the reason.
Download OSPF Practice Topology with OSPF configuration
Summary
Command Description
Router(config)#interface loopback 0 – Creates a loopback interface and moves into its interface configuration mode.
Router(config-if)#ip ospf priority 100 – Used to influence the DR/BDR selection process. The valid range is 0 to 255. A priority of 0 means the router never participates in the DR/BDR election; a higher priority value means a higher chance of becoming DR/BDR.
Router(config-if)#bandwidth 256 – Used to influence the route metric (cost). Cost is the inverse of bandwidth, so a higher bandwidth means a lower cost. Here 256 means 256 Kbps.
Router(config-if)#ip ospf hello-interval 15 – Sets the hello interval timer to 15 seconds. The hello timer must match on both neighbors.
Router(config-if)#ip ospf dead-interval 60 – Sets the dead interval timer to 60 seconds. The dead interval timer must also match on both neighbors.
Router#show ip route ospf – Displays all routes learned through OSPF from the routing table.
Router#show ip ospf interface – Displays information about all OSPF-enabled interfaces.
Router#show ip ospf interface serial 0/0/0 – Displays OSPF information about the Serial 0/0/0 interface.
Router#show ip ospf neighbor – Lists all OSPF neighbors with basic information.
Router#show ip ospf neighbor detail – Lists all OSPF neighbors with detailed information.
That's all for this part. In the next part of this article, I will explain how OSPF calculates the shortest path to a destination.
OSPF Metric Cost Calculation Formula Explained
This tutorial explains the OSPF metric calculation formula and the SPF algorithm step by step in detail with examples. Learn how the SPF (Shortest Path First) algorithm calculates the cumulative cost of a route to build the Shortest Path Tree (SPT) and how the OSPF metric formula can be manipulated by changing the reference bandwidth value.
After the database is updated, the router selects a single best route for each destination from all available routes. The router uses the SPF algorithm to select the best route.
Just like other routing algorithms, SPF also uses a metric component, called cost, to select the best route for the routing table.
This tutorial is the last part of our article "OSPF Routing Protocol Explained with examples". You can read the other parts of this article here.
This tutorial is the first part of this article. In this part, we explained basic OSPF terminology such as features, advantages and disadvantages, autonomous systems, the area concept, ABR, IR, link, state, LSA, and LSDB with examples.
This tutorial is the second part of this article. OSPF neighborship is built between two routers only if the configured values of Area ID, authentication, hello and dead intervals, stub area flag, and MTU match. This part explains these parameters and OSPF adjacency in detail with examples.
This tutorial is the third part of this article. The OSPF adjacency process goes through seven states: Down, Init, Two-Way, Exstart, Exchange, Loading, and Full. This part explains these states along with the DR/BDR selection process in detail with examples.
This tutorial is the fourth part of this article. The configuration part of OSPF includes the process ID, area ID, and wildcard mask, which make its setup a little bit harder. This part explains these parameters in detail with examples.
OSPF Metric cost
Logically, a packet faces more overhead in crossing a 56 Kbps serial link than in crossing a 100 Mbps Ethernet link. Likewise, it takes less time to cross a higher-bandwidth link than a lower-bandwidth link. OSPF uses this logic to calculate the cost. Cost is inversely proportional to bandwidth: a higher bandwidth has a lower cost, and a lower bandwidth has a higher cost.
Key points
Now that we know the equation, let's do some math and figure out the default cost of some common interfaces.
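The standard OSPF cost formula is Cost = Reference bandwidth / Interface bandwidth, where the default reference bandwidth is 100 Mbps (100,000,000 bps). Using this formula, the default costs of some common interfaces work out as follows:
Serial (1544 Kbps): 100,000,000 / 1,544,000 = 64
Ethernet (10 Mbps): 100,000,000 / 10,000,000 = 10
FastEthernet (100 Mbps): 100,000,000 / 100,000,000 = 1
GigabitEthernet (1 Gbps): 100,000,000 / 1,000,000,000 = 0.1, rounded up to the minimum cost of 1
These values match the serial cost of 64 and the FastEthernet cost of 1 used in the calculations below.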
Best route for the routing table = the route that has the lowest cumulative cost
Summary
OSPF uses the SPT (Shortest Path Tree) to calculate the best route for the routing table.
An SPT cannot grow beyond the area. So if a router has interfaces in multiple areas, it needs to build a separate tree for each area.
The SPF algorithm calculates all possible routes from the source router to the destination network.
The cumulative cost is the sum of all the costs of the outgoing OSPF interfaces in the path.
While calculating the cumulative cost, OSPF considers only the outgoing interfaces in the path. It does not add the cost of incoming interfaces to the cumulative cost.
If multiple routes exist, SPF compares their cumulative costs. The route that has the lowest cumulative cost is chosen for the routing table.
Now we have a basic understanding of the SPF algorithm. In the remaining part of this tutorial, we will learn how the SPF algorithm selects the best route from the available routes.
Run the show ip route ospf command from privileged mode to view all routes learned through the OSPF protocol.
As the output shows, Router0 has six OSPF routes in its routing table. We will go through each route and find out why OSPF chose it as the best route for the routing table.
Route 20.0.0.0
We have three routes to reach the 20.0.0.0/8 network. Let's calculate the cumulative cost of each route.
Via route R0 – R3 – R4 – R6
Router / Exit Interface / Bandwidth / Metric Calculation
Via route R0 – R5 – R6
Router / Exit Interface / Bandwidth / Metric Calculation
Route 192.168.0.4
Via Route R0 – R1
R0’s Serial 0/0/0 cost (1562) + R1’s Serial 0/0/1 cost (1562) = 3124 (Cumulative
cost)
Via Route R0 – R3 – R4 – R6 – R2
R0’s Serial 0/0/1 cost (64) + R3’s Serial 0/0/0 cost (64) + R4’s Serial 0/0/1 cost (64)
+ R6’s Serial 0/0/0 cost (64) + R2’s Serial 0/0/1 cost (64) = 320 (Cumulative cost)
Via Route R0 – R5 – R6 – R2
R0's FastEthernet 0/1 cost (1) + R5's FastEthernet 0/0 cost (1) + R6's Serial 0/0/0 cost (64) + R2's Serial 0/0/1 cost (64) = 130 (Cumulative cost)
Among these routes, Route R0 – R5 – R6 – R2 has the lowest cost so it was picked
for routing table.
Route 192.168.0.8
Via Route R0 – R1
R0’s Serial 0/0/0 cost (1562) + R1’s Serial 0/0/1 cost (1562) + R2’s Serial 0/0/0
(1562) = 4686 (Cumulative cost)
Via Route R0 – R3 – R4 – R6
R0’s Serial 0/0/1 cost (64) + R3’s Serial 0/0/0 cost (64) + R4’s Serial 0/0/1 cost (64)
+ R6’s Serial 0/0/0 cost (64) = 256 (Cumulative cost)
Via Route R0 – R5 – R6
R0's FastEthernet 0/1 cost (1) + R5's FastEthernet 0/0 cost (1) + R6's Serial 0/0/0 cost (64) = 66 (Cumulative cost)
Among these routes, Route R0 – R5 – R6 has the lowest cost so it was picked for
routing table.
Route 192.168.1.4
Via Route R0 – R1 – R2 – R6
R0’s Serial 0/0/0 cost (1562) + R1’s Serial 0/0/1 (1562) + R2’s Serial 0/0/0 (1562) +
R6’s FastEthernet 0/0 (1) = 4687 (Cumulative cost)
Via R0 – R3 – R4 – R6
R0’s Serial 0/0/1 cost (64) + R3’s Serial 0/0/0 cost (64) + R4’s Serial 0/0/1 cost (64)
+ R6’s FastEthernet 0/0 (1) = 193
Via R0 – R5
R0’s FastEthernet 0/1 cost (1) + R5’s FastEthernet 0/0 cost (1) = 2
Among these routes, Route R0 – R5 has the lowest cost so it was selected as the
best route.
Route 192.168.2.4
Via Route R0 – R1 – R2 – R6 – R4
R0’s Serial 0/0/0 cost (1562) + R1’s Serial 0/0/1 cost (1562) + R2’s Serial 0/0/0 cost
(1562) + R6’s Serial 0/0/1 cost (64) + R4’s Serial 0/0/0 cost (64) = 4814
Via Route R0 – R5 – R6 – R4
R0’s FastEthernet 0/1 cost (1) + R5’s FastEthernet 0/0 cost (1) + R6’s Serial 0/0/1
(64) + R4’s Serial 0/0/0 cost (64) = 130
Via Route R0 – R3
R0’s Serial 0/0/1 cost (64) + R3’s serial 0/0/0 cost (64) = 128
Among these routes, Route R0 - R3 has the lowest cost for destination 192.168.2.4.
Route 192.168.2.8
Via Route R0 – R3 – R4
R0’s Serial 0/0/1 cost (64) + R3’s Serial 0/0/0 cost (64) + R4’s Serial 0/0/1 cost (64)
= 192
Via Route R0 – R1 – R2 – R6
R0's Serial 0/0/0 cost (1562) + R1's Serial 0/0/1 cost (1562) + R2's Serial 0/0/0 cost (1562) + R6's Serial 0/0/1 cost (64) = 4750
Via Route R0 – R5 – R6
R0’s FastEthernet 0/1 cost (1) + R5’s FastEthernet 0/0 cost (1) + R6’s Serial 0/0/1
cost (64) = 66
Route R0 – R5 – R6 has the lowest cost value.
After selecting the best route for each destination, the OSPF network looks like the following figure.
OSPF Route cost Manipulation
We can manipulate the OSPF route cost in two ways.
The interface mode command bandwidth is used to set the bandwidth of a supported interface.
If the bandwidth is set with this command, OSPF will use it. If the bandwidth is not set, OSPF will use the interface's default bandwidth.
Let me clarify one more thing about bandwidth. Changing the default bandwidth with the bandwidth command does not change the actual bandwidth of the interface. Neither the default bandwidth nor the bandwidth set by the bandwidth command has anything to do with the actual layer one link bandwidth.
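For reference, the serial cost of 1562 that appears in the calculations below is what OSPF computes when an interface's bandwidth is set to 64 Kbps (100,000,000 / 64,000 = 1562). A minimal sketch of such a change on Router0's Serial 0/0/0 interface:
Router(config)#interface serial 0/0/0
Router(config-if)#bandwidth 64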
Via Route R0 – R1
R0’s Serial 0/0/0 cost (1562) + R1’s Serial 0/0/1 cost (1562) = 3124 (Cumulative
cost)
Via Route R0 – R5 – R6 – R2
R0's FastEthernet 0/1 cost (1) + R5's FastEthernet 0/0 cost (1) + R6's Serial 0/0/0 cost (64) + R2's Serial 0/0/1 cost (64) = 130 (Cumulative cost)
Via Route R0 – R3 – R4 – R6 – R2
R0’s Serial 0/0/1 cost (64) + R3’s Serial 0/0/0 cost (64) + R4’s Serial 0/0/1 cost (64)
+ R6’s Serial 0/0/0 cost (64) + R2’s Serial 0/0/1 cost (64) = 320 (Cumulative cost)
Among these routes, Route R0 – R5 – R6 – R2 has the lowest cost so it was picked
for routing table.
Well... which route would have been selected if we had used the default bandwidth?
Via Route R0 – R1
R0’s Serial 0/0/0 cost (64) + R1’s Serial 0/0/1 cost (64) = 128 (Cumulative cost)
Via Route R0 – R5 – R6 – R2
R0's FastEthernet 0/1 cost (1) + R5's FastEthernet 0/0 cost (1) + R6's Serial 0/0/0 cost (64) + R2's Serial 0/0/1 cost (64) = 130 (Cumulative cost)
Via Route R0 – R3 – R4 – R6 – R2
R0’s Serial 0/0/1 cost (64) + R3’s Serial 0/0/0 cost (64) + R4’s Serial 0/0/1 cost (64)
+ R6’s Serial 0/0/0 cost (64) + R2’s Serial 0/0/1 cost (64) = 320 (Cumulative cost)
Among these routes, Route R0 – R1 has the lowest cost value, so it would be selected for the routing table. Thus, by changing the interface bandwidth, we actually influenced the route selection process.
Route R2 – R3
In this route we have two exit points. Both exit points have the default 100 Mbps speed.
Route R2 – R1 – R3
In this route we have three exit points. Two exit points (R2 and R1) have a 1 Gbps link.
With the default reference bandwidth of 100 Mbps, any interface of 100 Mbps or faster gets a cost of 1, so OSPF cannot tell a FastEthernet link apart from a GigabitEthernet link. As a result, R2 will choose Route R2 – R3, which is not good.
Sadly, Packet Tracer does not include this command. To practice this command, please use other simulator software that supports it or use a real router.
Let's change the reference bandwidth to 1000 Mbps on all three routers using the following commands.
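On platforms that support it, the reference bandwidth is changed under the OSPF process; the value is given in Mbps and the process ID 1 is an assumption:
Router(config)#router ospf 1
Router(config-router)#auto-cost reference-bandwidth 1000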
Route R2 – R3
R2's FastEthernet cost (1000000000/100000000) = 10
Route R2 – R1 – R3
R2's GigabitEthernet cost (1000000000/1000000000 = 1) + R1's GigabitEthernet cost (1) = 2 (Cumulative cost)
In this case, Route R2 – R1 – R3 will be selected, which is the best route to the destination.
Routers connect different networks. They receive data packets from a network, read
the destination address of each data packet, and forward the data packet to the
destination network. To forward data packets, they learn all routes of the network
and store them into the routing table.
The routing table stores each route in a separate line as a separate entry. A routing
table entry contains the network address of the destination network and either the
name of the local interface connected to the destination network or the IP address of
the remote router that knows how to reach the destination network.
There are three methods to add entries to the routing table. These methods are
default or automatic, manual or static, and dynamic. In the default method, the router
automatically adds routing entries for its interfaces.
We have already discussed the default method of routing in the following tutorial.
We will discuss the dynamic method in the next tutorial. In this tutorial, we will discuss the static or manual method of routing.
Routes that are manually added by an administrator to the routing table are known
as static routes. In other words, a static route is a route that you manually add to
the router’s routing table.
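For example, a static route is added from global configuration mode with the ip route command, which takes the destination network, its subnet mask, and either the exit interface or the next-hop router's address. The addresses below are hypothetical:
Router(config)#ip route 40.0.0.0 255.0.0.0 20.0.0.2
! reach the network 40.0.0.0/8 via the next-hop router 20.0.0.2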
Advantages of static routing
Static routing allows the administrator to save money. In static routing, the router
does not use CPU and RAM to learn the routes and calculate the best route to each
destination. Since static routing does not put overhead on the router's CPU and
RAM, the administrator can use a cheaper router.
In static routing, routers do not exchange routing information. Since routers do not
exchange routing information, they save the network bandwidth. If in a network,
routers are connected through a paid WAN link, static routing can reduce the bill
amount that the network pays for WAN connectivity.
Static routing is the safest method of routing. The administrator manually adds
routes for authorized networks. Since the administrator manually decides which
network can reach which network, a network can only access the authorized
network.
Disadvantages of static routing
In static routing, since the administrator adds and manages all routes, the administrator must have in-depth knowledge of the internetwork.
To add all routes correctly, the administrator has to learn how each router is
connected to the network.
If the administrator changes the location of a router in the network, the administrator
has to update routing information on all routers manually.
If a link goes up or down, the administrator has to manually update this information on all routers. On a flapping link, this will cause a huge problem.
If you have a backup route, the router doesn't automatically switch to the backup route if the main route fails. The administrator has to reconfigure the router to use the backup route.
Static routing is a good option when the network size is small. In a small network,
static routing offers many benefits at the cost of little manual work. You can use
static routing to reduce the overhead from routers or save bandwidth on paid WAN
connections.
Static routing is not a good choice when the network size is big. In a big network,
where you have hundreds of routes, static routing is not scalable, since you would
have to configure each route and any redundant paths for that route on each router.
This tutorial is the first part of the tutorial "Static Routing Configuration,
Commands, and Concepts Explained". The other parts of this tutorial are the
following.
That's all for the first part of the tutorial. In this part, we discussed what static routing
and static routes are and learned the advantages and disadvantages of static
routing. In the next part of this tutorial, we will learn the types of static routes.
Connected Routes and Local Routes Explained
This tutorial explains how routers automatically manage routing information for active
interfaces. Learn the meaning of code C and code L in the routing table.
Routers automatically add and maintain the routing information for active interfaces.
A router uses the IP configuration of active interfaces to create the routing
information. The following steps describe this process step-by-step via an example.
If the administrator changes the IP address of the interface, the router automatically
updates the routing table entry to reflect the change.
The router can't forward packets from the disabled interface. Because of this, if the
administrator disables the interface, the router automatically removes the
corresponding routing table entry.
Currently, this router has no routing information. To verify this, print all routes. Now,
assign the IP address 10.0.0.1/8 to the GigabitEthernet0/0 interface and enable it.
When you enable the interface, the router checks the IP configuration of the interface
and creates the routing information from the IP configuration, but it does not add the
routing information to the routing table. To verify this, you can print all routes again.
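A minimal sketch of these steps on the router, using the interface name and address from the example above:
Router#show ip route
! no routes yet
Router#configure terminal
Router(config)#interface GigabitEthernet 0/0
Router(config-if)#ip address 10.0.0.1 255.0.0.0
Router(config-if)#no shutdown
Router(config-if)#end
Router#show ip route
! still no routes for GigabitEthernet0/0 until the interface's line protocol comes up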
The following image shows how to perform the above steps on the router.
The following table describes the meaning of each command used in the above
process.
Command Description
As you can see in the image above, the router did not add routing information for
the GigabitEthernet0/0 interface.
Can you guess why the router did not add the routing information to the routing
table?
Because the interface is not connected to any remote device. To verify this, you can
use the 'show interfaces interface_name' command. The first line of the output
displays the line status and protocol status of the interface.
As you can see in the above output, the interface is up (GigabitEthernet0/0 is up),
but it is not connected to any device (line protocol is down).
Once the interface is connected to a remote device and both its line status and protocol status are up, the router adds two routes for the interface. The router assigns the code letter C to the first route and the code letter L to the second route. The router uses the first route to forward packets out of the interface and the second route for packets sent to the interface's own address.
C – (Connected) route
A router uses interfaces to receive and forward packets. An interface and the remote
device that is connected to the interface can exchange packets only if they belong to
the same IP network. Because of this, when an administrator configures an IP
address on the interface, the router automatically assumes that the remote device
that is connected to the interface belongs to the same IP network.
From this IP address, the router assumes that the remote device connected to the
other side of GigabitEthernet0/0 belongs to network 10.0.0.0/8 and knows how to
access network 10.0.0.0/8.
Based on this assumption, the router automatically adds a route for the
network 10.0.0.0/8 and relates this route to the GigabitEthernet0/0 interface. The
router uses the letter C (Connected) to represent this type of route.
When the router receives a packet for a remote destination, it checks all available
routes. If a route is available for the remote destination, the router forwards the
packet from the interface that is mentioned in the route.
In our example, if the router receives a packet whose destination address belongs to the network 10.0.0.0/8, the router forwards the packet out of the GigabitEthernet0/0 interface.
L – (Local) route
A router also receives packets for local use. You can configure multiple IP
addresses on the router. Each configured IP address represents a specific object on
the router. The second route tells us where the particular IP address is available on
the router. The second route is known as the host route. The router uses the
letter L (Local) to represent this type of route.
A connected route represents the network address. It uses the actual subnet prefix
(mask). A local route represents the host address. It always uses the subnet
prefix /32.
A router always uses a connected route to forward packets out of the router. A router
never uses a local route to send packets out of the router.
Local routes display the internal layout of the router. Connected routes show how the
router is connected to the network.
The following image explains the differences between a connected route and its related local route through an example.
Managing connected routes and local routes
A router creates a connected route and the local route from the IP configuration of
the interface. If the administrator changes the IP configuration of the interface, the
router automatically updates its connected route and the local route.
The following image shows how a router removes routes of an interface when the
administrator shuts down the interface.
If an interface loses its connectivity from the remote device, the router automatically
removes its routes. The following image shows how a router removes the routes of
an interface when the interface loses its connectivity.
Routed, Routable, and Routing Protocols
This tutorial describes the differences between routed or routable protocols and
routing protocols. Learn how computer networks use routing protocols and routable
protocols.
To control and manage the communication process, the source device and the
destination device use a routed protocol whereas the intermediate devices use a
routing protocol. Let’s take a simple example.
Suppose two devices, A and B, are connected through two intermediate devices, C and D. In this case, A and B will use a routed protocol, and C and D will use a routing protocol.
The source device uses a routed protocol to pack data packets for transportation.
The destination device uses the same routed protocol to unpack the received data
packets from the source device.
On computer networks, routers work as intermediate devices. They connect different
networks. They use routing protocols to discover all available routes and find the
shortest and the fastest route between the source device and the destination device
from all available routes.
Let's take another example. In a network, two PCs, PC0 and PC1, are connected through three routes: route-a, route-b, and route-c. PC0 wants to send a large data
file to PC1. The routed protocol running on PC0 performs all necessary tasks that
are required to send the file to PC1. These tasks include breaking the file into
smaller pieces so each piece can travel over any type of route available in the
network and adding the source address, the destination address, the sequence
number, and other parameters to each piece.
Breaking a large file into smaller pieces is known as segmentation. The process of
adding necessary information to each piece is known as encapsulation. The routed
protocol performs segmentation and encapsulation on the source device. After
performing all necessary tasks, the routed protocol running on the source device
decides the method that it can use to send the packet to the destination device.
A source device can send a packet to the destination device by using one of two
methods. These methods are sending the packet to the destination device directly
and sending the packet to the destination device via the default gateway router. If the
destination device is available in the local network, the source device uses the first
method. If the destination device is available in the remote network, the source
device uses the second method.
To learn more about these methods, you can check the following tutorial.
This tutorial step-by-step explains how the packet moves between the source device
and the destination device.
If the destination device is available in the remote network, the source device gives
the data packet to the default gateway router. The default gateway router reads the
destination address of the packet and forwards the packet to the destination address
or to the router that is connected to the destination address.
If the default gateway router knows multiple routes to the same destination, it uses the
shortest and the fastest route to forward the packet. To learn all available routes and
calculate the shortest and the fastest route to the destination, the default gateway
router, and other intermediate routers use a routing protocol.
The destination device uses the same routed protocol to reproduce the original file
from the received data packets. In our example, the routed protocol running on PC1
recreates the original file from the received data packets.
The source device and the destination device use the same routed protocol to
encapsulate and de-encapsulate the data packets. The routers that connect the
source device to the destination device use the same routing protocol to learn all
available routes and select the best route between the source device and the
destination device.
TCP/IP network
On a TCP/IP network, the source device and the destination device use the IP
protocol to encapsulate and de-encapsulate the data packets. In other words, the IP
protocol is the only available routed protocol on a TCP/IP network.
On a TCP/IP network, many routing protocols are available. RIP, EIGRP, and OSPF
are some of the most common and widely used routing protocols. Each routing
protocol uses a different technique and algorithm to find all available routes and
calculate the best route from all available routes. An administrator can select a
routing protocol that suits the network requirements.
Differences between routed protocols and routing protocols
The main differences between routed protocols and routing protocols are the
following.
Routed protocols
End devices use routed protocols to send and receive data packets.
Routed protocols provide addressing to end devices.
Routed protocols encapsulate and de-encapsulate data packets.
If the destination address is not available in the local network, the routed protocol
forwards the packet to the default gateway.
A routed protocol does not care which route the packet takes to reach the remote
destination from the default gateway.
The IP protocol is an example of a routed protocol.
Routing protocols
Intermediate routers use routing protocols to discover routes and calculate the best
route between a source and the destination.
Routing protocols store discovered routes in routing tables.
Routing protocols continuously update and manage routing tables.
Routing protocols exchange information between routers.
Routing protocols do not care what is inside a packet. They only care about how to
deliver the packet to the correct destination.
RIP, IGRP, EIGRP, OSPF, IS-IS are some examples of routing protocols.
Let’s take another example to understand how a computer network uses routed and
routing protocols.
The following image shows the layout of a network. In this network, PC0 and PC1
are connected to PC2 via four routers; R1, R2, R3, and R4.
Suppose, an application running on PC0 wants to send some data to PC1. The
application calls the IP protocol of PC0 and hands that data over to the IP protocol.
The IP protocol packs data into packets and adds source address and destination
address to each packet.
If the destination address is located in the same IP subnet, the IP protocol sends
packets directly to the destination host.
The entire routing process is controlled by the routed (IP) protocols of PC0 and PC1.
Now suppose, the same application wants to send data to PC2. The same process is
repeated until the packet forwarding decision is made by the IP protocol. This time,
since the destination host (PC2) is located in a different IP subnet, the IP protocol
sends packets to the default gateway router.
The default gateway router not only keeps records of all remote networks but also
keeps records of all available paths for each remote network. A router maintains
these records in the routing table. A typical routing table entry consists of two pieces;
the network address and the interface on which that network is available.
When a router receives a packet on any of its interfaces, it reads the destination
network of the packet and finds the destination network in the routing table. If the
routing table contains a record for the destination network, the router uses the record
to forward the packet. If the routing table doesn’t contain a record for the destination
network, the router discards the packet.
If multiple paths to a remote network exist, the router chooses the fastest path.
In our example, the default gateway router R1 has two paths to reach PC2's network.
When it receives packets for PC2 from PC0, it compares both paths and chooses the
fastest path to forward packets. For this, R1 uses the routing protocol. Routing
protocols help routers to find all paths and select the best path for each destination.
PC2 receives packets from its default gateway router R4. Routed (IP) protocol
running on PC2 processes the received packets.
Bandwidth is the maximum amount of data that an object can transfer within a given
amount of time. It is object-specific. Two different objects may have similar or
different bandwidths. It depends on many factors such as the object's capacity,
environment, configuration, etc.
To know more about the bandwidth, you can check the following tutorial.
Some protocols use the interface's bandwidth for various functions. For example,
TCP and UDP use the interface's bandwidth to decide the size of a segment. EIGRP
and OSPF use the interface's bandwidth to calculate routing metrics. Protocols read
the interface's bandwidth from the running configuration. The bandwidth command
allows us to configure the interface's bandwidth in the running configuration.
Since some protocols use the interface's bandwidth for various functions, Cisco
assigns a default bandwidth to each interface. If you do not configure the interface's bandwidth, protocols will use the default bandwidth. If you configure the interface's bandwidth, protocols will use the configured bandwidth.
Media bandwidth is the amount of data that a media can transfer within a given time.
For example, if a cable can transfer 1Gb data in a second, then the cable's
bandwidth is 1Gbps.
To utilize the maximum capacity or bandwidth of the cable, you should configure the
interface's bandwidth equal to the cable's bandwidth. For example, if the cable's
bandwidth is 1544Kbps, you should configure the interface's bandwidth to
1544Kbps.
If you configure the interface's bandwidth less than or more than the cable's
bandwidth, the performance of the network will decrease.
If you configure the interface's bandwidth to 10Mbps, the upper-level protocols will
assume that interface is connected to a 10Mbps link. They will encapsulate data
packets for a 10Mbps link. Although the cable can carry 100Mb of data per second,
the interface will only load 10Mb of data per second over the cable. With this
configuration, you will waste 90% of the cable's bandwidth.
If you configure the interface's bandwidth to 200Mbps, the upper-layer protocols will
assume that interface is connected to a 200Mbps link. They will encapsulate data
packets for a 200Mbps link. Since the size of data packets is more than the
maximum capacity of the cable, the cable can not carry them. This configuration will
generate many errors.
If you configure the interface's bandwidth to 100Mbps, the upper-layer protocols will
encapsulate data packets for a 100Mbps link. This configuration utilizes the full
capacity of the cable and interface.
Using the bandwidth command to influence a routing protocol's metric
Some routing protocols such as EIGRP and OSPF use the interface's bandwidth to
calculate the metric of each route. We can use the bandwidth command to influence
their metric calculation. For example, EIGRP uses the interface's bandwidth in the
metric calculation formula. By changing an interface's bandwidth, we can force
EIGRP to select the route we want for a particular destination without making any
change in the physical layout of the network.
In the following network, Router0 has two routes to reach Router2. Both routes have
two serial links. Serial links or cables connect to serial interfaces. The default
bandwidth of a serial interface is 1544Kbps.
You can download this network topology from the following link.
If we don't change the default bandwidth of any serial interface on both routes, both
routes have equal costs. To verify this, we can use the 'show ip route' command on
router Router0.
The following image shows the output of the 'show ip route' command.
As you can see in the output, the cost of both routes is 2684416.
By default, EIGRP adds only one and the best route to each destination network in
the routing table. To select the best route, EIGRP compares the cost of each route. It
selects the route that has the lowest cost.
If there is a tie between two routes, EIGRP adds both routes to the routing table.
This feature is known as the load balancing between equal-cost routes.
Now, change the default bandwidth of a serial link and run the "show ip
route" command again. You can also use the "show ip route eigrp" command to
view only EIGRP routes.
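For example, assuming we lower the bandwidth of Serial 0/0/0 (the interface on Route1) to 64 Kbps and then check the EIGRP routes again:
Router0(config)#interface serial 0/0/0
Router0(config-if)#bandwidth 64
Router0(config-if)#end
Router0#show ip route eigrp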
Serial 0/0/0 connects Router0 to Router2 via Route1. Serial 0/0/1 connects Router0
to Router2 via Route2. With the default bandwidth on both interfaces, the cost of
both routes is 2684416.
Later, we changed the bandwidth of Serial 0/0/0 to 64. As soon as we changed the bandwidth of Serial 0/0/0, EIGRP recalculated the cost of both routes. Since we had reduced the bandwidth of a link on Route1, the cost of Route1 increased.
Since EIGRP keeps only the route that has the lowest cost in the routing table, it
excludes Route1 from the routing table. However, it keeps Route1 in the Topology
table for backup.
To verify this and view the new cost of Route1, you can use the 'show ip eigrp
topology' command. The following image shows the output of this command.
This way, an administrator can easily force EIGRP to select a specific route without making any change in the physical layout of the network.
To learn how EIGRP calculates the metric of a route or how the EIGRP metric
calculation formula works, you can check the following tutorial.
The 'show ip route' command displays the structure and contents of the routing table. You can use the 'show ip route' command for the following purposes.
To use the 'show ip route' command, enter privileged-exec mode and run the
following command.
#show ip route
The output of this command is organized into three sections. These sections are
Codes, Default route, and Routes. The following image shows the output of this
command.
Codes
The routing table uses the abbreviated code to store the type of route. This section
displays the meaning of each abbreviated code.
Default route
This section displays the default route. The router uses the routing table's routes to
forward data packets. If there is no route available for the destination address of a
data packet, the router uses the default route to forward the data packet. If the
default route is not set, the router discards the data packet.
Routes
The routing table puts all routes in this section. To arrange routes, the routing table
uses blocks. Each block contains a classful network and the classless networks
created from the classful network. If a classful network is subnetted into small
classless networks and the router knows the routes for the classless networks, the
routing table uses a heading to group all classless networks of the same classful
network.
The routing table uses the heading for a classful network only if it knows more than
one route for the classful network. If there is only one route for the network, the
routing table adds the route without the heading.
The following image shows routes with the heading and without the heading.
The heading includes three things: the classful network, the total number of subnets,
and the total number of masks used to create the subnets. Let's understand these
things.
A router learns routes from various sources. This part shows the total number of
routes the router learned from all sources for the classful network. The total number
includes all routes for the classful network and all classless networks created from
the same classful network.
This is the total number of different masks used in all routes for all subnets created
from the classful network mentioned in the heading.
The routing table uses a heading to organize all routes created from the same
classful network. We have already discussed the things the routing table uses to
create the heading. Now let's discuss the things the routing table uses to build route
entries.
Legend code
The legend code is the first thing in a route entry. A router can learn a route from
various sources. The legend code shows the source from which the router learned
the route. The routing table stores the legend code in the abbreviated form. The first
section of the output of the 'show ip route' command shows the meaning of each
code.
Network address / Subnet mask
Each route reaches a specific destination. Each route entry includes only one
destination network. After the legend code, the routing table places the destination
network address with the subnet mask in the route entry.
Routers use the routing table to make a forwarding decision. A router compares the
destination address of the data packet to the network address stored in each entry of
the routing table. If the network address mentioned in an entry matches the
destination address of the data packet, the router forwards the data packet from the
interface or to the next-hop router's interface mentioned in the matching entry.
To learn how this process works or how routers take a routing decision, you can
check the following tutorial.
Routing Decision Longest Match Explained
AD (Administrative Distance) / Metric
The routing table stores only one route for each destination. If the router learns more
than one route for a destination from different sources, the router adds only the best
route to the routing table. To select the best route, the router uses the AD
(Administrative Distance) value.
Let's take an example. A router learned two routes for the same destination. The AD
value of the first source is 10 and the AD value of the second source is 20. The
router will add the route learned from the first source to the routing table.
A router can also learn more than one route from the same source. If the router
learns more than one route for a destination from the same source, the router uses
the metric value of the routes to select the best route. Sources use the metric value
to calculate the best route for the destination.
In simple words, a router uses the AD value to select the best route learned from
different sources and the metric value to select the best route learned from the same
source.
The IP address of the next-hop router
This is the IP address of the next-hop router. A router forwards the packet to the
next-hop router if the destination address and the address specified in the route
match.
EIGRP/OSPF Timer
EIGRP and OSPF routing protocols use a timer for each learned route. If the route is
learned by EIGRP or OSPF, the routing protocol includes the timer in the routing
information.
Exit interface
This is the local interface the router uses to forward the data packet.
Routing Decision Longest Match Explained
This tutorial explains how routers make a routing decision. Learn what a routing
decision is and how a router takes the routing decision.
Routers connect different networks. To connect different networks, routers learn all
the routes of the network and store them in the routing table. Routers store each
route in a separate entry. A routing table entry contains the network address and the
route to reach the network address.
When a router receives a data packet, the router reads the destination address of
the data packet and compares the destination address to the routing table's entries.
The router may find no matching entry, one matching entry, or multiple matching
entries.
If the router finds no matching entry, the router discards the data packet. If the router
finds only one matching entry, the router forwards the data packet from the route
defined in the entry. If the router finds multiple matching entries, the router selects
the entry that matches the largest portion of the destination address and forwards the
data packet from the route defined in the selected entry.
The process of comparing a destination address to the routing table entries and
selecting the matching entry is known as a routing decision. In other words, a routing
decision is a process in which the router decides the route to forward data packets to
a particular destination.
Let's take an example to understand how a router makes a routing decision. The
following image shows the routing table of router R1 in a sample network.
Now suppose, R1 receives five data packets. The following table lists the destination
address of each data packet.
The destination address of the first packet is 192.168.1.1/24. The network address in
this address is 192.168.1.0/24. Since there is no route for the network
192.168.1.0/24, the router discards the first data packet.
The destination address of the second data packet is 20.0.0.10/8. The network
address in this address is 20.0.0.0/8. There is one route for the network address
20.0.0.0/8. This route is available on the local interface (C- Directly connected). The
router forwards the second data packet to the connected local network from the
interface mentioned in the route.
The destination address of the third data packet is 90.0.0.10/8. The network address
in this address is 90.0.0.0/8. The router has three routes for the network 90.0.0.0/8.
These routes are 90.0.0.0/8, 90.1.0.0/16, and 90.2.0.0/24. These routes belong to
the same classful network 90.0.0.0/8 but different classless networks.
You can break a classful network into small classless networks. The process of
breaking a large classful network into small classless networks is called subnetting.
You can check the following tutorial to learn more about subnetting.
When you break a large classful network into small classless networks, each
classless network works independently. A router treats each classless network as a
separate network. It does not use the route of a classless network for another
classless network even if both classless networks are created from the same classful
network.
A router lists all routes for classless networks created from the same classful
network under the classful network. It also shows how many routes it knows for
classless networks created from the same classful network.
Routing decision and the longest prefix match
If the destination address of a data packet matches only one route of the routing
table, the router uses the matched route to forward the data packet. But if the
destination address of a data packet matches more than one route of the routing
table, the router uses the best route to forward the data packet. To select the best
route, the router matches the network bits of the destination address and the
address defined in the route. The route which matches the maximum number of
network bits is considered the best route. If there is a tie, the router uses the route
which belongs to the same subnet.
In our example, the router uses the first route for the third data packet. The first route
(via – 30.0.0.0/8) matches the network address 90.0.0.0/8. However, the second
route (via 40.0.0.2) and the third route (via 50.0.0.2) also have 90 in the first octet
and also match the network address of the destination address (90.0.0.10/8), yet the
router does not use them to forward the data packet as they belong to other
classless networks.
For the fourth data packet, the router uses the second route. The second route (via –
40.0.0.2) matches 16 network bits. The first route and the third route match 8
network bits.
For the fifth data packet, the router uses the third route. The third route (via –
50.0.0.2) matches 24 network bits. The first route and the second route match 8
network bits.
The following image shows how the router takes the routing decision for the third,
fourth, and fifth data packets.
Identifying the route with the longest prefix length mask
There are two ways to identify the route the router will use to forward the packet.
These ways are using subnetting math and using the 'show ip route address'
command.
The following table lists five routes with their subnet prefix and the address they
match.
The /32 and /0 are two special subnet prefixes. The /32 prefix matches only one address. The /0 prefix matches all addresses. If a /32 prefix matches, the router always uses that route. A route with a /32 prefix is known as a host route. The router uses the /0 route as the last resort. A route with a /0 prefix is known as the default route. The router uses the default route only when no other route matches.
Now, can you guess the route for the following addresses?
The first address (10.0.0.1) is available in the address range of the first route
(10.0.0.0 – 10.255.255.255) and the last route or the default route (0.0.0.0 –
255.255.255.255). As mentioned earlier, the router uses the default route only when
no other route is available. Since a route is available, the router will not use the
default route. If we exclude the default route, only the first route will be available for
the first address. Thus, the router will use the first route for the first address.
The second address (10.1.0.36) is available in the address range of the first route
(10.0.0.0 – 10.255.255.255), second route (10.1.0.0 – 10.1.255.255), and the last
route (0.0.0.0 – 255.255.255.255). If we exclude the default route, the first route and
the second route remain for the second address. The subnet prefix of the first route
is /8. The subnet prefix of the second route is /16. Since /16 is more than /8, the
router will use the second route for the second address.
The third address (10.1.1.25) is available in the address range of the first route
(10.0.0.0 – 10.255.255.255), second route (10.1.0.0 – 10.1.255.255), third route
(10.1.1.0 – 10.1.1.255), and the last route (0.0.0.0 – 255.255.255.255). After
excluding the default route, the first route, second route, and third route remain for
the third address. Subnet prefixes of these routes are /8, /16, and /24, respectively.
Since the router uses the longest prefix to select the best route, the router will use
the route with /24 prefix for the third address.
The fourth address (10.1.1.1) is available in the address range of all routes. The
fourth route (10.1.1.1/32) with the /32 prefix exactly matches the address. The router
will use the fourth route for this address.
The fifth address (20.0.0.1) is available only in the address range of the last or the
default route. The router will use this route for the fifth address.
If you don't want to do the subnetting math, you can use the 'show ip
route address' command. This command prints the route the router will use for the
given address. For example, if you want to know which route the router will use for
the address 10.0.0.1, you can use the following command.
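That command is:
Router#show ip route 10.0.0.1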
There are many types of delay. Each type measures the delay of a particular
device used in the path or the delay of a specific stage in the transmission. The most
common types of delay are the following.
Transmission delay
It is the time the source takes to put the data packet on the transmission medium or
the link.
Propagation delay
It is the time the transmission medium or the link takes to transfer the data packet.
Queueing delay
A destination device or a router can process only one data packet at a time. If it
receives more than one packet, it puts them in the queue. Queuing delay is the time
the data packet spends in the queue.
Processing delay
It is the time the destination device or the router takes to process the data packet.
Interface delay
A router interface organizes all the data packets into a serial queue and loads them
one by one onto the connected medium. An interface delay is the amount of time a
data packet spends in the serial queue.
Cisco routers include the delay command. The delay command allows an
administrator to configure the interface delay in the running configuration. Many
services and protocols running on the router use the interface delay for various
purposes. For example, the EIGRP routing protocol uses it to calculate the metric of
a path. If you change the interface delay in the running configuration, the protocols
that read the interface delay from the running configuration will use the modified
delay. Administrators can use the delay command to influence the protocols that
use the interface delay.
A delay is the physical property of the transmission. The delay command does not change
the physical delay of the interface. It allows you to configure a new delay of the interface in
the running configuration.
To select the best path for a destination, EIGRP calculates and compares the
metrics of all available paths for the destination. EIGRP uses interface delay as a
metric component. If you modify the delay of an interface, EIGRP will recalculate the
metric of the paths associated with the interface.
By changing the interface delay, you can force EIGRP to select a specific route to a
particular destination. Let's understand this process through an example.
The following image shows a network. In this network, Router0 has two routes to
reach the network 50.0.0.0/8.
You can download the practice lab of this network from the following link.
Since many services and protocols depend on the interface delay, Cisco assigns a
default delay to each interface. The default delay of a serial interface is 2000. If you
don't change the default delay, protocols will use the default delay. If you change the
default delay, protocols will use the modified delay.
To view the delay of an interface, you can use the "show interface [interface
name]" command. The output of this command shows the delay in the tens of
microseconds (usec). For example, if the configured delay is 2000, then the output of
this command will show the delay 20000 (2000*10 = 20000).
The following image shows how to use the "show interface" command on Router0
to view the delay of both serial interfaces.
With the default configuration, EIGRP selects Route1 to reach the network
50.0.0.0/8. To view EIGRP routes, you can use the "show ip route
eigrp" command. The following image shows the output of this command on
Router0.
Now suppose, for some reason, we want EIGRP to pick Route2 to reach the network
50.0.0.0/8. But, at the same time, we don't want to make any change in the physical
layout of the network.
In this situation, we can use the delay command to force EIGRP to select Route2.
We can configure the delay on the Serial 0/0/1 interface higher than the total delay of
Route2.
To configure the delay on an interface, we use the delay command in the interface
mode of the interface. For example, the following commands configure the delay of
the Serial 0/0/1 interface to 8000.
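A minimal sketch of these commands on Router0:
Router0#configure terminal
Router0(config)#interface serial 0/0/1
Router0(config-if)#delay 8000
Router0(config-if)#end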
This change will force EIGRP to recalculate the metric of all paths related to the
Serial 0/0/1 interface. As we have increased the delay of the serial 0/0/1 interface,
the metric of all paths belonging to the serial 0/0/1 interface will also increase.
To learn how EIGRP metric calculation formula works or how EIGRP calculates the
metric of each route, you can check the following tutorial.
If EIGRP has more than one path to a destination, it selects the path that has the
lowest metric. After recalculation, the metric of Route1 (Via - Serial 0/0/1 interface)
will exceed the metric of Route2 (Via - Serial 0/0/0). This change will force EIGRP to
choose Route2 to reach the network 50.0.0.0/8.
What is an LSA?
An LSA (Link State Advertisement) is a data packet that describes a specific part of
the OSPF network. There are 11 types of LSA. OSPF routers use each type of LSA
to share or exchange different information. Since an LSA describes a part of the
OSPF network, each OSPF router learns all LSAs that it needs to function and
stores them into the LSDB (Link-State Database).
In other words, an LSA is an OSPF data packet that contains some specific
information about network topology. The LSDB is the collection of all LSA packets
the router knows. The state in which all OSPF routers on the OSPF network have learned all required LSAs is called convergence.
LSA flooding
LSA flooding is a process that OSPF routers use to share and learn all required
LSAs. In the process of LSA flooding, all routers collectively advertise all known
LSAs to all the other routers. At the end of this process, every router on the network
has every required LSA.
After this process, if any information changes, the affected router creates an LSA
describing the change and floods the LSA into the network. Each recipient validates
the LSA update and sends an acknowledgment back to the sending router that
confirms that it received the flooded update.
Based on the network type, OSPF routers use the multicast addresses 224.0.0.5 (all OSPF routers) and 224.0.0.6 (all DR/BDR routers) to flood LSAs.
Types of LSAs
Instead of using the same LSA type for all purposes, OSPF routers use a different
type of LSA for each purpose. They use 11 types of LSAs. The following table
describes each type of LSA and its purpose.
Type 1 (Router LSA): Advertised by all routers within the area. Flooded only within the area. Contains information about the router's RID and its directly connected networks.
Type 2 (Network LSA): Advertised by the DR (Designated Router). Flooded only within the area. Contains information about all routers attached to the multi-access network segment.
Type 3 (Summary LSA): Advertised by the ABR (Area Border Router). Flooded into other areas. Contains summarized routing information about the networks of an area.
Type 4 (ASBR Summary LSA): Advertised by the ABR (Area Border Router). Flooded into other areas. Contains information about how to reach the ASBR.
Type 5 (AS-external LSA): Advertised by the ASBR (Autonomous System Boundary Router). Flooded throughout the OSPF network. Contains information about routes redistributed from external sources.
Type 7 (NSSA External LSA): Advertised by the ASBR (Autonomous System Boundary Router). Flooded only within the NSSA. Identical to type 5, but carries redistributed routing information inside an NSSA.
Type 8 (Link-local LSA, OSPFv3): Advertised by all routers. Flooded only on the local link. OSPFv3 uses it to share information about the local link.
Type 11 (Autonomous System opaque LSA): Flooded throughout the OSPF network. Identical to type 10, but flooded throughout the OSPF network instead of a single area; it is not flooded into stub areas.
The opaque LSA types are designed for application-specific purposes. For example,
an application can use them to flood bandwidth information. Opaque LSA types are
9, 10, and 11. Each of these opaque LSA types has a different flooding scope.
If you are learning LSA types for the CCNA Routing and Switching exam, you should
focus only on LSA types 1, 2, and 3. The remaining LSA types are not covered in the
CCNA exam syllabus; the CCNP exam covers them. Since the CCNA exam syllabus
includes OSPF topics related to LSA types 1, 2, and 3, we will discuss these types in
detail.
Before we learn these types, we need to understand the OSPF area concept and
roles of IR, ABR, and DR routers in the OSPF network.
OSPF uses a hierarchical design to control network traffic. In the hierarchical design,
OSPF uses two levels. In OSPF terminology, a hierarchical level is called an area.
There are two types of OSPF areas: the backbone area (area 0) and non-backbone areas.
Backbone area
The backbone area is the central point of this implementation. Routers running in this
area are required to maintain a complete database of the entire network. All areas
need to connect with this area through a physical link or a virtual link.
IR (Internal Router)
An IR router has the least responsibilities in the OSPF network. It maintains only
area-specific routing information.
ABR (Area Border Router)
An ABR connects an area to another area. It keeps the area-specific routing
information and summarized routing information of the entire network.
DR (Designated Router)
On a multi-access network segment, OSPF routers elect a DR. The DR represents the
segment and generates the network LSA (LSA type 2) that describes it.
LSA type 1
An LSA type 1 message is called router LSA. OSPF routers use LSA type 1
messages to advertise their RID and information about directly connected networks.
A RID is the unique ID of the router in the OSPF network. Since all routers running in
the area need this information about the other routers, all OSPF routers generate
and advertise LSA type 1 messages. LSA type 1 messages do not cross the area
boundary. They remain within the area.
LSA type 2
An LSA type 2 message is called network LSA. The DR generates it to describe the
multi-access network segment and all routers attached to it. Like LSA type 1
messages, LSA type 2 messages do not cross the area boundary.
LSA type 3
An LSA type 3 message is called summary LSA. ABR routers use LSA type 3
messages to exchange routing information with the other ABR routers. By default,
LSA type 3 messages contain detailed routing information. But if required, an
administrator can instruct the ABR router to summarize the routing information
before sharing it with another ABR. For this, the administrator can use
the area range command on the ABR.
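A minimal sketch of such a configuration on an ABR; the OSPF process ID, the area number,
and the summary prefix below are assumptions for illustration, not values from this tutorial:
Router(config)# router ospf 1
! Summarize all area 1 networks that fall inside 172.16.0.0/16 into a single type 3 LSA
Router(config-router)# area 1 range 172.16.0.0 255.255.0.0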
That’s all for this tutorial. In this tutorial, we discussed what LSA is, LSA types, the
meaning of each LSA type, and the information LSA type 1, 2, and 3 carry.
OSPF stands for Open Shortest Path First. It is an open standard routing protocol. It
has three versions: OSPFv1, OSPFv2, and OSPFv3. OSPFv1 was developed in the
late 1980s to overcome the limitations, deficiencies, and scalability problems that
RIP had in large networks. In 1998, OSPFv1 was replaced by OSPFv2 to support
modern infrastructure and networks. OSPFv1 is not used in modern networks.
Similarly, RIP was updated to RIPv2 to support modern infrastructure.
The following table compares the features of OSPFv2, RIPv1, and RIPv2.
Feature                          OSPFv2                 RIPv1             RIPv2
Protocol type                    Link state             Distance vector   Distance vector
Algorithm                        Dijkstra (SPF)         Bellman-Ford      Bellman-Ford
Metric                           Cost (bandwidth)       Hop count         Hop count
Hop count limit                  None                   15                15
VLSM support                     Yes                    No                Yes
Classless support                Yes                    No                Yes
Non-contiguous network support   Yes                    No                Yes
Auto-summarization               No                     Yes               Yes
Manual summarization             Yes                    No                Yes
Route propagation                Multicast              Broadcast         Multicast
Convergence                      Fast                   Slow              Slow
Authentication support           Yes                    No                Yes
Updates                          On event (triggered)   Periodic          Periodic
Supported network type           All types              Flat only         Flat only
Since OSPFv2 includes all the features and characteristics that modern networks need,
it is one of the two most popular and widely used routing protocols. However, it was
developed when IPv6 was not in use, so support for IPv6 was not added to OSPFv2.
To support IPv6, OSPFv3 was developed. The following table lists the main differences
between OSPFv2 and OSPFv3.
OSPFv2 supports IPv4. OSPFv3 supports IPv6.
OSPFv2 is specified in RFC 2328. OSPFv3 is specified in RFC 5340.
The OSPFv2 header size is 24 bytes. The OSPFv3 header size is 16 bytes.
OSPFv2 uses seven LSA types. OSPFv3 uses nine LSA types; the two new types are the link LSA and the intra-area prefix LSA.
OSPFv2 can run only one instance per link. OSPFv3 can run multiple instances per link.
OSPFv2 needs a network mask to form an adjacency. OSPFv3 does not need a network mask to form an adjacency.
OSPFv2 uses MD5 hashing for authentication. OSPFv3 uses IPsec for authentication.
OSPFv2 runs per network (subnet). OSPFv3 runs per link.
OSPFv2 can configure its RID automatically. OSPFv3 cannot configure its own RID; you have to configure the RID manually.
Since OSPFv1 has been updated and replaced by OSPFv2, network administrators
commonly use the term OSPF to refer to OSPFv2. Because of this, unless the
version of OSPF is explicitly mentioned, you can consider all references to OSPF to
be OSPFv2.
Advantages of OSPF
It is an open standard protocol. You can use it with routing equipment from different
vendors.
It converges quickly and has no hop-count limit.
It supports VLSM, classless routing, manual summarization, and authentication.
It sends updates only when a change occurs and uses multicast instead of broadcast.
Disadvantages of OSPF
It needs lots of information to calculate the best route for each destination. To
store this information, OSPF consumes more memory than other routing
protocols.
To calculate the best route, it runs the SPF algorithm that requires extra CPU
processing.
It is complex to configure and difficult to troubleshoot. In a large network, only
experienced network administrators can configure it.
Uses of OSPF
Typically, OSPF is used in large enterprise networks that use routing equipment from
different vendors. OSPF is also used in companies that have a policy of using open
standard routing protocols, which gives them flexibility when they need to replace an
existing router or add a new one.
That’s all for this tutorial. In this tutorial, we discussed the features, advantages, and
disadvantages of OSPF.
A router can learn routing information from a variety of sources. If it learns routing
information for the same destination from two or more sources, it uses the sources'
AD values to decide which source is more reliable. An AD value indicates the
trustworthiness of the source. Routers assign each source an AD value from the
range 0 to 255. In this range, a lower value is considered more reliable than a
higher value. For example, a source that has an AD value of 40 is considered
more reliable than a source that has an AD value of 50. Routers assign the value 0
to the most reliable source and the value 255 to the most unreliable source.
In simple words, Administrative Distance (AD) is a scale the router uses to measure
the trustworthiness of a source that provides routing information.
The following table lists the default AD values of some of the most common routing sources.
Source               Default AD value
Connected            0
Static               1
IGRP                 100
OSPF                 110
IS-IS                115
RIP                  120
Unreliable source    255
If required, you can change default AD values. You can change the default AD value
of a particular routing protocol, a particular route, or even a static route. If you
change the AD value of a source, the router's IOS will use the updated AD value to
compare the source with other sources.
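A minimal sketch of how such a change might look in IOS; the protocol, the AD values, and
the static-route parameters below are assumptions for illustration, not values from this
tutorial:
! Change the AD of all routes learned through RIP on this router
Router(config)# router rip
Router(config-router)# distance 95
Router(config-router)# exit
! Create a floating static route by giving it an AD of 130 instead of the default 1
Router(config)# ip route 200.0.0.0 255.255.255.0 serial 0/0/0 130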
A router can learn routing information from the following sources: Interface
configuration, Manual configuration, and Routing protocols. Let's discuss how AD
works for each source.
Interface configuration
A router's interface is used as the default gateway of its subnet. When you assign the IP
configuration to an interface, the router automatically extracts the network
information from the IP configuration and adds that information to the routing table
as a connected route. For example, if you assign the IP address 10.0.0.1 255.0.0.0
to the F0/0 interface, the router adds the following entry to the routing table.
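A minimal sketch of this, assuming a router simply named Router; the interface configuration
commands and the resulting connected-route entry would look roughly like this:
Router(config)# interface FastEthernet0/0
Router(config-if)# ip address 10.0.0.1 255.0.0.0
Router(config-if)# no shutdown
! The routing table then shows a connected route for the 10.0.0.0/8 network:
! C    10.0.0.0/8 is directly connected, FastEthernet0/0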
Routing protocols
You can configure a routing protocol. The routing protocol will learn and add all
routing information to the routing table. Depending on the network topology, multiple
routes to the same destination can exist.
To select the best route for each subnet, the routing protocol uses the metric. A
metric is the cost of a route. The routing protocol computes the metric of all routes
and compares the metrics to select the best route for each subnet. After selecting a
single best route for each subnet, the routing protocol provides all routing information
to the router's IOS. The router's IOS checks each route and adds the route to the
routing table if it meets certain conditions.
If only one routing protocol is running in the network, the routing protocol's metric is
sufficient to select the best route for each subnet. Different routing protocols use
different metrics to select the best route. Because of this, if two or more routing
protocols are running in the network, they can select different routes as the best
route for the same destination.
In such a situation, the router's IOS checks the AD value of each routing protocol.
The router's IOS selects the routing information of the routing protocol which has the
lower AD value. For example, a router receives the routing information for the same
destination from RIP and EIGRP. The AD values of RIP and EIGRP are 120 and 90,
respectively. Since the AD value of EIGRP is lower than the AD value of RIP, the
router will select the routing information it receives from EIGRP.
Let's take an example to understand how this process works. The following image
shows a network.
In this network, Router0 has two routes to reach the network 200.0.0.0/24. The first
route is via the S0/0/1 interface and the second route is via the S0/0/0 interface.
Which route the router will use depends on the routing configuration. We have two
options: we can manually add a static route to the routing table, or we can configure a
routing protocol that will select and add the best route to the routing table. If we
select the first option, the router will use the static route, because the default AD
value (1) of a static route is lower than that of any routing protocol.
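For example, a static route for this network could be added as shown below; pointing it at
the Serial 0/0/1 interface is just one illustrative choice, not a configuration taken from
this tutorial:
! Static route to 200.0.0.0/24 through the Serial 0/0/1 interface (default AD = 1)
Router0(config)# ip route 200.0.0.0 255.255.255.0 serial 0/0/1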
If we select the second option and configure only one routing protocol in the network,
the router will use the route selected by that routing protocol. Now let's suppose we
configure the RIP routing protocol in this network. In this situation, Router0 will select
the second route (via S0/0/0) to reach the network 200.0.0.0/24.
RIP uses the number of routers (hops) in the path to select the best route. It selects
the route that has the least number of routers in the path. The following image shows
the selection process.
To verify it, we can use the 'show ip route' command. The following image shows
the output of this command.
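The RIP entry in that output looks roughly like the following; the next-hop address, hop
count, and timer shown here are placeholders, while the AD value (120), the network, and the
exit interface match the example:
Router0# show ip route
...
R    200.0.0.0/24 [120/1] via 192.168.2.2, 00:00:12, Serial0/0/0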
Currently, only one routing protocol is running in the network. The router uses the
route the routing protocol selects. If we configure another routing protocol without
removing the current routing protocol and both routing protocols select different
routes for the same destination, the router will use the route selected by the routing
protocol that has the lower AD value. Let's understand this through an example.
Currently, RIP is running on this network. Now suppose, we configure EIGRP in this
network. By default, EIGRP uses the configured bandwidth and delay on all exit
interfaces of the path to compute the metric of the path. After computing the metric of
all paths, it selects the path that has the least metric value.
The metric of the first route is 2684416. The metric of the second route is
256514560. Since the first route has a lower metric, EIGRP will select this route to
reach the network 200.0.0.0/24. To verify this, you can use the 'show ip eigrp
topology' command. The following image shows the output of this command.
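As a rough sanity check of the first metric, assuming the path through Serial 0/0/1 crosses
two T1 serial links (1544 kbps, default delay 2000 tens of microseconds each) and one
FastEthernet link (delay 10 tens of microseconds), the default EIGRP formula gives
256 x ((10,000,000 / 1544) + (2000 + 2000 + 10)) = 256 x (6476 + 4010) = 2684416. These link
assumptions are ours, not stated in the tutorial; they are only meant to show how the
formula produces a number of this form.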
Now, Router0 has two different routes reported as the best routes to reach the
network 200.0.0.0/24. RIP says the second route (via - serial0/0/0) is the best route
to reach the network 200.0.0.0/24. EIGRP says the first route (via – serial0/0/1) is
the best route to reach the network 200.0.0.0/24.
In this situation, the router will compare the AD values of both routing protocols and
will select the route reported by the routing protocol that has the lower AD value. The
AD value of EIGRP is 90 and the AD value of RIP is 120. Since the AD value of
EIGRP is lower than the AD value of RIP, the router will select the route reported by
EIGRP. To verify this, you can use the 'show ip route' command. The following
image shows the output of this command.
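The EIGRP entry that replaces the RIP entry looks roughly like the following; the AD (90),
the metric (2684416), the network, and the exit interface come from this example, while the
next-hop address and timer are placeholders:
Router0# show ip route
...
D    200.0.0.0/24 [90/2684416] via 192.168.1.2, 00:02:05, Serial0/0/1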
That's all for this tutorial. In this tutorial, we discussed what administrative distance is
and how routers use it to select the best route.