UNIT 3 - Network Layer and Routing
The network layer in the TCP/IP protocol suite is responsible for the
host-to-host delivery of datagrams.
It provides services to the transport layer and receives services from the
data-link layer.
The network layer translates logical addresses into physical addresses.
It determines the route from the source to the destination, and also
manages traffic problems such as switching, routing and congestion
of data packets.
The main role of the network layer is to move the packets from sending
host to the receiving host.
PACKETIZING
The first duty of the network layer is packetizing.
This means encapsulating the payload (data received from upper layer)
in a network-layer packet at the source and decapsulating the payload
from the network-layer packet at the destination.
The network layer is responsible for delivery of packets from a sender
to a receiver without changing or using the contents.
ERROR CONTROL
The network layer in the Internet does not directly provide error control.
It adds a checksum field to the datagram to detect corruption in the
header, but not in the whole datagram.
This checksum can detect changes or corruption in the header of the
datagram, but it cannot prevent them.
The Internet uses an auxiliary protocol called ICMP that provides
some kind of error control if the datagram is discarded or has some
unknown information in the header.
FLOW CONTROL
Flow control regulates the amount of data a source can send without
overwhelming the receiver.
The network layer in the Internet, however, does not directly provide
any flow control.
The datagrams are sent by the sender when they are ready, without any
attention to the readiness of the receiver.
Since flow control is provided by most of the upper-layer protocols
that use the services of the network layer, another level of flow
control would make the network layer more complicated and the whole
system less efficient.
CONGESTION CONTROL
Another issue in a network-layer protocol is congestion control.
Congestion in the network layer is a situation in which too many
datagrams are present in an area of the Internet.
Congestion may occur if the number of datagrams sent by source
computers is beyond the capacity of the network or routers.
In this situation, some routers may drop some of the datagrams.
SECURITY
Another issue related to communication at the network layer is security.
To provide security for a connectionless network layer, we need to have
another virtual level that changes the connectionless service to a
connection-oriented service. This virtual layer is called IPSec
(IP Security).
2. PACKET SWITCHING
( REFER THE TOPIC PACKET SWITCHING FROM UNIT – I )
3. NETWORK-LAYER PERFORMANCE
The performance of a network can be measured in terms of
Delay, Throughput and Packet loss.
Congestion control is an issue that can improve the performance.
DELAY
A packet, on its way from its source to its destination, encounters delays.
The delays in a network can be divided into four types:
Transmission delay, Propagation delay, Processing delay and Queuing delay.
Transmission Delay
A source host or a router cannot send a packet instantaneously.
A sender needs to put the bits in a packet on the line one by one.
If the first bit of the packet is put on the line at time t1 and the last bit is
put on the line at time t2, transmission delay of the packet is (t2 - t1).
The transmission delay is longer for a longer packet and shorter if the
sender can transmit faster.
The Transmission delay is calculated using the formula
Delay_tr = (Packet length) / (Transmission rate)
Example :
In a Fast Ethernet LAN with the transmission rate of 100 million
bits per second and a packet of 10,000 bits, it takes
(10,000)/(100,000,000) or 100 microseconds for all bits of the
packet to be put on the line.
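The formula can be checked with a short Python sketch using the numbers from the example:

```python
def transmission_delay(packet_bits: int, rate_bps: float) -> float:
    # Delay_tr = packet length / transmission rate
    return packet_bits / rate_bps

# 10,000-bit packet on a 100 Mbps Fast Ethernet link, as in the example:
# transmission_delay(10_000, 100_000_000) -> 0.0001 s = 100 microseconds
```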
Propagation Delay
Propagation delay is the time it takes for a bit to travel from point A to
point B in the transmission media.
The propagation delay for a packet-switched network depends
on the propagation delay of each network (LAN or WAN).
The propagation delay depends on the propagation speed of the media, which is
3 × 10^8 meters/second in a vacuum and normally much less in a wired medium.
It also depends on the distance of the link.
The Propagation delay is calculated using the formula
Delay_pg = (Distance) / (Propagation speed)
Example
If the distance of a cable link in a point-to-point WAN is 2000
meters and the propagation speed of the bits in the cable is 2 × 10^8
meters/second, then the propagation delay is 10 microseconds.
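The same formula, as a Python sketch with the example's values:

```python
def propagation_delay(distance_m: float, speed_mps: float) -> float:
    # Delay_pg = distance / propagation speed
    return distance_m / speed_mps

# 2000 m cable link at 2 * 10^8 m/s, as in the example:
# propagation_delay(2000, 2e8) -> 1e-05 s = 10 microseconds
```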
Processing Delay
The processing delay is the time required for a router or a destination
host to receive a packet from its input port, remove the header, perform
an error detection procedure, and deliver the packet to the output port
(in the case of a
router) or deliver the packet to the upper-layer protocol (in the case of the
destination host).
The processing delay may be different for each packet, but
normally is calculated as an average.
Queuing Delay
Queuing delay can normally happen in a router.
A router has an input queue connected to each of its input ports to store
packets waiting to be processed.
The router also has an output queue connected to each of its output
ports to store packets waiting to be transmitted.
The queuing delay for a packet in a router is measured as the time a
packet waits in the input queue and output queue of a router.
Delay_qu = the time a packet waits in the input and output queues of a router
Total Delay
Assuming equal delays for the sender, routers and receiver, the total
delay (source-to-destination delay) of a packet can be calculated if we
know the number of routers, n, in the whole path.
Total delay = (n + 1) (Delay_tr + Delay_pg + Delay_pr) + n (Delay_qu)
If we have n routers, we have (n +1) links.
Therefore, we have (n +1) transmission delays related to n routers and
the source, (n +1) propagation delays related to (n +1) links, (n +1)
processing delays related to n routers and the destination, and only n
queuing delays related to n routers.
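The total-delay formula can be written as a small Python sketch (the per-hop delay values used below are arbitrary example numbers, not from the text):

```python
def total_delay(n_routers: int, d_tr: float, d_pg: float,
                d_pr: float, d_qu: float) -> float:
    # (n + 1) transmission, propagation and processing delays,
    # plus n queuing delays (one per router).
    n = n_routers
    return (n + 1) * (d_tr + d_pg + d_pr) + n * d_qu

# With 3 routers and per-hop delays of 1, 2, 3 and 4 time units:
# total_delay(3, 1, 2, 3, 4) -> 4 * (1 + 2 + 3) + 3 * 4 = 36
```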
THROUGHPUT
Throughput at any point in a network is defined as the number of bits
passing through the point in a second, which is actually the
transmission rate of data at that point.
In a path from source to destination, a packet may pass through several
links (networks), each with a different transmission rate.
Throughput is calculated using the formula
Throughput = minimum {TR_1, TR_2, ..., TR_n}
Example:
Let us assume that we have three links, each with a different
transmission rate.
The data can flow at the rate of 200 kbps in Link1, 100 kbps in Link2
and 150kbps in Link3.
Throughput = minimum {200, 100, 150} = 100 kbps.
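The bottleneck rule above, sketched in Python with the example's link rates:

```python
def path_throughput(rates_kbps):
    # The slowest (bottleneck) link determines the end-to-end throughput.
    return min(rates_kbps)

# Three links at 200, 100 and 150 kbps, as in the example:
# path_throughput([200, 100, 150]) -> 100
```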
PACKET LOSS
Another issue that severely affects the performance of
communication is the number of packets lost during transmission.
When a router receives a packet while processing another packet, the
received packet needs to be stored in the input buffer waiting for its
turn.
A router has an input buffer with a limited size.
A time may come when the buffer is full and the next packet needs
to be dropped.
The effect of packet loss on the Internet network layer is that the packet
needs to be resent, which in turn may create overflow and cause more
packet loss.
CONGESTION CONTROL
Congestion at the network layer is related to two issues, throughput and delay.
Based on Delay
When the load is much less than the capacity of the network, the delay
is at a minimum.
This minimum delay is composed of propagation delay and processing
delay, both of which are negligible.
However, when the load reaches the network capacity, the delay
increases sharply because we now need to add the queuing delay to the
total delay.
The delay becomes infinite when the load is greater than the capacity.
Based on Throughput
When the load is below the capacity of the network, the throughput
increases proportionally with the load.
We expect the throughput to remain constant after the load
reaches the capacity, but instead the throughput declines sharply.
The reason is the discarding of packets by the routers.
When the load exceeds the capacity, the queues become full and the
routers have to discard some packets.
Discarding packets does not reduce the number of packets in the
network because the sources retransmit the packets, using time-out
mechanisms, when the packets do not reach the destinations.
Retransmission Policy
Retransmission is sometimes unavoidable.
If the sender feels that a sent packet is lost or corrupted, the
packet needs to be retransmitted.
Retransmission in general may increase congestion in the network.
However, a good retransmission policy can prevent congestion.
The retransmission policy and the retransmission timers
must be designed to optimize efficiency and at the same time
prevent congestion.
Window Policy
The type of window at the sender may also affect congestion.
The Selective Repeat window is better than the Go-Back-N
window for congestion control.
In the Go-Back-N window, when the timer for a packet times
out, several packets may be resent, although some may have
arrived safe and sound at the receiver.
This duplication may make the congestion worse.
The Selective Repeat window, on the other hand, tries to send
the specific packets that have been lost or corrupted.
Acknowledgment Policy
The acknowledgment policy imposed by the receiver may
also affect congestion.
If the receiver does not acknowledge every packet it receives,
it may slow down the sender and help prevent congestion.
Several approaches are used in this case.
A receiver may send an acknowledgment only if it has a
packet to be sent or a special timer expires.
A receiver may decide to acknowledge only N packets at a time.
Sending fewer acknowledgments means imposing less load on
the network.
Discarding Policy
A good discarding policy by the routers may prevent congestion
and at the same time may not harm the integrity of the
transmission.
For example, in audio transmission, if the policy is to discard
less sensitive packets when congestion is likely to happen, the
quality of sound is still preserved and congestion is prevented or
alleviated.
Admission Policy
An admission policy, which is a quality-of-service mechanism,
can also prevent congestion in virtual-circuit networks.
Switches in a flow first check the resource requirement of a flow
before admitting it to the network.
A router can deny establishing a virtual-circuit connection if
there is congestion in the network or if there is a possibility of
future congestion.
Backpressure
The technique of backpressure refers to a congestion control
mechanism in which a congested node stops receiving data from
the immediate upstream node or nodes.
This may cause the upstream node or nodes to become
congested, and they, in turn, reject data from their upstream
node or nodes, and so on.
Backpressure is a node-to-node congestion control that starts
with a node and propagates, in the opposite direction of data
flow, to the source.
The backpressure technique can be applied only to virtual circuit
networks, in which each node knows the upstream node from
which a flow of data is coming.
Choke Packet
A choke packet is a packet sent by a node to the source to inform
it of congestion.
In backpressure, the warning is from one node to its
upstream node, although the warning may eventually reach the
source station.
In the choke-packet method, the warning is from the router,
which has encountered congestion, directly to the source station.
The intermediate nodes through which the packet has traveled
are not warned.
The warning message goes directly to the source
station; the intermediate routers do not take any action.
Implicit Signaling
In implicit signaling, there is no communication between the
congested node or nodes and the source.
The source guesses that there is congestion somewhere in the
network from other symptoms.
For example, when a source sends several packets and there is
no acknowledgment for a while, one assumption is that the
network is congested.
The delay in receiving an acknowledgment is interpreted as
congestion in the network; the source should slow down.
Explicit Signaling
The node that experiences congestion can explicitly send a
signal to the source or destination.
The explicit-signaling method is different from the choke-packet
method.
In the choke-packet method, a separate packet is used for this
purpose; in the explicit-signaling method, the signal is included
in the packets that carry data.
Explicit signaling can occur in either the forward or the
backward direction.
1. IPV4 ADDRESSES
CLASSFUL ADDRESSING
An IPv4 address is 32 bits long (4 bytes).
An IPv4 address is divided into sub-classes:
Classful Network Architecture
Class A
In Class A, an IP address is assigned to those networks that contain
a large number of hosts.
The network ID is 8 bits long.
The host ID is 24 bits long.
In Class A, the highest-order bit of the first octet is always set
to 0 and the remaining 7 bits determine the network ID.
The remaining 24 bits determine the host ID in any network.
The total number of networks in Class A = 2^7 = 128 network addresses
The total number of hosts in Class A = 2^24 - 2 = 16,777,214 host addresses
Class B
In Class B, an IP address is assigned to those networks that range from
small-sized to large-sized networks.
The Network ID is 16 bits long.
The Host ID is 16 bits long.
In Class B, the two highest-order bits of the first octet are always set
to 10, and the remaining 14 bits determine the network ID.
The other 16 bits determine the host ID.
The total number of networks in Class B = 2^14 = 16,384 network addresses
The total number of hosts in Class B = 2^16 - 2 = 65,534 host addresses
Class C
In Class C, an IP address is assigned to only small-sized networks.
The Network ID is 24 bits long.
The host ID is 8 bits long.
In Class C, the three highest-order bits of the first octet are always
set to 110, and the remaining 21 bits determine the network ID.
The 8 bits of the host ID determine the host in a network.
The total number of networks = 2^21 = 2,097,152 network addresses
The total number of hosts = 2^8 - 2 = 254 host addresses
Class D
In Class D, an IP address is reserved for multicast addresses.
It does not possess subnetting.
The four highest-order bits of the first octet are always set to 1110,
and the remaining 28 bits define the multicast group address.
Class E
In Class E, an IP address is used for the future use or for the
research and development purposes.
It does not possess any subnetting.
The four highest-order bits of the first octet are always set to 1111,
and the remaining 28 bits are reserved for future use.
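The class of an address can be recognized from the first octet alone; a small Python sketch of the ranges described above (the sample addresses in the comments are assumed examples):

```python
def address_class(ip: str) -> str:
    # Classify an IPv4 address by the leading bits of its first octet.
    first = int(ip.split(".")[0])
    if first < 128:
        return "A"   # leading bit 0    (0-127)
    if first < 192:
        return "B"   # leading bits 10  (128-191)
    if first < 224:
        return "C"   # leading bits 110 (192-223)
    if first < 240:
        return "D"   # leading bits 1110 (224-239), multicast
    return "E"       # leading bits 1111 (240-255), reserved

# address_class("192.168.1.5") -> 'C'
```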
Address Depletion in Classful Addressing
The reason that classful addressing has become obsolete is address depletion.
Since the addresses were not distributed properly, the Internet was
faced with the problem of the addresses being rapidly used up.
This resulted in no more addresses being available for organizations
and individuals that needed to connect to the Internet.
To understand the problem, let us think about class A.
This class can be assigned to only 128 organizations in the world, and
each organization receives a single network with 16,777,216
addresses.
Since there may be only a few organizations that are this large, most of
the addresses in this class were wasted (unused).
Class B addresses were designed for midsize organizations, but many
of the addresses in this class also remained unused.
Class C addresses have a completely different flaw in design. The
number of addresses that can be used in each network (256) was so
small that most companies were not comfortable using a block in this
address class.
Class E addresses were almost never used, wasting the whole class.
Subnetting
In subnetting, a class A or class B block is divided into several subnets.
Each subnet has a larger prefix length than the original network.
For example, if a network in class A (prefix length 8) is divided into
four subnets, each subnet has a prefix length of n_sub = 10 (two extra
bits to distinguish the four subnets).
At the same time, if all of the addresses in a network are not used,
subnetting allows the addresses to be divided among several
organizations.
CLASSLESS ADDRESSING
In 1996, the Internet authorities announced a new architecture called
classless addressing.
In classless addressing, variable-length blocks are used that
belong to no classes.
We can have a block of 1 address, 2 addresses, 4 addresses, 128
addresses, and so on.
In classless addressing, the whole address space is divided into variable
length blocks.
The prefix in an address defines the block (network); the suffix
defines the node (device).
Theoretically, we can have a block of 2^0, 2^1, 2^2, ..., 2^32 addresses.
The number of addresses in a block needs to be a power of 2. An
organization can be granted one block of addresses.
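As a sketch, Python's ipaddress module can show the size and the first and last addresses of a classless block (the block 192.168.100.0/26 is an assumed example):

```python
import ipaddress

# A /26 block: 2^(32 - 26) = 64 addresses. The prefix defines the
# block (network); the suffix defines the node (device).
net = ipaddress.ip_network("192.168.100.0/26")
print(net.num_addresses)      # 64
print(net.network_address)    # first address in the block
print(net.broadcast_address)  # last address in the block
```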
Address Aggregation
One of the advantages of the CIDR strategy is address aggregation
(sometimes called address summarization or route summarization).
When blocks of addresses are combined to create a larger block,
routing can be done based on the prefix of the larger block.
ICANN assigns a large block of addresses to an ISP.
Each ISP in turn divides its assigned block into smaller subblocks and
grants the subblocks to its customers.
Limited-broadcast Address
The only address in the block 255.255.255.255/32 is called the
limited-broadcast address.
It is used whenever a router or a host needs to send a datagram to all
devices in a network.
The routers in the network, however, block the packet having this
address as the destination; the packet cannot travel outside the network.
Loopback Address
The block 127.0.0.0/8 is called the loopback address.
A packet with one of the addresses in this block as the destination
address never leaves the host; it will remain in the host.
Private Addresses
Four blocks are assigned as private addresses: 10.0.0.0/8,
172.16.0.0/12, 192.168.0.0/16, and 169.254.0.0/16.
Multicast Addresses
The block 224.0.0.0/4 is reserved for multicast addresses.
A technology that can provide the mapping between the private and universal
(external) addresses, and at the same time support virtual private networks,
is called Network Address Translation (NAT).
The technology allows a site to use a set of private addresses for internal
communication and a set of global Internet addresses (at least one) for
communication with the rest of the world.
The site must have only one connection to the global Internet, through a
NAT-capable router that runs NAT software.
Address Translation
All of the outgoing packets go through the NAT router, which replaces
the source address in the packet with the global NAT address.
All incoming packets also pass through the NAT router, which replaces
the destination address in the packet (the NAT router global address)
with the appropriate private address.
Translation Table
There may be tens or hundreds of private IP addresses, each belonging
to one specific host.
The problem arises when we want to translate the source address to an
external address. This is solved if the NAT router has a translation
table.
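A minimal sketch of a translation table in Python. The global address 200.24.5.8 and the port numbering are assumptions for illustration; real NAT routers also key the table on ports and transport protocol:

```python
# Hypothetical NAT sketch: the router owns one global address and keeps
# a translation table mapping an external port back to the private host.
GLOBAL_ADDR = "200.24.5.8"  # assumed global NAT address
table = {}                  # external port -> (private address, private port)
_next_port = [5000]         # assumed counter for assigning external ports

def translate_outgoing(private_ip: str, private_port: int):
    # Replace the private source address with the global NAT address
    # and record the mapping in the translation table.
    ext_port = _next_port[0]
    _next_port[0] += 1
    table[ext_port] = (private_ip, private_port)
    return GLOBAL_ADDR, ext_port

def translate_incoming(ext_port: int):
    # Replace the global destination address with the private address
    # looked up in the translation table.
    return table[ext_port]
```

For example, an outgoing packet from 172.18.3.1:1400 would leave as 200.24.5.8:5000, and a reply arriving on port 5000 would be translated back to the private endpoint.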
Forwarding means to deliver the packet to the next hop (which can be
the final destination or the intermediate connecting device).
Although the IP protocol was originally designed as a connectionless
protocol, today the tendency is to use IP as a connection-oriented
protocol based on the label attached to an IP datagram.
When IP is used as a connectionless protocol, forwarding is based on the
destination address of the IP datagram.
When the IP is used as a connection-oriented protocol, forwarding is
based on the label attached to an IP datagram.
Forwarding Algorithm
The job of the forwarding module is to search the table, row by row.
In each row, the n leftmost bits of the destination address (prefix) are
kept and the rest of the bits (suffix) are set to 0s.
If the resulting address (the network address) matches the address in
the first column, the information in the next two columns is extracted;
otherwise the search continues. Normally, the last row has a default
value in the first column, which indicates all destination addresses that
did not match the previous rows.
Routing in classless addressing uses another principle, longest mask
matching.
This principle states that the forwarding table is sorted from the longest
mask to the shortest mask.
In other words, if there are three masks, /27, /26, and /24, the mask /27
must be the first entry and /24 must be the last.
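Longest mask matching can be sketched in Python; the table prefixes below are assumed example values, sorted from the longest mask down to a /0 default entry:

```python
import ipaddress

# Forwarding table sorted from the longest mask to the shortest,
# ending with a default entry (assumed example prefixes).
table = [
    ("180.70.65.192/26", "interface m2"),
    ("180.70.65.128/25", "interface m0"),
    ("201.4.22.0/24",    "interface m3"),
    ("0.0.0.0/0",        "default router"),
]

def forward(dest: str) -> str:
    d = ipaddress.ip_address(dest)
    for prefix, next_hop in table:   # first match wins: longest mask first
        if d in ipaddress.ip_network(prefix):
            return next_hop
    raise ValueError("no route")     # unreachable: the /0 entry matches all

# forward("180.70.65.140") -> 'interface m0' (matches /25, not /26)
```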
IP - INTERNET PROTOCOL
The Internet Protocol is the key tool used today to build
scalable, heterogeneous internetworks.
IP runs on all the nodes (both hosts and routers) in a collection of networks
IP defines the infrastructure that allows these nodes and networks to
function as a single logical internetwork.
IP SERVICE MODEL
Service Model defines the host-to-host services that we want to provide
The main concern in defining a service model for an internetwork is that
we can provide a host-to-host service only if this service can somehow
be provided over each of the underlying physical networks.
The Internet Protocol is the key tool used today to build scalable,
heterogeneous internetworks.
The IP service model can be thought of as having two parts:
A GLOBAL ADDRESSING SCHEME - which provides a
way to identify all hosts in the internetwork
A DATAGRAM DELIVERY MODEL – A connectionless model
of data delivery.
FIELD DESCRIPTION
Version : Specifies the version of IP. Two versions exist - IPv4 and IPv6.
HLen : Specifies the length of the header.
TOS (Type of Service) : An indication of the quality-of-service parameters desired, such as Precedence, Delay, Throughput and Reliability.
Length : Length of the entire datagram, including the header. The maximum size of an IP datagram is 65,535 (2^16 - 1) bytes.
Ident (Identification) : Uniquely identifies the packet sequence number. Used for fragmentation and reassembly.
Flags : Used to control whether routers are allowed to fragment a packet. If a packet is fragmented, the M (more fragments) bit is 1 in every fragment except the last; if the packet is not fragmented, it is 0.
Offset (Fragmentation offset) : Indicates where in the original datagram this fragment belongs. The fragment offset is measured in units of 8 octets (64 bits). The first fragment has offset zero.
TTL (Time to Live) : Indicates the maximum time the datagram is allowed to remain in the network. If this field contains the value zero, the datagram must be destroyed.
Protocol : Indicates the next-level protocol used in the data portion of the datagram.
Checksum : Used to detect processing errors introduced into the packet.
The original packet starts at the client; the fragments are reassembled at the server.
The value of the identification field is the same in all fragments, as is the value of the flags
field with the more bit set for all fragments except the last.
Also, the value of the offset field for each fragment is shown.
Although the fragments arrived out of order at the destination, they can be correctly
reassembled.
Example:
The value of the offset field is always relative to the original datagram.
Even if each fragment follows a different path and arrives out of
order, the final destination host can reassemble the original datagram
from the fragments received (if none of them is lost) using the
following strategy:
1) The first fragment has an offset field value of zero.
2) Divide the length of the first fragment by 8. The second
fragment has an offset value equal to that result.
3) Divide the total length of the first and second fragments by 8.
The third fragment has an offset value equal to that result.
4) Continue the process. The last fragment has its M (more) bit set to 0.
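The offset arithmetic in the steps above can be sketched as follows; the 4000-byte payload and 1480-byte fragment size are assumed example values (the fragment size must be a multiple of 8):

```python
def fragment_offsets(payload_len: int, frag_size: int):
    # Offsets are in 8-byte units: each fragment's offset is the number
    # of payload bytes that precede it, divided by 8.
    offsets, pos = [], 0
    while pos < payload_len:
        offsets.append(pos // 8)
        pos += frag_size
    return offsets

# fragment_offsets(4000, 1480) -> [0, 185, 370]
```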
Reassembly:
Reassembly is done at the receiving host and not at each router.
To enable these fragments to be reassembled at the receiving host,
they all carry the same identifier in the Ident field.
This identifier is chosen by the sending host and is intended to be
unique among all the datagrams that might arrive at the destination
from this source over some reasonable time period.
Since all fragments of the original datagram contain this identifier, the
reassembling host will be able to recognize those fragments that go
together.
For example, if a single fragment is lost, the receiver will still attempt
to reassemble the datagram, and it will eventually give up and have to
garbage-collect the resources that were used to perform the failed
reassembly.
Hosts are now strongly encouraged to perform “path MTU discovery,”
a process by which fragmentation is avoided by sending packets that
are small enough to traverse the link with the smallest MTU in the path
from sender to receiver.
IP SECURITY
There are three security issues that are particularly applicable to the IP protocol:
(1) Packet Sniffing (2) Packet Modification and (3) IP Spoofing.
Packet Sniffing
An intruder may intercept an IP packet and make a copy of it.
Packet sniffing is a passive attack, in which the attacker does not
change the contents of the packet.
This type of attack is very difficult to detect because the sender and the
receiver may never know that the packet has been copied.
Although packet sniffing cannot be stopped, encryption of the packet
can make the attacker’s effort useless.
The attacker may still sniff the packet, but the content is not detectable.
Packet Modification
The second type of attack is to modify the packet.
The attacker intercepts the packet, changes its contents, and sends
the new packet to the receiver.
The receiver believes that the packet is coming from the original sender.
This type of attack can be detected using a data integrity mechanism.
The receiver, before opening and using the contents of the message, can
use this mechanism to make sure that the packet has not been changed
during the transmission.
IP Spoofing
An attacker can masquerade as somebody else and create an IP
packet that carries the source address of another computer.
An attacker can send an IP packet to a bank pretending that it is coming
from one of the customers.
This type of attack can be prevented using an origin
authentication mechanism
IP Sec
The IP packets today can be protected from the previously mentioned
attacks using a protocol called IPSec (IP Security).
This protocol is used in conjunction with the IP protocol.
IPSec protocol creates a connection-oriented service between two
entities in which they can exchange IP packets without worrying about
the three attacks mentioned above: packet sniffing, packet modification
and IP spoofing.
IP Sec provides the following four services:
1) Defining Algorithms and Keys : The two entities that want to
create a secure channel between themselves can agree on some
available algorithms and keys to be used for security purposes.
2) Packet Encryption : The packets exchanged between two
parties can be encrypted for privacy using one of the encryption
algorithms and a shared key agreed upon in the first step. This
makes the packet sniffing attack useless.
3) Data Integrity : Data integrity guarantees that the packet is not
modified during the transmission. If the received packet does not
pass the data integrity test, it is discarded. This prevents the
second attack, packet modification.
4) Origin Authentication : IPSec can authenticate the origin of
the packet to be sure that the packet is not created by an
imposter. This can prevent IP spoofing attacks.
Ping
The ping program is used to find if a host is alive and responding.
The source host sends ICMP echo-request messages; the destination, if
alive, responds with ICMP echo-reply messages.
The ping program sets the identifier field in the echo-request and
echo-reply messages and starts the sequence number from 0; this number
is incremented by 1 each time a new message is sent.
The ping program can calculate the round-trip time.
It inserts the sending time in the data section of the message.
When the packet arrives, it subtracts the arrival time from the departure
time to get the round-trip time (RTT).
$ ping google.com
Traceroute or Tracert
The traceroute program in UNIX or tracert in Windows can be used to
trace the path of a packet from a source to the destination.
It can find the IP addresses of all the routers that are visited along the path.
The program is usually set to check for the maximum of 30 hops
(routers) to be visited.
The number of hops in the Internet is normally less than this.
$ traceroute google.com
5. UNICAST ROUTING
Routing is the process of selecting best paths in a network.
In unicast routing, a packet is routed, hop by hop, from its source to its
destination by the help of forwarding tables.
Routing a packet from its source to its destination means routing the
packet from a source router (the default router of the source host) to a
destination router (the router connected to the destination network).
The source host needs no forwarding table because it delivers its packet
to the default router in its local network.
The destination host needs no forwarding table either because it
receives the packet from its default router in its local network.
Only the intermediate routers in the networks need forwarding tables.
NETWORK AS A GRAPH
The Figure below shows a graph representing a network.
Initial State
Each node sends its initial table (distance vector) to its neighbors and
receives their estimates.
Node A sends its table to nodes B, C, E & F and receives tables from
nodes B, C, E & F.
Each node updates its routing table by comparing it with each of its
neighbors' tables.
For each destination, Total Cost is computed as:
Total Cost = Cost (Node to Neighbor) + Cost (Neighbor to Destination)
If Total Cost < Cost then
Cost = Total Cost and NextHop = Neighbor
Node A learns from C's table to reach node D and from F's table
to reach node G.
Total Cost to reach node D via C = Cost(A to C) + Cost(C to D) = 1 + 1 = 2
Since 2 < ∞, entry for destination D in A's table is changed to (D, 2, C)
Total Cost to reach node G via F = Cost(A to F) + Cost(F to G) = 1 + 1 = 2
Since 2 < ∞, entry for destination G in A's table is changed to (G, 2, F)
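The update rule above can be sketched in Python:

```python
INF = float("inf")

def dv_update(my_table, neighbor, cost_to_neighbor, neighbor_table):
    # For each destination in the neighbor's distance vector:
    #   Total Cost = Cost(node to neighbor) + Cost(neighbor to destination)
    # Adopt the route if Total Cost is less than the current cost.
    for dest, (cost, _) in neighbor_table.items():
        total = cost_to_neighbor + cost
        if total < my_table.get(dest, (INF, None))[0]:
            my_table[dest] = (total, neighbor)   # (Cost, NextHop)

# Node A learns from C's table how to reach D, as in the example:
a_table = {"C": (1, "C")}
dv_update(a_table, "C", 1, {"D": (1, "D")})  # A's entry for D becomes (2, "C")
```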
Each node builds a complete routing table after a few exchanges
with its neighbors.
System stabilizes when all nodes have complete routing information, i.e.,
convergence.
Routing tables are exchanged periodically or in case of triggered update.
The final distances stored at each node are given below:
Periodic Update
In this case, each node automatically sends an update message every
so often, even if nothing has changed.
The frequency of these periodic updates varies from protocol to
protocol, but it is typically on the order of several seconds to several
minutes.
Triggered Update
In this case, whenever a node notices a link failure or receives an
update from one of its neighbors that causes it to change one of the
routes in its routing table.
Whenever a node’s routing table changes, it sends an update to its
neighbors, which may lead to a change in their tables, causing them to
send an update to their neighbors.
Reliable Flooding
Each node sends its LSP out on each of its directly connected links.
When a node receives the LSP of another node, it checks whether it
already has an LSP for that node.
If not, it stores and forwards the LSP on all other links except the
incoming one.
Otherwise, if the received LSP has a larger sequence number, it is
stored and forwarded, and the older LSP for that node is discarded.
Otherwise the received LSP is discarded, since it is not the latest for that node.
Thus the most recent LSP of a node eventually reaches all nodes, i.e., reliable flooding.
Route Calculation
Each node knows the entire topology, once it has LSP from every other node.
The forward search algorithm is used to compute the routing table from
the received LSPs.
Each node maintains two lists, namely Tentative and Confirmed with
entries of the form (Destination, Cost, NextHop).
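A sketch of the forward search computation (Dijkstra's algorithm), assuming the full topology learned from the LSPs is given as a cost map; the five-line topology at the end is an assumed example:

```python
import heapq

def forward_search(graph, source):
    # graph: {node: {neighbor: link cost}} -- the topology from the LSPs.
    # Returns the Confirmed list as {destination: (Cost, NextHop)}.
    confirmed = {}
    tentative = [(0, source, None)]          # (Cost, Destination, NextHop)
    while tentative:
        cost, node, hop = heapq.heappop(tentative)
        if node in confirmed:                # already confirmed more cheaply
            continue
        confirmed[node] = (cost, hop)
        for nbr, link_cost in graph[node].items():
            if nbr not in confirmed:
                # next hop toward nbr is nbr itself when we are at the source
                heapq.heappush(tentative, (cost + link_cost, nbr, hop or nbr))
    return confirmed

# Assumed example topology:
graph = {"A": {"B": 1, "C": 5}, "B": {"A": 1, "C": 1}, "C": {"A": 5, "B": 1}}
# forward_search(graph, "A") confirms C at cost 2 via B, not cost 5 via C.
```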
Example :
OPEN SHORTEST PATH FIRST PROTOCOL (OSPF)
OSPF is a non-proprietary widely used link-state routing protocol.
OSPF Features are:
Authentication―A malicious host can bring down a network by
advertising that it can reach every host with cost 0. Such disasters
are averted by authenticating routing updates.
Additional hierarchy―Domain is partitioned into areas, i.e.,
OSPF is more scalable.
Load balancing―Multiple routes to the same destination with the
same cost allow traffic to be distributed evenly among them.
Spanning Trees
In path-vector routing, the path from a source to all destinations is
determined by the best spanning tree.
The best spanning tree is not necessarily the least-cost tree.
It is the tree determined by the source when it imposes its own policy.
If there is more than one route to a destination, the source can choose
the route that meets its policy best.
A source may apply several policies at the same time.
One of the common policies uses the minimum number of nodes to be
visited. Another common policy is to avoid some nodes as the middle
node in a route.
The spanning trees are made, gradually and asynchronously, by each
node. When a node is booted, it creates a path vector based on the
information it can obtain about its immediate neighbors.
A node sends greeting messages to its immediate neighbors to collect
these pieces of information.
Each node, after the creation of the initial path vector, sends it to all its
immediate neighbors.
Each node, when it receives a path vector from a neighbor, updates its
path vector using the formula
Path(x, y) = best {Path(x, y), [x + Path(v, y)]} for every neighbor v
where best is the path that conforms to the node's policy.
Example:
The Figure below shows a small internet with only five nodes.
Each source has created its own spanning tree that meets its policy.
The policy imposed by all sources is to use the minimum number of
nodes to reach a destination.
The spanning tree selected by A and E is such that the communication
does not pass through D as a middle node.
Similarly, the spanning tree selected by B is such that the
communication does not pass through C as a middle node.
Path Vectors made at booting time
The Figure below shows all of these path vectors for the example.
Not all of these tables are created simultaneously.
They are created when each node is booted.
The figure also shows how these path vectors are sent to immediate
neighbors after they have been created.
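One update step of path-vector routing can be sketched as follows; the policy function and node names mirror the example above (avoid D as a middle node), and "best" here is taken to mean the path with the fewest nodes:

```python
# Illustrative path-vector update. A path vector maps
# destination -> list of nodes on the path; policy rejects unwanted paths.
def update_path_vector(my_vector, me, neighbor, neighbor_vector, policy):
    changed = False
    for dest, path in neighbor_vector.items():
        if me in path:
            continue                      # loop: our own node is on the path
        candidate = [me] + path           # Path(me, dest) = me + Path(v, dest)
        current = my_vector.get(dest)
        if policy(candidate) and (current is None or len(candidate) < len(current)):
            my_vector[dest] = candidate   # best = fewest nodes, in this sketch
            changed = True
    return changed

# Policy from the example: never use D as a middle (transit) node.
no_transit_D = lambda path: "D" not in path[1:-1]

A = {"A": ["A"], "B": ["A", "B"]}
B_vector = {"B": ["B"], "C": ["B", "C"], "E": ["B", "D", "E"]}
update_path_vector(A, "A", "B", B_vector, no_transit_D)
print(A)   # {'A': ['A'], 'B': ['A', 'B'], 'C': ['A', 'B', 'C']}
# E is skipped: the candidate path A-B-D-E uses D as a middle node.
```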
MULTICAST ADDRESSING
A multicast address is associated with a group, whose membership is dynamic.
Each group has its own IP multicast address.
IP addresses reserved for multicasting are Class D in IPv4
(224.0.0.0 to 239.255.255.255) and addresses with the 1111 1111 (FF)
prefix in IPv6.
Hosts that are members of a group receive a copy of any packet whose
destination is the group address.
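The reserved ranges above can be checked with Python's standard ipaddress module:

```python
import ipaddress

# Checking membership in the reserved multicast ranges.
print(ipaddress.ip_address("224.0.0.0").is_multicast)        # True  (start of Class D)
print(ipaddress.ip_address("239.255.255.255").is_multicast)  # True  (end of Class D)
print(ipaddress.ip_address("192.0.2.1").is_multicast)        # False (ordinary unicast)
print(ipaddress.ip_address("ff02::1").is_multicast)          # True  (IPv6 FF00::/8)
```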
Using IP multicast
Sending host does not send multiple copies of the packet
A host sends a single copy of the packet addressed to the group’s
multicast address
The sending host does not need to know the individual unicast IP
address of each member
TYPES OF MULTICASTING
Source-Specific Multicast - In source-specific multicast (one-to-many
model), the receiver specifies both the multicast group and the sender
from which it is interested in receiving packets. Example: Internet radio broadcasts.
MULTICAST APPLICATIONS
Access to Distributed Databases
Information Dissemination
Teleconferencing
Distance Learning
MULTICAST ROUTING
To support multicast, a router must additionally have multicast
forwarding tables that indicate, based on multicast address, which
links to use to forward the multicast packet.
Unicast forwarding tables collectively specify a set of paths.
Multicast forwarding tables collectively specify a set of trees -
Multicast distribution trees.
Multicast routing is the process by which multicast distribution
trees are determined.
Internet multicast is implemented on physical networks that
support broadcasting by extending forwarding functions.
1. Distance Vector Multicast Routing Protocol (DVMRP)
Pruning:
Sent from routers receiving multicast traffic for which they have
no active group members
“Prunes” the tree created by DVMRP
Stops needless data from being sent
Grafting:
Used after a branch has been pruned back
Sent by a router that has a host that joins a multicast group
Goes from router to router until a router active on the multicast
group is reached
Sent for the following cases
A new host member joins a group
A new dependent router joins a pruned branch
A dependent router restarts on a pruned branch
2. Protocol Independent Multicast (PIM)
◻ PIM divides multicast routing problem into sparse and dense mode.
◻ PIM sparse mode (PIM-SM) is widely used.
◻ PIM does not rely on any specific unicast routing protocol, hence it is
protocol independent.
◻ Routers explicitly join and leave multicast group using Join and
Prune messages.
◻ One of the routers is designated as the rendezvous point (RP) for each
group in a domain to receive PIM messages.
◻ Multicast forwarding tree is built as a result of routers sending Join
messages to RP.
◻ Two types of trees to be constructed:
Shared tree - used by all senders
Source-specific tree - used only by a specific sending host
◻ The normal mode of operation creates the shared tree first, followed by
one or more source-specific trees
Shared Tree
◻ When a router sends Join message for group G to RP, it goes
through a set of routers.
◻ Join message is wildcarded (*), i.e., it is applicable to all senders.
◻ Routers create an entry (*, G) in their forwarding tables for the shared tree.
◻ Interface on which the Join arrived is marked to forward packets
for that group.
◻ Forwards Join towards rendezvous router RP.
◻ Eventually, the message arrives at RP. Thus a shared tree with RP as
root is formed.
Example
◻ Router R4 sends Join message for group G to rendezvous router RP.
◻ Join message is received by router R2. It makes an entry (*, G) in its
table and forwards the message to RP.
◻ When R5 sends a Join message for group G, R2 does not forward the Join. It
adds an outgoing interface to the forwarding-table entry created for that group.
◻ As routers send Join message for a group, branches are added to the
tree, i.e., shared.
◻ Multicast packets sent from hosts are forwarded to designated router RP.
◻ Suppose router R1 receives a message to group G.
o R1 has no state for group G.
o Encapsulates the multicast packet in a Register message.
o Multicast packet is tunneled along the way to RP.
◻ RP decapsulates the packet and sends multicast packet onto the
shared tree, towards R2.
◻ R2 forwards the multicast packet to routers R4 and R5 that have
members for group G.
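The shared-tree construction just described can be sketched as follows; the router dictionaries and interface names are hypothetical, chosen to match the R2/R4/R5 example:

```python
# Hypothetical sketch of Join(*, G) processing on a router: record the
# incoming interface for the shared tree, and forward the Join toward RP
# only the first time this router sees the group.
def handle_join(router, group, in_iface, toward_rp):
    """Returns the interface to forward the Join on, or None."""
    key = ("*", group)                    # wildcard entry for the shared tree
    entry = router["table"].setdefault(key, set())
    first_join = not entry                # no state for G on this router yet
    entry.add(in_iface)                   # forward data for G out this iface
    if router["is_rp"] or not first_join:
        return None                       # RP reached, or branch already joined
    return toward_rp                      # propagate the Join toward RP

R2 = {"table": {}, "is_rp": False}
print(handle_join(R2, "G", "if_to_R4", "if_to_RP"))   # 'if_to_RP'
print(handle_join(R2, "G", "if_to_R5", "if_to_RP"))   # None: iface added only
print(sorted(R2["table"][("*", "G")]))   # ['if_to_R4', 'if_to_R5']
```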
Source-Specific Tree
◻ RP can make the intermediate routers learn about group G by sending a
Join message towards the sending host, so that tunneling can be avoided.
◻ Intermediary routers create sender-specific entry (S, G) in their
tables. Thus a source-specific route from R1 to RP is formed.
◻ If there is a high rate of packets sent from a sender to a group G, then the
shared tree is replaced by a source-specific tree with the sender as root.
Example
Analysis of PIM
◻ Protocol independent because the tree is built from Join messages that
follow the unicast shortest path.
◻ Shared trees are more scalable than source-specific trees.
◻ Source-specific trees enable more efficient routing than shared trees.
FEATURES OF IPV6
1. Better header format - IPv6 uses a new header format in which options
are separated from the base header and inserted, when needed, between
the base header and the data. This simplifies and speeds up the routing
process because most of the options do not need to be checked by
routers.
2. New options - IPv6 has new options to allow for additional functionalities.
3. Allowance for extension - IPv6 is designed to allow the extension of
the protocol if required by new technologies or applications.
4. Support for resource allocation - In IPv6, the type-of-service field has
been removed, but two new fields, traffic class and flow label, have
been added to enable the source to request special handling of the
packet. This mechanism can be used to support traffic such as real-time
audio and video.
Additional Features :
1. Need to accommodate scalable routing and addressing
2. Support for real-time services
3. Security support
4. Autoconfiguration -
The ability of hosts to automatically configure themselves with such
information as their own IP address and domain name.
5. Enhanced routing functionality, including support for mobile hosts
6. Transition from IPv4 to IPv6
Extension Headers
◻ Extension headers provide greater functionality to IPv6.
◻ The base header may be followed by up to six extension headers.
◻ Each extension header contains a NextHeader field to identify the
header following it.
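The NextHeader chaining can be sketched with the well-known IPv6 header numbers (0 = Hop-by-Hop, 43 = Routing, 6 = TCP, and so on); the dictionary layout is an illustrative assumption, not a real packet parser:

```python
# A few well-known IPv6 NextHeader / protocol numbers.
NEXT_HEADER = {0: "Hop-by-Hop", 43: "Routing", 44: "Fragment",
               51: "Authentication", 50: "ESP", 60: "Destination",
               6: "TCP", 17: "UDP"}

def walk_headers(first, headers):
    """headers maps an extension header's number to its NextHeader value;
    walking stops at the upper-layer header (e.g. TCP)."""
    chain, current = [], first
    while current in headers:             # still an extension header
        chain.append(NEXT_HEADER[current])
        current = headers[current]
    chain.append(NEXT_HEADER[current])    # upper-layer header
    return chain

# Base header says NextHeader=0; Hop-by-Hop says 43; Routing says 6 (TCP).
print(walk_headers(0, {0: 43, 43: 6}))   # ['Hop-by-Hop', 'Routing', 'TCP']
```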
ADVANTAGES OF IPV6
◻ Address space ― IPv6 uses 128-bit address whereas IPv4 uses 32-bit
address. Hence IPv6 has huge address space whereas IPv4 faces
address shortage problem.
◻ Header format ― Unlike IPv4, optional headers are separated from the
base header in IPv6. Routers thus need not process unwanted
additional information.
◻ Extensible ― Unassigned IPv6 addresses can accommodate the needs of
future technologies.