Network Layer
Routing Algorithms
Distinction between routing and forwarding
One can think of a router as having two
processes inside it:
One of them handles each packet as it
arrives, looking up the outgoing line to use for
it in the routing tables. This process is
forwarding
The other process is responsible for filling in
and updating the routing tables. That is where
the routing algorithm comes into play
Shortest Path Routing
The idea is to build a graph of the subnet,
with each node of the graph representing
a router and each arc of the graph
representing a communication line (often
called a link)
To choose a route between a given pair of
routers, the algorithm just finds the
shortest path between them on the graph
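As a minimal sketch (not taken from the slides), the shortest-path computation can be done with Dijkstra's algorithm; the topology and link costs below are invented purely for illustration.

```python
import heapq

def dijkstra(graph, source):
    """Compute shortest-path distances and predecessors from source.
    graph: dict mapping router -> {neighbor: link_cost}."""
    dist = {node: float("inf") for node in graph}
    prev = {node: None for node in graph}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale heap entry
        for v, cost in graph[u].items():
            if d + cost < dist[v]:
                dist[v] = d + cost
                prev[v] = u
                heapq.heappush(heap, (dist[v], v))
    return dist, prev

# Invented topology: each node is a router, each entry a link with its cost.
graph = {
    "A": {"B": 2, "G": 6},
    "B": {"A": 2, "C": 7, "E": 2},
    "C": {"B": 7, "D": 3},
    "D": {"C": 3, "H": 2},
    "E": {"B": 2, "F": 2, "G": 1},
    "F": {"E": 2, "H": 2},
    "G": {"A": 6, "E": 1, "H": 4},
    "H": {"D": 2, "F": 2, "G": 4},
}
dist, prev = dijkstra(graph, "A")
# Walk the predecessors backwards to print the route from A to D.
path, node = [], "D"
while node is not None:
    path.append(node)
    node = prev[node]
print(list(reversed(path)), dist["D"])
```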
Shortest Path Routing
Figure: the first five steps used in computing the shortest path from A to D; the arrows indicate the working node.
Flooding
• Flooding is a form of static isolated routing.
• It does not select a specific route. When a router receives a packet, it sends a copy of the packet out on every line except the one on which it arrived.
• To prevent packets from looping forever,
each router decrements a hop count
contained in the packet header.
• Whenever the hop count reaches zero, the router discards the packet.
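A minimal sketch of this hop-count-limited flooding; the topology, router names, and the initial hop count below are assumptions for illustration.

```python
def flood(topology, start, packet, hop_count):
    """Send a copy of the packet on every line except the incoming one,
    decrementing the hop count at each router and discarding at zero.
    topology: dict mapping router -> list of neighboring routers."""
    deliveries = []

    def forward(router, came_from, hops):
        deliveries.append((router, packet))
        if hops == 0:
            return  # hop count exhausted: discard instead of forwarding
        for neighbor in topology[router]:
            if neighbor != came_from:   # every line except the arrival line
                forward(neighbor, router, hops - 1)

    forward(start, None, hop_count)
    return deliveries

# Assumed toy topology and hop count.
topology = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
print(flood(topology, "A", "hello", hop_count=3))
```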
Flooding
• In selective flooding, a router sends packets
out only on those lines in the general
direction of the destination. That is, don't
send packets out on lines that clearly lead in
the wrong direction.
• There is usually little point in sending a
westbound packet on an eastbound line
unless the topology is extremely peculiar
and the router is sure of this fact
• Flooding always chooses the shortest path
because it chooses every possible path in
parallel.
Distance Vector Routing
(Dynamic routing algorithms)
Modern computer networks generally use
dynamic routing algorithms rather than the
static ones described above because static
algorithms do not take the current network
load into account
Two dynamic algorithms:
Distance vector routing
Link state routing
Distance Vector Routing
In distance vector routing, each router maintains a table (a vector) giving the best known distance to each destination and which line to use to get there. The tables are updated by exchanging distance vectors with the neighbors.
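As a hedged sketch of the distance vector (Bellman-Ford) update, a router combines the vectors received from its neighbors with the measured delay to each neighbor and keeps, for every destination, the neighbor giving the smallest total; all router names and numbers below are invented for illustration.

```python
def dv_update(neighbor_delay, neighbor_vectors, destinations):
    """One distance-vector update at a single router.
    neighbor_delay: measured delay from this router to each neighbor.
    neighbor_vectors: each neighbor's advertised distance to every destination."""
    table = {}
    for dest in destinations:
        best_line, best_cost = None, float("inf")
        for neighbor, delay in neighbor_delay.items():
            cost = delay + neighbor_vectors[neighbor][dest]
            if cost < best_cost:
                best_line, best_cost = neighbor, cost
        table[dest] = (best_line, best_cost)   # (outgoing line, estimated delay)
    return table

# Invented example: a router with neighbors A, I, H, K and destinations A, B, C.
neighbor_delay = {"A": 8, "I": 10, "H": 12, "K": 6}
neighbor_vectors = {
    "A": {"A": 0, "B": 12, "C": 25},
    "I": {"A": 24, "B": 36, "C": 18},
    "H": {"A": 20, "B": 31, "C": 19},
    "K": {"A": 21, "B": 28, "C": 36},
}
print(dv_update(neighbor_delay, neighbor_vectors, ["A", "B", "C"]))
```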
Distance Vector Routing
(The Count to Infinity Problem)
Distance vector routing reacts quickly to good news (a new, shorter path) but slowly to bad news: when a router or line goes down, the remaining routers keep raising their estimated distance to the unreachable destination one step per exchange, slowly counting up toward infinity before the route is finally given up.
Measuring Line Cost
An argument against including the load in the delay calculation
Consider the subnet of the figure, which is divided into two parts, East and West, connected by two lines, CF and EI.
Suppose that most of the traffic between East and West uses line CF, and as a result this line is heavily loaded, with long delays.
Including queueing delay in the shortest path calculation will then make EI more attractive.
After the new routing tables have been installed, most of the East-West traffic will move to EI, overloading this line.
Consequently, in the next update CF will appear to be the shortest path again, and the routing tables may oscillate wildly.
Netid and Hostid
In classful addressing, an IP address is divided into a network identifier (Netid) and a host identifier (Hostid). The Netid and Hostid concept does not apply to classes D and E.
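A minimal sketch (not part of the slides) of how the class, Netid, and Hostid can be read off the first octet of a dotted-decimal address; the sample addresses are made up.

```python
def classify(address):
    """Return (class, netid, hostid) for a dotted-decimal IPv4 address
    under classful addressing; classes D and E have no Netid/Hostid."""
    octets = address.split(".")
    first = int(octets[0])
    if first < 128:                     # leading bit 0
        return "A", ".".join(octets[:1]), ".".join(octets[1:])
    if first < 192:                     # leading bits 10
        return "B", ".".join(octets[:2]), ".".join(octets[2:])
    if first < 224:                     # leading bits 110
        return "C", ".".join(octets[:3]), ".".join(octets[3:])
    if first < 240:                     # leading bits 1110 (multicast)
        return "D", None, None
    return "E", None, None              # leading bits 1111 (reserved)

# Made-up sample addresses, one per class.
for addr in ["10.5.6.7", "172.16.8.9", "205.16.37.39", "224.0.0.5", "245.1.2.3"]:
    print(addr, classify(addr))
```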
Drawback: Classful addressing
• In Class A, the block is too large
  – most of the addresses were wasted and never used
• The block in Class B is also too large
• The block in Class C is probably too small for many organizations
• Class D (multicast): the Internet authorities wrongly predicted a need for 268,435,456 multicast groups; this never happened
• Class E: reserved for future use; only a few of these addresses were ever used
Classless Addressing
• No classes, but addresses are still granted
in blocks
• Restrictions:
  – The addresses in a block must be contiguous
  – The number of addresses in a block must be a power of 2 (1, 2, 4, 8, …)
  – The first address must be evenly divisible by the number of addresses
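A small sketch, under the assumption that a block is described by its first address plus its size, that checks the restrictions above.

```python
import ipaddress

def valid_block(first_address, n_addresses):
    """Check the classless-addressing restrictions: the block size must be a
    power of 2 and the first address must be evenly divisible by the size
    (contiguity is implied by describing the block as first address + size)."""
    first = int(ipaddress.IPv4Address(first_address))
    power_of_two = n_addresses > 0 and (n_addresses & (n_addresses - 1)) == 0
    aligned = power_of_two and first % n_addresses == 0
    return power_of_two and aligned

# Made-up examples: a properly aligned block of 16 addresses and a misaligned one.
print(valid_block("205.16.37.32", 16))   # True: 16 is a power of 2 and divides the address
print(valid_block("205.16.37.40", 16))   # False: first address not divisible by 16
```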
• Multihoming: an organization (or host) connected to the Internet through more than one connection is said to be multihomed and may hold more than one address block
Congestion Control Algorithms
When too many packets are present in a
part of the subnet, performance degrades.
This situation is called Congestion
Queues in a router
• Congestion shows up as growing queues inside the routers:
  – If the packet arrival rate is higher than the packet processing rate, the input queues grow longer and longer
  – If the packet departure rate is lower than the packet processing rate, the output queues grow longer and longer
Figure: packet delay and throughput as functions of load.
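A toy sketch (with assumed rates) of why a router's queue grows without bound once the packet arrival rate exceeds the packet processing rate.

```python
def queue_length_over_time(arrival_rate, processing_rate, seconds):
    """Track a router's input-queue length, assuming fixed rates
    in packets per second and an initially empty queue."""
    queue = 0
    lengths = []
    for _ in range(seconds):
        queue += arrival_rate                   # packets arriving this second
        queue -= min(queue, processing_rate)    # packets the router can process
        lengths.append(queue)
    return lengths

print(queue_length_over_time(arrival_rate=900, processing_rate=1000, seconds=5))
# [0, 0, 0, 0, 0]            -- load below capacity: the queue stays empty
print(queue_length_over_time(arrival_rate=1200, processing_rate=1000, seconds=5))
# [200, 400, 600, 800, 1000] -- load above capacity: the queue grows every second
```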
Difference between Congestion Control and Flow Control
Congestion control:
Makes sure the subnet is able to carry the offered traffic
It is a global issue, involving the behavior of all the hosts, all the
routers, the store-and-forward processing within the routers,
and all the other factors that tend to diminish the carrying
capacity of the subnet
Flow control:
Relates to the point-to-point traffic
Its job is to make sure that a fast sender cannot continually
transmit data faster than the receiver is able to absorb it
Flow control frequently involves some direct feedback from the
receiver to the sender to tell the sender how things are doing at
the other end
General Principles of Congestion Control
Can be viewed from a control theory point
of view
This approach leads to dividing all
solutions into two groups:
Open loop
Closed loop
General Principles of Congestion Control
(Open loop systems)
These systems are designed to minimize
congestion in the first place, rather than
letting it happen and reacting after the fact
Tools for doing open-loop control include
deciding when to accept new traffic,
deciding when to discard packets and
which ones, and making scheduling
decisions at various points in the network
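As one hedged illustration of "deciding when to discard packets and which ones", the sketch below implements a simple drop-tail policy: arriving packets are refused once the output queue reaches a fixed limit. The buffer limit and packet objects are assumptions, not something prescribed by the slides.

```python
from collections import deque

class DropTailQueue:
    """Open-loop discard policy: accept packets until the queue is full,
    then drop every new arrival (drop-tail)."""
    def __init__(self, limit):
        self.limit = limit
        self.queue = deque()
        self.dropped = 0

    def enqueue(self, packet):
        if len(self.queue) >= self.limit:
            self.dropped += 1       # no buffer space: discard the new packet
            return False
        self.queue.append(packet)
        return True

    def dequeue(self):
        return self.queue.popleft() if self.queue else None

# Assumed limit of 3 buffers; the 4th and 5th arrivals are discarded.
q = DropTailQueue(limit=3)
for i in range(5):
    q.enqueue(f"packet-{i}")
print(len(q.queue), q.dropped)   # 3 2
```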
General Principles of Congestion Control
(Closed Loop Systems)
Closed loop solutions are based on the
concept of a feedback loop
This approach has three parts when
applied to congestion control:
1) Monitor the system to detect when and where
congestion occurs
2) Pass this information to places where action can
be taken
3) Adjust system operation to correct the problem
General Principles of Congestion Control
(Closed Loop Systems)
Monitoring the system: A variety of metrics can be
used to monitor the subnet for congestion.
Chief among these are:
1) the percentage of all packets discarded for lack of buffer
space
2) the average queue lengths
3) the number of packets that time out and are retransmitted
4) the average packet delay
5) the standard deviation of packet delay
In all cases, rising numbers indicate growing
congestion.
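A small sketch of how a router might track two of these metrics: the percentage of discarded packets and an average queue length. The exponentially weighted moving average used for the queue length is an assumption for illustration, not something the slides prescribe.

```python
class CongestionMonitor:
    """Track the discard percentage and a smoothed average queue length."""
    def __init__(self, smoothing=0.1):
        self.smoothing = smoothing
        self.avg_queue_len = 0.0
        self.forwarded = 0
        self.discarded = 0

    def sample_queue(self, instantaneous_len):
        # Exponentially weighted moving average of the queue length.
        a = self.smoothing
        self.avg_queue_len = (1 - a) * self.avg_queue_len + a * instantaneous_len

    def record_packet(self, was_discarded):
        if was_discarded:
            self.discarded += 1
        else:
            self.forwarded += 1

    def discard_percentage(self):
        total = self.forwarded + self.discarded
        return 100.0 * self.discarded / total if total else 0.0

m = CongestionMonitor()
for qlen, dropped in [(2, False), (5, False), (9, True), (12, True)]:
    m.sample_queue(qlen)
    m.record_packet(dropped)
print(round(m.avg_queue_len, 2), m.discard_percentage())   # rising numbers indicate growing congestion
```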
General Principles of Congestion Control
(Closed Loop Systems)
The Feedback Loop: the information about the congestion must be transferred from the point where it is detected to the point where something can be done about it
1) One way is for the router detecting the congestion to send a packet to the traffic source or sources, announcing the problem
2) Another possibility is to reserve a bit or field in every packet for routers to fill in whenever congestion gets above some threshold level
3) A third approach is to have hosts or routers periodically send probe packets out to explicitly ask about congestion. This information can then be used to route traffic around problem areas
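A hedged sketch of the second option: each router marks a warning bit in the packets it forwards whenever its own congestion estimate is above a threshold, the destination echoes the bit back to the source, and the source slows down. The threshold, packet fields, and rate adjustment are all assumptions made for illustration.

```python
THRESHOLD = 0.8          # assumed congestion threshold (fraction of line capacity)

def router_forward(packet, utilization):
    """Set the warning bit in the packet header if this router is congested."""
    if utilization > THRESHOLD:
        packet["congestion_warning"] = True
    return packet

def destination_ack(packet):
    """The destination copies the warning bit into its acknowledgement."""
    return {"ack": True, "congestion_warning": packet.get("congestion_warning", False)}

def source_adjust(send_rate, ack):
    """The source reduces its rate on a warning and increases it slowly otherwise."""
    return send_rate * 0.5 if ack["congestion_warning"] else send_rate + 1

packet = {"payload": "data", "congestion_warning": False}
ack = destination_ack(router_forward(packet, utilization=0.9))
print(source_adjust(send_rate=10, ack=ack))    # 5.0: the source slows down
```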
General Principles of Congestion Control
(Closed Loop Systems)
Adjust system operation :
Two possible solutions:
1) Increase the resources
2) Decrease the load
Congestion Prevention Policies
(Open Loop Systems)
Open-loop systems try to achieve their goal by using appropriate policies at various levels
Congestion Prevention Policies
(Open Loop Systems)
The retransmission policy is concerned with how fast
a sender times out and what it transmits upon
timeout
A jumpy sender that times out quickly and retransmits all
outstanding packets using go back n will put a heavier load on
the system than will a leisurely sender that uses selective
repeat
Buffering policy: If receivers routinely discard all out-of-order packets, these packets will have to be transmitted again later, creating extra load
Congestion Prevention Policies
(Open Loop Systems)
Acknowledgement policy : If each packet is
acknowledged immediately, the
acknowledgement packets generate extra
traffic. However, if acknowledgements are
saved up to piggyback onto reverse traffic,
extra timeouts and retransmissions may
result
A tight flow control scheme (e.g., a small window)
reduces the data rate and thus helps fight
congestion
Congestion Prevention Policies
(Open Loop Systems)
At the network layer, the choice between using
virtual circuits and using datagrams affects
congestion since many congestion control
algorithms work only with virtual-circuit subnets
Packet queueing and service policy relates to
whether routers have one queue per input line, one
queue per output line, or both
Discard policy is the rule telling which packet is
dropped when there is no space. A good policy can
help alleviate congestion and a bad one can make it
worse.
Congestion Prevention Policies
(Open Loop Systems)
A good routing algorithm can help avoid
congestion by spreading the traffic over all
the lines, whereas a bad one can send too
much traffic over already congested lines
Packet lifetime management deals with how
long a packet may live before being
discarded
If it is too long, lost packets may clog up the works
for a long time, but if it is too short, packets may
sometimes time out before reaching their
destination, thus inducing retransmissions
Congestion Control in Virtual-Circuit Subnets
One technique that is widely used to keep
congestion that has already started from
getting worse is admission control
The idea is simple: once congestion has been
signaled, no more virtual circuits are set up until the
problem has gone away
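A minimal sketch of admission control under this idea: once congestion has been signaled, requests to set up new virtual circuits are refused until the congestion clears. The signaling interface and identifiers below are assumptions.

```python
class AdmissionControl:
    """Refuse new virtual-circuit setups while congestion is signaled."""
    def __init__(self):
        self.congested = False
        self.circuits = set()
        self.next_id = 0

    def signal_congestion(self, congested):
        self.congested = congested

    def setup_circuit(self, src, dst):
        if self.congested:
            return None            # reject: no new virtual circuits during congestion
        vc_id = self.next_id
        self.next_id += 1
        self.circuits.add((vc_id, src, dst))
        return vc_id

ac = AdmissionControl()
print(ac.setup_circuit("H1", "H2"))   # 0: accepted
ac.signal_congestion(True)
print(ac.setup_circuit("H3", "H4"))   # None: rejected until the congestion goes away
```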
An alternative approach is to allow new virtual
circuits but carefully route all new virtual circuits
around problem areas