Unit II Network Layer - 2

The document discusses various routing algorithms, including Dijkstra's shortest path algorithm, flooding, distance vector routing, and link state routing, highlighting their mechanisms and applications. It also addresses congestion control techniques such as admission control, traffic throttling, and load shedding, alongside quality of service (QoS) considerations and traffic shaping methods. Additionally, it covers packet scheduling algorithms and the principles of internetworking, emphasizing the challenges of connecting different network protocols.


Unit II

Shortest Path Algorithm


• The idea is to build a graph of the network, with each node of the
graph representing a router and each edge of the graph
representing a communication line, or link.
• To choose a route between a given pair of routers, the algorithm
just finds the shortest path between them on the graph.
• Several algorithms for computing the shortest path between two
nodes of a graph are known.
• This one is due to Dijkstra (1959) and finds the shortest paths
between a source and all destinations in the network.
• Each node is labeled with its distance from the source node along
the best known path.
• The distances must be non-negative, as they will be if they are
based on real quantities like bandwidth and delay
Shortest Path Algorithm
• Initially, no paths are known, so all nodes are labeled with infinity.
• As the algorithm proceeds and paths are found, the labels may
change, reflecting better paths.
• A label may be either tentative or permanent. Initially, all labels are
tentative.
• When it is discovered that a label represents the shortest possible
path from the source to that node, it is made permanent and never
changed thereafter.
• To illustrate how the labeling algorithm works, look at the weighted,
undirected graph of Figure, where the weights represent, for
example, distance.
• We want to find the shortest path from A to D.
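The labeling procedure described above can be sketched in Python. This is a minimal version using a priority queue; the example graph, node names, and weights are illustrative, not the ones in the figure:

```python
import heapq

def dijkstra(graph, source):
    # graph: dict mapping node -> {neighbor: weight}
    dist = {node: float("inf") for node in graph}  # tentative labels
    dist[source] = 0
    prev = {}                                      # best-known predecessor
    permanent = set()                              # labels made permanent
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in permanent:
            continue
        permanent.add(u)                           # shortest path to u is now final
        for v, w in graph[u].items():
            if d + w < dist[v]:                    # a better path was discovered
                dist[v] = d + w
                prev[v] = u
                heapq.heappush(heap, (dist[v], v))
    return dist, prev
```

The `permanent` set plays the role of the permanent labels: once a node is popped with its final distance, its label never changes again.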
Shortest Path Algorithm
Dijkstra’s algorithm to compute the
shortest path through a graph.
Flooding
• A simple local technique is flooding, in which every incoming packet
is sent out on every outgoing line except the one it arrived on.
• Flooding obviously generates vast numbers of duplicate packets, in
fact, an infinite number unless some measures are taken to damp
the process.
• One such measure is to have a hop counter contained in the header
of each packet that is decremented at each hop, with the packet
being discarded when the counter reaches zero.
• Ideally, the hop counter should be initialized to the length of the path
from source to destination.
• If the sender does not know how long the path is, it can initialize the
counter to the worst case, namely, the full diameter of the network.
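The hop-counter damping described above can be sketched as a small simulation. The topology representation and function name here are hypothetical:

```python
def flood(topology, start, hop_limit):
    """Flood a packet from start; return the set of nodes that received it.
    topology: dict mapping node -> list of neighbours."""
    received = set()
    frontier = [(start, None, hop_limit)]  # (node, arrival link, hops left)
    while frontier:
        node, came_from, hops = frontier.pop()
        received.add(node)
        if hops == 0:
            continue  # counter exhausted: discard instead of forwarding
        for nbr in topology[node]:
            if nbr != came_from:  # every outgoing line except the arrival line
                frontier.append((nbr, node, hops - 1))
    return received
```

With the hop counter set to the network diameter, every reachable node receives the packet; a smaller counter limits how far the flood spreads.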
Flooding
• Flooding is not practical for sending most packets, but it does have some important
uses.
• First, it ensures that a packet is delivered to every node in the network.
• This may be wasteful if there is a single destination that needs the packet, but it is
effective for broadcasting information.
• In wireless networks, all messages transmitted by a station can be received by all
other stations within its radio range, which is, in fact, flooding, and some algorithms
utilize this property.
• Second, flooding is tremendously robust. Even if large numbers of routers are
blown to bits, flooding will find a path if one exists, to get a packet to its destination.
• Flooding also requires little in the way of setup. The routers only need to know their
neighbors. This means that flooding can be used as a building block for other
routing algorithms that are more efficient but need more in the way of setup.
• Flooding can also be used as a metric against which other routing algorithms can
be compared.
• Flooding always chooses the shortest path because it chooses every possible path
in parallel.
Distance Vector Routing
• A distance vector routing algorithm operates by having each
router maintain a table (i.e., a vector) giving the best known
distance to each destination and which link to use to get there.
• These tables are updated by exchanging information with the
neighbors.
• Eventually, every router knows the best link to reach each
destination.
• The distance vector routing algorithm is sometimes called by other
names, most commonly the distributed Bellman-Ford routing
algorithm, after the researchers who developed it.
• It was the original ARPANET routing algorithm and was also used in
the Internet under the name RIP.
Distance Vector Routing
• In distance vector routing, each router maintains a routing table
indexed by, and containing one entry for each router in the network.
• This entry has two parts: the preferred outgoing line to use for that
destination and an estimate of the distance to that destination.
• The distance might be measured as the number of hops or using
another metric, as we discussed for computing shortest paths.
• The router is assumed to know the ‘‘distance’’ to each of its
neighbors.
• If the metric is hops, the distance is just one hop.
• If the metric is propagation delay, the router can measure it directly
with special ECHO packets that the receiver just timestamps and
sends back as fast as it can.
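The table-merging step at the heart of the algorithm can be sketched as follows (a minimal illustration; the function and structure names are hypothetical):

```python
def dv_update(my_table, neighbor, neighbor_vector, link_cost):
    """Merge a neighbour's advertised distance vector into our routing table.
    my_table: dict mapping destination -> (next_hop, distance)."""
    for dest, d in neighbor_vector.items():
        new_dist = link_cost + d  # cost to reach neighbour plus its claim
        if dest not in my_table or new_dist < my_table[dest][1]:
            my_table[dest] = (neighbor, new_dist)  # better route found
    return my_table
```

Each router repeats this merge for every neighbour's vector; after enough exchanges the tables converge on the best links.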
Distance Vector Routing
The Count-to-Infinity Problem

• The settling of routes to best paths across the network is called convergence.
• Distance vector routing is useful as a simple technique by which routers can
collectively compute shortest paths, but it has a serious drawback in practice:
• Although it converges to the correct answer, it may do so slowly. In particular, it reacts
rapidly to good news, but leisurely to bad news.
The Count-to-Infinity Problem
• Now let us consider the situation in which all the links and routers
are initially up.
• Routers B, C, D, and E have distances to A of 1, 2, 3, and 4 hops,
respectively.
• Suddenly, either A goes down or the link between A and B is cut
Link State Routing
• Distance vector routing was used in the ARPANET until 1979,
when it was replaced by link state routing.
• The primary problem that caused its demise was that the
algorithm often took too long to converge after the network
topology changed (due to the count-to-infinity problem).
• Consequently, it was replaced by an entirely new algorithm, now
called link state routing.
• Variants of link state routing called IS-IS and OSPF are the routing
algorithms that are most widely used inside large networks and
the Internet today.
Link State Routing
• The idea behind link state routing is fairly simple and
can be stated as five parts.
• Each router must do the following things to make it
work:
1. Discover its neighbors and learn their network addresses.
2. Set the distance or cost metric to each of its neighbors.
3. Construct a packet telling all it has just learned.
4. Send this packet to and receive packets from all other routers.
5. Compute the shortest path to every other router.
Congestion Control
• Too many packets present in (a part of) the network causes packet delay and
loss that degrades performance. This situation is called congestion.
• The network and transport layers share the responsibility for handling
congestion.
• Since congestion occurs within the network, it is the network layer that
directly experiences it and must ultimately determine what to do with the
excess packets.
• However, the most effective way to control congestion is to reduce the load
that the transport layer is placing on the network.
• This requires the network and transport layers to work together.
Approaches to Congestion Control
• Traffic-aware Routing
• Admission Control
• Traffic Throttling
• Load Shedding
Traffic-Aware Routing
Admission Control
• One technique that is widely used in virtual-circuit networks to keep congestion at bay
is admission control.
• The idea is simple: do not set up a new virtual circuit unless the network can carry the
added traffic without becoming congested.
• Thus, attempts to set up a virtual circuit may fail. This is better than the alternative, as
letting more people in when the network is busy just makes matters worse.
Traffic Throttling
• Let us now look at some approaches to throttling traffic that can be used in
both datagram networks and virtual-circuit networks.
• Each approach must solve two problems.
• First, routers must determine when congestion is approaching, ideally before
it has arrived.
• To do so, each router can continuously monitor the resources it is using.
• Three possibilities are :
• the utilization of the output links,
• the buffering of queued packets inside the router,
• and the number of packets that are lost due to insufficient buffering.
• Of these possibilities, the second one is the most useful.
• The second problem is that routers must deliver timely feedback to the
senders that are causing the congestion.
• Congestion is experienced in the network, but relieving congestion requires
action on behalf of the senders that are using the network.
• To deliver feedback, the router must identify the appropriate senders. It must then warn
them carefully, without sending many more packets into the already congested network.
• Different schemes use different feedback mechanisms
• Choke Packets: The most direct way to notify a sender of congestion is to tell it directly.
• In this approach, the router selects a congested packet and sends a choke packet back to the source host, giving
it the destination found in the packet.
• The original packet may be tagged (a header bit is turned on) so that it will not generate any more choke packets
farther along the path and then forwarded in the usual way.
• To avoid increasing load on the network during a time of congestion, the router may only send choke packets at a
low rate.
• Explicit Congestion Notification
• Instead of generating additional packets to warn of congestion, a router can tag any packet it forwards to signal
that it is experiencing congestion.
• When the network delivers the packet, the destination can note that there is congestion and inform the sender
when it sends a reply packet.
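The tag-and-echo idea can be sketched as a toy model. This is not a real IP/TCP implementation; the field names `ce` and `ece` mirror, but do not implement, the actual ECN header bits:

```python
def forward(packet, queue_len, threshold):
    """Router side: tag the packet instead of dropping it when congested."""
    if queue_len > threshold:
        packet["ce"] = True  # "Congestion Experienced" mark
    return packet

def reply(received_packet):
    """Receiver side: echo the congestion signal back to the sender."""
    return {"ece": received_packet.get("ce", False)}
```

No extra packets are generated: the congestion signal rides on traffic that was being sent anyway.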

• Hop-by-Hop Backpressure
Load Shedding
• When none of the above methods make the congestion
disappear, routers can bring out the heavy artillery: load
shedding.
• Load shedding is a fancy way of saying that when routers are
being inundated by packets that they cannot handle, they just
throw them away

• Random Early Detection


QUALITY OF SERVICE
QUALITY OF SERVICE
• Four issues must be addressed to ensure quality of service:
1. What applications need from the network.
2. How to regulate the traffic that enters the network.
3. How to reserve resources at routers to guarantee performance.
4. Whether the network can safely accept more traffic.
Application Requirements
• A stream of packets from a source to a destination is called a
flow.
• A flow might be all the packets of a connection in a connection-
oriented network, or all the packets sent from one process to
another process in a connectionless network.
• The needs of each flow can be characterized by four primary
parameters: bandwidth, delay, jitter, and loss. Together, these
determine the QoS (Quality of Service) the flow requires.
• Several common applications and the stringency of their network
requirements are listed in Figure.
Traffic Shaping
• Before the network can make QoS guarantees, it must know what traffic
is being guaranteed. In the telephone network, this characterization is
simple.
• Traffic in data networks is bursty. It typically arrives at nonuniform
rates as the traffic rate varies (e.g., videoconferencing with
compression), users interact with applications (e.g., browsing a new
Web page), and computers switch between tasks.
• Bursts of traffic are more difficult to handle than constant-rate traffic
because they can fill buffers and cause packets to be lost.
• Traffic shaping is a technique for regulating the average rate and
burstiness of a flow of data that enters the network.
• The goal is to allow applications to transmit a wide variety of traffic that
suits their needs, including some bursts, yet have a simple and useful
way to describe the possible traffic patterns to the network.
• In effect, the customer says to the provider ‘‘My transmission pattern
will look like this; can you handle it?’’
Traffic Shaping
• Sometimes this agreement is called an SLA (Service Level
Agreement), especially when it is made over aggregate flows and long
periods of time, such as all of the traffic for a given customer.
• Traffic shaping reduces congestion and thus helps the network live up
to its promise.
• Packets in excess of the agreed pattern might be dropped by the
network, or they might be marked as having lower priority.
• Monitoring a traffic flow is called traffic policing.
• Shaping and policing are not so important for peer-to-peer and
other transfers that will consume any and all available bandwidth,
but they are of great importance for real-time data, such as audio
and video connections, which have stringent quality-of-service
requirements.
Leaky and Token Buckets
• Now we will look at a more general way to characterize traffic, with the leaky bucket
and token bucket algorithms.
• The formulations are slightly different but give an equivalent result.
• Try to imagine a bucket with a small hole in the bottom.
• No matter the rate at which water enters the bucket, the outflow is at a constant rate,
R, when there is any water in the bucket and zero when the bucket is empty.
• Also, once the bucket is full to capacity B, any additional water entering it spills over
the sides and is lost.
• This bucket can be used to shape or police packets entering the network.
• Conceptually, each host is connected to the network by an interface containing a
leaky bucket.
• To send a packet into the network, it must be possible to put more water into the
bucket.
• If a packet arrives when the bucket is full, the packet must either be queued until
enough water leaks out to hold it or be discarded.
• This technique was proposed by Turner (1986) and is called the leaky bucket
algorithm
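The water analogy maps directly onto a small piece of code. This is a sketch of a leaky bucket used as a policer (hypothetical class; units are abstract):

```python
class LeakyBucket:
    """A packet of size s is accepted only if s more units of 'water'
    fit in the bucket; the bucket drains at a constant rate R."""
    def __init__(self, rate, capacity):
        self.rate = rate          # R: constant outflow per unit time
        self.capacity = capacity  # B: bucket size
        self.water = 0.0
        self.last = 0.0

    def accept(self, size, now):
        # drain whatever leaked out since the previous arrival
        self.water = max(0.0, self.water - self.rate * (now - self.last))
        self.last = now
        if self.water + size <= self.capacity:
            self.water += size
            return True           # conforming: admit the packet
        return False              # bucket full: queue or discard
```

A shaper would queue the non-conforming packet until enough water leaks out; a policer, as here, simply rejects it.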
Leaky and Token Buckets
Token Bucket
• A different but equivalent formulation is to imagine the network
interface as a bucket that is being filled.
• The tap is running at rate R and the bucket has a capacity of B,
as before.
• Now, to send a packet we must be able to take water, or tokens,
as the contents are commonly called, out of the bucket (rather
than putting water into the bucket).
• No more than a fixed number of tokens, B, can accumulate in the
bucket, and if the bucket is empty, we must wait until more
tokens arrive before we can send another packet.
• This algorithm is called the token bucket algorithm.
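The token-based formulation can be sketched the same way (hypothetical class; units are abstract; the bucket is assumed to start full):

```python
class TokenBucket:
    """Tokens accumulate at rate R up to capacity B; sending a packet
    of size s spends s tokens."""
    def __init__(self, rate, capacity):
        self.tokens = capacity        # start with a full bucket
        self.rate = rate              # R: tokens added per unit time
        self.capacity = capacity      # B: maximum tokens that accumulate
        self.last = 0.0

    def send(self, size, now):
        # accumulate tokens for the elapsed time, capped at B
        self.tokens = min(self.capacity,
                          self.tokens + self.rate * (now - self.last))
        self.last = now
        if self.tokens >= size:
            self.tokens -= size       # spend tokens to send the packet
            return True
        return False                  # must wait for more tokens
```

Unlike the leaky bucket, a full token bucket lets a host send a burst of up to B at once before settling to the long-term rate R.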
Packet Scheduling
• Algorithms that allocate router resources among the packets of a flow
and between competing flows are called packet scheduling
algorithms.
• Three different kinds of resources can potentially be reserved for
different flows:
• 1. Bandwidth: the most obvious resource.
• If a flow requires 1 Mbps and the outgoing line has a capacity of 2 Mbps, trying to
direct three flows through that line is not going to work.
• 2. Buffer space: When a packet arrives,
• It is buffered inside the router until it can be transmitted on the chosen outgoing line.
• If no buffer is available, the packet has to be discarded since there is no place to put
it.
• 3. CPU cycles: CPU cycles may also be a scarce resource.
• It takes router CPU time to process a packet, so a router can process only a certain
number of packets per second.
• While modern routers are able to process most packets quickly, some kinds of
packets require greater CPU processing, such as the ICMP packets
Packet Scheduling
• FIFO (First-In First-Out), or equivalently FCFS (First-Come
First-Serve)

• Priority Queuing

• Fair Queueing

• WFQ (Weighted Fair Queueing)


FIFO Queuing
Priority Queuing
Fair Queuing
WFQ (Weighted Fair Queueing)
WFQ (Weighted Fair Queueing)
Admission Control

• The user offers a flow with an accompanying QoS requirement to
the network.
• The network then decides whether to accept or reject the flow
based on its capacity and the commitments it has made to other
flows.
• If it accepts, the network reserves capacity in advance at routers
to guarantee QoS when traffic is sent on the new flow.
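The accept-or-reject decision can be sketched as follows. This is a toy bandwidth-only check; real admission control also weighs delay, jitter, and burst commitments:

```python
def admit(reserved_flows, new_flow_bw, link_capacity):
    """Accept the new flow only if the total reserved bandwidth still
    fits within the link's capacity; reserve it on acceptance."""
    if sum(reserved_flows) + new_flow_bw <= link_capacity:
        reserved_flows.append(new_flow_bw)  # reserve capacity in advance
        return True
    return False                            # reject: would risk congestion
```

Rejected flows may retry later or ask for a smaller reservation.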
Integrated Services
Differentiated Services
• Expedited Forwarding

• Assured Forwarding
INTERNETWORKING
• How Networks Differ
INTERNETWORKING
• How Networks Can Be Connected
• There are two basic choices for connecting different networks: we can build
devices that translate or convert packets from each kind of network into packets
for each other network, or we can try to solve the problem by adding a layer of
indirection and building a common layer on top of the different networks.
• A router that can handle multiple network protocols is called a multiprotocol
router.
• It must either translate the protocols, or leave the connection for a higher
protocol layer.
INTERNETWORKING
• Tunneling
• Handling the general case of making two different networks interwork is
exceedingly difficult.
• However, there is a common special case that is manageable even for
different network protocols.
• This case is where the source and destination hosts are on the same type of
network, but there is a different network in between.
• The solution to this problem is a technique called tunneling.
INTERNETWORKING
• Internetwork Routing
• Routing through an internet poses the same basic problem as routing within a
single network, but with some added complications.
• To start, the networks may internally use different routing algorithms.
• Within each network, an intradomain or interior gateway protocol is used for
routing.(‘‘Gateway’’ is an older term for ‘‘router.’’)
• Across the networks that make up the internet, an interdomain or exterior
gateway protocol is used.
• In the Internet, the interdomain routing protocol is called BGP (Border Gateway
Protocol).
• Since each network is operated independently of all the others, it is often referred
to as an AS (Autonomous System).
• A good mental model for an AS is an ISP network. In fact, an ISP network may
comprise more than one AS, if it manages, or has acquired, multiple networks.
INTERNETWORKING
• Packet Fragmentation
• Each network or link imposes some maximum size on its packets.
• These limits have various causes, among them
• Hardware (e.g., the size of an Ethernet frame).
• Operating system (e.g., all buffers are 512 bytes).
• Protocols (e.g., the number of bits in the packet length field).
• Compliance with some (inter)national standard.
• Desire to reduce error-induced retransmissions to some level.
• Desire to prevent one packet from occupying the channel too long.
• The result of all these factors is that the network designers are
not free to choose any old maximum packet size they wish.
• Maximum payloads for some common technologies are 1500
bytes for Ethernet and 2272 bytes for 802.11.
• IP is more generous, allowing packets as big as 65,515 bytes.
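The splitting itself can be sketched as follows. This is an illustrative fragmenter using IP-style 8-byte offset units; the 20-byte header size and the field names are assumptions for the sketch:

```python
def fragment(payload, mtu, header=20):
    """Split a payload into fragments that fit the path MTU.
    As in IP, offsets are counted in 8-byte units, so every fragment's
    data length except the last must be a multiple of 8."""
    max_data = (mtu - header) // 8 * 8  # largest 8-byte-aligned data size
    frags = []
    offset = 0
    while offset < len(payload):
        chunk = payload[offset:offset + max_data]
        more = offset + max_data < len(payload)  # "more fragments" flag
        frags.append({"offset": offset // 8, "mf": more, "data": chunk})
        offset += max_data
    return frags
```

The destination reassembles by placing each fragment's data at `offset * 8` and waiting until the fragment with `mf == False` and all earlier pieces have arrived.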
Packet Fragmentation

• Path MTU (Path Maximum Transmission Unit)

• Fragments
• Transparent fragmentation
• Non-transparent fragmentation
Packet Fragmentation
