CN Module3 Notes RRCE
The network layer is responsible for end-to-end transmission, requiring knowledge of the network's
topology and selecting appropriate paths to avoid overloading communication lines and routers. It
must also address new problems when the source and destination are in different networks,
primarily using the Internet and its network layer protocol, IP. This chapter will study these issues
and illustrate them using the Internet.
5.1 NETWORK LAYER DESIGN ISSUES
• This section introduces the challenges facing network layer designers, including the service provided
to the transport layer and the internal design of the network.
• The network layer protocols operate in a context of ISP equipment and customer equipment. Host
H1 is connected directly to one of the ISP's routers, A, while H2 is on a LAN with a router, F, owned
and operated by the customer. F is nevertheless considered part of the ISP network because it runs the
same algorithms as the ISP's routers. This context can be seen in Fig. 5-1.
• The equipment uses store-and-forward packet switching: a host transmits a packet to the nearest
router, which stores it until it has fully arrived and the link has finished verifying the checksum, then
forwards it to the next router along the path until it reaches the destination host.
• The services provided to the transport layer were designed with the following goals in mind:
1. The services should be independent of the router technology.
2. The transport layer should be shielded from the number, type, and topology of the routers
present.
3. The network addresses made available to the transport layer should use a uniform numbering
plan, even across LANs and WANs.
• The network layer designers have significant freedom in writing detailed specifications of the services
offered to the transport layer. However, this freedom often turns into a debate between two factions:
one, represented by the Internet community, believes the routers' job is to move packets and
nothing else; the other, represented by the telephone companies, believes the network should
provide a reliable, connection-oriented service. The first viewpoint is based on the end-to-end
argument, which has been influential in shaping the Internet.
• The debate continues. Early data networks were connection-oriented, but connectionless
network layers have gained popularity since the early Internet days. The IP protocol is now a
symbol of that success, while connection-oriented technologies such as ATM survive only in niche uses.
However, the Internet is itself acquiring connection-oriented features as quality of service becomes more
important. Two examples of connection-oriented technologies are MPLS (MultiProtocol Label
Switching) and VLANs, both widely used. The debate goes on as the Internet adapts to the
changing needs of its users.
5.1.3 Implementation of Connectionless Service
• The network layer provides two types of service: connectionless and connection-oriented.
With connectionless service, packets are injected into the network individually and routed
independently of each other. No advance setup is needed, the packets are called datagrams, and the
network is called a datagram network.
• On the other hand, connection-oriented service establishes a path from the source router to the
destination router before data is sent, forming a virtual circuit (VC) network. In a datagram network, a
process P1 with a long message hands it to the transport layer, which prepends a transport header and
passes the result to the network layer; the message is broken into packets, each sent in turn to router A.
Each packet is forwarded according to A's table, to C within a new frame; packet 1 is then forwarded to E
and then to F, and when it gets to F, it is sent within a frame over the LAN to H2.
• However, packet 4 is sent to router B, even though it is also destined for F. A decided to
send packet 4 via a different route than the first three packets, possibly due to a traffic jam
along the ACE path. The routing algorithm manages the tables and makes routing
decisions, and there are several different kinds of routing algorithms.
• Virtual circuits are increasingly used to help with quality of service and other ISP traffic
management tasks. Two of the trade-offs between datagram and virtual-circuit networks:
Addressing: a datagram network carries the full source and destination address in each packet; a
VC network carries only a short VC number.
Effect of router failures: in a datagram network, none, except for packets lost during the crash; in a
VC network, all VCs that passed through the failed router are terminated.
• Virtual circuits and datagrams involve different trade-offs within a network. One is between
setup time and address parsing time: a virtual circuit requires a setup phase, which takes time
and consumes resources, but once paid for, handling a data packet is easy, since the router
simply uses the circuit number to index into a table. A datagram network needs no setup, but
requires a more complicated lookup procedure to locate the entry for the destination.
• Datagram networks use longer destination addresses than virtual-circuit networks because the
addresses have global meaning, which can be a significant overhead and a waste of bandwidth.
Conversely, virtual-circuit networks require more table space in router memory, as each open
virtual circuit needs an entry.
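To make this trade-off concrete, here is a minimal sketch (not from the notes; all table contents are illustrative) contrasting the two forwarding styles:

```python
# A datagram router keys its table on full destination addresses; a VC router
# keys on a short circuit identifier chosen at connection setup time.

datagram_table = {            # destination address -> outgoing line
    "H2": "line to C",
    "H3": "line to B",
}

vc_table = {                  # (incoming line, VC number) -> (outgoing line, VC number)
    ("line from H1", 1): ("line to C", 1),
}

def forward_datagram(dest_addr):
    return datagram_table[dest_addr]       # every packet carries the full address

def forward_vc(in_line, vc_id):
    return vc_table[(in_line, vc_id)]      # packet carries only the short VC number
```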
5.2 ROUTING ALGORITHMS
• The network layer is responsible for routing packets from the source machine to the
destination machine, and in most networks packets require multiple hops to make the journey.
Network layer design focuses on the algorithms and data structures used to choose routes.
• The routing algorithm is a crucial part of network layer design, responsible for
deciding which output line an incoming packet should be transmitted on. If the
network uses
datagrams internally, routing decisions must be made anew for every arriving data
packet, while virtual circuits use established routes for entire sessions.
• A router has two processes: forwarding, which handles packets upon arrival, and
updating the routing tables, where the routing algorithm plays a crucial role in
deciding which routes
to use.
• A routing algorithm should have certain desirable properties: correctness, simplicity,
robustness, stability, fairness, and efficiency. Robustness is crucial for a network to
cope with changes in topology and traffic without requiring all jobs to be aborted.
Stability is also important: a stable algorithm reaches equilibrium and stays there, and it
should converge quickly, since communication may be disrupted until the algorithm
reaches equilibrium. This is especially important for major networks that may run
continuously for years without system-wide failures.
• Static routing is a method where the choice of route is computed in advance, offline,
and downloaded to routers during network booting, allowing for clear routing choices,
such as sending packets to router E regardless of the final destination.
• Adaptive algorithms change their routing decisions to reflect changes in topology and traffic.
They differ in where they get their information, when they change routes, and what metric they
optimize. Later sections also cover other delivery models, in which packets are sent to multiple
destinations, all destinations, or any one of a set of destinations. Decisions that take traffic into
account are deferred to Sec. 5.3.
• The optimality principle states that if router J is on the optimal path from router I to router K,
then the optimal path from J to K also falls along the same route. A direct consequence is that
the set of optimal routes from all sources to a given destination forms a sink tree, a tree rooted
at the destination, used by all routers.
• A sink tree is not unique; if all possible optimal paths are kept, the result is a DAG (Directed
Acyclic Graph) with no loops. The term sink tree is used for both cases. Both depend on the
assumption that paths do not interfere with each other, so that a traffic jam on one path does
not cause another to divert.
5.2.2 Shortest Path Algorithm
• The study of routing algorithms begins with a straightforward technique for computing
optimal paths given a complete picture of the network: the shortest path algorithm.
• The concept of a shortest path deserves some explanation. One way of measuring path
length is the number of hops. Using this metric, the paths ABC and ABE in Fig. 5-7 are
equally long. Another metric is the geographic distance in kilometers, in which case ABC
is clearly much longer than ABE (assuming the figure is drawn to scale).
• Dijkstra's algorithm, developed in 1959, finds the shortest paths between a source and all
destinations in a network. Each node is labeled with its distance from the source node
along the best known path, which must be non-negative. Initially, no paths are known, so
all nodes are labeled with infinity. As paths are found, labels may change to reflect better
paths, and labels can be tentative or permanent.
• To see why the labeling works, suppose node E has just been made permanent, and suppose
there were a shorter path from A to E than ABE, say AXYZE. Either node Z has already been
made permanent, or it has not. If it has, then E has already been probed from Z, so the AXYZE
path cannot have escaped attention and cannot be a shorter path. If Z is still tentatively
labeled, then either its label is greater than or equal to that of E, in which case AXYZE cannot
be a shorter path than ABE, or its label is less than that of E, in which case Z and not E would
become permanent first, allowing E to be probed from Z.
• The algorithm is given in Fig. 5-8, where the global variables n and dist describe the
graph and are initialized before shortest path is called. The program computes the shortest
path starting at the terminal node t rather than the source s; since the shortest paths from t to s
in an undirected graph are the same as the shortest paths from s to t, this makes no difference.
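The C program of Fig. 5-8 is not reproduced in these notes, but the same labeling idea can be sketched in Python using a priority queue of tentative labels:

```python
import heapq

def shortest_path(graph, source):
    """Dijkstra's labeling algorithm.

    graph: {node: {neighbor: non-negative cost}}.
    Returns dist (best known distance per node) and prev (predecessor map),
    from which the shortest path to any node can be walked backwards.
    """
    dist = {node: float("inf") for node in graph}
    prev = {}
    dist[source] = 0
    tentative = [(0, source)]          # heap of tentative labels
    permanent = set()
    while tentative:
        d, u = heapq.heappop(tentative)
        if u in permanent:
            continue                   # stale entry; u is already permanent
        permanent.add(u)               # smallest tentative label becomes permanent
        for v, cost in graph[u].items():
            if d + cost < dist[v]:     # found a better path to v: relabel it
                dist[v] = d + cost
                prev[v] = u
                heapq.heappush(tentative, (dist[v], v))
    return dist, prev
```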
5.2.3 Flooding
• Routers in routing algorithms rely on local knowledge rather than the entire network. A
common technique is flooding, where every incoming packet is sent on every outgoing
line except the one it arrived on. This generates numerous duplicate packets, unless
measures are taken. One measure is having a hop counter in each packet header, which is
decremented at each hop, with the packet discarded when the counter reaches zero. Ideally, the
hop counter should be initialized to the length of the path from source to destination; if the
sender does not know the path length, it can use the worst case, the full diameter of the network.
• Flooding with a hop count can lead to an exponential number of duplicate packets. To
prevent this, routers should track which packets have been flooded and avoid sending
them out again. This can be achieved by placing a sequence number in each packet
received from its hosts, and having a list per source router detailing which sequence
numbers have been seen.
• To prevent the lists from growing without bound, each list is augmented by a counter, k,
meaning that all sequence numbers through k have been seen. When a packet arrives, the router
can check whether it has already been flooded (by comparing its sequence number with k and
the list) and, if so, discard it rather than flood it again.
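A minimal sketch of this duplicate-suppression bookkeeping, assuming sequence numbers start at 0 (class and method names are illustrative):

```python
class FloodState:
    """Tracks, per source router, which sequence numbers have been seen."""

    def __init__(self):
        self.k = {}      # source -> all sequence numbers <= k[source] seen
        self.extra = {}  # source -> set of sequence numbers > k seen so far

    def is_duplicate(self, source, seq):
        low = self.k.get(source, -1)
        seen = self.extra.setdefault(source, set())
        if seq <= low or seq in seen:
            return True                # already flooded: discard
        seen.add(seq)
        while low + 1 in seen:         # advance k past contiguous numbers,
            low += 1                   # keeping the per-source list bounded
            seen.remove(low)
        self.k[source] = low
        return False
```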
• Flooding ensures packets are delivered to every node in the network, which is effective for
broadcasting information. In wireless networks, all messages transmitted by a station can
be received by all other stations within its radio range, which some algorithms utilize.
• Flooding is remarkably robust: it can find a path for a packet even if routers are being
destroyed (say, in a war zone), and it requires minimal setup. It can be used as a building block
for more efficient routing algorithms and as a metric against which they are compared.
Flooding always chooses the shortest path because it tries every possible path in parallel;
consequently, no other algorithm can produce a shorter delay, ignoring the overhead
generated by the flooding process itself.
5.2.4 Distance Vector Routing
• Computer networks utilize dynamic routing algorithms, such as distance vector routing
and link state routing, which are more complex but more efficient in finding the shortest
paths for the current topology, with a focus on the former algorithm.
• A distance vector routing algorithm operates by having each router maintain a table (i.e., a
vector) giving the best known distance to each destination and which link to use to get
there. These tables are updated by exchanging information with the neighbors. Eventually,
every router knows the best link to reach each destination.
• The distance vector routing algorithm, also known as the Bellman-Ford routing algorithm,
was the original ARPANET routing algorithm and was used in the Internet under the name
RIP. It involves each router maintaining a routing table with an entry for each router,
including the preferred outgoing line and distance estimate, which can be measured as
hops or other metrics.
• The router is assumed to know the distance to its neighbors, either as hops or propagation
delay. If the metric is hops, the distance is one hop.
• If the metric is propagation delay, the router can measure it directly with special ECHO
packets. For example, if delay is used as the metric, each router sends its neighbors a list of
estimated delays every T msec. If the router knows the delay to its neighbor X is m msec, and
X reports a delay of Xi msec to router i, the router knows it can reach router i via X in
Xi + m msec. Performing this calculation for each neighbor, the router finds the best estimate
for each destination and uses it, with the corresponding link, in its new routing table.
• This updating process is illustrated in Fig. 5-9. Part (a) shows a network. The first four
columns of part (b) show the delay vectors received from the neighbors of router J. A
claims to have a 12-msec delay to B, a 25-msec delay to C, a 40- msec delay to D, etc.
Suppose that J has measured or estimated its delay to its neighbors, A, I, H, and K, as 8,
10, 12, and 6 msec, respectively.
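A minimal sketch of one such table update at router J under the assumptions above (all names illustrative):

```python
def dv_update(vectors, link_delay, destinations):
    """One table update at router J.

    vectors:    {neighbor: {destination: neighbor's claimed delay}}
    link_delay: {neighbor: J's measured delay to that neighbor}
    Returns {destination: (best estimated delay, neighbor to use)}.
    """
    table = {}
    for dest in destinations:
        best = (float("inf"), None)
        for nbr, vec in vectors.items():
            est = link_delay[nbr] + vec[dest]   # delay to reach dest via nbr
            if est < best[0]:
                best = (est, nbr)
        table[dest] = best
    return table

# With the numbers quoted above: J's delay to A is 8 msec and A claims a
# 12-msec delay to B, so J can reach B via A in 8 + 12 = 20 msec.
```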
5.2.5 Link State Routing
• Distance vector routing was used in the ARPANET until 1979, when it was replaced by
link state routing. The primary problem that caused its demise was that the algorithm often
took too long to converge after the network topology changed (due to the count-to-infinity
problem). Consequently, it was replaced by an entirely new algorithm, now called link
state routing. Variants of link state routing called IS-IS and OSPF are the routing
algorithms that are most widely used inside large networks and the Internet today.
• The idea behind link state routing is fairly simple and can be stated as five parts. Each
router must do the following things to make it work:
1. Discover its neighbors and learn their network addresses.
2. Set the distance or cost metric to each of its neighbors.
3. Construct a packet telling all it has just learned.
4. Send this packet to and receive packets from all other routers.
5. Compute the shortest path to every other router.
• In effect, the complete topology is distributed to every router. Then Dijkstra’s algorithm
can be run at each router to find the shortest path to every other router. Below we will
consider each of these five steps in more detail
Learning about the Neighbors
• A router learns its neighbors by sending a special HELLO packet on each point-to-point
line, and the router on the other end sends a reply with its name.
• The broadcast LAN provides connectivity between each pair of attached routers.
However, modeling the LAN as many point-to-point links increases the size of the
topology and leads to wasteful messages. A better way to model the LAN is to consider
it as a node itself, as shown in Fig. 5-11(b). Here, we have introduced a new, artificial
node, N, to which A, C, and F are connected. One designated router on the LAN is
selected to play the role of N in the routing protocol. The fact that it is possible to go
from A to C on the LAN is represented by the path ANC here.
• Setting Link Costs: The link state routing algorithm requires each link to have a
distance or cost metric for finding shortest paths. The cost to reach neighbors can be set
automatically, or configured by the network operator. A common choice is to make the
cost inversely proportional to the bandwidth of the link. For example, 1-Gbps Ethernet
may have a cost of 1 and 100-Mbps Ethernet a cost of 10. This makes higher-capacity
paths better choices. If the network is geographically spread out, the delay of the links
may be factored into the cost so that paths over shorter links are better choices. The most
direct way to determine this delay is to send over the line a special ECHO packet that the
other side is required to send back immediately. By measuring the round-trip time and
dividing it by two, the sending router can get a reasonable estimate of the delay.
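A sketch of the inverse-bandwidth convention just mentioned; the reference bandwidth of 1000 Mbps is an assumption chosen to reproduce the costs in the example:

```python
REFERENCE_BW_MBPS = 1000   # assumed reference: a 1-Gbps link gets cost 1

def link_cost(bandwidth_mbps):
    # cost inversely proportional to bandwidth, with a floor of 1
    return max(1, REFERENCE_BW_MBPS // bandwidth_mbps)

assert link_cost(1000) == 1     # 1-Gbps Ethernet
assert link_cost(100) == 10     # 100-Mbps Ethernet
```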
• Building Link State Packets: Once the information needed for the exchange has been
collected, the next step is for each router to build a packet containing all the data. The
packet starts with the identity of the sender, followed by a sequence number and age (to
be described later) and a list of neighbors. The cost to each neighbor is also given. An
example network is presented in Fig. 5-12(a) with costs shown as labels on the lines. The
corresponding link state packets for all six routers are shown in Fig. 5- 12(b).
• Building the link state packets is easy. The hard part is determining when to build them.
One possibility is to build them periodically, that is, at regular intervals. Another
possibility is to build them when some significant event occurs, such as a line or
neighbor going down or coming back up again or changing its properties appreciably.
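Illustratively, a link state packet might be represented as below; the field names and example values are assumptions for the sketch, not a wire format:

```python
from dataclasses import dataclass, field

@dataclass
class LinkStatePacket:
    sender: str                    # identity of the router that built it
    seq: int                       # sequence number, incremented per packet
    age: int                       # decremented over time so stale info dies out
    neighbors: dict = field(default_factory=dict)   # neighbor -> cost

# E.g., a router B with neighbors A (cost 4) and C (cost 2):
lsp_b = LinkStatePacket(sender="B", seq=1, age=60, neighbors={"A": 4, "C": 2})
```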
5.2.6 Hierarchical Routing
• As networks grow, router routing tables grow proportionally, consuming more memory,
CPU time, and bandwidth. When the network becomes too large for every router to have
an entry for every other router, hierarchical routing is used.
• Routers are divided into regions, each knowing how to route packets within its own
region but not about the internal structure of other regions. This allows interconnected
networks to treat each region as a separate area, allowing routers to avoid knowing the
topological structure of other networks.
• For large networks, a two-level hierarchy may not be sufficient, as it may be necessary
to group regions into clusters, clusters into zones, and zones into groups. For example, a
packet from Berkeley to Malindi would be routed through a multilevel hierarchy.
• The Berkeley router would know the topology within California but send all out-of-state
traffic to the Los Angeles router. The Los Angeles router could route traffic directly to
other domestic routers but send all foreign traffic to New York. The New York router
would direct all traffic to the destination country responsible for handling foreign traffic,
such as Nairobi.
• The packet would then work its way down the tree in Kenya until it reached Malindi.
Hierarchical routing reduces the table from 17 to 7 entries, but it also increases path
length. For example, the best route from 1A to 5C is via region 2, but all traffic to region
5 goes via region 3, as it is better for most destinations in region 5.
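A sketch of the two-level lookup, assuming destination names of the form "5C" (region digit, then router letter) as in the example; the table contents are illustrative:

```python
# Full entries only for the local region; one aggregate entry per remote region.
local_entries = {"1A": "-", "1B": "line to 1B", "1C": "line to 1C"}
region_entries = {"2": "line to 1B", "3": "line to 1C",
                  "4": "line to 1C", "5": "line to 1C"}

def hierarchical_route(dest):
    """dest is e.g. '5C': region '5', router 'C' within that region."""
    if dest in local_entries:
        return local_entries[dest]      # full detail inside our own region
    return region_entries[dest[0]]      # one entry covers a whole region
```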
5.2.7 Broadcast Routing
• Reverse path forwarding is a broadcast routing algorithm: when a broadcast packet arrives at a
router on the link that the router itself uses to send packets toward the source, the router
forwards copies onto all its other links; otherwise the packet is likely a duplicate and is
discarded. In the example, router I's broadcast reaches previously unvisited routers on the
first hop, and the second hop generates eight packets, two by each router that received a
packet on the first hop.
• All eight packets arrive at previously unvisited routers, with five along the preferred
line. The third hop generates six packets, with only three arriving on the preferred path.
After five hops and 24 packets, broadcasting terminates, compared to four hops and 14
packets if the sink tree was followed exactly. Reverse path forwarding is efficient and
easy to implement, sending the broadcast packet over each link only once in each
direction, similar to flooding, without the need for routers to remember sequence
numbers or list all destinations in the packet.
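The core test of reverse path forwarding can be sketched in a few lines, assuming each router already has a unicast table giving its preferred link toward any source:

```python
def rpf_forward(packet_source, arrival_link, unicast_table, all_links):
    """Return the links on which to forward a broadcast packet.

    unicast_table[source] is the link this router would use to send
    packets TO that source (its preferred reverse path).
    """
    if arrival_link != unicast_table[packet_source]:
        return []                                        # likely duplicate: drop
    return [l for l in all_links if l != arrival_link]   # flood the rest
```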
• The final broadcast algorithm improves on reverse path forwarding by using a sink tree, or
any other convenient spanning tree, known to the router initiating the broadcast.
• A spanning tree is a subset of the network that includes all routers but has no loops. If
each router knows which line belongs to the tree, it can copy an incoming broadcast
packet onto all spanning tree lines except the one it arrived on.
• This method efficiently uses bandwidth, generating the minimum number of packets
needed. However, each router must have knowledge of a spanning tree for the method to
be applicable.
5.2.8 Multicast Routing
• Multicasting sends messages to well-defined groups that are numerically large but small
compared to the network as a whole. This is particularly useful for applications such as
multiplayer games or live sports events streamed to many viewers. Sending a distinct packet
to each receiver is expensive unless the group is small.
• However, broadcasting a packet is wasteful if the group is only a small fraction of, say, a
million-node network. Multicasting therefore requires a routing algorithm to create and
destroy groups and to identify which routers have group members. Each group is identified
by a multicast address, and routers know the groups to which their hosts belong.
• Multicast routing schemes use spanning trees to deliver packets to members of a group
efficiently, based on the group's density or sparsity. Broadcast is a good start for dense
groups, as it efficiently gets the packet to all parts of the network. However, it may reach
routers not members of the group, which is wasteful. Deering and Cheriton (1990)
proposed pruning the broadcast spanning tree by removing links that do not lead to
members, resulting in an efficient multicast spanning tree.
• For example, consider two groups, 1 and 2, with routers attached to hosts belonging to
one or both groups. A spanning tree for the leftmost router can be used for broadcast but
is overkill for multicast. A multicast spanning tree for the leftmost router sends packets
only along this tree, which is more efficient than the broadcast tree due to the number of
links. Different multicast groups have different spanning trees.
• The spanning tree can be pruned using link state routing, where each router is aware of
the complete topology and can construct its own pruned tree for each sender to the group.
This is an example of a link state protocol called MOSPF (Multicast OSPF).
• Distance vector routing follows a different pruning strategy, with the basic algorithm
being reverse path forwarding. Whenever a router with no hosts interested in a particular
group (and no downstream routers that want its traffic) receives a multicast message for that
group, it responds with a PRUNE message, telling the neighbor not to send it any more
multicasts for that group. This recursively prunes the spanning tree. DVMRP (Distance
Vector Multicast Routing Protocol) is an example of this method.
• Core-based trees are an alternative design for group routing: routers agree on a root (the
core or rendezvous point), and the tree is built by sending a packet from each member
toward the root; the tree is the union of the paths these packets trace. To send to group 1, a
sender sends a packet to the core, from which it is forwarded down the tree. As a performance
optimization, a packet need not reach the core before being multicast: as soon as it reaches
any router on the tree, it can be forwarded up toward the root as well as down all the other
branches.
• Shared trees are not optimal for all sources; depending on the locations of the core and the
senders, paths can be inefficient. However, shared trees bring significant savings in storage
costs, messages sent, and computation: each router maintains only one tree per group, and
routers that are not part of the tree do no work to support the group. This is why core-based
trees are used for multicasting to sparse groups in the Internet, as part of popular protocols
like PIM (Protocol Independent Multicast).
5.2.9 Anycast Routing
• In anycast, a packet is delivered to the nearest member of a group. Ordinary distance vector
routing can implement this unchanged: if all members of group 1 advertise the same address,
1, nodes simply send to the nearest instance of destination 1, because the routing protocol
does not realize there are multiple instances of destination 1; it believes all instances of node
1 are the same node.
5.2.10 Routing for Mobile Hosts
• Millions of people use computers on the go, with mobile hosts being either stationary or
mobile. As people increasingly want to stay connected, networks must find mobile
hosts to route packets.
• In the model considered here, all hosts have a permanent home location and a permanent
home address. The routing goal is to send packets to mobile hosts using their fixed home
addresses and have the packets efficiently reach the hosts wherever they may be.
• A different model would be to recompute routes as mobile hosts move and the topology
changes, but with a growing number of mobile hosts the network would be constantly
computing new routes. Using home addresses greatly reduces this burden. Another option is
to provide mobility above the network layer, as happens when laptops acquire new network
addresses as they move to new locations.
• This model allows a laptop to browse the web but requires a higher layer location
service for other hosts to send packets. Connections cannot be maintained while the
host is moving, so network-layer mobility is useful.
• Mobile routing in the Internet and cellular networks works by having the mobile host tell a
host at its home location, called the home agent, where it currently is. In a scenario where the
mobile host is temporarily in San Diego, it acquires a local care of address there and sends a
registration message to the home agent; this is a control message, not a data message. A
sender then transmits a data packet to the mobile host using its permanent address, and the
network routes the packet to the host's home location in, say, New York. There the home
agent intercepts the packet, wraps it with a new header, and sends the bundle to the care of
address. This tunneling mechanism is crucial in the Internet and cellular networks.
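A conceptual sketch of the home agent's tunneling step; the dictionary-based "headers" and address names are illustrative, not real IP encapsulation:

```python
def tunnel(packet, care_of_address, home_agent_address):
    """Wrap the original packet in a new 'outer header' bound for the
    mobile host's current care of address."""
    return {
        "src": home_agent_address,   # outer header added by the home agent
        "dst": care_of_address,
        "payload": packet,           # original packet carried unchanged inside
    }

original = {"src": "correspondent", "dst": "mobile-home-address", "data": "..."}
wrapped = tunnel(original, "care-of-address (San Diego)", "home agent (New York)")
```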
• The overall route is called triangle routing: packets from the sender travel via the home
location to the mobile host, while the mobile host sends its reply packets directly to the
sender. This route may be circuitous if the mobile's remote location is far from the home
location.
• The sender may learn the current care of address during step 4. Subsequent packets can
be routed directly to the mobile host by tunneling them to the care of address, bypassing
the home location entirely. If connectivity is lost, the home address can always be used to
reach the mobile.
• Security is an important aspect: messages may need to carry security information checked
with cryptographic protocols, for example so that only the genuine mobile host can register a
care of address. The scheme is modeled on IPv6 mobility, which is used in the Internet and
IP-based cellular networks such as UMTS. Extensions of the basic scheme support mobile
networks, without any work on the part of the hosts.
5.2.11 Routing in Ad Hoc Networks
• Numerous routing algorithms for ad hoc networks have been proposed, though their practical use
so far is limited compared to that of mobile networks. One popular algorithm is AODV (Ad hoc
On-demand Distance Vector), a relative of distance vector routing adapted for mobile
environments with limited bandwidth and battery lifetimes. It discovers and maintains routes
on demand.
• Route Discovery: AODV discovers a route to a destination on demand, that is, only when
somebody wants to send a packet to that destination. This saves the work of computing
routes that would change before being used. The topology of an ad hoc network can be
described by a graph of connected nodes, with two nodes connected if they can communicate
directly using their radios. For simplicity, all connections are assumed symmetric.
• Consider a newly formed ad hoc network in which a process at node A wants to send a
packet to node I. Each node maintains a distance vector table, keyed by destination, giving
information about that destination, including the neighbor to send packets to. If node A finds
no entry for node I, it must discover a route to it. This "on demand" behavior is what makes
the algorithm efficient.
• A sends a ROUTE REQUEST packet to I, which is broadcasted using flooding. The packet is
then rebroadcast to nodes F, G, C, H, E, and I. A sequence number is set at the source to
remove duplicates during the flood.
• The request reaches node I, which constructs a ROUTE REPLY packet. This packet is
unicast along the reverse of the path followed by the request. Intermediate nodes must
remember the node that sent the request and increment a hop count as they forward the reply.
The replies indicate which neighbor to use to reach the destination. Intermediate nodes G and
D insert the best route they hear into their routing tables. When the reply reaches A, a new
route, ADGI, has been created.
• The algorithm generates numerous broadcasts in large networks, even for nearby destinations.
To reduce overhead, the scope of broadcasts is limited using the IP packet's Time to live field.
If the field is 0, the packet is discarded. The route discovery process is modified by
broadcasting a ROUTE REQUEST packet with a Time to live set to 1, and then sending more
packets in increasingly wider rings, starting locally and then globally.
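A minimal sketch of the expanding-ring search just described, assuming a send_request(ttl) helper that floods a ROUTE REQUEST with the given Time to live and returns a route or None on timeout:

```python
def discover_route(send_request, ttls=(1, 2, 3, 255)):
    """Expanding-ring search: retry with a growing TTL so nearby
    destinations never trigger a network-wide flood."""
    for ttl in ttls:
        route = send_request(ttl)   # packets die when the TTL reaches 0,
        if route is not None:       # so each attempt only reaches a ring
            return route            # of radius ttl around the source
    return None                     # destination unreachable
```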
• Route Maintenance: Because nodes can move or be switched off, the topology can change
spontaneously, and the algorithm must handle this. Periodically, each node broadcasts a Hello
message to its neighbors, each of which is expected to respond. If no response is received, the
broadcaster knows that the neighbor has moved out of range or failed; similarly, if it sends a
packet to a neighbor that does not respond, it learns that the neighbor is no longer available.
• This information is used to purge routes that no longer work. For each possible destination,
each node, N, keeps track of its active neighbors, those that have fed it a packet for that
destination during the last ΔT seconds. When any of N's neighbors becomes unreachable, N
checks its routing table to see which routes use the now-gone neighbor and informs its active
neighbors in turn; recursively, all routes depending on the now-gone node are purged from all
routing tables.
• Invalid routes are removed from the network, and senders can find new valid ones using the
discovery mechanism. However, distance vector protocols can suffer from slow convergence
or count-to-infinity problems after topology changes. To ensure rapid convergence, routes
include a sequence number controlled by the destination, which increments every time a fresh
ROUTE REPLY is sent. Senders request a fresh route by including the destination sequence
number of the last used route in the ROUTE REQUEST.
• Intermediate nodes store only routes with a higher sequence number, or with fewer hops for
the current sequence number. This on-demand protocol saves bandwidth and battery life
compared to a standard distance vector protocol that broadcasts updates periodically.
• The text discusses ad hoc routing schemes, such as DSR (Dynamic Source Routing) and GPSR
(Greedy Perimeter Stateless Routing). These schemes share route discovery and maintenance
when routes overlap, allowing for resource savings. For example, if B wants to send packets to
I, it will perform route discovery first, then reach D, which already has a route to I. The choice
of protocol depends on the types of ad hoc networks that prove useful in practice.
5.3 CONGESTION CONTROL ALGORITHMS
● Too many packets present in (a part of) the network causes packet delay and loss that degrades
performance. This situation is called congestion.
● The network and transport layers share responsibility for handling congestion. The network layer
directly experiences congestion and must decide on handling excess packets.
● The most effective way to control congestion is to reduce the load placed by the transport layer on
the network, requiring collaboration between the two layers.
● Fig. 3.1 depicts the onset of congestion. When the number of packets hosts send into the network
is well within its carrying capacity, the number delivered is proportional to the number sent: if
twice as many packets are sent, twice as many are delivered.
● However, as traffic loads approach the capacity, some packets are lost, consuming some of the
capacity, causing the number of delivered packets to fall below the ideal curve, resulting in
congested networks.
● A poorly designed network can experience congestion collapse, where performance decreases as
load increases beyond capacity. This occurs when packets are delayed, making them no longer
useful when they leave the network.
● In the early Internet, packets could spend so long waiting in queued backlogs that they timed out,
and senders retransmitted the delayed packets, wasting capacity. To capture these factors, the
y-axis of Fig. 3.1 is given as goodput, the rate at which useful packets are delivered by the network.
● Congestion is a common issue in networks, causing queues and packet loss. Adding more memory
can help, but it can worsen congestion as packets time out and duplicates are sent. Low-bandwidth
links or routers that process packets slowly can also become congested.
● To relieve congestion, traffic can be directed away from the bottleneck, but eventually all regions
become congested, and the only remedy is to build a network with more capacity.
● Congestion control and flow control are two distinct concepts in network management.
Congestion control ensures the network can handle the offered traffic, involving all hosts and
routers' behavior. Flow control focuses on the traffic between a sender and a receiver, ensuring a fast
sender cannot transmit data faster than the receiver can absorb it.
● Congestion control is often confused with flow control, as the best way to handle both problems is
to get the host to slow down. Understanding congestion control involves examining approaches at
different time scales, preventing congestion, and coping with it once it has set in.
5.3.1 Approaches to Congestion Control
1. The presence of congestion means that the load is (temporarily) greater than the resources (in a part
of the network) can handle. Two solutions come to mind: increase the resources or decrease the
load. As shown in Fig. 5-22, these solutions are usually applied on different time scales to either
prevent congestion or react to it once it has occurred.
2. To avoid congestion, build a network that matches the traffic it carries. Resources can be added
dynamically during congestion, such as using spare routers or enabling lines.
3. Provisioning involves upgrading heavily utilized links and routers at the earliest opportunity. Routes
can be tailored to changing traffic patterns, such as shifting traffic away from heavily used paths.
4. Some local radio stations have helicopters flying around their cities to report on road congestion,
making it possible for their mobile listeners to route their packets (cars) around hotspots. Adapting
network routes to traffic in the same way is called traffic-aware routing; splitting traffic across
multiple paths also helps.
5. However, sometimes it is not possible to increase capacity. The only way then to beat back the
congestion is to decrease the load. In a virtual-circuit network, new connections can be refused if
they would cause the network to become congested. This is called admission control.
6. At a fine level, when congestion is imminent the network can request traffic sources causing the
problem to throttle their traffic to reduce congestion. To identify onset of congestion, routers can
monitor average load, queuing delay or packet loss. Rising numbers indicate growing congestion.
7. To control sources, routers must participate in a feedback loop. The timescale needs careful
adjustment:
(a) If feedback is too frequent, the system will oscillate wildly.
(b) If feedback is too slow, congestion control will react sluggishly.
8. Sending feedback messages during congestion adds more load to the network. When all else fails,
the network has to discard packets it cannot deliver, which is called load shedding. Good packet
discard policies can help prevent congestion collapse.
5.3.2 Traffic-Aware Routing
1. The first approach we will examine is traffic-aware routing. The goal in taking load into account
when computing routes is to shift traffic away from hotspots that will be the first places in the
network to experience congestion.
2. The most direct way to do this is to set the link weight to be a function of the fixed link bandwidth
and propagation delay plus the variable measured load or average queuing delay. Least-weight paths
will then favor paths that are more lightly loaded, all else being equal.
3. Traffic-aware routing was used in the early Internet according to this model by Khanna and Zinky in
1989. However, there is a risk. Consider the network of Fig. 3.3, which is divided into two parts,
East and West, connected by two links, CF and EI. Suppose that most of the traffic between East
and West is using link CF, and, as a result, this link is heavily loaded with long delays. Including
queueing delay in the weight used for the shortest path calculation will make EI more attractive.
After the new routing tables have been installed, most of the East-West traffic will now go over EI,
loading this link. Consequently, in the next update, CF will appear to be the shortest path. As a
result, the routing tables may oscillate wildly, leading to erratic routing and many potential problems.
Figure 3.3 A network in which the East and West parts are connected by two links.
4. If load is ignored and only bandwidth and propagation delay are considered, this problem does not
occur. Attempts to include load but change weights only within a narrow range merely slow down the
routing oscillations. Two techniques can contribute to a successful solution. The first is multipath
routing, in which there can be multiple paths from a source to a destination. In our example this
means that the traffic can be spread across both of the East-West links.
5. The second one is for the routing scheme to shift traffic across routes slowly enough that it is able to
converge, as in the scheme of Gallagher (1977). Given these difficulties, Internet routing
protocols do not generally adjust their routes depending on the load. Instead, adjustments are made
outside the routing protocol by slowly changing its inputs. This is called traffic engineering.
5.3.3 Admission Control
1. Admission control is a technique used to prevent congestion in virtual circuit networks. The idea is
to not establish a new virtual circuit if it would make the network congested. This is better than
letting more circuits in when the network is already busy.
2. Determining when a new circuit will lead to congestion is the challenge. In telephone networks, it is
straightforward since calls have fixed bandwidth. But in computer networks, virtual circuits have
varying bandwidths.
3. So for admission control to work, the new circuit needs to provide some characterization of its
traffic. Traffic can be described in terms of its rate and shape. But a simple description is difficult
due to its bursty nature.
4. The average rate only tells half the story since traffic is often bursty. Bursty traffic like web traffic
is harder to handle than streaming traffic with the same throughput since web traffic bursts are more
likely to congest routers.
5. A commonly used method to capture this effect is the leaky bucket or token bucket model.
The leaky bucket has two parameters:
(a) The average rate which bounds the long term traffic rate
(b) The burst size which bounds the maximum burst of traffic
6. The network can use traffic descriptions for admission control when deciding whether to admit new
virtual circuits. One option is for the network to reserve enough capacity along circuit paths to
prevent congestion. This guarantees quality of service for users.
7. Even without guarantees, the network can estimate how many circuits will fit within capacity
without congestion using traffic descriptions. For example, 10 circuits at up to 10 Mbps passing
through a 100 Mbps link can be admitted to avoid congestion. But this is wasteful since circuits may
not transmit at full speed at the same time.
8. Measurements of past behavior and statistics can be used to estimate the right number of circuits to
admit, trading better performance for acceptable risk.
9. Admission control can also be combined with traffic-aware routing by considering routes around
traffic hotspots as part of the setup procedure. For example, consider the network illustrated in Fig.
3.4(a), in which two routers are congested, as indicated.
Figure 3.4. (a) A congested network. (b) The portion of the network that is not congested. A virtual circuit from A
to B is also shown.
10. Suppose that a host attached to router A wants to set up a connection to a host attached to router B.
Normally, this connection would pass through one of the congested routers. To avoid this situation,
we can redraw the network as shown in Fig. 3.4(b), omitting the congested routers and all of their
lines. The dashed line shows a possible route for the virtual circuit that avoids the congested routers.
5.3.4 Traffic Throttling
Traffic throttling is an approach used to avoid congestion. In the Internet and many other
networks, senders try to send as much traffic as the network can readily deliver. When
congestion is imminent, the network should tell the senders of packets to slow down.
Let us now look at some approaches to throttling traffic that can be used in both datagram networks
and virtual-circuit networks. Each approach must solve two problems.
Problem 1: The router must be able to determine when congestion is approaching, ideally before
it has arrived. To do so, each router must continuously monitor the resources it is using: the
utilization of the output links, the buffering of queued packets inside the router, and the number
of packets lost due to insufficient buffering.
Problem 2: The router must deliver timely feedback to the senders that are causing the
congestion. Mechanisms for this include choke packets, explicit congestion notification, and
hop-by-hop backpressure:
2. Explicit Congestion Notification
In the explicit congestion notification (ECN) approach the router does not send extra packets to
the host but sets a bit in the header of a packet it forwards, to signal that it is experiencing
congestion. When the marked packet is delivered, the destination echoes the signal back to the
sender in its reply, and the sender then throttles its transmission (with choke packets, by
contrast, the router itself sends a separate warning packet back to the sender). The diagram
below describes explicit congestion notification.
3. Hop-by-Hop Backpressure
At high speeds or over long distances, many new packets may be transmitted after congestion
has been signaled, because of the delay before the signal takes effect. The aim of hop-by-hop
backpressure is to provide faster relief at the point of congestion: the choke signal takes effect
at every hop it passes through, requiring more buffers at each router along the way. This
technique propagates in the direction opposite to the data and is mainly used in virtual-circuit
networks. The diagram below describes hop-by-hop backpressure in detail.
5.3.5 Load Shedding
1. Load shedding is one of the techniques used for congestion control. A network router
contains a buffer used to store packets before routing them to their destination. Load
shedding is the practice of discarding packets when the buffers are full, according to
some strategy. The selection of packets to discard is an important task; often less
important packets or old packets are discarded.
2. The term comes from the world of electrical power generation, where it refers to the
practice of utilities intentionally blacking out certain areas to save the entire grid from
collapsing on hot summer days when the demand for electricity greatly exceeds the supply.
3. Selection of Packets to be Discarded: In load shedding, packets must be discarded to
avoid congestion, so the question is which packets to discard. Random early detection
is one approach to choosing them.
Random early detection
4. This situation can be exploited to help reduce congestion. By having routers drop packets
early, before the situation has become hopeless, there is time for the source to take action
before it is too late. A popular algorithm for doing this is called RED (Random Early
Detection)
5. To determine when to start discarding, routers maintain a running average of their queue
lengths. When the average queue length on some link exceeds a threshold, the link is said
to be congested and a small fraction of the packets are dropped at random (see the sketch
after this list).
6. Picking packets at random makes it more likely that the fastest senders will see a packet
drop; this is the best option since the router cannot tell which source is causing the most
trouble in a datagram network. The affected sender will notice the loss when there is no
acknowledgement, and then the transport protocol will slow down. The lost packet is thus
delivering the same message as a choke packet, but implicitly, without the router sending
any explicit signal.
7. RED routers improve performance compared to routers that drop packets only when their
buffers are full, though they may require tuning to work well. For example, the ideal
number of packets to drop depends on how many senders need to be notified of congestion.
8. However, ECN is the preferred option if it is available. It works in exactly the same
manner, but delivers a congestion signal explicitly rather than as a loss; RED is used when
hosts cannot receive explicit signals.
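As a rough illustration of the RED policy described in items 5 and 6 above, here is a minimal sketch; the threshold, drop probability, and averaging weight are illustrative values, not standards:

```python
import random

class RedQueue:
    """Random Early Detection drop decision for one output link."""

    def __init__(self, threshold=50.0, drop_prob=0.02, weight=0.002):
        self.avg = 0.0               # running average queue length
        self.threshold = threshold   # 'congested' above this average
        self.drop_prob = drop_prob   # small random drop fraction
        self.weight = weight         # smoothing factor for the average

    def on_arrival(self, queue_len):
        # the running average smooths out transient bursts
        self.avg = (1 - self.weight) * self.avg + self.weight * queue_len
        if self.avg > self.threshold and random.random() < self.drop_prob:
            return "drop"            # implicit signal: the sender sees a loss
        return "enqueue"
```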
5.4 QUALITY OF SERVICE
• The techniques we looked at in the previous sections are designed to reduce congestion
and improve network performance.
• Some applications and customers require stronger performance guarantees than what can
be achieved under typical circumstances.
Overprovisioning
• An easy solution for ensuring good quality of service is to build a network with sufficient
capacity to handle any anticipated traffic.
• This approach, known as overprovisioning, allows the network to carry application traffic
with minimal loss and low latency, assuming a decent routing scheme.
• The result is optimal performance, and the telephone system is an example of a somewhat
overprovisioned system, evident in the near-instant dial tone availability.
Drawbacks of Overprovisioning:
• The primary drawback of overprovisioning is its high cost, essentially solving the problem
by investing more resources.
• Quality of service mechanisms offer an alternative by enabling a network with less
capacity to meet application requirements at a lower cost.
• Overprovisioning relies on expected traffic, and significant issues may arise if traffic
patterns change unexpectedly.
• Quality of service mechanisms allow a network with less capacity to effectively meet
application requirements at a lower cost compared to overprovisioning.
• These mechanisms enable the network to uphold performance guarantees even during
traffic spikes, although this may involve rejecting some requests.
• Four issues must be addressed to ensure quality of service:
1. What applications need from the network.
2. How to regulate the traffic that enters the network.
3. How to reserve resources at routers to guarantee performance.
4. Whether the network can safely accept more traffic.
5.4.1 Application Requirements
• Definition of Flow:
• A stream of packets moving from a source to a destination is termed a flow (Clark, 1988).
• Flows can encompass all packets in a connection in a connection-oriented network or all
packets sent from one process to another in a connectionless network.
• Characterization of Flow Needs:
• Each flow's requirements can be described by four primary parameters: bandwidth,
delay, jitter, and loss.
• These parameters collectively determine the Quality of Service (QoS) necessary
for the flow.
• Common Applications and Network Requirements:
• Fig. 5-27 lists various common applications and the level of stringency in their
network requirements.
• Example Contrasts:
• Networks do not necessarily need to be lossless for reliable file transfer, as some
loss can be rectified with retransmissions.
• Delivery of packets with identical delays is not crucial for audio and video playout,
as some jitter can be smoothed by buffering packets at the receiver.
• Bandwidth Requirements:
• Applications exhibit varying bandwidth needs.
• Email, audio in all forms, and remote login have relatively low bandwidth
requirements.
• File sharing and video in all forms demand a significant amount of bandwidth.
• Delay Sensitivity:
• File transfer applications, including email and video, are not highly sensitive to
delay.
• Interactive applications like Web surfing and remote login are more delay-
sensitive.
• Real-time applications such as telephony and videoconferencing have strict delay
requirements.
• Jitter and its Impact:
• Jitter, the variation in delay or packet arrival times, is crucial.
• Email, audio, and file transfer are generally not sensitive to jitter.
• Remote login may be affected by jitter, causing bursts of updates on the screen.
• Video and audio are extremely sensitive to jitter; even small variations can have a
significant impact.
• Loss Tolerance:
• The first four applications (email, file sharing, Web access, and remote login) have
stringent loss requirements: all bits must be delivered correctly.
• Retransmissions are typically used to achieve this goal.
• Audio and video applications, in contrast, can tolerate some lost packets without
retransmission, as users may not notice short pauses or occasional skipped frames.
• Network Support for QoS:
• Networks may support different categories of Quality of Service (QoS) to
accommodate various applications.
• ATM networks, once part of a grand networking vision, provide an example of
supporting different QoS categories but have become a niche technology.
• Networks may support different categories of QoS.
1. Constant bit rate (e.g., telephony).
2. Real-time variable bit rate (e.g., compressed videoconferencing).
3. Non-real-time variable bit rate (e.g., watching a movie on demand).
4. Available bit rate (e.g., file transfer).
5.4.3 Traffic Shaping
• Traffic Characterization:
• Before a network can make Quality of Service (QoS) guarantees, it needs to know
the nature of the traffic being guaranteed.
• Traffic shaping is a technique for regulating the average rate and burstiness of a flow
of data entering the network. When a flow is set up, the user and the network agree on
a certain traffic pattern (shape) for that flow, an agreement sometimes called an SLA
(Service Level Agreement). As long as the customer sends packets according to the
agreed pattern, the provider promises timely delivery, so shaping
ensures the network can fulfill its promises.
• Traffic policing involves monitoring a traffic flow to verify adherence to the
agreement, potentially dropping excess packets or marking them with lower
priority.
• Importance of Shaping and Policing:
• Shaping and policing are critical for real-time data, such as audio and video
connections, which have stringent quality-of-service requirements.
• While less important for peer-to-peer transfers that consume all available
bandwidth, they are crucial for maintaining QoS commitments.
Leaky and Token Buckets
• Now we will look at a more general way to characterize traffic, with the leaky bucket and
token bucket algorithms. The formulations are slightly different but give an equivalent
result.
• Try to imagine a bucket with a small hole in the bottom, as illustrated in Fig. 5-28(b). No
matter the rate at which water enters the bucket, the outflow is at a constant rate, R, when
there is any water in the bucket and zero when the bucket is empty. Also, once the bucket
is full to capacity B, any additional water entering it spills over the sides and is lost.
• This bucket can be used to shape or police packets entering the network, as shown in Fig.
5-28(a). Conceptually, each host is connected to the network by an interface containing a
leaky bucket. To send a packet into the network, it must be possible to put more water into
the bucket. If a packet arrives when the bucket is full, the packet must either be queued
until enough water leaks out to hold it or be discarded.
• The former might happen at a host shaping its traffic for the network as part of the
operating system. The latter might happen in hardware at a provider network interface that
is policing traffic entering the network. This technique was proposed by Turner (1986) and
is called the leaky bucket algorithm.
• A different but equivalent formulation is to imagine the network interface as a bucket that
is being filled, as shown in Fig. 5-28(c). The tap is running at rate R and the bucket has a
capacity of B, as before. Now, to send a packet we must be able to take water, or tokens, as
the contents are commonly called, out of the bucket (rather than putting water into the
bucket).
• No more than a fixed number of tokens, B, can accumulate in the bucket, and if the
bucket is empty, we must wait until more tokens arrive before we can send another packet.
This algorithm is called the token bucket algorithm. Leaky and token buckets limit the
long-term rate of a flow but allow short term bursts up to a maximum regulated length to
pass through unaltered and without suffering any artificial delays.
• Large bursts will be smoothed by a leaky bucket traffic shaper to reduce congestion in the
network. As an example, imagine that a computer can produce data at up to 1000
Mbps(125 million bytes/sec) and that the first link of the network also runs at this speed.
• The pattern of traffic the host generates is shown in Fig. 5-29(a). This pattern is bursty.
The average rate over one second is 200 Mbps, even though the host sends a burst of
16,000 KB at the top speed of 1000 Mbps (for 1/8 of the second).
• Now suppose that the routers can accept data at the top speed only for short intervals, until
their buffers fill up. The buffer size is 9600 KB, smaller than the traffic burst. For long
intervals, the routers work best at rates not exceeding 200 Mbps (say, because this is all
the bandwidth given to the customer). The implication is that if traffic is sent in this
pattern, some of it will be dropped in the network because it does not fit into the buffers at
routers.
• To avoid this packet loss, we can shape the traffic at the host with a token bucket. If we
use a rate, R, of 200 Mbps and a capacity, B, of 9600 KB, the traffic will fall within what
the network can handle. The output of this token bucket is shown in Fig. 5-29(b).
• The host can send full throttle at 1000 Mbps for a short while until it has drained the
bucket. Then it has to cut back to 200 Mbps until the burst has been sent. The effect is to
spread out the burst over time because it was too large to handle all at once. The level of
the token bucket is shown in Fig. 5- 29(e).
• Fig. 5-29(c) shows the output of a token bucket with R = 200 Mbps and a capacity of 0.
This is the extreme case in which the traffic has been completely smoothed. No bursts are
allowed, and the traffic enters the network at a steady rate. The corresponding bucket level,
shown in Fig. 5-29(f), is always empty. Traffic is queued on the host for release into the
network, and there is always a packet waiting to be sent when it is allowed.
• Calculating the length of the maximum burst (until the bucket empties) is slightly tricky. It
is longer than just 9600 KB divided by 125 MB/sec because while the burst is being
output, more tokens arrive. If we call the burst length S sec., the maximum output rate M
bytes/sec, the token bucket capacity B bytes, and the token arrival rate R bytes/sec, we can
see that an output burst contains a maximum of B + RS bytes. We also know that the
number of bytes in a maximum speed burst of length S seconds is MS. Hence, we have
B + RS = MS
• We can solve this equation to get S = B /(M − R). For our parameters of B = 9600 KB, M
= 125 MB/sec, and R = 25 MB/sec, we get a burst time of about 94 msec. A potential
problem with the token bucket algorithm is that it reduces large bursts down to the long-term
rate R. Using all of these buckets can be a bit tricky. When token buckets are used for traffic shaping at
hosts, packets are queued and delayed until the buckets permit them to be sent. When token buckets are
used for traffic policing at routers in the network, the algorithm is simulated to make sure that no more
packets are sent than permitted. Nevertheless, these tools provide ways to shape the network traffic into
more manageable forms to assist in meeting quality-of-service requirements.
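As a quick check of the burst-time formula, the short sketch below reproduces the calculation (using binary units, 1 MB = 1024 KB, which is what the 94-msec figure implies):

    # Burst length S = B / (M - R), from solving B + R*S = M*S.
    KB = 1024
    MB = 1024 * KB

    B = 9600 * KB      # token bucket capacity, bytes
    M = 125 * MB       # maximum output rate, bytes/sec
    R = 25 * MB        # token arrival rate, bytes/sec

    S = B / (M - R)
    print(f"maximum burst time: {S * 1000:.1f} msec")   # 93.8, i.e., about 94 msec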
5.4.4 Packet Scheduling
• Traffic shaping alone is not enough: to provide performance guarantees, sufficient resources must be reserved along the route that packets take through the network.
• Assumption: For performance guarantees, packets of a flow must follow the same
route.
• All packets belonging to a specific flow must follow this established route.
• Algorithms that allocate router resources among packets of a flow and manage
resources between competing flows are known as packet scheduling algorithms.
• These algorithms play a crucial role in ensuring that resources are appropriately
distributed to meet performance guarantees.
Three different kinds of resources can potentially be reserved for different flows:
• Bandwidth Reservation:
• If a flow requires 1 Mbps and the outgoing line has a capacity of 2 Mbps, reserving
bandwidth means not oversubscribing the output line.
• Buffer Space:
• Packets arriving at a router are buffered until they can be transmitted on the chosen
outgoing line.
• Buffers absorb small bursts of traffic as flows contend with each other.
• For quality of service, some buffers might be reserved for specific flows,
preventing competition with other flows for buffer space.
• Maintaining a reserve ensures that a buffer is available when a flow needs it, up to
a specified maximum value.
• CPU Cycles:
• Processing a packet consumes router CPU time, limiting the number of packets the
router can handle per second.
• Certain packets, such as ICMP packets, may require more CPU processing.
• Various scheduling algorithms create different opportunities for deciding which packet to
drop when buffers are full.
Challenges of FIFO for Quality of Service:
• FIFO scheduling is simple but not ideal for providing good quality of service.
• In scenarios with multiple flows, an aggressive sender in the first flow can adversely
impact the performance of other flows.
• Aggressive senders can hog router capacity, starving other flows and reducing their quality
of service.
• The order-of-arrival processing in FIFO can lead to delays for packets of other flows,
exacerbating the impact of aggressive senders.
• Many packet scheduling algorithms have been devised that provide stronger isolation
between flows and thwart attempts at interference (Bhatti and Crowcroft, 2000). One of
the first ones was the fair queueing algorithm devised by Nagle (1987). The essence of the
algorithm is that routers have a separate queue, one per flow, for a given output line. When
the line becomes idle, the router scans the queues round-robin, as shown in Fig. 5-30.
• It then takes the first packet on the next queue. In this way, with n hosts competing for the
output line, each host gets to send one out of every n packets. It is fair in the sense that all
flows get to send packets at the same rate. Sending more packets will not improve this rate.
Although a start, the algorithm has a flaw: it gives more bandwidth to hosts that use large
packets than to hosts that use small packets. Demers et al. suggested an improvement
in which the round-robin is done in such a way as to simulate a byte-by-byte round-robin,
instead of a packet-by-packet round-robin.
• The trick is to compute a virtual time that is the number of the round at which each packet
would finish being sent. Each round drains a byte from all of the queues that have data to
send. The packets are then sorted in order of their finishing times and sent in that order.
This algorithm and an example of finish times for packets arriving in three flows are
illustrated in Fig. 5-31. If a packet has length L, the round at which it will finish is simply
L rounds after the start time. The start time is either the finish time of the previous
packet or, if the queue is empty when the packet arrives, its arrival time.
• From the table in Fig. 5-31(b), and looking only at the first two packets in the top two
queues, packets arrive in the order A, B, D, and F. Packet A arrives at round 0 and is 8
bytes long, so its finish time is round 8. Similarly, the finish time for packet B is 11.
Packet D arrives while B is being sent. Its finish time is 9 byte-rounds after it starts when
B finishes, or 20. Similarly, the finish time for F is 16. In the absence of new arrivals, the
relative sending order is A, B, F, D, even though F arrived after D. It is possible that
another small packet will arrive on the top flow and obtain a finish time before D. It will
only jump ahead of D if the transmission of D has not yet started; packets already being
transmitted are not preempted.
• One shortcoming of this algorithm in practice is that it gives all hosts the same priority. In
many situations, it is desirable to give, for example, video servers more bandwidth than,
say, file servers. This is easily possible by giving the video server two or more bytes per
round. This modified algorithm is called WFQ (Weighted Fair Queueing). Letting the
number of bytes per round be the weight of a flow, W, we can now give the formula for
computing the finish time:
Fi = max(Ai, Fi−1) + Li / W
• where Ai is the arrival time, Fi is the finish time, and Li is the length of packet i. The
bottom queue of Fig. 5-31(a) has a weight of 2, so its packets are sent more quickly as you
can see in the finish times given in Fig. 5-31(b).
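The bookkeeping behind these finish times can be sketched in a few lines of Python. The arrival times and lengths below are assumptions chosen to be consistent with the finish times quoted above (A: 8, B: 11, D: 20, F: 16); with all weights set to 1 the formula reduces to plain fair queueing.

    # Weighted fair queueing finish times: Fi = max(Ai, Fi-1) + Li / W,
    # where Fi-1 is the finish time of the previous packet in the SAME queue.
    def finish_times(packets, weights):
        """packets: (name, queue, arrival, length) tuples; returns {name: F}."""
        last_finish = {}                  # per-queue Fi-1
        result = {}
        for name, q, A, L in sorted(packets, key=lambda p: p[2]):
            F = max(A, last_finish.get(q, 0)) + L / weights[q]
            last_finish[q] = F
            result[name] = F
        return result

    # Assumed arrivals and lengths consistent with the worked example:
    pkts = [("A", 1, 0, 8), ("B", 2, 5, 6), ("D", 2, 8, 9), ("F", 1, 10, 6)]
    print(finish_times(pkts, {1: 1, 2: 1}))
    # -> A: 8, B: 11, D: 20, F: 16; sending order by finish time: A, B, F, D
    # Giving a queue weight 2 would halve its packets' Li / W terms.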
• As a final example of a scheduler, packets might carry timestamps and be sent in
timestamp order. Clark et al. (1992) describe a design in which the timestamp records
how far the packet is behind or ahead of schedule as it is sent through a sequence of
routers on the path. Packets that have been queued behind other packets at a router will
tend to be behind schedule, and the packets that have been serviced first will tend to be
ahead of schedule. Sending packets in order of their timestamps has the beneficial effect of
speeding up slow packets while at the same time slowing down fast packets. The result is
that all packets are delivered by the network with a more consistent delay.
5.4.5 Admission Control
• QoS Elements and Admission Control:
• The network decides to accept or reject the flow based on its capacity and
commitments to other flows.
• Reservations must be made at all routers along the route to prevent congestion.
• Some routing algorithms send all traffic over the best path, potentially causing
rejection of flows if there's insufficient spare capacity.
• QoS routing and splitting traffic over multiple paths are techniques to
accommodate QoS guarantees for new flows.
• The decision to accept or reject a flow involves more than comparing requested
resources with router excess capacity.
• Some applications are more tolerant of occasional missed deadlines than others.
• Applications must choose between hard guarantees or behavior that holds most of
the time.
• Some applications may be willing to adjust flow parameters, while others may not.
• Properties like the number of pixels per frame and audio bandwidth may be
negotiable for certain applications.
• Flows need to be described in terms of specific negotiable parameters.
• Flow Specification:
• As the specification travels along the route, each router may modify parameters as
necessary, but modifications can only reduce the flow, not increase it.
• Parameter Modifications:
• Each router along the path examines and modifies the flow specification.
• Modifications are allowed to decrease the flow, such as lowering the data rate, but
not to increase it.
• Establishment of Parameters:
• When the flow specification reaches the destination, the negotiated parameters can
be established.
• The largest queueing delay experienced by a flow is a function of the burst size of
the token bucket.
• In cases of smooth traffic without bursts, packets are drained from the router as
quickly as they arrive, resulting in no queueing delay (ignoring packetization
effects).
• Fair queueing isolates the bandwidth allocated to different flows, ensuring that
each flow meets its bandwidth and delay guarantees regardless of the behaviour of
other competing flows at the router.
• Isolation of Flows:
• Competing flows at the router cannot break the guarantee even if they save up
traffic and send it all at once.
• The combination of token buckets and fair queueing ensures that each flow's
guarantees remain intact.
• The guarantees hold for a path through multiple routers in any network topology.
• Maximum delay for each flow is ensured, even in the worst-case scenario where a
burst of traffic hits the first router and competes with other flows. If the flow is
shaped by a token bucket of capacity B and is guaranteed a rate R, the burst's delay
is at most D = B/R, and this delay smooths the burst, preventing further queueing
delays at later routers.
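A one-line computation makes the bound concrete (the parameter values are reused from the traffic-shaping example earlier and are only illustrative):

    # Worst-case queueing delay for a token-bucket-shaped flow with a
    # guaranteed rate: a burst of at most B bytes drains at rate R, so D = B/R.
    B = 9600 * 1024           # bucket capacity, bytes
    R = 25 * 1024 * 1024      # guaranteed (reserved) rate, bytes/sec
    D = B / R
    print(f"worst-case queueing delay: {D * 1000:.0f} msec")   # 375 msec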
• Between 1995 and 1997, the Internet Engineering Task Force (IETF) dedicated
substantial effort to creating an architecture for streaming multimedia.
• This initiative resulted in over two dozen RFCs, starting with RFCs 2205–2212.
• The overarching name for this work is "integrated services," addressing both
unicast and multicast applications.
• The integrated services architecture was designed to cater to both unicast and
multicast applications.
• Unicast example: a single user streaming a video clip from a news site.
• Focus on Multicast:
• Tracking entries and exits of the audience for each sender becomes impractical,
especially for systems transmitting television to a large number of subscribers.
RSVP—The Resource ReSerVation Protocol
• Functionality of RSVP:
• In its simplest form, RSVP utilizes multicast routing with spanning trees.
• The standard multicast routing algorithm constructs a spanning tree that covers all
members of the group.
• RSVP doesn't define the multicast routing algorithm but adds a little extra
information periodically multicast to the group. This information instructs routers
along the tree to maintain specific data structures in their memories.
As an example, consider the network of Fig. 5-34(a). Hosts 1 and 2 are multicast senders, and
hosts 3, 4, and 5 are multicast receivers. In this example, the senders and receivers are disjoint,
but in general, the two sets may overlap. The multicast trees for hosts 1 and 2 are shown in Fig. 5-
34(b) and Fig. 5-34(c), respectively.
• To enhance reception quality and prevent congestion, any receiver within a group
has the capability to send a reservation message up the tree to the sender.
• If there is insufficient bandwidth available at any point along the path, the router
reports back a failure in the reservation process.
• Reservation Completion:
• By the time the reservation message reaches the source (sender), bandwidth has
been successfully reserved all the way from the sender to the receiver who initiated
the reservation request.
• The spanning tree, as established by the multicast routing algorithm, serves as the
path for the reservation message, ensuring that all routers along the way are
involved in the reservation process.
An example of such a reservation is shown in Fig. 5-35(a). Here host 3 has requested a channel to
host 1. Once it has been established, packets can flow from 1 to 3 without congestion. Now
consider what happens if host 3 next reserves a channel to the other sender, host 2, so the user can
watch two television programs at once. A second path is reserved, as illustrated in Fig. 5-35(b).
Note that two separate channels are needed from host 3 to router E because two independent
streams are being transmitted.
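The hop-by-hop check a reservation message performs on its way up the tree can be sketched as follows (the router names and spare capacities are invented for illustration; real RSVP also merges overlapping reservations):

    # Spare capacity (Mbps) at each router on the path from receiver to sender.
    spare = {"E": 10, "C": 10, "A": 8}    # invented values

    def reserve(path, bandwidth):
        """Reserve `bandwidth` at every router on `path`, or fail and roll back."""
        taken = []
        for router in path:
            if spare[router] < bandwidth:
                for r in taken:               # undo the partial reservation
                    spare[r] += bandwidth
                return f"failure reported at {router}"
            spare[router] -= bandwidth
            taken.append(router)
        return "reserved end to end"

    print(reserve(["E", "C", "A"], 5))    # -> reserved end to end
    print(reserve(["E", "C", "A"], 5))    # -> failure reported at A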
• When making a reservation, a receiver has the option to specify one or more
sources from which it wants to receive data.
• The receiver can indicate whether these source choices are fixed for the entire
duration of the reservation or if it wants the flexibility to change sources later.
• Specifically, if two receivers agree not to change sources later on, routers can set
them up to share a path efficiently.
• In the fully dynamic case, reserved bandwidth is decoupled from the choice of the
data source.
• Once a receiver has reserved bandwidth, it can switch to another source while
retaining the portion of the existing path that is valid for the new source.
• Advantages:
• Offers good QoS for one or more flows by reserving necessary resources along the
route.
• Disadvantages:
• Requires advance setup for establishing each flow, which doesn't scale well with a
large number of flows.
Class-Based Quality of Service (Differentiated Services - DiffServ):
• Approach:
• Simpler QoS approach, largely implemented locally in each router without advance
setup for the entire path.
• Administrative Domain:
• A set of service classes, with corresponding forwarding rules, is defined within an
administrative domain (e.g., an ISP or telco); packets of customers who sign up for
the service are handled according to those classes.
• Packet Marking:
• Class information is carried in the Differentiated Services field of IPv4 and IPv6
packets.
• Traffic Conformance:
• Traffic within a class may be required to conform to specific shapes (e.g., leaky
bucket) or limitations.
• Ease of Implementation:
• Because no advance setup, no reservation, and no per-flow negotiation are
required, class-based service is much easier to deploy than flow-based QoS.
• Flow-Based Scheme:
• In a flow-based scheme, each flow (e.g., each individual telephone call) gets its
own reserved resources and its own guarantee.
• Class-Based Scheme:
• All telephone calls share resources reserved for the telephony class, preventing
interference from other classes.
• No individual telephone call receives private resources reserved exclusively for it.
Expedited Forwarding
• Purpose:
• Suited for applications with stringent requirements, such as Voice over IP (VoIP).
• Two-Class System:
• Two classes of service are defined, regular and expedited; most traffic is regular,
while the small fraction of expedited packets receives preferential treatment.
• Symbolic Representation:
Note: There is one physical line; logical pipes illustrate reserved bandwidth for
different service classes.
• Implementation Strategy:
• Packets are classified as expedited or regular and marked accordingly.
• Classification may occur at the sending host or the ingress (first) router.
• Traffic Policing:
• In cases where marking is done by the host, the ingress router may police the traffic to
ensure customers send only the allocated amount of expedited traffic.
• Queueing Mechanism:
• Routers within the network may have two output queues for each outgoing line: one
for expedited packets and one for regular packets.
• Arriving packets are queued accordingly, and the expedited queue is given priority over
the regular queue.
• Network Behaviour:
• Expedited packets, despite the actual presence of heavy regular traffic, experience the
network as unloaded due to priority treatment.
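A minimal sketch of the two-queue priority discipline just described (the class and method names are our own):

    from collections import deque

    class PriorityLine:
        """One outgoing line with an expedited and a regular queue."""
        def __init__(self):
            self.expedited = deque()
            self.regular = deque()

        def enqueue(self, packet, is_expedited):
            (self.expedited if is_expedited else self.regular).append(packet)

        def dequeue(self):
            """Strict priority: always serve the expedited queue first."""
            if self.expedited:
                return self.expedited.popleft()
            if self.regular:
                return self.regular.popleft()
            return None   # both queues empty; line goes idle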
• Marking by Hosts:
• When hosts do the marking themselves, the classification can reflect application
knowledge from the start, but the ingress router must then police the markings, as
noted above (a socket-level sketch follows).
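On most systems a host can mark its own packets by setting the DS field through the IP_TOS socket option, as in the sketch below (the destination address and port are placeholders; DSCP 46 is the standard Expedited Forwarding code point and occupies the upper six bits of the byte):

    import socket

    EF_DSCP = 46                 # Expedited Forwarding code point
    tos = EF_DSCP << 2           # DSCP occupies the top 6 bits of the DS field

    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
    # Datagrams sent on s now carry DSCP 46; the ingress router may still
    # police or re-mark them, as described above.
    s.sendto(b"voice sample", ("192.0.2.1", 5004))   # placeholder destination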
• Assured Forwarding
• Objective:
• Provide a somewhat more elaborate class-based scheme, known as assured
forwarding.
• Priority Classes:
• Specifies four priority classes, each allocated specific resources.
• Discard Classes:
• Defines three discard classes for packets experiencing congestion: low, medium,
and high.
1. Packet Classification:
• Classify packets into one of the four priority classes.
2. Discard Class Determination:
• Pass packets through a traffic policer (e.g., token bucket) to determine the
discard class.
• Low discard for packets within small bursts, medium discard for those
exceeding small bursts, and high discard for packets exceeding large bursts.
3. Router Processing:
• Weighted fair queueing commonly employed for the four priority classes.
• Preferential Dropping:
• Within a priority class, packets with a higher discard class can be preferentially
dropped.
• Algorithms like RED (Random Early Detection) may be used, starting to drop
packets before buffer space is exhausted during congestion.
• Enables acceptance of low discard packets while dropping high discard packets.
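A toy sketch of RED-style preferential dropping (the thresholds and the mapping to the three discard classes are illustrative assumptions, not values from any standard):

    import random

    # Buffer-occupancy thresholds at which each discard class starts dropping.
    # Higher discard classes begin dropping earlier and more aggressively.
    THRESHOLDS = {"low": 0.9, "medium": 0.7, "high": 0.5}   # assumed values

    def should_drop(queue_fill, discard_class):
        """queue_fill: current buffer occupancy as a fraction in [0, 1]."""
        start = THRESHOLDS[discard_class]
        if queue_fill < start:
            return False          # below the threshold: never drop
        # Drop probability rises linearly from 0 at the threshold to 1
        # when the buffer is completely full.
        p = (queue_fill - start) / (1.0 - start)
        return random.random() < p

    # With the buffer 80% full: high-discard packets dropped with p = 0.6,
    # medium with p = 1/3, and low-discard packets not dropped at all.
    print(should_drop(0.8, "high"))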
• Packet Scheduler: