CN Module 3
This equipment is used as follows. A host with a packet to send transmits it to the nearest router, either on
its own LAN or over a point-to-point link to the ISP. The packet is stored there until it has fully arrived and
the link has finished its processing by verifying the checksum. Then it is forwarded to the next router along
the path until it reaches the destination host, where it is delivered. This mechanism is store-and-forward
packet switching.
The network layer provides services to the transport layer at the interface, and the services must be
designed with specific goals in mind.
1. The services should be independent of the router technology.
2. The transport layer should be shielded from the number, type, and topology of the routers
present.
• The network layer designers have significant freedom in writing detailed specifications of the services
offered to the transport layer. However, this freedom often degenerates into a debate between two factions:
one, represented by the Internet community, argues that the routers' job is to move packets and nothing
else. This viewpoint is based on the end-to-end argument, which has been influential in shaping the
Internet. The other faction, represented by the telephone companies, argues that the network should
provide a reliable, connection-oriented service.
• Although early data networks were connection-oriented, connectionless network layers have grown in
popularity since the early days of the Internet. The IP protocol is now a symbol of that success, while
connection-oriented technologies such as ATM are found only in niche uses.
However, the Internet itself is evolving connection-oriented features as quality of service becomes more
important. Two examples of connection-oriented technologies, MPLS (MultiProtocol Label Switching)
and VLANs, are widely used. The debate will no doubt continue as the Internet evolves in response to the
needs of its users.
• The network layer provides two types of service: connectionless and connection-oriented.
Connectionless service injects packets into the network individually and routes them independently of
each other; no advance setup is needed, and the network is called a datagram network.
• Connection-oriented service, in contrast, requires a path from the source router to the destination
router to be established before any data packets can be sent. This connection is called a virtual circuit
(VC), in analogy with the physical circuits set up by the telephone system, and the network is called a
virtual-circuit network.
Let us now see how a datagram network works. Suppose that the process P1 in Fig. 5-2 has a long
message for P2. It hands the message to the transport layer, with instructions to deliver it to process
P2 on host H2. The transport layer code runs on H1, typically within the operating system. It prepends
a transport header to the front of the message and hands the result to the network layer, probably just
another procedure within the operating system.
In this example, the message is four times longer than the maximum packet size, so the network layer breaks
it into four packets, 1, 2, 3, and 4, and sends each of them in turn to router A using a point-to-point protocol
such as PPP. At this point the ISP takes over. Every router has an internal table telling it where to send
packets for each possible destination. Each table entry is a pair consisting of a destination and the outgoing
line to use for that destination.
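The forwarding step at each router can be pictured as a single table lookup. The sketch below is illustrative only; the router and line names are hypothetical, not taken from the figure:

```python
# Minimal sketch of a datagram router's forwarding table: each entry is a
# pair (destination, outgoing line), and forwarding is a single lookup.

forwarding_table = {      # hypothetical entries for a router like A
    "H2": "line_to_C",
    "H3": "line_to_B",
}

def forward(destination: str) -> str:
    """Return the outgoing line on which to send a packet for destination."""
    return forwarding_table[destination]

print(forward("H2"))      # -> line_to_C
```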
Router A has only two outgoing lines, to B and to C, so every incoming packet must be sent to one of these
routers, even if the ultimate destination is some other router. According to A's initial routing table, packets
1, 2, and 3 are stored briefly at A (on arrival, so the checksum can be verified) and then forwarded to C,
each within a new frame. Packet 1 is then forwarded to E and then to F. When it gets to F, it is sent within
a frame over the LAN to H2. Packets 2 and 3 follow the same route.
• However, packet 4 is sent to router B, even though it is also destined for F. A decided to
send packet 4 via a different route than the first three packets, possibly due to a traffic jam
along the ACE path. The routing algorithm manages the tables and makes routing
decisions, and there are several different kinds of routing algorithms.
Now let us consider what happens if H3 also wants to establish a connection to H2. It chooses connection identifier
1 (because it is initiating the connection and this is its only connection) and tells the network to establish the
virtual circuit. This leads to the second row in the tables. Note that we have a conflict here because although
A can easily distinguish connection 1 packets from H1 from connection 1 packets from H3, C cannot do this.
For this reason, A assigns a different connection identifier to the outgoing traffic for the second connection.
Avoiding conflicts of this kind is why routers need the ability to replace connection identifiers in outgoing
packets. In some contexts, this process is called label switching. An example of a connection-oriented
network service is MPLS (MultiProtocol Label Switching). It is used within ISP networks in the Internet,
with IP packets wrapped in an MPLS header having a 20-bit connection identifier or label. MPLS is often
hidden from customers, with the ISP establishing long-term connections for large amounts of traffic, but it
is increasingly being used to help when quality of service is important but also with other ISP traffic
management tasks.
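To make the label-replacement idea concrete, here is a hedged sketch of the kind of table a label-switching router keeps; the line names and label values are invented for illustration:

```python
# Sketch of label switching: the router maps (incoming line, incoming label)
# to (outgoing line, outgoing label), rewriting the label so that two
# connections arriving with the same identifier stay distinguishable.

label_table = {
    ("from_H1", 1): ("to_C", 1),
    ("from_H3", 1): ("to_C", 2),   # rewritten 1 -> 2 to avoid the clash at C
}

def switch(in_line: str, in_label: int):
    out_line, out_label = label_table[(in_line, in_label)]
    return out_line, out_label      # forward with the replaced label

print(switch("from_H3", 1))         # -> ('to_C', 2)
```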
5.1.1 Comparison of Virtual-Circuit and Datagram Networks
Both virtual circuits and datagrams have their supporters and their detractors. We will now attempt to
summarize both sets of arguments. The major issues are listed in Fig. 5-4, although purists could probably
find a counterexample for everything in the figure.
Issue: State information — Datagram network: routers do not hold state information about connections. Virtual-circuit network: each VC requires router table space per connection.
Issue: Effect of router failures — Datagram network: none, except for packets lost during the crash. Virtual-circuit network: all VCs that passed through the failed router are terminated.
• Virtual circuits and datagrams each have advantages and disadvantages. One trade-off concerns setup
versus forwarding: establishing a virtual circuit takes time and consumes resources, but once it is set up,
forwarding a data packet is easy, since the router just uses the circuit number to index into a table. In
contrast, a datagram network requires a more complicated lookup procedure to locate the entry for the
destination.
• Datagram networks use longer destination addresses than virtual-circuit networks because the addresses
have global meaning, which can mean significant overhead and wasted bandwidth. Table space in router
memory also differs: a datagram network needs an entry for every possible destination, whereas a
virtual-circuit network needs an entry only for each virtual circuit.
• Virtual circuits offer some advantages in guaranteeing quality of service and avoiding congestion
within the network, as resources can be reserved in advance when the connection is established.
However, congestion avoidance is more difficult with a datagram network.
• For transaction processing systems, the overhead required to set up and clear a virtual circuit may
easily dwarf the use of the circuit. However, for long-running uses such as VPN traffic between corporate
offices, permanent virtual circuits may be useful.
Virtual circuits also have a vulnerability problem: if a router crashes and loses its memory, all virtual
circuits passing through it must be aborted. Datagrams, on the other hand, allow the routers to balance
traffic throughout the network, since routes can be changed partway through a long sequence of packet
transmissions.
• The optimality principle, as proposed by Bellman (1957), states that if router J is on the
optimal path from router I to router K, then the optimal path from J to K also follows the same
route. The principle holds without regard to the particular network topology or traffic.
• The optimality principle implies that the set of optimal routes from all sources to a given
destination forms a tree rooted at the destination, called a sink tree.
• A sink tree is not unique; if all possible optimal paths are included, the result is a DAG (Directed
Acyclic Graph), which still has no loops. We use the term sink tree for both cases. Both depend on the
technical assumption that paths do not interfere with each other, so that, for example, a traffic jam on
one path does not cause another path to divert.
• The idea is to build a graph of the network, with each node of the graph representing a router
and each edge of the graph representing a communication line, or link. To choose a route
between a given pair of routers, the algorithm just finds the shortest path between them
on the graph.
• The concept of a shortest path deserves some explanation. One way of measuring path length
is the number of hops. Using this metric, the paths ABC and ABE in Fig. 5-7 are
equally long. Another metric is the geographic distance in kilometers, in which case ABC
is clearly much longer than ABE (assuming the figure is drawn to scale).
• Dijkstra's algorithm, developed in 1959, finds the shortest paths between a source and all
destinations in a network. Each node is labeled with its distance from the source node along
the best known path, which must be non-negative. Initially, no paths are known, so all nodes
are labeled with infinity. As paths are found, labels may change to reflect better
paths, and labels can be tentative or permanent.
• The labeling algorithm works by repeatedly making the node with the smallest tentative label
permanent and updating the tentative labels of that node's neighbors, until the destination's label
becomes permanent; at that point the shortest path from the source to the destination has been found.
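A compact sketch of Dijkstra's algorithm follows; the topology at the end is invented for illustration and is not the network of Fig. 5-7:

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from source. graph maps node -> {neighbor: cost};
    labels start as infinity (tentative) and become permanent when popped."""
    dist = {node: float("inf") for node in graph}
    dist[source] = 0
    permanent = set()
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in permanent:
            continue
        permanent.add(u)                   # smallest tentative label is final
        for v, cost in graph[u].items():
            if d + cost < dist[v]:         # a better path via u was found
                dist[v] = d + cost
                heapq.heappush(heap, (dist[v], v))
    return dist

example = {"A": {"B": 2, "G": 6}, "B": {"A": 2, "E": 2, "C": 7},
           "C": {"B": 7, "D": 3}, "D": {"C": 3, "E": 3},
           "E": {"B": 2, "D": 3, "G": 1}, "G": {"A": 6, "E": 1}}
print(dijkstra(example, "A"))
```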
• Some routing algorithms must rely on local knowledge rather than a complete picture of the network. One
such technique is flooding, in which every incoming packet is sent out on every outgoing
line except the one it arrived on. This generates numerous duplicate packets, unless
measures are taken. One measure is to include a hop counter in each packet header; it is decremented at
each hop, and the packet is discarded when the counter reaches zero. Ideally, the hop counter should be
initialized to the length of the path from source to destination; if the sender does not know the path
length, it can initialize the counter to the worst case, the full diameter of the network.
• Flooding with a hop count can lead to an exponential number of duplicate packets. To
prevent this, routers should track which packets have been flooded and avoid sending them
out again. This can be achieved by placing a sequence number in each packet received from
its hosts, and having a list per source router detailing which sequence
numbers have been seen.
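A minimal sketch of this duplicate-suppression bookkeeping (names are illustrative):

```python
# Each router keeps, per source router, the set of sequence numbers it has
# already flooded; a packet is reflooded only the first time it is seen.

seen = {}   # source router -> set of sequence numbers already flooded

def should_flood(source: str, seq: int) -> bool:
    numbers = seen.setdefault(source, set())
    if seq in numbers:
        return False          # duplicate: discard instead of reflooding
    numbers.add(seq)
    return True

print(should_flood("A", 7))   # True  -> flood the first copy
print(should_flood("A", 7))   # False -> drop the duplicate
```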
• The algorithm is given in Fig. 5-8, where global variables n and dist describe the graph and
are initialized before shortest path is called. The shortest paths from t to s in an undirected
graph are the same as the shortest paths from s to t.
• To prevent the lists from growing without bound, each list is augmented by a counter, k,
meaning that all sequence numbers through k have been seen. When a packet arrives, its sequence
number shows whether it has already been flooded (at or below k, or on the list); if so, it is
discarded rather than flooded again.
• Flooding ensures packets are delivered to every node in the network, which is effective for
broadcasting information. In wireless networks, all messages transmitted by a station can
be received by all other stations within its radio range, which some algorithms utilize.
• Flooding is robust: it can find a path for a packet even if large numbers of routers are destroyed
(for example, in a war zone), and it requires minimal setup. It can also serve as a building block for
more efficient routing algorithms and as a metric against which they are compared: because flooding
explores every possible path in parallel, it always finds the shortest path, so no other algorithm can
produce a shorter delay (if we ignore the overhead generated by the flooding process itself).
• Computer networks utilize dynamic routing algorithms, such as distance vector routing
and link state routing, which are more complex but more efficient in finding the shortest
paths for the current topology, with a focus on the former algorithm.
• A distance vector routing algorithm operates by having each router maintain a table (i.e., a
vector) giving the best known distance to each destination and which link to use to get
there. These tables are updated by exchanging information with the neighbors. Eventually,
every router knows the best link to reach each destination.
• The distance vector routing algorithm, also known as the Bellman-Ford routing algorithm,
was the original ARPANET routing algorithm and was used in the Internet under the name
RIP. It involves each router maintaining a routing table with an entry for each router,
including the preferred outgoing line and distance estimate, which can be measured as hops
or other metrics.
• The router is assumed to know the distance to each of its neighbors, measured in hops or as
propagation delay. If the metric is hops, the distance to a neighbor is just one hop.
• If the metric is propagation delay, the router can measure it directly with special ECHO
packets. Every T msec, each router sends each neighbor a list of its estimated delays to every
destination. If a router knows that the delay to neighbor X is m msec, and X reports that it can
reach router i in Xi msec, then the router knows it can reach i via X in Xi + m msec. By performing
this calculation for each neighbor, the router finds the best estimate for each destination and uses
it in its new routing table.
• This updating process is illustrated in Fig. 5-9. Part (a) shows a network. The first four
columns of part (b) show the delay vectors received from the neighbors of router J. A
claims to have a 12-msec delay to B, a 25-msec delay to C, a 40-msec delay to D, etc.
Suppose that J has measured or estimated its delay to its neighbors, A, I, H, and K, as 8, 10,
12, and 6 msec, respectively.
• Consider how J computes its new route to router G. It knows that it can get to A in 8 msec,
and furthermore A claims to be able to get to G in 18 msec, so J knows it can count on a
delay of 26 msec to G if it forwards packets bound for G to A. Similarly, it computes the
delay to G via I, H, and K as 41 (31 + 10), 18 (6 + 12), and 37 (31 + 6) msec, respectively.
The best of these values is 18, so it makes an entry in its routing table that the delay to G is
18 msec and that the route to use is via H. The same calculation is performed for all the
other destinations, with the new routing table shown in the last column of the figure.
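The update J performs can be written out directly. The sketch below reuses the delays quoted above for destination G; the numbers come from the example, but the code itself is only illustrative:

```python
# J's distance-vector update: for each destination, take the minimum over
# neighbors of (measured delay to neighbor + delay the neighbor reports).

link_delay = {"A": 8, "I": 10, "H": 12, "K": 6}      # J's delays to neighbors
reported_to_G = {"A": 18, "I": 31, "H": 6, "K": 31}  # neighbors' vectors for G

def best_route(dest_reports):
    costs = {n: link_delay[n] + dest_reports[n] for n in link_delay}
    via = min(costs, key=costs.get)
    return costs[via], via

print(best_route(reported_to_G))   # -> (18, 'H'), the entry J records for G
```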
• Distance vector routing was used in the ARPANET until 1979, when it was replaced by link
state routing. The primary problem that caused its demise was that the algorithm often
took too long to converge after the network topology changed (due to the count-to-infinity
problem). Consequently, it was replaced by an entirely new algorithm, now called link state
routing. Variants of link state routing called IS-IS and OSPF are the routing algorithms that
are most widely used inside large networks and the Internet today.
• The idea behind link state routing is fairly simple and can be stated as five parts. Each
router must do the following things to make it work:
1. Discover its neighbors and learn their network addresses.
2. Set the distance or cost metric to each of its neighbors.
3. Construct a packet telling all it has just learned.
4. Send this packet to and receive packets from all other routers.
5. Compute the shortest path to every other router.
• In effect, the complete topology is distributed to every router. Then Dijkstra’s algorithm
can be run at each router to find the shortest path to every other router. Below we will
consider each of these five steps in more detail.
• The broadcast LAN provides connectivity between each pair of attached routers.
However, modeling the LAN as many point-to-point links increases the size of the
topology and leads to wasteful messages. A better way to model the LAN is to consider
it as a node itself, as shown in Fig. 5-11(b). Here, we have introduced a new, artificial
node, N, to which A, C, and F are connected. One designated router on the LAN is selected
to play the role of N in the routing protocol. The fact that it is possible to go from A to C
on the LAN is represented by the path ANC here.
• Setting Link Costs: The link state routing algorithm requires each link to have a distance
or cost metric for finding shortest paths. The cost to reach neighbors can be set
automatically, or configured by the network operator. A common choice is to make the
cost inversely proportional to the bandwidth of the link. For example, 1-Gbps Ethernet
may have a cost of 1 and 100-Mbps Ethernet a cost of 10. This makes higher-capacity
paths better choices. If the network is geographically spread out, the delay of the links
may be factored into the cost so that paths over shorter links are better choices. The most
direct way to determine this delay is to send over the line a special ECHO packet that the
other side is required to send back immediately. By measuring the round-trip time and
dividing it by two, the sending router can get a reasonable estimate of the delay.
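Both cost conventions mentioned above are easy to express in code. This is a hedged sketch, with the reference bandwidth and the round-trip handling as assumptions:

```python
# Link cost inversely proportional to bandwidth, plus a one-way delay
# estimate obtained from an ECHO packet's round-trip time.

REFERENCE_BPS = 1_000_000_000        # 1 Gbps reference -> cost 1, as above

def bandwidth_cost(link_bps: int) -> float:
    return REFERENCE_BPS / link_bps  # 100-Mbps Ethernet -> cost 10

def one_way_delay(rtt_seconds: float) -> float:
    return rtt_seconds / 2           # half the measured round trip

print(bandwidth_cost(100_000_000))   # 10.0
print(one_way_delay(0.030))          # 0.015 s
```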
• Building Link State Packets: Once the information needed for the exchange has been
collected, the next step is for each router to build a packet containing all the data. The
packet starts with the identity of the sender, followed by a sequence number and age (to
be described later) and a list of neighbors. The cost to each neighbor is also given. An
example network is presented in Fig. 5-12(a) with costs shown as labels on the lines. The
corresponding link state packets for all six routers are shown in Fig. 5-12(b).
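A link state packet is just structured data. Here is a minimal illustrative sketch; the field names are assumptions, not prescribed by the text:

```python
# Build a link state packet: sender identity, sequence number, age, and the
# list of neighbors with the cost to each, as described above.

def build_lsp(sender: str, seq: int, age: int, neighbor_costs: dict) -> dict:
    return {"sender": sender, "seq": seq, "age": age,
            "neighbors": dict(neighbor_costs)}

# E.g., a router B whose lines to A and C carry costs 4 and 2:
print(build_lsp("B", seq=1, age=60, neighbor_costs={"A": 4, "C": 2}))
```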
• Building the link state packets is easy. The hard part is determining when to build them.
One possibility is to build them periodically, that is, at regular intervals. Another
possibility is to build them when some significant event occurs, such as a line or neighbor
going down or coming back up again or changing its properties appreciably.
5.2.7 Broadcast Routing
• Broadcasting is the process of sending a packet to all destinations simultaneously, such
as weather reports or stock market updates.
• One broadcasting method is for the source simply to send a distinct packet to each destination.
This requires no special network features, but it is wasteful of bandwidth and slow, and it requires
the source to have a complete list of all destinations, so it is not desirable in practice even
though it is widely applicable.
• Multidestination routing is an improvement in network communication where each packet
contains a list of destinations or a bit map indicating the desired destinations.
When a packet arrives at a router, it checks all the destinations to determine the set of
output lines needed.
• The router generates a new copy of the packet for each output line and includes only those
destinations that will use the line. This partitions the destination set among the
output lines, resulting in more efficient use of network bandwidth. However, this scheme
still requires the source to know all the destinations, and it still requires work from the routers
for every packet.
• Flooding is a better broadcast routing technique that efficiently uses links with a simple
decision rule at routers. Although it is not suitable for point-to-point communication, it is
worth considering for broadcasting.
• Reverse path forwarding is an elegant and simple idea that checks if a broadcast packet
arrived on the link normally used for sending packets toward the source. If so, the router
forwards copies of the packet onto all links except the one it arrived on. If the packet
arrived on a different link, it is discarded as a likely duplicate.
• In the example of reverse path forwarding, router I sends a packet to each of its neighbors on the
first hop. On the second hop, eight packets are generated, two by each of the routers that received a
packet on the first hop. All eight of these packets arrive at previously unvisited routers, five of them
along the preferred line. On the third hop, six packets are generated, but only three arrive on the
preferred path. After five hops and 24 packets, broadcasting terminates, compared with four hops and 14
packets had the sink tree been followed exactly. Reverse path forwarding is efficient and easy to
implement: it sends the broadcast packet over each link only once in each direction, just as flooding
does, yet it requires neither that routers remember sequence numbers nor that all destinations be
listed in the packet.
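The decision rule itself is tiny. The following sketch assumes the router already knows, from its unicast routing table, which link it uses toward each source:

```python
# Reverse path forwarding check: reflood a broadcast only if it arrived on
# the link this router itself would use to send packets toward the source.

toward_source = {"I": "link_2"}      # hypothetical unicast table entry

def on_broadcast(source, arrival_link, all_links):
    if arrival_link != toward_source[source]:
        return []                    # likely duplicate: discard
    return [l for l in all_links if l != arrival_link]

print(on_broadcast("I", "link_2", ["link_1", "link_2", "link_3"]))
print(on_broadcast("I", "link_3", ["link_1", "link_2", "link_3"]))  # []
```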
• The final broadcast algorithm improves on reverse path forwarding by making explicit use of a sink
tree, or any other convenient spanning tree, known to the router initiating the broadcast.
• A spanning tree is a subset of the network that includes all routers but has no loops. If
each router knows which line belongs to the tree, it can copy an incoming broadcast
packet onto all spanning tree lines except the one it arrived on.
• This method efficiently uses bandwidth, generating the minimum number of packets
needed. However, each router must have knowledge of a spanning tree for the method to
be applicable.
5.2.8 Multicast Routing
• Multicasting sends messages to well-defined groups that are numerically large but small compared
with the network as a whole. This is particularly useful for applications such as multiplayer games
or live sports events. Sending a distinct packet to each receiver is expensive unless the group is small.
• However, broadcasting a packet is wasteful if the group consists of, say, a thousand machines
on a million-node network. Multicasting therefore needs a way to create and destroy groups and to
identify which routers have members of which groups attached. Each group is identified by a multicast
address, and routers know the groups to which their hosts belong.
• Multicast routing schemes use spanning trees to deliver packets to members of a group
efficiently, based on the group's density or sparsity. Broadcast is a good start for dense
groups, as it efficiently gets the packet to all parts of the network. However, it may reach
routers that are not members of the group, which is wasteful. Deering and Cheriton (1990) proposed
pruning the broadcast spanning tree by removing links that do not lead to members,
resulting in an efficient multicast spanning tree.
• For example, consider two groups, 1 and 2, with routers attached to hosts belonging to one
or both groups. A spanning tree for the leftmost router can be used for broadcast but
is overkill for multicast. A multicast spanning tree for the leftmost router sends packets
only along this tree, which is more efficient than the broadcast tree due to the number of
links. Different multicast groups have different spanning trees.
• The spanning tree can be pruned using link state routing, where each router is aware of the
complete topology and can construct its own pruned tree for each sender to the group.
This is an example of a link state protocol called MOSPF (Multicast OSPF).
• Distance vector routing follows a different pruning strategy; the basic algorithm is reverse path
forwarding. When a router with no hosts interested in a particular group receives a multicast message
for that group, it responds with a PRUNE message, telling the neighbor not to send it any more
multicasts for that group. In this way the spanning tree is pruned recursively. DVMRP (Distance
Vector Multicast Routing Protocol) is an example of this method.
• Core-based trees are an alternative design for group routing, in which the routers agree on a root
(called the core or rendezvous point) and build the tree by having each member send a packet toward
the root. The tree is the union of the paths traced by these packets. To send to group 1, a sender
sends a packet to the core, which forwards it down the tree. As a performance optimization, packets
need not reach the core before being multicast: as soon as a packet reaches the tree, it can be
forwarded up toward the root as well as down all the other branches.
• Shared trees are not optimal for all sources; they can be inefficient depending on the location of
the core and the senders. However, they offer significant savings in storage cost, messages sent, and
computation: each router needs to keep only one tree per group, and routers that are not part of the
tree do no work to support the group. This is why core-based trees are used for multicasting to
sparse groups in the Internet, as part of popular protocols such as PIM.
5.2.9 Anycast Routing
• Delivery models such as unicast, broadcast, and multicast are used to send packets to
specific destinations. Anycast, a variant, delivers a packet to the nearest member of a group.
Anycast routing schemes find these paths.
• This is useful when nodes provide services like time of day or content distribution, where
the right information is received regardless of the contact node. Anycast is used in the
Internet as part of DNS. Regular distance vector and link state routing can produce
anycast routes.
• With ordinary distance vector routing, anycast falls out naturally: all members of the group are
given the same address (say, 1), and distances to it are distributed as usual, so nodes send to the
nearest instance of destination 1. This works because the routing protocol does not realize there are
multiple instances of destination 1; it believes all of them are the same node.
• Mobile routing as described so far results in triangle routing: packets from a sender travel via the
mobile host's home location, while the mobile host sends its reply packets directly back to the sender.
This route may be circuitous if the remote location is far from the home location.
• As part of step 4, the sender may learn the current care-of address. Subsequent packets can then be
routed directly to the mobile host by tunneling them to the care-of address, bypassing the home
location entirely. If connectivity is lost, the home address can always be used to reach the mobile host.
• Security is an important aspect: messages in this scheme may need to be checked with cryptographic
protocols. The scheme is modeled on IPv6 mobility, which is used in the Internet and in IP-based
cellular networks such as UMTS. Extensions of the basic scheme support mobile networks without any
work on the part of the hosts.
Numerous routing algorithms for ad hoc networks have been proposed, though their practical
use so far is limited compared with routing for mobile hosts. One popular algorithm is AODV (Ad hoc
On-demand Distance Vector), a relative of distance vector routing adapted to a mobile environment in
which bandwidth and battery lifetimes are limited. We will now look at how it discovers and
maintains routes.
• Route Discovery: AODV discovers routes to a destination on demand, that is, only when somebody
wants to send a packet to that destination. This saves work that would otherwise be wasted when the
topology changes before the route is used. The topology of an ad hoc network can be described by a
graph of connected nodes, with two nodes connected if they can communicate directly using their
radios. For simplicity, all connections are assumed to be symmetric.
• Consider a newly formed ad hoc network in which a process at node A wants to send a packet to
node I. Each node maintains a distance vector table, keyed by destination, giving information about
that destination, including the neighbor to which packets for it should be sent. If A finds no entry
for node I in its table, it must discover a route to I. This "on demand" discovery is what makes the
algorithm efficient.
• A broadcasts a ROUTE REQUEST packet for I, which is propagated by flooding. The packet is
rebroadcast until it reaches nodes F, G, C, H, E, and finally I. A sequence number set at the source
is used to weed out duplicates during the flood.
• When the request reaches node I, it constructs a ROUTE REPLY packet. This packet is unicast along
the reverse of the path followed by the request; for this to work, intermediate nodes must remember
the node that sent them the request. The hop count is incremented as the reply is forwarded, and the
reply tells each intermediate node which neighbor to use to reach the destination. Intermediate nodes
G and D insert the best route they hear into their routing tables. When the reply reaches A, a new
route, ADGI, has been created.
• The algorithm can generate many broadcasts in large networks, even for destinations that are close
by. To reduce this overhead, the scope of a broadcast is limited using the IP packet's Time to live
field; when the field reaches 0, the packet is discarded. Route discovery first broadcasts a ROUTE
REQUEST with Time to live set to 1, and if no reply arrives, the search is repeated in increasingly
wider rings, starting locally and eventually going network-wide.
• Route Maintenance : The topology of a network can change spontaneously due to nodes
moving or being switched off. Periodically, each node broadcasts a Hello message to its neighbors,
each of which is expected to respond. If no response arrives, the broadcaster knows that the neighbor
has moved out of range or failed; similarly, if it sends a packet to a neighbor that does not respond,
it learns that the neighbor is no longer available.
• This information is used to purge routes that no longer work. Each node, N, tracks its active
neighbors, those that have sent it a packet for some destination during the last ΔT seconds. When one
of N's neighbors becomes unreachable, N checks its routing table to see which routes use the now-gone
neighbor. Those routes are purged, the active neighbors are informed, and the process repeats
recursively until all routes depending on the now-gone node are purged from all routing tables.
• Invalid routes are removed from the network, and senders can find new valid ones using the
discovery mechanism. However, distance vector protocols can suffer from slow convergence
or count-to-infinity problems after topology changes. To ensure rapid convergence, routes
include a sequence number controlled by the destination, which increments every time a fresh
ROUTE REPLY is sent. Senders request a fresh route by including the destination sequence
number of the last used route in the ROUTE REQUEST.
• Intermediate nodes store only routes that have a higher sequence number, or that have fewer hops
for the current sequence number. This on-demand protocol saves bandwidth and battery life compared
with a standard distance vector protocol that broadcasts updates periodically.
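The freshness rule can be sketched as a simple comparison; the tuple layout below is an assumption made for illustration:

```python
# AODV route acceptance: prefer a higher destination sequence number, and
# for equal sequence numbers prefer the route with fewer hops.

def accept_route(current, candidate):
    """Routes are (dest_seq, hop_count) pairs; True means replace current."""
    if candidate[0] != current[0]:
        return candidate[0] > current[0]   # strictly fresher wins
    return candidate[1] < current[1]       # same freshness: fewer hops wins

print(accept_route((5, 4), (6, 9)))   # True: higher sequence number
print(accept_route((5, 4), (5, 3)))   # True: same freshness, fewer hops
print(accept_route((5, 4), (4, 1)))   # False: stale, however short
```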
• Other ad hoc routing schemes include DSR (Dynamic Source Routing) and GPSR (Greedy Perimeter
Stateless Routing). Nodes can share route discovery and maintenance work when their routes overlap,
which saves resources. For example, if B wants to send packets to I, its route discovery may reach D,
which already has a route to I. Which protocol is preferable depends on the kinds of ad hoc networks
that prove useful in practice.
● Too many packets present in (a part of) the network causes packet delay and loss that degrades
performance. This situation is called congestion.
● The network and transport layers share responsibility for handling congestion. The network layer
directly experiences congestion and must decide on handling excess packets.
● The most effective way to control congestion is to reduce the load placed by the transport layer on
the network, requiring collaboration between the two layers.
● The figure above depicts the onset of congestion. When the number of packets hosts send into the
network is well within its carrying capacity, the number delivered is proportional to the number sent:
if twice as many packets are sent, twice as many are delivered.
● However, as traffic loads approach the capacity, some packets are lost, consuming some of the
capacity, causing the number of delivered packets to fall below the ideal curve, resulting in congested
networks.
● A poorly designed network can experience congestion collapse, where performance decreases as load
increases beyond capacity. This occurs when packets are delayed, making them no longer useful when
they leave the network.
● In the early Internet, packets could spend so long waiting in queues that senders timed out and re-
transmitted them, wasting capacity. To capture these factors, the y-axis of Fig. 3.1 is given
as goodput, which is the rate at which useful packets are delivered by the network.
● Congestion is a common issue in networks, causing queues and packet loss. Adding more memory
can help, but it can worsen congestion as packets time out and duplicates are sent. Low-bandwidth
links or routers that process packets slowly can also become congested.
● To relieve congestion, traffic can be directed away from a bottleneck, but eventually all regions
of the network will be congested; at that point the network must be redesigned with more capacity.
● Congestion control and flow control are two distinct concepts in network management. Congestion
control ensures the network can handle the offered traffic, involving all hosts and routers' behavior.
Flow control focuses on the traffic between a particular sender and a particular receiver, ensuring a fast
sender cannot transmit data faster than the receiver can absorb it.
● Congestion control is often confused with flow control, as the best way to handle both problems is
to get the host to slow down. Understanding congestion control involves examining approaches at
different time scales, preventing congestion, and coping with it once it has set in.
1. The presence of congestion means that the load is (temporarily) greater than the resources (in a part
of the network) can handle. Two solutions come to mind: increase the resources or decrease the
load. As shown in Fig. 5-22, these solutions are usually applied on different time scales to either
prevent congestion or react to it once it has occurred.
2. The most basic way to avoid congestion is to build a network that is well matched to the traffic it
carries. Resources can also be added dynamically during congestion, such as turning on spare routers
or enabling backup lines.
3. Provisioning involves upgrading heavily utilized links and routers at the earliest opportunity. Routes
can be tailored to changing traffic patterns, such as shifting traffic away from heavily used paths.
4. Some local radio stations have helicopters flying around their cities to report on road congestion
so that their mobile listeners can route their packets (cars) around hotspots. This is called
traffic-aware routing. Splitting traffic across multiple paths is also helpful.
5. However, sometimes it is not possible to increase capacity. The only way then to beat back the
congestion is to decrease the load. In a virtual-circuit network, new connections can be refused if
they would cause the network to become congested. This is called admission control.
6. At a finer level, when congestion is imminent the network can request that the traffic sources
causing the problem throttle their traffic. To identify the onset of congestion, routers can
monitor average load, queueing delay, or packet loss; rising numbers indicate growing congestion.
7. To control sources, routers must participate in a feedback loop. The timescale needs careful
adjustment:
(a) If feedback is too frequent, the system will oscillate wildly.
(b) If feedback is too slow, congestion control will react sluggishly.
8. Sending feedback messages during congestion adds more network load. When all else fails, network
has to discard packets it cannot deliver, which is called load shedding. Good packet discard policies
can help prevent congestion collapse.
2. The most direct way to make routing traffic-aware is to set the link weight to be a function of the
fixed link bandwidth and propagation delay plus the variable measured load or average queueing delay.
Least-weight paths will then favor paths that are more lightly loaded, all else being equal.
3. Traffic-aware routing was used in the early Internet according to this model by Khanna and Zinky in
1989. However, there is a risk. Consider the network of Fig. 3.3, which is divided into two parts,
East and West, connected by two links, CF and EI. Suppose that most of the traffic between East
and West is using link CF, and, as a result, this link is heavily loaded with long delays. Including
queueing delay in the weight used for the shortest path calculation will make EI more attractive.
After the new routing tables have been installed, most of the East-West traffic will now go over EI,
loading this link. Consequently, in the next update, CF will appear to be the shortest path. As a
result, the routing tables may oscillate wildly, leading to erratic routing and many potential
problems.
Figure 3.3 A network in which the East and West parts are connected by two links.
4. If load is ignored and only bandwidth and propagation delay are considered, this problem does not
occur. Attempts to include load but change weights within a narrow range only slow down routing
oscillations. Two techniques can contribute to a successful solution. The first is multipath routing,
in which there can be multiple paths from a source to a destination. In our example this means that
the traffic can be spread across both of the East-to-West links.
5. The second is for the routing scheme to shift traffic across routes slowly enough that it is able to
converge, as in the scheme of Gallagher (1977). Given these difficulties, routing protocols in the
Internet do not generally adjust their routes depending on the load. Instead, adjustments are made
outside the routing protocol by slowly changing its inputs. This is called traffic engineering.
3. For admission control to work, the new circuit must provide some characterization of its
traffic. Traffic can be described in terms of its rate and shape, but a simple description is
difficult because traffic is typically bursty.
4. The average rate only tells half the story since traffic is often bursty. Bursty traffic like web traffic
is harder to handle than streaming traffic with the same throughput since web traffic bursts are more
likely to congest routers.
5. A commonly used method to capture this effect is the leaky bucket or token bucket model.
The leaky bucket has two parameters:
(a) The average rate which bounds the long term traffic rate
(b) The burst size which bounds the maximum burst of traffic
6. The network can use traffic descriptions for admission control when deciding whether to admit new
virtual circuits. One option is for the network to reserve enough capacity along circuit paths to
prevent congestion. This guarantees quality of service for users.
7. Even without guarantees, the network can estimate how many circuits will fit within capacity
without congestion using traffic descriptions. For example, 10 circuits at up to 10 Mbps passing
through a 100 Mbps link can be admitted to avoid congestion. But this is wasteful since circuits may
not transmit at full speed at the same time.
8. Measurements of past behavior and statistics can be used to estimate the right number of circuits to
admit, trading better performance for acceptable risk.
9. Admission control can also be combined with traffic-aware routing by considering routes around
traffic hotspots as part of the setup procedure. For example, consider the network illustrated in Fig.
3.4(a), in which two routers are congested, as indicated.
Figure 3.4. (a) A congested network. (b) The portion of the network that is not congested. A virtual circuit from A
to B is also shown.
10. Suppose that a host attached to router A wants to set up a connection to a host attached to router B.
Normally, this connection would pass through one of the congested routers. To avoid this situation,
we can redraw the network as shown in Fig. 3.4(b), omitting the congested routers and all of their
lines. The dashed line shows a possible route for the virtual circuit that avoids the congested routers.
Let us now look at some approaches to throttling traffic that can be used in both datagram networks
and virtual-circuit networks. Each approach must solve two problems.
Problem 1- The router must be able to determine when congestion is approaching, ideally before it
arrives. To do so, each router must continuously monitor the resources it is using. Three useful
quantities to monitor are:
• The utilization of the output links
• The buffering of queued packets inside the router
• The number of packets that are lost because of insufficient buffering
Problem 2- The router must deliver timely feedback to the senders that are creating the congestion.
To deliver this feedback, the router must identify the right senders and warn them efficiently,
without sending yet more packets into an already congested network. Different feedback mechanisms
are used to solve this problem.
Feedback Mechanisms
1. Choke packets
Choke packets are a mechanism in which the router sends a choke packet directly back to the source
host of a packet that is contributing to the congestion. A header bit in the original packet is turned
on so that it does not generate further choke packets farther along the path. When congestion sets in,
the router sends choke packets at only a low rate, to limit the load it adds. In a datagram network,
packets are picked at random when congestion occurs, so the fastest senders are the most likely to be
choked. The diagram below describes the choke packet approach.
2. Explicit Congestion Notification
In the explicit congestion notification (ECN) approach, the router does not send extra packets to the
host but instead sets a bit in the header of a packet it forwards, to signal that the network is
approaching congestion. When the packet is delivered, the destination echoes the signal back to the
sender in its reply packets, so the sender learns that congestion has occurred. As in the case of
choke packets, the sender then throttles its transmission. The diagram below describes explicit
congestion notification.
3. Hop-by-Hop Backpressure
After congestion has been signaled, many more packets may still arrive from senders over long
distances because of the time the signal takes to reach them. The aim of the hop-by-hop backpressure
technique is to provide faster relief at the point of congestion: the choke packet takes effect at
every hop it passes through, at the price of requiring more buffers at each router along the way.
This technique propagates in the direction opposite to the data and is mainly used in virtual
circuits. The diagram below describes hop-by-hop backpressure in detail.
1. Load shedding is one of the techniques used for congestion control. A network router contains
buffers that store packets until they can be routed toward their destination. Load shedding is the
practice of discarding packets when the buffers are full, according to some strategy. The selection
of which packets to discard is an important task; often less important packets or old packets are
discarded.
2. The term comes from the world of electrical power generation, where it refers to the practice
of utilities intentionally blacking out certain areas to save the entire grid from collapsing on
hot summer days when the demand for electricity greatly exceeds the supply.
3. Selection of Packets to Be Discarded: In load shedding, packets must be discarded to avoid
congestion, so the question is which packets to drop. Random early detection (RED) is one of the
approaches used to choose them.
5. To determine when to start discarding, routers maintain a running average of their queue
lengths. When the average queue length on some link exceeds a threshold, the link is said to
be congested and a small fraction of the packets are dropped at random.
6. Picking packets at random makes it more likely that the fastest senders will see a packet drop;
this is the best option, since the router cannot tell which source is causing the most trouble in
a datagram network. The affected sender will notice the loss when there is no acknowledgement, and
its transport protocol will then slow down. The lost packet thus delivers the same message as a
choke packet, but implicitly, without the router sending any explicit signal.
7. RED routers improve performance compared to routers that drop packets only when their
buffers are full, though they may require tuning to work well. For example, the ideal number
of packets to drop depends on how many senders need to be notified of congestion.
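A RED-style drop decision can be sketched in a few lines. The smoothing factor, threshold, and drop probability below are illustrative tuning parameters, not values from the text:

```python
import random

AVG_WEIGHT, THRESHOLD, DROP_PROB = 0.1, 50.0, 0.02
avg_queue = 0.0

def on_arrival(queue_len: int) -> bool:
    """Return True if the arriving packet should be dropped."""
    global avg_queue
    # Running (exponentially weighted) average of the queue length.
    avg_queue = (1 - AVG_WEIGHT) * avg_queue + AVG_WEIGHT * queue_len
    # Once the average exceeds the threshold, drop a small random fraction.
    return avg_queue > THRESHOLD and random.random() < DROP_PROB

for q in [10, 40, 80, 120, 90]:
    print(q, on_arrival(q))
```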
8. However, ECN is the preferred option if it is available. It works in exactly the same manner, but
delivers a congestion signal explicitly rather than as a loss; RED is used when hosts cannot receive
explicit signals.
Overprovisioning
• An easy solution for ensuring good quality of service is to build a network with sufficient
capacity to handle any anticipated traffic.
• This approach, known as overprovisioning, allows the network to carry application traffic
with minimal loss and low latency, assuming a decent routing scheme.
• The result is optimal performance, and the telephone system is an example of a somewhat
overprovisioned system, evident in the near-instant dial tone availability.
• Definition of Flow:
• A stream of packets moving from a source to a destination is termed a flow (Clark, 1988).
• Flows can encompass all packets in a connection in a connection-oriented network or all
packets sent from one process to another in a connectionless network.
• Characterization of Flow Needs:
• Each flow's requirements can be described by four primary parameters: bandwidth,
delay, jitter, and loss.
• These parameters collectively determine the Quality of Service (QoS) necessary
for the flow.
• Common Applications and Network Requirements:
• Fig. 5-27 lists various common applications and the level of stringency in their
network requirements.
• Notably, network requirements are less demanding when an application can
enhance the service provided by the network.
• Example Contrasts:
• Networks do not necessarily need to be lossless for reliable file transfer, as some
loss can be rectified with retransmissions.
• Delivery of packets with identical delays is not crucial for audio and video playout,
as some jitter can be smoothed by buffering packets at the receiver.
• Bandwidth Requirements:
• Applications exhibit varying bandwidth needs.
• Email, audio in all forms, and remote login have relatively low bandwidth
requirements.
• File sharing and video in all forms demand a significant amount of bandwidth.
• Delay Sensitivity:
• File transfer applications, including email and video, are not highly sensitive to
delay.
• Interactive applications like Web surfing and remote login are more delay-
sensitive.
• Real-time applications such as telephony and videoconferencing have strict delay
requirements.
• Jitter and its Impact:
• Jitter, the variation in delay or packet arrival times, is crucial.
• Email, audio, and file transfer are generally not sensitive to jitter.
• Remote login may be affected by jitter, causing bursts of updates on the screen.
• Video and audio are extremely sensitive to jitter; even small variations can have a
significant impact.
• Loss Tolerance:
• The first four applications in the figure have more stringent loss requirements,
requiring all bits to be delivered correctly.
• Retransmissions are typically used to achieve this goal.
• Audio and video applications can tolerate some lost packets without
retransmission, as users may not notice short pauses or occasional skipped frames.
• Network Support for QoS:
• Networks may support different categories of Quality of Service (QoS) to
accommodate various applications.
• ATM networks, once part of a grand networking vision, provide an example of
supporting different QoS categories but have become a niche technology.
• Networks may support different categories of QoS.
1. Constant bit rate (e.g., telephony).
2. Real-time variable bit rate (e.g., compressed videoconferencing).
3. Non-real-time variable bit rate (e.g., watching a movie on demand).
4. Available bit rate (e.g., file transfer).
5.4.2 Traffic Shaping
• Traffic Characterization:
• Before a network can make Quality of Service (QoS) guarantees, it needs to know
the nature of the traffic being guaranteed.
• In the telephone network, the characterization is straightforward, such as a
voice call needing 64 kbps with specific sampling intervals.
• Bursty Nature of Data Networks:
• Traffic in data networks is often bursty due to variations in traffic rates (e.g.,
compressed videoconferencing), user interactions (e.g., browsing new web pages),
and task switching by computers.
• Traffic Shaping:
• Traffic shaping is a technique that regulates the average rate and burstiness of data
flow entering the network.
• It allows applications to transmit diverse traffic patterns while providing a simple
and useful way to describe these patterns to the network.
• The customer and provider agree on a traffic pattern (shape) for a flow, often
outlined in a Service Level Agreement (SLA) for long-term commitments.
• SLA and Traffic Monitoring:
• The Service Level Agreement (SLA) outlines the agreed-upon traffic patterns
between the customer and provider.
• Traffic shaping, when adhered to by the customer, helps reduce congestion and
ensures the network can fulfill its promises.
• Traffic policing involves monitoring a traffic flow to verify adherence to the
agreement, potentially dropping excess packets or marking them with lower priority.
• Importance of Shaping and Policing:
• Shaping and policing are critical for real-time data, such as audio and video
connections, which have stringent quality-of-service requirements.
• While less important for peer-to-peer transfers that consume all available bandwidth,
they are crucial for maintaining QoS commitments.
Leaky and Token Buckets
• Now we will look at a more general way to characterize traffic, with the leaky bucket and
token bucket algorithms. The formulations are slightly different but give an equivalent
result.
• Try to imagine a bucket with a small hole in the bottom, as illustrated in Fig. 5-28(b). No
matter the rate at which water enters the bucket, the outflow is at a constant rate, R, when
there is any water in the bucket and zero when the bucket is empty. Also, once the bucket is
full to capacity B, any additional water entering it spills over the sides and is lost.
• This bucket can be used to shape or police packets entering the network, as shown in Fig.
5-28(a). Conceptually, each host is connected to the network by an interface containing a
leaky bucket. To send a packet into the network, it must be possible to put more water into
the bucket. If a packet arrives when the bucket is full, the packet must either be queued until
enough water leaks out to hold it or be discarded.
• The former might happen at a host shaping its traffic for the network as part of the operating
system. The latter might happen in hardware at a provider network interface that is policing
traffic entering the network. This technique was proposed by Turner (1986) and is called the
leaky bucket algorithm.
• A different but equivalent formulation is to imagine the network interface as a bucket that
is being filled, as shown in Fig. 5-28(c). The tap is running at rate R and the bucket has a
capacity of B, as before. Now, to send a packet we must be able to take water, or tokens, as
the contents are commonly called, out of the bucket (rather than putting water into the
bucket).
• No more than a fixed number of tokens, B, can accumulate in the bucket, and if the
bucket is empty, we must wait until more tokens arrive before we can send another packet.
This algorithm is called the token bucket algorithm. Leaky and token buckets limit the long-
term rate of a flow but allow short term bursts up to a maximum regulated length to pass
through unaltered and without suffering any artificial delays.
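A minimal token bucket sketch follows, with the policing behavior (drop when out of tokens) as the assumed action; a shaper would queue the packet instead:

```python
class TokenBucket:
    """Tokens accumulate at rate R up to capacity B; a packet is sent only
    if enough tokens can be removed from the bucket."""

    def __init__(self, rate_bytes_per_s: float, capacity_bytes: float):
        self.rate, self.capacity = rate_bytes_per_s, capacity_bytes
        self.tokens, self.last = capacity_bytes, 0.0

    def allow(self, packet_bytes: int, now: float) -> bool:
        # Refill for the elapsed time, capped at the capacity B.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes   # spend tokens and send
            return True
        return False                      # police: drop (a shaper would queue)

# Parameters echo the example above: R = 25 MB/sec, B = 9600 KB.
tb = TokenBucket(rate_bytes_per_s=25e6, capacity_bytes=9.6e6)
print(tb.allow(1500, now=0.001))          # a small packet passes
```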
• Large bursts will be smoothed by a leaky bucket traffic shaper to reduce congestion in the
network. As an example, imagine that a computer can produce data at up to 1000 Mbps (125
million bytes/sec) and that the first link of the network also runs at this speed.
• The pattern of traffic the host generates is shown in Fig. 5-29(a). This pattern is bursty. The
average rate over one second is 200 Mbps, even though the host sends a burst of 16,000 KB
at the top speed of 1000 Mbps (for 1/8 of the second).
• Now suppose that the routers can accept data at the top speed only for short intervals, until
their buffers fill up. The buffer size is 9600 KB, smaller than the traffic burst. For long
intervals, the routers work best at rates not exceeding 200 Mbps (say, because this is all the
bandwidth given to the customer). The implication is that if traffic is sent in this
pattern, some of it will be dropped in the network because it does not fit into the buffers at
routers.
• The maximum burst length S can be derived as follows: an initial burst contains the full bucket of
B bytes plus the RS bytes of tokens that arrive while the burst is being sent, and it drains at the
maximum rate M, so MS = B + RS. We can solve this equation to get S = B/(M − R). For our parameters
of B = 9600 KB, M = 125 MB/sec, and R = 25 MB/sec, we get a burst time of about 94 msec. A potential
problem with the token bucket algorithm is that it reduces large bursts down to the long-term rate R.
Using all of these buckets can be a bit tricky. When token buckets are used for traffic shaping at
hosts, packets are queued and delayed until the buckets permit them to be sent. When token buckets
are used for traffic policing at routers in the network, the algorithm is simulated to make sure that
no more packets are sent than permitted. Nevertheless, these tools provide ways to shape the network
traffic into more manageable forms to assist in meeting quality-of-service requirements.
• Many packet scheduling algorithms have been devised that provide stronger isolation
between flows and thwart attempts at interference (Bhatti and Crowcroft, 2000). One of the
first ones was the fair queueing algorithm devised by Nagle (1987). The router maintains a separate
queue for each flow on a given output line; when the line becomes idle, the router scans the queues
round-robin, as shown in Fig. 5-30.
• It then takes the first packet on the next queue. In this way, with n hosts competing for the
output line, each host gets to send one out of every n packets. It is fair in the sense that all
flows get to send packets at the same rate. Sending more packets will not improve this
rate. Although a start, the algorithm has a flaw: it gives more bandwidth to hosts that use
large packets than to hosts that use small packets. Demers et al. (1989) suggested an improvement
in which the round-robin is done in such a way as to simulate a byte-by-byte round-robin
instead of a packet-by-packet round-robin.
• The trick is to compute a virtual time that is the number of the round at which each packet
would finish being sent. Each round drains a byte from all of the queues that have data to
send. The packets are then sorted in order of their finishing times and sent in that order. This
algorithm and an example of finish times for packets arriving in three flows are illustrated
in Fig. 5-31. If a packet has length L, the round at which it will finish is simply L rounds
after the start time. The start time is either the finish time of the previous packet or the
arrival time of the packet, if the queue is empty when it arrives.
• From the table in Fig. 5-31(b), and looking only at the first two packets in the top two
queues, packets arrive in the order A, B, D, and F. Packet A arrives at round 0 and is 8 bytes
long, so its finish time is round 8. Similarly, the finish time for packet B is 11. Packet D
arrives while B is being sent. Its finish time is 9 byte-rounds after it starts when B finishes,
or 20. Similarly, the finish time for F is 16. In the absence of new arrivals, the relative
sending order is A, B, F, D, even though F arrived after D. It is possible that
another small packet will arrive on the top flow and obtain a finish time before D. It will
only jump ahead of D if the transmission of D has not started.
• One shortcoming of this algorithm in practice is that it gives all hosts the same priority. In
many situations, it is desirable to give, for example, video servers more bandwidth than,
say, file servers. This is easily possible by giving the video server two or more bytes per
round. This modified algorithm is called WFQ (Weighted Fair Queueing). Letting the
number of bytes per round be the weight of a flow, W, we can now give the formula for
computing the finish time:
Fi = max(Ai, Fi−1) + Li / W
• where Ai is the arrival time, Fi is the finish time, and Li is the length of packet i. The bottom
queue of Fig. 5-31(a) has a weight of 2, so its packets are sent more quickly as you can see
in the finish times given in Fig. 5-31(b).
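The finish-time bookkeeping for one flow can be sketched directly from the formula; the packet lists below are invented for illustration:

```python
# Weighted fair queueing finish times for one flow:
# F_i = max(A_i, F_{i-1}) + L_i / W

def finish_times(packets, weight=1.0):
    """packets: list of (arrival_round, length_bytes) pairs for one flow."""
    f_prev, result = 0.0, []
    for arrival, length in packets:
        f = max(arrival, f_prev) + length / weight
        result.append(f)
        f_prev = f
    return result

print(finish_times([(0, 8), (5, 6)]))            # [8.0, 14.0]
print(finish_times([(0, 8), (5, 6)], weight=2))  # [4.0, 8.0]: served sooner
```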
• As a final example of a scheduler, packets might carry timestamps and be sent in timestamp
order. Clark et al. (1992) describes a design in which the timestamp records how far the
packet is behind or ahead of schedule as it is sent through a sequence ofrouters on the
path. Packets that have been queued behind other packets at a router will tend to be behind
schedule, and the packets that have been serviced first will tend to be ahead of schedule.
Sending packets in order of their timestamps has the beneficial effect of
speeding up slow packets while at the same time slowing down fast packets. The result is
that all packets are delivered by the network with a more consistent delay.
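As a rough illustration of sending in timestamp order, here is a minimal Python sketch;
treating the timestamp as a deadline-like sort key is an assumption made for the example,
not a detail taken from the design itself.

```python
import heapq

# Minimal sketch: the router keeps a min-heap keyed on each packet's
# timestamp and always transmits the smallest one first. A packet that is
# behind schedule carries an earlier (smaller) timestamp, so it is sent
# ahead of packets that are running early.

heap = []   # min-heap of (timestamp, packet)

def enqueue(timestamp, packet):
    heapq.heappush(heap, (timestamp, packet))

def transmit_next():
    timestamp, packet = heapq.heappop(heap)
    return packet

enqueue(17.5, "packet ahead of schedule")
enqueue(12.0, "packet behind schedule")
print(transmit_next())   # the behind-schedule packet goes first
```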
Three different kinds of resources can potentially be reserved for different flows:
• Bandwidth Reservation:
• Bandwidth is a critical resource.
If each flow requires 1 Mbps and the outgoing line has a capacity of 2 Mbps, at most
two such flows can be accepted; reserving bandwidth means not oversubscribing the
output line.
• Buffer Space Allocation:
• Buffer space is another crucial resource.
• Packets arriving at a router are buffered until they can be transmitted on the chosen
outgoing line.
• Buffers absorb small bursts of traffic as flows contend with each other.
• For quality of service, some buffers might be reserved for specific flows,
preventing competition with other flows for buffer space.
• Maintaining a reserve ensures that a buffer is available when a flow needs it, up to
a specified maximum value.
• CPU Cycle Management:
• CPU cycles in routers are a potential scarce resource.
• Processing a packet consumes router CPU time, limiting the number of packets the
router can handle per second.
• Certain packets, such as ICMP packets, may require more CPU processing.
• Avoiding CPU overload is essential to ensure timely processing of packets,
particularly those that demand additional computational resources.
• The network then decides to accept or reject the flow based on its capacity and its
commitments to other flows, as sketched below.
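A hedged sketch of this accept/reject decision, assuming each of the three resources above
can be expressed as a single spare quantity (all field names and numbers are illustrative):

```python
# A flow is admitted only if every reservable resource fits within the
# router's remaining spare capacity. Illustrative only: real admission
# control must first translate the flow's description into these quantities.

def admit(flow, router):
    return (flow["bandwidth_bps"] <= router["spare_bandwidth_bps"] and
            flow["buffers"]       <= router["spare_buffers"] and
            flow["pkts_per_sec"]  <= router["spare_cpu_pkts_per_sec"])

router = {"spare_bandwidth_bps": 1_000_000,   # 1 Mbps left on the line
          "spare_buffers": 64,                # unreserved packet buffers
          "spare_cpu_pkts_per_sec": 10_000}   # CPU headroom
flow = {"bandwidth_bps": 800_000, "buffers": 32, "pkts_per_sec": 2_000}
print(admit(flow, router))   # True: the router can commit to this flow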
• Reservations and Path Considerations:
• Reservations must be made at all routers along the route to prevent congestion.
• Some routing algorithms send all traffic over the best path, potentially causing
rejection of flows if there's insufficient spare capacity.
• QoS routing and splitting traffic over multiple paths are techniques to
accommodate QoS guarantees for new flows.
• Complexity in Acceptance or Rejection Decision:
• The decision to accept or reject a flow involves more than comparing requested
resources with router excess capacity.
• Applications may not have knowledge of buffers or CPU cycles, necessitating a
different way to describe flows and translate descriptions to router resources.
• Tolerance Levels and Guarantees:
• Some applications are more tolerant of occasional missed deadlines than others.
• Applications must choose between hard guarantees or behavior that holds most of
the time.
• Hard guarantees are expensive due to constraining worst-case behavior, so many
applications are satisfied with guarantees for most packets.
• Negotiation of Flow Parameters:
• Some applications may be willing to adjust flow parameters, while others may not.
• Properties like the number of pixels per frame and audio bandwidth may be
negotiable for certain applications.
• Importance of Accurate Flow Description:
• Due to the involvement of multiple parties in flow negotiation (sender, receiver,
routers along the path), accurate flow description is crucial.
• Flows need to be described in terms of specific negotiable parameters.
• Flow Specification:
• A set of parameters that describe a flow is termed a flow specification.
• The sender, such as a video server, typically generates a flow specification
proposing desired parameters.
• As the specification travels along the route, each router may modify parameters as
necessary, but modifications can only reduce the flow, not increase it.
• Parameter Modifications:
• Each router along the path examines and modifies the flow specification.
• Modifications are allowed to decrease the flow, such as lowering the data rate, but
not to increase it.
• Establishment of Parameters:
• When the flow specification reaches the destination, the negotiated parameters can
be established; a sketch of this reduce-only negotiation follows below.
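To make the reduce-only negotiation concrete, here is a small Python sketch. The
parameter names are token-bucket-style assumptions, not a real wire format; the point is
only that each hop may lower, never raise, the proposed values.

```python
from dataclasses import dataclass

@dataclass
class FlowSpec:                 # illustrative, token-bucket-style parameters
    token_rate_bps: float       # sustained rate the flow may send at
    bucket_size_bytes: float    # largest burst the flow may emit
    peak_rate_bps: float        # short-term ceiling on the sending rate

def restrict(proposed: FlowSpec, supported: FlowSpec) -> FlowSpec:
    """What one router does: clamp each parameter down, never up."""
    return FlowSpec(
        min(proposed.token_rate_bps, supported.token_rate_bps),
        min(proposed.bucket_size_bytes, supported.bucket_size_bytes),
        min(proposed.peak_rate_bps, supported.peak_rate_bps),
    )

# The sender's proposal is successively restricted at each hop, so the
# spec that reaches the destination is what the whole path can support.
spec = FlowSpec(4e6, 64_000, 8e6)
for limit in (FlowSpec(10e6, 32_000, 10e6), FlowSpec(2e6, 128_000, 4e6)):
    spec = restrict(spec, limit)
print(spec)   # token_rate 2 Mbps, bucket 32,000 bytes, peak 4 Mbps
```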
• Queueing Delay and Burst Size:
• The largest queueing delay experienced by a flow is a function of the burst size of
the token bucket.
• In cases of smooth traffic without bursts, packets are drained from the router as
quickly as they arrive, resulting in no queueing delay (ignoring packetization
effects).
• Conversely, if traffic is saved up in bursts, the maximum queueing delay occurs
when a maximum-size burst arrives at the router all at once. The maximum delay is then
the time taken to drain this burst at the guaranteed bandwidth, or D = B/R, where B is
the maximum burst size and R is the guaranteed rate (ignoring packetization effects). A
worked example follows below.
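A quick worked example of D = B/R with illustrative numbers (the 16-KB burst and
1-Mbps guarantee are assumptions, not values from the text):

```python
# Worst-case queueing delay D = B/R: the time to drain one maximum-size
# burst from the token bucket at the flow's guaranteed bandwidth.

B = 16_000 * 8        # maximum burst size: 16 KB, expressed in bits
R = 1_000_000         # guaranteed bandwidth: 1 Mbps
D = B / R             # worst-case queueing delay, in seconds
print(f"D = {D * 1000:.0f} ms")   # D = 128 ms
```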
• Guarantees and Token Buckets:
• Guarantees provided by token buckets are hard guarantees.
• Token buckets limit the burstiness of the traffic source (a sketch follows below).
• Fair queueing isolates the bandwidth allocated to different flows, ensuring that
each flow meets its bandwidth and delay guarantees regardless of the behaviour of
other competing flows at the router.
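A minimal sketch of a token-bucket policer, assuming a simple bytes-as-tokens model
(parameter values are illustrative):

```python
import time

class TokenBucket:
    """Tokens accrue at rate R (bytes/s) up to bucket size B; a packet of
    length L conforms only if L tokens are available. This is what caps
    the burstiness of a traffic source."""

    def __init__(self, rate_bytes_per_s, bucket_bytes):
        self.rate = rate_bytes_per_s
        self.capacity = bucket_bytes
        self.tokens = bucket_bytes        # start with a full bucket
        self.last = time.monotonic()

    def conforms(self, pkt_len):
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the bucket size B.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_len <= self.tokens:
            self.tokens -= pkt_len        # spend tokens and send
            return True
        return False                      # over the limit: queue or drop

bucket = TokenBucket(rate_bytes_per_s=125_000, bucket_bytes=16_000)
print(bucket.conforms(1500))   # True while burst credit remains
```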
• Isolation of Flows:
• Competing flows at the router cannot break the guarantee even if they save up traffic
and send it all at once.
• The combination of token buckets and fair queueing ensures that each flow's
guarantees remain intact.
• Path Through Multiple Routers:
• The guarantees hold for a path through multiple routers in any network topology.
• Each flow receives a minimum bandwidth at each router, as guaranteed.
• Maximum delay for each flow is ensured, even in the worst-case scenario where a
burst of traffic hits the first router and competes with other flows. The burst's delay
is at most D, and this delay smooths the burst, preventing further queueing delays
at later routers.
• Functionality of RSVP:
As an example, consider the network of Fig. 5-34(a). Hosts 1 and 2 are multicast senders, and hosts
3, 4, and 5 are multicast receivers. In this example, the senders and receivers are disjoint, but in
general, the two sets may overlap. The multicast trees for hosts 1 and 2 are shown in Fig. 5-34(b)
and Fig. 5-34(c), respectively.
To improve reception and avoid congestion, any of the receivers in a group can send a
reservation message up the tree toward the sender. An example of such a reservation is
shown in Fig. 5-35(a). Here host 3 has requested a channel to host 1. Once it has been
established, packets can flow from 1 to 3 without congestion. Now consider
what happens if host 3 next reserves a channel to the other sender, host 2, so the user can watch two
television programs at once. A second path is reserved, as illustrated in Fig. 5-35(b). Note that two
separate channels are needed from host 3 to router E because two independent streams are being
transmitted.
• Receiver's Reservation Options:
• When making a reservation, a receiver has the option to specify one or more sources
from which it wants to receive data.
• The receiver can indicate whether these source choices are fixed for the entire
duration of the reservation or if it wants the flexibility to change sources later.
Expedited Forwarding
• Classification may occur at the sending host or at the ingress (first) router.
• Advantage of host-based classification: More information about packet flows is
available.
• Example: VoIP packets may be marked for expedited service by hosts.
• Traffic Policing:
• In cases where marking is done by the host, the ingress router may police the traffic to
ensure customers send only the allocated amount of expedited traffic.
• Queueing Mechanism:
• Routers within the network may have two output queues for each outgoing line: one
for expedited packets and one for regular packets.
• Arriving packets are queued accordingly, and the expedited queue is given priority over
the regular queue, as sketched below.
• Network Behaviour:
• Because of this priority treatment, expedited packets experience the network as if it
were unloaded, even when there is heavy regular traffic.
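A minimal sketch of the two-queue output line just described (the packet names are
illustrative):

```python
from collections import deque

# Strict-priority output line: expedited packets always go first, so
# expedited traffic sees the network as unloaded even under heavy
# regular traffic.

expedited, regular = deque(), deque()

def enqueue(packet, is_expedited):
    (expedited if is_expedited else regular).append(packet)

def next_packet_to_send():
    if expedited:
        return expedited.popleft()   # expedited queue drained first
    if regular:
        return regular.popleft()
    return None                      # line idle

enqueue("bulk-data", False)
enqueue("voip-frame", True)
print(next_packet_to_send())   # 'voip-frame' is sent before 'bulk-data'
```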
• Marking by Hosts:
• Facilitates the application of expedited service without requiring modifications to
existing applications.
• Common practice for VoIP packets, which receive preferential treatment in networks
that support expedited service.
Assured Forwarding