4a Network Layer

The Network Layer

Network Layer

• Network Layer Design Issues
• Routing Algorithms
• Congestion Control
• Internetworking
• IPv4 and IPv6
Congestion Control Algorithms
• General Principles of Congestion Control
• Congestion Prevention Policies
• Congestion Control in Virtual-Circuit Subnets
• Congestion Control in Datagram Subnets
• Load Shedding
• Jitter Control
Congestion
When too many packets are present in (a part of) the subnet, performance degrades. This situation is called congestion.
Congestion may occur in a network if the load on the network is greater than the capacity of the network.
As traffic increases too much, the routers are no longer able to cope, and they begin losing packets.
At very high traffic, performance collapses completely, and almost no packets are delivered.
Effects of Congestion
Congestion affects two important parameters of network performance.
1. Throughput
Initially, throughput increases linearly with offered load, because utilization of the network increases.
However, as the offered load increases beyond a certain limit, the throughput drops.
If the offered load increases further, a point is reached where not a single packet is delivered to any destination, which is commonly known as a deadlock situation.
2. Delay
As delay increases, performance decreases.
If delay increases, retransmissions occur, making the situation worse.
Congestion

When too much traffic is offered, congestion sets in and performance degrades sharply.

Congestion

The delay also increases with offered load.
A network without any congestion control will saturate at a lower offered load.
Causes of Congestion
a) Congestion occurs when a router receives data faster than it
can send it
– Insufficient bandwidth
– Slow hosts
– Data simultaneously arriving from multiple lines
destined for the same outgoing line
– Upgrading the lines but not the routers' processors.
b) The system is not balanced
– Correcting the problem at one router will probably
just move the bottleneck to another router.
Congestion Causes More Congestion

❖ Incoming messages must be placed in queues


• The queues have a finite size
– Overflowing queues will cause packets to be dropped
– Long queue delays will cause packets to be resent
– Dropped packets will cause packets to be resent
• Senders that are trying to transmit to a congested
destination also become congested
– They must continually resend packets that have been
dropped or that have timed-out
– They must continue to hold outgoing /
unacknowledged messages in memory.
Queues at Routers
Congestion Control versus Flow Control
a) Flow control
– Controls point-to-point traffic between sender
and receiver
– It is a local issue.
– E.g., a fast host sending to a slow host.
b) Congestion Control
– Controls the traffic throughout the network
– It is a global issue, involving the behavior of all the hosts and all the routers.
General Principles of Congestion Control

Open loop: applied to prevent congestion before it happens.
Closed loop: applied to treat or alleviate congestion after it happens.
General Principles of Congestion Control
Two Categories of Congestion Control
a) Open loop solutions
– Attempt to prevent problems rather than correct them, by good design.
– Do not use runtime feedback from the system; decisions are made without regard to the current state of the network.
– The rules or policies include deciding when to accept traffic, when to discard it, making scheduling decisions, and so on.
– Congestion control is handled either by the source or by the destination.
Open loop solutions
1. Retransmission Policy
2. Window policy
3. Acknowledgement policy
4. Discarding policy
5. Admission policy
General Principles of Congestion Control

Two Categories of Congestion Control


b) Closed loop solutions
– Allow the system to enter a congested state, detect it, and remove it.
– Based on the concept of feedback: during operation, some system parameters are measured and fed back to the parts of the system that can act on them.
– Use feedback (measurements of system performance) to make corrections at runtime.
– This approach can be divided into 3 steps:
Parts of Closed loop solution
1. Monitor the system to detect when and where congestion occurs.
- Percentage of packets discarded
- Average queue length
- Number of packets that time out
- Average packet delay
2. Pass the information to places where action can be taken.
- Extra packets
- A bit reserved in packets to warn the neighbours
- Routers periodically sending probe packets
3. Adjust system operation to correct the problem.
Metrics Used in Closed Loop Congestion Control

• Percentage of packets discarded due to buffer overflow


• Average queue length
• Percentage of packets that time out and are retransmitted
• Average packet delay
• Standard deviation of packet delay
Reducing Congestion
• Two Methods
– Increase resources
• Get additional bandwidth
– Use faster lines
– Obtain additional lines
– Utilize alternate pathways
– Utilize “spare” routers
– Decrease Traffic
• Send messages to senders telling them to slow down
• Deny service to some users
• Degrade service to some or all users
• Schedule usage to achieve better load balance
Congestion Prevention Policies
In open loop systems:

Figure 5-26. Policies that affect congestion.


Congestion Control in Virtual-Circuit Subnets
Three Methods:
1) Admission Control: Once congestion has been signaled no more virtual
circuits are set up until the problem has gone away. It is simple and easy to
carry out.
Ex: In the telephone system, when a switch gets overloaded, it also practices
admission control by not giving dial tones.

2) Allow new Virtual Circuits but carefully route all new virtual circuits around
problem areas.

(a) A congested subnet. (b) A redrawn subnet that eliminates the congestion; a virtual circuit from A to B is also shown.
Congestion Control in Virtual-Circuit Subnets
3) Negotiate an agreement between the host & subnet when a
virtual circuit is set up.
This agreement normally specifies the volume and shape of the
traffic, quality of service required, and other parameters.
To keep its part of the agreement, the subnet will typically reserve resources along the path when the circuit is set up.
These resources can include table space and buffer space in the routers, and bandwidth on the lines.

• A disadvantage of reserving resources all the time is that it tends to waste resources.
Source-based approach
❑ Basic algorithm
o The router monitors the utilisation of its output lines.
• u: recent utilisation, 0 ≤ u ≤ 1
• f: instantaneous line utilisation (0 or 1)
• a: a constant that determines how fast the router forgets recent history
• A good estimate of u is maintained as the exponentially weighted average
u_new = a × u_old + (1 − a) × f
o In case of overload (u_new > threshold):
• The output line enters a warning state
• Some action is taken:
– Warning bit
– Choke packets
– Hop-by-hop choke packets
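The update rule above can be sketched in a few lines. This is an illustrative sketch, not router code; the values of `a` and the threshold are assumed for the example.

```python
# Sketch of the utilisation estimator described above:
#   u_new = a * u_old + (1 - a) * f
# `a` (forgetting factor) and the warning threshold are assumed values.

def update_utilisation(u_old: float, f: int, a: float = 0.9) -> float:
    """Exponentially weighted average of line utilisation.

    f is the instantaneous utilisation, sampled as 0 (idle) or 1 (busy).
    """
    return a * u_old + (1 - a) * f

def run(samples, threshold: float = 0.8, a: float = 0.9):
    """Feed 0/1 samples through the estimator; report the final estimate
    and whether the line would enter the warning state."""
    u = 0.0
    for f in samples:
        u = update_utilisation(u, f, a)
    return u, u > threshold

# A long run of busy samples drives u toward 1 and past the threshold.
u, warning = run([1] * 50)
```

A larger `a` makes the router forget recent history more slowly, smoothing out short bursts instead of reacting to them.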
Congestion Control in Datagram Subnets
• The Warning bit:
A special bit in the packet header is set by the router to warn the source when congestion is detected.
The bit is copied and piggybacked on the ACK sent back to the sender.
The sender monitors the number of ACKs it receives with the warning bit set and adjusts its transmission rate accordingly.
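A sender-side reaction can be sketched as follows. The text only says the sender adjusts its rate based on warning-bit ACKs; the 0.5 cutoff and the decrease/increase factors here are assumptions for illustration.

```python
# Sketch of a sender reacting to warning-bit ACKs. The cutoff fraction
# and the adjustment factors are illustrative assumptions.

def adjust_rate(rate: float, warning_bits: list) -> float:
    """warning_bits: True for each recent ACK that carried the warning bit."""
    warned = sum(warning_bits) / len(warning_bits)
    if warned > 0.5:        # mostly warnings: back off multiplicatively
        return rate * 0.5
    if warned == 0.0:       # no warnings: probe upward additively
        return rate + 1.0
    return rate             # mixed signals: hold the current rate

rate = adjust_rate(16.0, [True, True, True, False])     # backs off to 8.0
rate = adjust_rate(rate, [False, False, False, False])  # creeps up to 9.0
```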
Choke Packets
A more direct way of telling the source to slow down.
A choke packet is a control packet generated at a congested
node and transmitted to restrict traffic flow.
The source, on receiving a choke packet, must reduce its transmission rate by a certain percentage for a fixed time interval, during which it ignores further choke packets.
After the period has expired, the source listens for more choke packets; if one arrives for the same destination, it reduces the traffic still more and begins ignoring choke packets again.
The intermediate nodes through which the packets have traveled are not warned about the congestion.
Choke Packets
If no choke packets arrive during the listening period, the host may increase the flow again.
An example of a choke packet is the ICMP Source Quench packet.
• Hosts reduce traffic by adjusting their policy parameters.
The first choke packet causes the data rate to be reduced to 0.50 of its previous rate; the next one causes a reduction to 0.25, and so on.
Increases are done in smaller increments to prevent congestion from reoccurring quickly.
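The halving-then-slow-recovery behaviour described above can be sketched directly. The additive step size and the rate cap are assumed values, not taken from the text.

```python
# Sketch of the choke-packet rate adjustment: each choke packet halves
# the rate (0.50, 0.25, ... of the original); increases between choke
# packets are small additive steps. Step size and cap are assumptions.

def on_choke_packet(rate: float) -> float:
    """Multiplicative decrease: cut the sending rate in half."""
    return rate / 2

def on_quiet_period(rate: float, step: float = 1.0, cap: float = 64.0) -> float:
    """Additive increase: creep back up in small increments."""
    return min(rate + step, cap)

rate = 64.0                    # an assumed starting rate (packets/s)
rate = on_choke_packet(rate)   # 32.0 -> 0.50 of the original
rate = on_choke_packet(rate)   # 16.0 -> 0.25 of the original
rate = on_quiet_period(rate)   # 17.0 -> recovery is deliberately slow
```

The asymmetry (halve quickly, recover slowly) is what keeps congestion from reoccurring immediately after relief.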
Hop-by-Hop Choke Packets

• Over long distances or at high speeds, choke packets are not very effective.
• A more efficient method is to send choke packets hop-by-hop.
• This requires each hop to reduce its transmission even before the choke packet arrives at the source.
• The net effect of this hop-by-hop scheme is to provide quick relief at the point of congestion, at the price of using up more buffers upstream.
Hop-by-Hop Choke Packets

(a) A choke packet that affects only the source.
(b) A choke packet that affects each hop it passes through.
Implicit Signaling
a) Implicit signaling in congestion control is a mechanism where the
source guesses that a network is congested without any
communication between the congested nodes and the source.
b) The source may make this guess based on symptoms such as: No
acknowledgment, Delay in acknowledgment, Packet loss, and
Packet delay.
Explicit Signaling
a) Explicit signaling is a method of notifying congestion in a network
by sending a value within a packet. This method can be used in
either a forward or backward direction:
b) Forward signaling: The signal is sent in the direction of
congestion, warning the destination. The receiver then takes action
to prevent further congestion.
c) Backward signaling: The signal is sent in the opposite direction of
congestion, warning the source to slow down.
Load Shedding
When routers are being overwhelmed by packets that they cannot handle, they just throw them away.
When buffers become full, routers simply discard packets.
Which packet is chosen to be the victim depends on the application.
Two policies:
1. Wine: old is better than new (discard the new packet).
2. Milk: new is better than old (discard the old packet).
For a file transfer, we cannot discard older packets, since this would cause a gap in the received data.
For real-time voice or video, it is probably better to throw away old data and keep new packets.
Applications can also mark packets with a discard priority.
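The wine and milk policies can be sketched with a bounded queue. The policy names follow the text; the queue capacity is an assumed value.

```python
# Sketch of the two load-shedding policies on a full queue:
#   "wine": old is better than new -> drop the arriving packet
#   "milk": new is better than old -> drop the oldest queued packet
from collections import deque

def enqueue(queue: deque, packet, capacity: int, policy: str) -> None:
    if len(queue) < capacity:
        queue.append(packet)
    elif policy == "milk":
        queue.popleft()          # shed the oldest packet
        queue.append(packet)
    # policy == "wine": the arriving packet is silently discarded

q = deque()
for pkt in range(5):
    enqueue(q, pkt, capacity=3, policy="milk")
# "milk" keeps the newest packets (2, 3, 4); "wine" would keep 0, 1, 2.
```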
Jitter Control
Jitter is the variation in the delay of received packets.
For jitter-sensitive applications, the delay across the network must be short and the rate of delivery must be constant, but there will always be some variation in transit time.

(a) High jitter. (b) Low jitter.
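Since the closed-loop metrics listed earlier include the standard deviation of packet delay, jitter can be quantified the same way. The delay samples below are made-up values in milliseconds, purely for illustration.

```python
# Quantifying jitter as the standard deviation of packet delays.
# The delay samples are invented for illustration (milliseconds).
import statistics

low_jitter_delays = [20.0, 21.0, 19.5, 20.5, 20.0]   # tightly clustered
high_jitter_delays = [5.0, 40.0, 12.0, 55.0, 8.0]    # widely spread

low = statistics.stdev(low_jitter_delays)
high = statistics.stdev(high_jitter_delays)
# The second stream has far higher jitter and would need smoothing,
# e.g. a playout buffer at the receiver.
```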


Internetworking
• How Networks Differ
• How Networks Can Be Connected
• Concatenated Virtual Circuits
• Connectionless Internetworking
• Tunneling
• Internetwork Routing
• Fragmentation
Internetworking
Internetworking is the practice of interconnecting
multiple computer networks, such that any pair of hosts in
the connected networks can exchange messages irrespective
of their hardware-level networking technology.

The resulting system of interconnected networks is called an internetwork, or simply an internet.
Connecting Networks

A collection of interconnected networks.


How Networks Differ

Figure 5-43. Some of the many ways networks can differ.


How Networks Can Be Connected
In the physical layer, networks can be connected by repeaters or hubs, which just move the bits from one network to an identical network.
In the data link layer, bridges and switches are used to connect networks. They accept frames, examine the MAC addresses, and forward the frames to a different network, while doing minor protocol translation.
In the network layer, routers are used to connect the
networks. If two networks have dissimilar network layers,
the router may be able to translate between the packet
formats. A router that can handle multiple protocols is
called a multiprotocol router.
In the transport layer, transport gateways are used, which
can act as interface between two transport connections.
In the application layer, application gateways translate
message semantics.
Internetworking devices
How Networks Can Be Connected

(a) Two Ethernets connected by a switch. (b) Two Ethernets connected by routers.


Types of Internetworking
Two types of internetworking are common
1. Connection-Oriented Concatenated Virtual Circuit Subnets
A connection to a host in a distant network is set up in a way
similar to the way connections are normally established.
The subnet sees that the destination is remote and builds a virtual
circuit to the router nearest the destination network.
Then it constructs a virtual circuit from that router to an external gateway (multiprotocol router).
This gateway records the existence of the virtual circuit in its tables
and proceeds to build another virtual circuit to a router in the next
subnet.
This process continues until the destination host has been reached.
Concatenated Virtual Circuits

Internetworking using concatenated virtual circuits.


Types of Internetworking
2. Connectionless Internetworking (Datagram Model)
This model does not require all packets belonging to
one connection to traverse the same sequence of
gateways.
The datagrams from host1 to host2 are taking
different routes through the internetwork.
A routing decision is made separately for each packet,
possibly depending on the traffic at the moment the
packet is sent.
This strategy can use multiple routes and thus achieve
a higher bandwidth than the virtual circuit model.
There is no guarantee that the packets arrive at the
destination in order, assuming that they arrive at all.
Connectionless Internetworking

A connectionless internet.
Tunneling
Tunneling is a technique of internetworking used when the source and destination networks are of the same type but are connected through a network of a different type.
For example, consider an Ethernet connected to another Ethernet through a WAN.
Tunneling works by encapsulating packets: wrapping packets inside other packets.
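Encapsulation can be sketched as follows. The `Packet` fields and the router names are illustrative stand-ins, not a real protocol format.

```python
# Sketch of tunneling by encapsulation: the entry router wraps the
# inner packet in the payload of an outer packet addressed across the
# WAN; the exit router unwraps it. All names are illustrative.
from dataclasses import dataclass

@dataclass
class Packet:
    src: str
    dst: str
    payload: object        # bytes, or an inner Packet when tunneled

def encapsulate(inner: Packet, entry: str, exit_: str) -> Packet:
    """Entry router: wrap the inner packet for transit across the WAN."""
    return Packet(src=entry, dst=exit_, payload=inner)

def decapsulate(outer: Packet) -> Packet:
    """Exit router: unwrap and forward the original packet unchanged."""
    return outer.payload

inner = Packet("paris-host", "london-host", b"hello")
outer = encapsulate(inner, entry="paris-router", exit_="london-router")
assert decapsulate(outer) == inner   # the inner packet crosses intact
```

The WAN only ever sees the outer header, which is exactly why the intermediate network's type does not matter.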
Tunneling

Tunneling a packet from Paris to London.


Tunneling (2)

Tunneling a car from France to England.

An underground rail tunnel runs below the English Channel and connects Great Britain and France.
Fragmentation
Each network imposes some maximum size on its packets.
An obvious problem appears when a large packet wants to
travel through a network whose maximum packet size is too
small.
What happens if the original source packet is too large to be
handled by the destination network?
Solution: To allow gateways to break packets up into
fragments.

Two Strategies exist for recombining the fragments back into


the original packet.
1. Transparent Fragmentation
2. Non-Transparent Fragmentation
Fragmentation

(a) Transparent fragmentation. (b) Nontransparent fragmentation.


Fragmentation: Tree Structure

Packet1
  Packet1.0
    Packet1.0.0
    Packet1.0.1
  Packet1.1
    Packet1.1.0
    Packet1.1.1
Fragmentation: Defining Elementary Fragment Size

Fragmentation when the elementary data size is 1 byte.


(a) Original packet, containing 10 data bytes.
(b) Fragments after passing through a network with maximum
packet size of 8 payload bytes plus header.
(c) Fragments after passing through a size 5 gateway.
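The example above (10 data bytes, an 8-byte-payload network, then a size-5 gateway) can be traced in code. The `(offset, data)` pairs stand in for real fragment headers; this is an illustration, not IP's actual header format.

```python
# Sketch of non-transparent fragmentation with a 1-byte elementary
# fragment size: each network splits fragments to fit its maximum
# payload, and byte offsets let the destination reassemble no matter
# how often refragmentation happened along the way.

def fragment(fragments, max_payload):
    """Split (offset, data) fragments so each fits max_payload bytes."""
    out = []
    for offset, data in fragments:
        for i in range(0, len(data), max_payload):
            out.append((offset + i, data[i:i + max_payload]))
    return out

def reassemble(fragments, total_len):
    buf = bytearray(total_len)
    for offset, data in fragments:
        buf[offset:offset + len(data)] = data
    return bytes(buf)

packet = [(0, b"ABCDEFGHIJ")]             # (a) original 10 data bytes
after_first = fragment(packet, 8)         # (b) 8-byte payload network
after_second = fragment(after_first, 5)   # (c) size-5 gateway
assert reassemble(after_second, 10) == b"ABCDEFGHIJ"
```

Because the offsets are absolute, only the final destination needs to reassemble, which is the key advantage of this scheme.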
Thank You…
