Unit 3

The network layer is chiefly concerned with getting packets from the source to the destination, and with routing, error handling, and congestion control.


Before learning about design issues in the network layer, let's look at its main functions.
✓ Addressing:
The network layer maintains the source and destination addresses in the packet header and uses this addressing to identify the various devices in the network.
✓ Packetizing:
The network layer encapsulates the data received from its upper layer into packets. On the Internet, this is performed by the Internet Protocol (IP).
✓ Routing:
This is the most important function. The network layer chooses the best available path for data transmission from the source to the destination.
✓ Internetworking:
It provides a logical connection across multiple networks and devices.

Network layer design issues:

The network layer involves several design issues, described as follows:

1. Store and Forward packet switching:

The host sends the packet to the nearest router. The packet is stored there until it has fully arrived and its checksum has been verified; only then is it forwarded to the next router, and so on, until it reaches the destination. This mechanism is called "store and forward packet switching."
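
As a rough illustration (the Router class, checksum choice, and topology here are invented for the sketch, not taken from the source), the following Python snippet models the store-and-forward idea: a router buffers the entire packet, verifies its checksum, and only then passes it to the next hop.

```python
import zlib

def checksum(payload: bytes) -> int:
    # CRC32 stands in for whatever checksum the link actually uses.
    return zlib.crc32(payload)

class Router:
    """Hypothetical router that stores a packet fully before forwarding it."""
    def __init__(self, name, next_hop=None):
        self.name = name
        self.next_hop = next_hop  # next Router on the path; None at the destination

    def receive(self, payload: bytes, expected: int):
        # Store: the packet is buffered until it has fully arrived.
        # Verify: forward only if the checksum matches; otherwise drop it.
        if checksum(payload) != expected:
            print(f"{self.name}: checksum failed, packet dropped")
        elif self.next_hop is None:
            print(f"{self.name}: packet delivered")
        else:
            print(f"{self.name}: verified, forwarding to {self.next_hop.name}")
            self.next_hop.receive(payload, expected)

# A short path: host -> R1 -> R2 (destination).
r2 = Router("R2")
r1 = Router("R1", next_hop=r2)
data = b"hello"
r1.receive(data, checksum(data))
```
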
2. Services provided to Transport Layer:
Through the network layer/transport layer interface, the network layer provides its services to the transport layer. These services are described below, but before they are provided, the following goals must be kept in mind:
✓ The services offered must not depend on router technology.
✓ The transport layer must be shielded from the number, type, and topology of the routers present.
✓ The network addresses made available to the transport layer should follow a uniform numbering plan, even across LAN and WAN connections.
Based on the type of connection, two kinds of service are provided:

✓ Connectionless – Packets are routed and injected into the subnet individually. No advance setup is required.
✓ Connection-Oriented – The subnet must offer reliable service, and all packets are transmitted over a single route.
3. Implementation of Connectionless Service:
Packets are termed "datagrams" and the corresponding subnet a "datagram subnet". If the message to be transmitted is, say, four times the size of a packet, the network layer divides it into four packets and hands each one to a router according to some routing protocol. Each data packet carries the destination address and is routed independently of the other packets.
4. Implementation of Connection Oriented service:
To use a connection-oriented service, a connection is first established, then used, and finally released. In connection-oriented services, the data packets are delivered to the receiver in the same order in which they were sent by the sender.
This can be done in either of two ways:

✓ Circuit Switched Connection – A dedicated physical path, or circuit, is established between the communicating nodes, and then the data stream is transferred.
✓ Virtual Circuit Switched Connection – The data stream is transferred over a packet-switched network in such a way that it seems to the user that there is a dedicated path from the sender to the receiver. A virtual path is established, though other connections may also be using the same path.

Congestion Control
Congestion control refers to techniques and mechanisms that can either prevent congestion
before it happens or remove congestion after it has happened. In general, we can divide
congestion control mechanisms into two broad categories: open-loop congestion control
(prevention) and closed-loop congestion control (removal).

Open-Loop Congestion Control: In open-loop congestion control, policies are applied to prevent congestion before it happens. In these mechanisms, congestion control is handled by either the source or the destination.

Retransmission Policy: Retransmission is sometimes unavoidable. If the sender feels that a sent packet is lost or corrupted, the packet needs to be retransmitted. Retransmission in general may increase congestion in the network. However, a good retransmission policy can prevent congestion. The retransmission policy and the retransmission timers must be designed to optimize efficiency and at the same time prevent congestion.

Window Policy: The type of window at the sender may also affect congestion. The Selective Repeat window is better than the Go-Back-N window for congestion control, because Go-Back-N resends every packet from a lost one onward, including packets that may have arrived safely, which adds unnecessary load.

Acknowledgment Policy: The acknowledgment policy imposed by the receiver may also affect congestion. If the receiver does not acknowledge every packet it receives, it may slow down the sender and help prevent congestion. A receiver may send an acknowledgment only if it has a packet to be sent or a special timer expires. A receiver may also decide to acknowledge only N packets at a time. The acknowledgments are themselves part of the load on the network, so sending fewer acknowledgments imposes less load.
Discarding Policy: A good discarding policy by the routers may prevent congestion and at the same time may not harm the integrity of the transmission. For example, in audio transmission, if the policy is to discard less sensitive packets when congestion is likely to happen, the quality of sound is still preserved and congestion is prevented or alleviated.

Admission Policy: An admission policy, which is a quality-of-service mechanism, can also prevent congestion in virtual-circuit networks. Switches in a flow first check the resource requirement of a flow before admitting it to the network. A router can deny establishing a virtual-circuit connection if there is congestion in the network or a possibility of future congestion.

Closed-Loop Congestion Control: Closed-loop congestion control mechanisms try to alleviate congestion after it happens.

Backpressure: The technique of backpressure refers to a congestion control mechanism in which a congested node stops receiving data from the immediate upstream node or nodes. This may cause the upstream node or nodes to become congested, and they, in turn, reject data from their upstream node or nodes, and so on. Backpressure is a node-to-node congestion control that starts with a node and propagates, in the opposite direction of data flow, toward the source. The backpressure technique can be applied only to virtual circuit networks, in which each node knows the upstream node from which a flow of data is coming.

Choke Packet: A choke packet is a packet sent by a node to the source to inform it of
congestion. Note the difference between the backpressure and choke-packet methods. In
backpressure, the warning is from one node to its upstream node, although the warning may
eventually reach the source station. In the choke-packet method, the warning is from the
router, which has encountered congestion, directly to the source station. The intermediate
nodes through which the packet has travelled are not warned.

Implicit Signaling: In implicit signaling, there is no communication between the congested node or nodes and the source. The source guesses that there is congestion somewhere in the network from other symptoms.

For example, when a source sends several packets and there is no acknowledgment for a
while, one assumption is that the network is congested. The delay in receiving an
acknowledgment is interpreted as congestion in the network; the source should slow down.

Explicit Signaling: The node that experiences congestion can explicitly send a signal to the source or destination. The explicit-signaling method, however, is different from the choke-packet method. In the choke-packet method, a separate packet is used for this purpose; in the explicit-signaling method, the signal is included in the packets that carry data. Explicit signaling can occur in either the forward or the backward direction. This type of congestion control can be seen in an ATM network.

QoS (Quality of Service) refers to the level of performance that every flow seeks to attain: smooth delivery of its packets without congestion.

Techniques to Improve QoS

Several techniques can be used to improve the quality of service. Four common methods are discussed here: scheduling, traffic shaping, admission control, and resource reservation.

a. Scheduling

Packets from different flows arrive at a switch or router for processing. A good scheduling
technique treats the different flows in a fair and appropriate manner. Several scheduling
techniques are designed to improve the quality of service. We discuss three of them here:
FIFO queuing, priority queuing, and weighted fair queuing.

i. FIFO Queuing
In first-in, first-out (FIFO) queuing, packets wait in a buffer (queue) until the node (router or
switch) is ready to process them. If the average arrival rate is higher than the average
processing rate, the queue will fill up and new packets will be discarded. A FIFO queue is
familiar to those who have had to wait for a bus at a bus stop.
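
As a minimal sketch (the FIFOQueue class and its capacity are illustrative assumptions, not from the source), FIFO queuing with tail drop can be modeled like this:

```python
from collections import deque

class FIFOQueue:
    """Packets wait in arrival order; new arrivals are discarded when the buffer is full."""
    def __init__(self, capacity: int):
        self.buffer = deque()
        self.capacity = capacity

    def arrive(self, packet) -> bool:
        if len(self.buffer) >= self.capacity:
            return False  # queue full: the new packet is discarded
        self.buffer.append(packet)
        return True

    def process(self):
        # The node always serves the oldest waiting packet first.
        return self.buffer.popleft() if self.buffer else None

q = FIFOQueue(capacity=3)
for p in ["p1", "p2", "p3", "p4"]:
    print(p, "queued" if q.arrive(p) else "dropped")  # p4 is dropped
print("served:", q.process())  # p1
```
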
ii. Priority Queuing
In priority queuing, packets are first assigned to a priority class. Each priority class has its
own queue. The packets in the highest-priority queue are processed first. Packets in the
lowest-priority queue are processed last. Note that the system does not stop serving a queue
until it is empty. Figure 4.32 shows priority queuing with two priority levels (for simplicity).

A priority queue can provide better QoS than the FIFO queue because higher priority traffic,
such as multimedia, can reach the destination with less delay. However, there is a potential
drawback. If there is a continuous flow in a high-priority queue, the packets in the lower-priority queues will never have a chance to be processed. This is a condition called starvation.
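
A sketch of the two-level scheme described above (the class and packet names are illustrative): the high-priority queue is always drained first, which is exactly why a continuous high-priority flow can starve the low-priority queue.

```python
from collections import deque

class PriorityQueuing:
    """Two queues; the higher-priority queue is served until it is empty."""
    def __init__(self):
        self.high = deque()
        self.low = deque()

    def arrive(self, packet, priority: int):
        (self.high if priority == 1 else self.low).append(packet)

    def process(self):
        # Serve high first; low is served only when high is empty,
        # so a continuous high-priority flow starves the low queue.
        if self.high:
            return self.high.popleft()
        return self.low.popleft() if self.low else None

pq = PriorityQueuing()
pq.arrive("voice-1", priority=1)
pq.arrive("ftp-1", priority=2)
pq.arrive("voice-2", priority=1)
print([pq.process() for _ in range(3)])  # ['voice-1', 'voice-2', 'ftp-1']
```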

iii. Weighted Fair Queuing


A better scheduling method is weighted fair queuing. In this technique, the packets are still
assigned to different classes and admitted to different queues. The queues, however, are
weighted based on the priority of the queues; higher priority means a higher weight. The
system processes packets in each queue in a round-robin fashion with the number of packets
selected from each queue based on the corresponding weight. For example, if the weights
are 3, 2, and 1, three packets are processed from the first queue, two from the second queue,
and one from the third queue. If the system does not impose priority on the classes, all
weights can be equal. In this way, we have fair queuing with priority. Figure 4.33 shows the
technique with three classes.
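
The weighted round-robin pass can be sketched as follows, reusing the 3, 2, 1 weights from the example (the queue contents are illustrative):

```python
from collections import deque

def weighted_fair_round(queues, weights):
    """One round-robin pass: take up to `weight` packets from each queue."""
    served = []
    for q, w in zip(queues, weights):
        for _ in range(w):
            if not q:
                break
            served.append(q.popleft())
    return served

q1 = deque(["a1", "a2", "a3", "a4"])
q2 = deque(["b1", "b2", "b3"])
q3 = deque(["c1", "c2"])
# Weights 3, 2, 1: three packets from q1, two from q2, one from q3 per round.
print(weighted_fair_round([q1, q2, q3], [3, 2, 1]))
# ['a1', 'a2', 'a3', 'b1', 'b2', 'c1']
```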

b. Traffic Shaping

Traffic shaping is a mechanism to control the amount and the rate of the traffic sent to the
network. Two techniques can shape traffic: the leaky bucket and the token bucket.

i. Leaky Bucket

If a bucket has a small hole at the bottom, the water leaks from the bucket at a constant rate
as long as there is water in the bucket. The rate at which the water leaks does not depend on
the rate at which the water is input to the bucket unless the bucket is empty. The input rate
can vary, but the output rate remains constant. Similarly, in networking, a technique called
leaky bucket can smooth out bursty traffic. Bursty chunks are stored in the bucket and sent
out at an average rate. Figure 4.34 shows a leaky bucket and its effects.
In the figure, we assume that the network has committed a bandwidth of 3 Mbps for a host.
The use of the leaky bucket shapes the input traffic to make it conform to this commitment.
In Figure 4.34 the host sends a burst of data at a rate of 12 Mbps for 2 s, for a total of 24 Mbits
of data. The host is silent for 5 s and then sends data at a rate of 2 Mbps for 3 s, for a total of 6
Mbits of data. In all, the host has sent 30 Mbits of data in 10 s. The leaky bucket smooths the traffic by sending out data at a rate of 3 Mbps during the same 10 s.

A simple leaky bucket implementation is shown in Figure 4.35. A FIFO queue holds the
packets. If the traffic consists of fixed-size packets (e.g., cells in ATM networks), the process
removes a fixed number of packets from the queue at each tick of the clock. If the traffic
consists of variable-length packets, the fixed output rate must be based on the number of
bytes or bits.

The following is an algorithm for variable-length packets:

1. Initialize a counter to n at the tick of the clock.

2. If n is greater than the size of the packet at the head of the queue, send the packet and decrement the counter by the packet size. Repeat this step until n is smaller than the packet size.

3. Reset the counter and go to step 1.


A leaky bucket algorithm shapes bursty traffic into fixed-rate traffic by averaging the data
rate. It may drop the packets if the bucket is full.
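
A direct sketch of the three-step algorithm above, assuming the queue holds packet sizes in bytes and n is the byte budget per tick (the names and values are illustrative):

```python
from collections import deque

def leaky_bucket_tick(queue: deque, n: int):
    """One clock tick of the variable-length leaky bucket.

    `queue` holds packet sizes in bytes; `n` is the number of bytes
    the bucket may release per tick.
    """
    counter = n  # step 1: initialize the counter to n at the tick
    sent = []
    # step 2: while the counter covers the packet at the head, send it
    while queue and queue[0] <= counter:
        size = queue.popleft()
        counter -= size
        sent.append(size)
    return sent  # step 3: the counter is reset at the next tick

bucket = deque([200, 500, 450, 400])
print(leaky_bucket_tick(bucket, n=1000))  # [200, 500]; 450 exceeds the remaining 300
print(leaky_bucket_tick(bucket, n=1000))  # [450, 400]
```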

ii. Token Bucket


The leaky bucket is very restrictive. It does not credit an idle host. For example, if a host is
not sending for a while, its bucket becomes empty. Now if the host has bursty data, the leaky
bucket allows only an average rate. The time when the host was idle is not taken into
account. On the other hand, the token bucket algorithm allows idle hosts to accumulate
credit for the future in the form of tokens. For each tick of the clock, the system sends n
tokens to the bucket. The system removes one token for every cell (or byte) of data sent. For
example, if n is 100 and the host is idle for 100 ticks, the bucket collects 10,000 tokens.

The token bucket can easily be implemented with a counter. The counter is initialized to zero.
Each time a token is added, the counter is incremented by 1. Each time a unit of data is sent,
the counter is decremented by 1. When the counter is zero, the host cannot send data.

The token bucket allows bursty traffic at a regulated maximum rate.
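
A counter-based sketch of the scheme just described; the `capacity` cap is an added assumption (without it, the counter of an idle host would grow without bound):

```python
class TokenBucket:
    """Counter-based token bucket: idle time accumulates credit, up to `capacity`."""
    def __init__(self, rate: int, capacity: int):
        self.rate = rate          # tokens added per clock tick (n in the text)
        self.capacity = capacity  # assumed cap on accumulated credit
        self.tokens = 0           # the counter starts at zero

    def tick(self):
        # Each tick, the system adds `rate` tokens to the bucket.
        self.tokens = min(self.tokens + self.rate, self.capacity)

    def send(self, units: int) -> bool:
        # One token is removed per unit (cell or byte) of data sent.
        if self.tokens >= units:
            self.tokens -= units
            return True
        return False  # not enough credit: the host must wait

tb = TokenBucket(rate=100, capacity=10_000)
for _ in range(50):    # the host stays idle for 50 ticks...
    tb.tick()
print(tb.send(4_000))  # ...so a 4,000-unit burst is allowed: True
print(tb.send(4_000))  # only 1,000 tokens remain, burst denied: False
```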

Combining Token Bucket and Leaky Bucket

The two techniques can be combined to credit an idle host and at the same time regulate the
traffic. The leaky bucket is applied after the token bucket; the rate of the leaky bucket needs
to be higher than the rate of tokens dropped in the bucket.
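
As a very simplified sketch (all values illustrative), the per-tick output of the combination is bounded first by the accumulated token credit and then by the leak rate:

```python
def shape(burst_units: int, tokens_available: int, leak_rate: int) -> int:
    """Units released in one tick when a token bucket feeds a leaky bucket.

    The token bucket limits the burst to the accumulated credit; the leaky
    bucket then caps the per-tick output. As noted above, the leak rate
    should exceed the token rate, or the credit could never be spent.
    """
    after_token_bucket = min(burst_units, tokens_available)
    return min(after_token_bucket, leak_rate)

# A 6,000-unit burst with 5,000 tokens of credit and a 3,000-unit/tick leak rate:
print(shape(6_000, tokens_available=5_000, leak_rate=3_000))  # 3000
```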

c. Resource Reservation
A flow of data needs resources such as a buffer, bandwidth, CPU time, and so on. The
quality of service is improved if these resources are reserved beforehand. We discuss in this
section one QoS model called Integrated Services, which depends heavily on resource
reservation to improve the quality of service.

d. Admission Control
Admission control refers to the mechanism used by a router, or a switch, to accept or reject a
flow based on predefined parameters called flow specifications. Before a router accepts a
flow for processing, it checks the flow specifications to see if its capacity (in terms of
bandwidth, buffer size, CPU speed, etc.) and its previous commitments to other flows can
handle the new flow.
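
A sketch of the admission check, assuming a flow specification carries bandwidth and buffer requirements (the class, field names, and numbers are illustrative):

```python
from dataclasses import dataclass

@dataclass
class FlowSpec:
    bandwidth: int  # bits per second requested
    buffer: int     # bytes of buffering requested

class AdmissionRouter:
    def __init__(self, bandwidth: int, buffer: int):
        self.free_bandwidth = bandwidth
        self.free_buffer = buffer

    def admit(self, spec: FlowSpec) -> bool:
        # Accept the flow only if the capacity left over after previous
        # commitments can satisfy the flow specification.
        if spec.bandwidth <= self.free_bandwidth and spec.buffer <= self.free_buffer:
            self.free_bandwidth -= spec.bandwidth
            self.free_buffer -= spec.buffer
            return True
        return False

r = AdmissionRouter(bandwidth=10_000_000, buffer=64_000)
print(r.admit(FlowSpec(bandwidth=6_000_000, buffer=32_000)))  # True
print(r.admit(FlowSpec(bandwidth=6_000_000, buffer=16_000)))  # False: bandwidth committed
```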
