Unit 3
The network layer faces several design issues, described as follows:
✓ Connectionless – Each packet is routed and injected into the subnet individually and
independently; no connection setup is required.
✓ Connection-Oriented – The subnet must offer reliable service, and all packets belonging
to a connection are transmitted over a single, pre-established route.
3. Implementation of Connectionless Service:
In this approach, packets are termed “datagrams” and the corresponding subnet a
“datagram subnet”. If the message to be transmitted is, say, four times the size of a packet,
the network layer divides it into four packets and hands each one to a router using some
protocol. Each packet carries the destination address and is routed independently of the
other packets.
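The splitting step above can be sketched as follows; this is only an illustration, and the field names (`dest`, `seq`, `payload`) are assumptions, not a real protocol format:

```python
def make_datagrams(message: bytes, packet_size: int, dest: str):
    """Split a message into independent datagrams, each carrying the
    destination address so routers can forward it on its own."""
    count = (len(message) + packet_size - 1) // packet_size  # ceiling division
    return [
        {"dest": dest, "seq": i,
         "payload": message[i * packet_size:(i + 1) * packet_size]}
        for i in range(count)
    ]

# A message 4 times the packet size yields 4 independently routable datagrams.
datagrams = make_datagrams(b"A" * 400, packet_size=100, dest="10.0.0.7")
```

Because each datagram carries the full destination address, a router can forward it without any per-connection state.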
4. Implementation of Connection Oriented service:
To use a connection-oriented service, we first establish a connection, use it, and then
release it. In connection-oriented service, the data packets are delivered to the receiver in
the same order in which the sender transmitted them.
Congestion Control
Congestion control refers to techniques and mechanisms that can either prevent congestion
before it happens or remove congestion after it has happened. In general, we can divide
congestion control mechanisms into two broad categories: open-loop congestion control
(prevention) and closed-loop congestion control (removal).
Window Policy The type of window at the sender may also affect congestion. The Selective
Repeat window is better than the Go-Back-N window for congestion control.
Acknowledgment Policy The acknowledgment policy imposed by the receiver may also
affect congestion. If the receiver does not acknowledge every packet it receives, it may slow
down the sender and help prevent congestion. A receiver may send an acknowledgment
only if it has a packet to send or when a special timer expires. A receiver may also decide to
acknowledge only every N packets. Acknowledgments are themselves part of the load on the
network, so sending fewer acknowledgments means imposing less load.
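The every-Nth-packet acknowledgment policy can be sketched as follows; this is a minimal illustration, and the class and parameter names are assumptions:

```python
class LazyReceiver:
    """Receiver that acknowledges only every Nth packet, reducing the
    acknowledgment load the receiver adds to the network."""

    def __init__(self, n: int):
        self.n = n
        self.received = 0

    def on_packet(self, seq: int) -> bool:
        """Return True when a (cumulative) ACK is actually sent."""
        self.received += 1
        return self.received % self.n == 0  # ACK only every Nth packet

rx = LazyReceiver(n=4)
acks = [rx.on_packet(s) for s in range(8)]  # 8 packets arrive, 2 ACKs go out
```

A real receiver would also ACK when a timer expires or when it has data of its own to send, as the text notes; that logic is omitted here for brevity.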
Discarding Policy A good discarding policy by the routers may prevent congestion and at
the same time may not harm the integrity of the transmission. For example, in audio
transmission, if the policy is to discard less sensitive packets when congestion is likely to
happen, the quality of sound is still preserved and congestion is prevented or alleviated.
Choke Packet A choke packet is a packet sent by a node to the source to inform it of
congestion. Note the difference between the backpressure and choke-packet methods. In
backpressure, the warning is from one node to its upstream node, although the warning may
eventually reach the source station. In the choke-packet method, the warning is from the
router, which has encountered congestion, directly to the source station. The intermediate
nodes through which the packet has travelled are not warned.
Implicit Signaling In implicit signaling, the congested nodes do not inform the source
directly; the source infers congestion from other symptoms. For example, when a source
sends several packets and there is no acknowledgment for a while, one assumption is that
the network is congested. The delay in receiving an acknowledgment is interpreted as
congestion in the network; the source should slow down.
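The implicit-signaling reaction can be sketched as a source that halves its sending rate when an acknowledgment is late; the timeout value and the halving factor are assumptions for illustration, not part of any specific protocol:

```python
import time

class ImplicitSignalingSource:
    """Source that treats a late acknowledgment as a congestion symptom
    and slows itself down by halving its sending rate."""

    def __init__(self, rate_pps: float, ack_timeout: float):
        self.rate_pps = rate_pps          # current sending rate (packets/s)
        self.ack_timeout = ack_timeout    # how long to wait before reacting
        self.last_ack = time.monotonic()

    def on_ack(self):
        self.last_ack = time.monotonic()

    def maybe_slow_down(self, now: float):
        if now - self.last_ack > self.ack_timeout:
            self.rate_pps /= 2  # interpret the delay as congestion
```

Note that no node ever sends the source an explicit warning here; the source reacts purely to the missing acknowledgment.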
Explicit Signaling The node that experiences congestion can explicitly send a signal to the
source or destination. The explicit-signaling method, however, is different from the choke-
packet method. In the choke-packet method, a separate packet is used for this purpose; in
the explicit-signaling method, the signal is included in the packets that carry data. Explicit
signaling can occur in either the forward or the backward direction. This type of congestion
control can be seen in an ATM network.
QoS (Quality of Service) refers to the set of performance characteristics that every flow
seeks to attain, such as smooth movement of its packets through the network without
congestion.
a. Scheduling
Packets from different flows arrive at a switch or router for processing. A good scheduling
technique treats the different flows in a fair and appropriate manner. Several scheduling
techniques are designed to improve the quality of service. We discuss three of them here:
FIFO queuing, priority queuing, and weighted fair queuing.
i. FIFO Queuing
In first-in, first-out (FIFO) queuing, packets wait in a buffer (queue) until the node (router or
switch) is ready to process them. If the average arrival rate is higher than the average
processing rate, the queue will fill up and new packets will be discarded. A FIFO queue is
familiar to those who have had to wait for a bus at a bus stop.
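The FIFO behaviour described above, including the discarding of new packets when the queue fills, can be sketched as follows; the capacity value is an assumption:

```python
from collections import deque

class FIFOQueue:
    """FIFO queue with tail drop: when the buffer is full, newly
    arriving packets are simply discarded."""

    def __init__(self, capacity: int):
        self.buf = deque()
        self.capacity = capacity
        self.dropped = 0

    def arrive(self, pkt):
        if len(self.buf) >= self.capacity:
            self.dropped += 1  # queue full: discard the new packet
        else:
            self.buf.append(pkt)

    def process(self):
        return self.buf.popleft() if self.buf else None

q = FIFOQueue(capacity=3)
for p in range(5):
    q.arrive(p)  # packets 3 and 4 arrive after the queue is full
```

This is exactly the situation where the average arrival rate exceeds the processing rate: the queue fills and later arrivals are lost.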
ii. Priority Queuing
In priority queuing, packets are first assigned to a priority class. Each priority class has its
own queue. The packets in the highest-priority queue are processed first; packets in the
lowest-priority queue are processed last. Note that the system does not stop serving a queue
until it is empty. Figure 4.32 shows priority queuing with two priority levels (for simplicity).
A priority queue can provide better QoS than a FIFO queue because higher-priority traffic,
such as multimedia, can reach the destination with less delay. However, there is a potential
drawback: if there is a continuous flow in a high-priority queue, the packets in the lower-
priority queues will never have a chance to be processed. This condition is called
starvation.
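Strict priority scheduling can be sketched as follows; the two-level setup mirrors the figure, while the packet labels are assumptions:

```python
from collections import deque

class PriorityScheduler:
    """Strict priority queuing: always drain the highest-priority
    non-empty queue first. A continuously busy high-priority queue
    therefore starves the lower priorities."""

    def __init__(self, levels: int):
        self.queues = [deque() for _ in range(levels)]  # index 0 = highest

    def arrive(self, pkt, level: int):
        self.queues[level].append(pkt)

    def process(self):
        for q in self.queues:
            if q:
                return q.popleft()  # serve the first non-empty queue
        return None

s = PriorityScheduler(levels=2)
s.arrive("video", level=0)
s.arrive("mail", level=1)
s.arrive("voice", level=0)
order = [s.process() for _ in range(3)]
```

Both high-priority packets leave before the low-priority one; if high-priority packets kept arriving, `"mail"` would wait forever, which is the starvation condition described above.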
b. Traffic Shaping
Traffic shaping is a mechanism to control the amount and the rate of the traffic sent to the
network. Two techniques can shape traffic: leaky bucket and token bucket
i. Leaky Bucket
If a bucket has a small hole at the bottom, the water leaks from the bucket at a constant rate
as long as there is water in the bucket. The rate at which the water leaks does not depend on
the rate at which the water is input to the bucket unless the bucket is empty. The input rate
can vary, but the output rate remains constant. Similarly, in networking, a technique called
leaky bucket can smooth out bursty traffic. Bursty chunks are stored in the bucket and sent
out at an average rate. Figure 4.34 shows a leaky bucket and its effects.
In the figure, we assume that the network has committed a bandwidth of 3 Mbps for a host.
The use of the leaky bucket shapes the input traffic to make it conform to this commitment.
In Figure 4.34 the host sends a burst of data at a rate of 12 Mbps for 2 s, for a total of 24 Mbits
of data. The host is silent for 5 s and then sends data at a rate of 2 Mbps for 3 s, for a total of 6
Mbits of data. In all, the host has sent 30 Mbits of data in 10 s. The leaky bucket smooths the
traffic by sending out data at a rate of 3 Mbps during the same 10 s.
A simple leaky bucket implementation is shown in Figure 4.35. A FIFO queue holds the
packets. If the traffic consists of fixed-size packets (e.g., cells in ATM networks), the process
removes a fixed number of packets from the queue at each tick of the clock. If the traffic
consists of variable-length packets, the fixed output rate must be based on the number of
bytes or bits.
· At the tick of the clock, initialize a counter to n.
· If n is greater than the size of the packet at the front of the queue, send the packet and
decrement the counter by the packet size. Repeat this step until n is smaller than the
packet size.
· Reset the counter and go back to the first step at the next tick.
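One clock tick of the variable-length leaky bucket can be sketched as follows; the queue holds packet sizes in bytes, and the particular sizes and the value of n are assumptions:

```python
from collections import deque

def leaky_bucket_tick(queue: deque, n: int) -> list:
    """One tick of the variable-length leaky bucket: initialize a
    counter to n, then send whole packets from the front of the queue
    while the counter still covers their size."""
    sent = []
    counter = n
    while queue and queue[0] <= counter:
        size = queue.popleft()
        counter -= size        # decrement counter by the packet size
        sent.append(size)
    return sent

q = deque([200, 400, 500, 300])     # queued packet sizes (bytes)
tick1 = leaky_bucket_tick(q, n=1000)  # 200 and 400 fit; 500 must wait
tick2 = leaky_bucket_tick(q, n=1000)  # 500 and 300 go out next tick
```

Whatever the arrival pattern, at most n bytes leave per tick, which is what makes the output rate constant.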
ii. Token Bucket
The leaky bucket does not credit an idle host; the token bucket algorithm lets an idle host
accumulate credit in the form of tokens. The token bucket can easily be implemented with a
counter. The counter is initialized to zero. Each time a token is added, the counter is
incremented by 1. Each time a unit of data is sent, the counter is decremented by 1. When
the counter is zero, the host cannot send data.
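The counter-based token bucket can be sketched as follows; the bucket capacity and the tick/unit sizes are assumptions for illustration:

```python
class TokenBucket:
    """Counter-based token bucket: tokens accumulate while the host
    is idle; each unit of data sent consumes one token."""

    def __init__(self, capacity: int):
        self.capacity = capacity  # maximum credit the bucket can hold
        self.tokens = 0           # counter initialized to zero

    def tick(self):
        if self.tokens < self.capacity:
            self.tokens += 1      # one token added per clock tick

    def send(self, units: int) -> int:
        """Try to send `units` of data; return how much actually went out."""
        sent = min(units, self.tokens)
        self.tokens -= sent       # counter at zero: host cannot send
        return sent

tb = TokenBucket(capacity=10)
for _ in range(5):   # host idle for 5 ticks, earning 5 tokens of credit
    tb.tick()
burst = tb.send(8)   # only 5 units can leave; the rest must wait
```

Unlike the leaky bucket, the idle period is rewarded: the host may later burst by as much credit as it has saved up.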
The two techniques can be combined to credit an idle host and at the same time regulate the
traffic. The leaky bucket is applied after the token bucket; the rate of the leaky bucket needs
to be higher than the rate of tokens dropped in the bucket.
c. Resource Reservation
A flow of data needs resources such as a buffer, bandwidth, CPU time, and so on. The
quality of service is improved if these resources are reserved beforehand. We discuss in this
section one QoS model called Integrated Services, which depends heavily on resource
reservation to improve the quality of service.
d. Admission Control
Admission control refers to the mechanism used by a router, or a switch, to accept or reject a
flow based on predefined parameters called flow specifications. Before a router accepts a
flow for processing, it checks the flow specifications to see if its capacity (in terms of
bandwidth, buffer size, CPU speed, etc.) and its previous commitments to other flows can
handle the new flow.
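The admission check can be sketched as a comparison of the new flow specification against the remaining capacity; the resource names (`bandwidth`, `buffer`) and the dictionary layout are assumptions for illustration:

```python
def admit(flow_spec: dict, capacity: dict, committed: list) -> bool:
    """Accept a new flow only if the capacity left after previous
    commitments still covers its flow specification."""
    for resource, available in capacity.items():
        used = sum(f.get(resource, 0) for f in committed)
        if used + flow_spec.get(resource, 0) > available:
            return False  # would exceed capacity: reject the flow
    return True

capacity = {"bandwidth": 100, "buffer": 50}       # router's resources
committed = [{"bandwidth": 60, "buffer": 20}]     # flows already accepted
ok = admit({"bandwidth": 30, "buffer": 10}, capacity, committed)
too_big = admit({"bandwidth": 50, "buffer": 10}, capacity, committed)
```

The first flow fits within the remaining 40 Mbps of bandwidth and is accepted; the second would push the total past the router's capacity and is rejected.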