
UNIT – 4 DATA LINK LAYER AND MULTIPLE ACCESS PROTOCOLS

STRUCTURE

4.0 Learning objectives

4.1 Introduction

4.2 Error detection and error correction techniques;

4.3 Data-Link Control

4.4 Framing And Flow Control

4.5 Error Recovery Protocols

4.5.1 Stop and wait ARQ

4.5.2 Go-back-n ARQ

4.5.3 Point to Point Protocol on Internet

4.6 Routing

4.6.1 Routing algorithms;

4.7 Network Layer Protocol Of Internet

4.7.1 IP protocol

4.8 Internet control protocols

4.9 Summary

4.10 Keywords

4.11 Learning Activity

4.12 Unit End Questions

4.13 References

4.0 LEARNING OBJECTIVES

After studying this unit, you will be able to:

 Understand the Error detection and error correction techniques;

 Know about data-link control and framing and flow control

 Understand error recovery protocols, stop and wait ARQ, go-back-n ARQ and Point
to Point Protocol on Internet

 Understand the Routing and various routing algorithms;

4.1 INTRODUCTION

Error detection and correction techniques are implemented either at the data link layer or at the transport layer of the OSI model.

Data can be corrupted during transmission. For reliable communication, errors must be detected and corrected. Corruption may take several forms:

 Bits lost

 Bits changed

 Bits added

Types of Error

 Single bit errors

 Burst errors

In information theory and coding theory, with applications in computer science and telecommunication, error detection and correction are techniques that enable reliable delivery of digital data over unreliable communication channels.

SINGLE BIT ERROR – The term single bit error means that only 1 bit of a given data unit
(such as a byte, character, data unit, or packet) is changed from 1 to 0 or from 0 to 1.

Fig 4.1 Single Bit Error

BURST ERROR – The term burst error means two or more bits within the data unit have
changed from 1 to 0 or from 0 to 1.

Fig 4.2 Burst Error

4.2 ERROR DETECTION AND ERROR CORRECTION TECHNIQUES

ERROR DETECTION

 Error detection techniques allow the destination to detect errors.

 Sometimes undetected errors will still remain but the goal is to minimize these errors

ERROR DETECTION

 To detect and correct errors, sufficient redundancy bits need to be sent with the data.

 Redundancy bits are extra bits sent by the source along with the data so that the destination can verify the data it receives.

ERROR DETECTION TECHNIQUES

 Parity Check

 Cyclic Redundancy Check (based on binary division)

 Checksum

 Hamming Distance Check
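
The parity check is the simplest of these techniques. As a rough illustration (our own Python sketch, not part of the original material), an even-parity bit can be computed and verified as follows:

# Illustrative sketch (assumed): even-parity bit for single-bit error detection
def add_even_parity(bits):
    # Append a parity bit so that the total number of 1s is even
    return bits + [sum(bits) % 2]

def check_even_parity(codeword):
    # True means no error detected; any single-bit error makes this False
    return sum(codeword) % 2 == 0

data = [1, 0, 1, 1, 0, 0, 1]
codeword = add_even_parity(data)
print(check_even_parity(codeword))    # True: no error detected
codeword[2] ^= 1                      # introduce a single-bit error
print(check_even_parity(codeword))    # False: error detected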

ERROR CORRECTION

 Backward Error Correction

 Forward Error Correction

BACKWARD ERROR CORRECTION

 Known as Automatic Repeat Request (ARQ)

 The receiver device sends a request to the source device to re-send the data after
detecting the error or errors

 More often used because it requires less bandwidth

 A return channel is required for backward error correction

Backward Error Correction

 There are two ways to overcome the errors

1 Positive acknowledgment

The receiver returns a confirmation for every block received correctly. The transmitter re-sends any block that is not acknowledged.

2 Negative acknowledgment

The receiver returns a request to retransmit only the data received with errors.

FORWARD ERROR CORRECTION

 This technique allows the receiver to detect and correct errors without asking the sender for a retransmission

 The bandwidth requirement is higher, but a return channel is not needed

 The redundant data sent by the transmitter is also called an error-correction code

Forward error correction

 Redundancy bits are added to the transmitted information using a predetermined algorithm

 Each redundancy bit is typically a function of many bits of the original data; the code can also be non-systematic

Fig 4.3 Example of Forward Error Correction

FORWARD ERROR CORRECTION

There are two main categories:

1. Block Coding: Reed-Solomon coding, Hamming codes, binary BCH codes

2. Convolutional Coding: decoded using the Viterbi algorithm

Block coding works on fixed-size blocks (packets) of bits. The most commonly used block code is Reed-Solomon.

1. BLOCK CODING: REED-SOLOMON CODING

 A Reed-Solomon code is specified as RS(n,k) with s-bit symbols

 This means that the encoder takes k data symbols of s bits each and adds parity symbols to make an n-symbol codeword

 There are n-k parity symbols of s bits each. A Reed-Solomon decoder can correct up to t symbols that contain errors in a codeword, where 2t = n-k.

EXAMPLE OF REED-SOLOMON

 Example: A popular Reed-Solomon code is RS(255,223) with 8-bit symbols. Each codeword contains 255 bytes, of which 223 bytes are data and 32 bytes are parity. For this code:

 n = 255, k = 223, s = 8

 2t = 32, t = 16

 The decoder can correct any 16 symbol errors within the code word
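
The relationship between n, k and t in this example can be checked with a few lines of arithmetic; the snippet below is only an illustrative sketch using the numbers quoted above:

# Illustrative check of the RS(255,223) parameters quoted above
n, k, s = 255, 223, 8
parity_symbols = n - k          # 32 parity bytes per codeword
t = parity_symbols // 2         # since 2t = n - k, t = 16
print(parity_symbols, t)        # 32 16 -> up to 16 symbol errors correctable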

2. CONVOLUTIONAL CODING

Convolutional codes work on bit streams. If desired, a convolutional code can be turned into a block code.

The most widely used decoding algorithm is the Viterbi algorithm.

 A Viterbi decoder examines a whole received data sequence of a given length at a time, then computes a metric for each path and makes a decision based on this metric

 One of the common metrics used by the Viterbi algorithm for path comparison is the Hamming distance, which is a bit-wise comparison between the received codeword and an allowable codeword
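
A bit-wise Hamming-distance comparison of the kind mentioned above can be sketched as follows (an illustrative Python fragment, not the Viterbi decoder itself):

# Illustrative sketch (assumed): bit-wise Hamming distance between two codewords
def hamming_distance(received, allowed):
    # Count the positions in which the two equal-length bit strings differ
    return sum(a != b for a, b in zip(received, allowed))

print(hamming_distance("1011101", "1001001"))   # 2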

4.3 DATA-LINK CONTROL

This section covers one of the key concepts of the data link layer: Data Link Control.

There are two main functions of the Data link layer and these are Data Link Control and
Media Access control. Data link control mainly deals with the design and procedure of
communication between two adjacent nodes: node-to-node communication.

Media access control is another main function of the Data Link layer which mainly
specifies how the link is shared.

Let us first cover data link control here; media access control is taken up separately.

Functions in Data Link Control

The functions included in data link control are:

 Framing

 Flow and Error Control

 Software-implemented protocols (that provide smooth and reliable transmission of frames between nodes)

4.4 FRAMING AND FLOW CONTROL

Framing

In the physical layer, data transmission means moving bits, in the form of a signal, from the source to the destination. The physical layer also provides synchronization, which mainly ensures that the sender and the receiver use the same bit durations and timing.

The data link layer packs the bits into frames so that each frame is distinguishable from another.

Framing in the data link layer separates a message going from one source to one destination from other messages going to other destinations simply by adding a sender address and a destination address: the destination address specifies where the frame has to go, and the sender address helps the recipient acknowledge the receipt.

Frames can be either of fixed size or of variable size. By using frames, the data is broken up into recoverable chunks, and each chunk can easily be checked for corruption during transmission.

Problems in Framing

Given below are some of the problems caused due to the framing:

1. Detecting the start of the frame: Whenever a frame is transmitted, every station must be able to detect it. A station detects the frame by looking for a special sequence of bits marked at the beginning of the frame, the Starting Frame Delimiter (SFD).

2. How a station detects a frame: Every station in the network listens on the link for the SFD pattern using a sequential circuit. If an SFD is detected, the sequential circuit alerts the station. The station then checks the destination address in order to accept or reject the frame.

3. Detecting the end of the frame: that is, knowing when to stop reading the frame.

Parts of a frame

Different parts of a frame are as follows:

Fig 4.4 Different parts of a frame

1. Flag: A flag is used to mark the beginning and the end of the frame.

2. Header: The frame header mainly contains the address of the source and the destination of the frame.

3. Trailer: The frame trailer mainly contains the error detection and error correction bits.

4. Payload field: This field contains the message to be delivered.

Types of Framing

Framing is mainly categorized into two parts:

 Fixed-size Framing

 Variable-size Framing

Let us cover these two types one by one.

Fixed-size framing

In fixed-size framing, there is no need to define the boundaries of the frame; the size or length of the frame itself can be used as the delimiter.

One drawback of fixed-size framing is that it suffers from internal fragmentation if the size of the data is less than the size of the frame.

Variable-size framing

In variable-size framing, the size of each frame is different. Thus there is a need for a way to define the end of one frame and the beginning of the next.

There are two approaches used for Variable-size framing:

Fig 4.5 Variable-size framing

Character-Oriented Protocols

In the Character-Oriented protocol, data to be carried are 8-bit characters from a coding
system such as ASCII.

Now the parts of the frame in Character-Oriented Framing are as follows:

1. Frame Header

The header of the frame contains the address of the source and destination in the form of
bytes.

2. Payload Field

The Payload field mainly contains the message that is to be delivered. In this case, it is a
variable sequence of data bytes.

3. Frame Trailer

The trailer of the frame contains the bytes for error correction and error detection.

4. Flag

In order to separate one frame from the next, an 8-bit (1-byte) flag is added at the beginning and the end of the frame.

The frame in a character-oriented protocol therefore consists of a flag, a header, the payload, a trailer, and a closing flag.

This technique was popular when the data exchanged by the data link layers was in the form of text. The flag selected could be any character not used for text communication. But there is also a need to send other types of information such as graphics, audio, and video.

Now any pattern used for the flag could also appear in the information itself. If this happens, the receiver encounters this pattern in the middle of the data and thinks that it has reached the end of the frame.

In order to fix the above problem, the byte-stuffing strategy was added to the character-
oriented framing.

Byte-stuffing

It is a process of adding 1 special byte whenever there is a character with the same pattern as
the flag.

 The data section is stuffed with an extra byte. This byte is usually called the escape character (ESC) and has a predefined bit pattern.

 Whenever the receiver encounters an ESC character, it removes it from the data section and treats the next character as data.

Byte stuffing and unstuffing

Fig 4.6 Byte stuffing and unstuffing
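
A minimal sketch of byte stuffing and unstuffing as described above is given below; the FLAG and ESC byte values are assumptions chosen for illustration, not mandated by the text:

# Illustrative sketch (assumed): byte stuffing and unstuffing
FLAG, ESC = 0x7E, 0x7D          # placeholder flag and escape byte values

def byte_stuff(payload):
    out = []
    for b in payload:
        if b in (FLAG, ESC):    # a flag-like or ESC-like byte inside the data
            out.append(ESC)     # stuff an extra escape byte in front of it
        out.append(b)
    return out

def byte_unstuff(stuffed):
    out, escaped = [], False
    for b in stuffed:
        if not escaped and b == ESC:   # drop the ESC, keep the next byte as data
            escaped = True
            continue
        out.append(b)
        escaped = False
    return out

data = [0x01, 0x7E, 0x02, 0x7D, 0x03]
assert byte_unstuff(byte_stuff(data)) == data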

One disadvantage of character-oriented framing is that the stuffing adds overhead to the message, so the total size of the frame increases. Another drawback is that current coding systems use 16-bit or 32-bit characters, which conflict with the 8-bit characters this framing assumes.

Bit-Oriented Protocols

In bit-oriented framing, the data section of the frame is a sequence of bits to be interpreted by the upper layer as text, graphics, audio, video, etc.

In this, there is also a need for a delimiter in order to separate one frame from the other.

The frame in a bit-oriented protocol has the same overall structure, with the special 8-bit pattern 01111110 used as the flag.

Bit-Stuffing

The process by which an extra 0 is added whenever five consecutive 1s follow a 0 in the data
so that the receiver does not mistake the pattern 01111110 for a flag is commonly referred to
as Bit stuffing.

Fig 4.7 Bit-Stuffing

The above figure shows bit stuffing at the sender and bit removal at the receiver. It is important to note that even if the bit following five consecutive 1s is a 0, we still stuff a 0. The stuffed 0 is removed by the receiver.

This simply means that whenever the flag-like pattern 01111110 appears in the data, it is changed to 011111010 (stuffed), so it is not mistaken for a flag by the receiver.

The real flag 01111110 is not stuffed by the sender and thus is recognized by the receiver.
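
The bit-stuffing rule can be sketched as follows (our own Python illustration, operating on a string of '0'/'1' characters for clarity):

# Illustrative sketch (assumed): stuff a 0 after every five consecutive 1s
def bit_stuff(bits):
    out, ones = "", 0
    for b in bits:
        out += b
        ones = ones + 1 if b == "1" else 0
        if ones == 5:          # five 1s in a row: stuff a 0, whatever bit comes next
            out += "0"
            ones = 0
    return out

def bit_unstuff(bits):
    out, ones, i = "", 0, 0
    while i < len(bits):
        out += bits[i]
        ones = ones + 1 if bits[i] == "1" else 0
        if ones == 5:          # the bit after five 1s is a stuffed 0: skip it
            i += 1
            ones = 0
        i += 1
    return out

assert bit_stuff("01111110") == "011111010"    # matches the stuffed pattern above
assert bit_unstuff("011111010") == "01111110"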

Flow Control

Flow control mainly coordinates the amount of data that can be sent before receiving an acknowledgment from the receiver, and it is one of the major duties of the data link layer.

 For most of the protocols, flow control is a set of procedures that mainly tells the
sender how much data the sender can send before it must wait for an
acknowledgment from the receiver.

 The data flow must not be allowed to overwhelm the receiver, because any receiving device has a limited speed at which it can process incoming data and a limited amount of memory to store it.

 The processing rate is often slower than the transmission rate; for this reason each receiving device has a block of memory, commonly known as a buffer, that is used to store incoming data until it is processed. If the buffer begins to fill up, the receiver must be able to tell the sender to halt transmission until the receiver is once again able to receive.

Thus flow control makes the sender wait for an acknowledgment from the receiver before continuing to send more data.

Two common flow control techniques are the stop-and-wait and sliding window techniques.

Another important design issue that occurs in the data link layer is what to do with a sender
that systematically wants to transmit frames faster than the receiver can accept them. This
situation can easily occur when the sender is running on a fast computer and the receiver is
running on a slow machine. The sender keeps pumping the frames out at a high rate until the
receiver is completely swamped. Even if the transmission is error free, at a certain point the
receiver will simply be unable to handle the frames as they arrive and will start to lose some.
Clearly, something has to be done to prevent this situation. Two approaches are commonly
used. In the first one, feedback-based flow control, the receiver sends back information to the
sender giving it permission to send more data or at least telling the sender how the receiver is
doing. In the second one, rate-based flow control, the protocol has a built-in mechanism that
limits the rate at which senders may transmit data, without using feedback from the receiver

4.5 ERROR RECOVERY PROTOCOLS

Protocols

Protocols are mainly implemented in software using one of the common programming languages. Protocols can be classified on the basis of where they are used.

Protocols can be designed for noiseless channels (that is, error-free channels) or for noisy channels (that is, error-creating channels). The protocols for noiseless channels cannot really be used in real life; they mainly serve as the basis for the protocols used on noisy channels.

Fig 4.8 Protocols

All the above protocols are unidirectional in the sense that the data frames travel from one node, the sender, to the other node, the receiver.

Special frames called acknowledgment (ACK) and negative acknowledgment (NAK) can flow in the opposite direction for flow and error control purposes, while the data flows in only one direction.

But in a real-life network, data link layer protocols are implemented as bidirectional, which means data flows in both directions. In these protocols, flow control and error control information such as ACKs and NAKs is included in the data frames, a technique commonly known as piggybacking.

Also, bidirectional protocols are more complex than unidirectional protocols.

The following sections cover these protocols in detail.

Simplest Protocol

The Simplest Protocol lies under the category of noiseless-channel protocols in the data link layer.

The Simplest Protocol has neither flow control nor error control (as noted, it falls under the noiseless-channel category).

 The Simplest Protocol is basically a unidirectional protocol in which data frames travel in only one direction: from the sender to the receiver.

 In this protocol, the receiver can immediately handle any frame it receives; the processing time is small enough to be considered negligible.

 Basically, the data link layer of the receiver immediately removes the header from the frame and then hands over the data packet to the network layer, which also accepts the packet immediately.

 We can also say that in the case of this protocol the receiver never gets overwhelmed
with the incoming frames from the sender.

Design of the Simplest Protocol

Flow control is not needed by the Simplest Protocol. The data link layer at the sender side gets the data from the network layer, makes a frame out of it, and sends it. At the receiver site, the data link layer receives the frame from the physical layer, extracts the data from the frame, and delivers the data to its network layer.

Fig 4.9 Data Frame

The data link layers of both sender and receiver mainly provide transmission services for their network layers. The data link layer also uses the services provided by the physical layer, such as signaling and multiplexing, for the physical transmission of the bits.

The procedure used by the data link layer

Let us now take a look at the procedure used by the data link layer at both sides(sender as
well as the receiver).

 No frame is sent by the data link layer of the sender site until its network layer has a data packet to send.

 Similarly, the receiver site cannot deliver a data packet to its network layer until a
frame arrives.

 If the protocol is implemented as a procedure, then there is a need to introduce the idea of events in the protocol.

 The procedure at the sender site runs constantly; there is no action until there is a
request from the network layer.

 Also, the procedure at the receiver site runs constantly; there is no action until there is
a notification from the physical layer.

 Both procedures run continuously because neither of them knows when the corresponding event will occur.

Let us take a look at the algorithm used at the sender's site:

while(true)                        // Repeat forever
{
    WaitForEvent();                // Sleep until there is an occurrence of an event
    if(Event(RequestToSend))       // There is a packet to send
    {
        GetData();
        MakeFrame();
        SendFrame();               // Send the frame
    }
}

Given below is the algorithm used at the receiver's site:

while(true)                        // Repeat forever
{
    WaitForEvent();                // Sleep until there is an occurrence of an event
    if(Event(ArrivalNotification)) // A data frame has arrived
    {
        ReceiveFrame();
        ExtractData();
        DeliverData();             // Deliver the data to the network layer
    }
}

Flow Diagram for Simplest Protocol

Using the simplest protocol the sender A sends a sequence of frames without even thinking
about receiver B.

Fig 4.10 Simplest Protocol

In order to send the three frames, there will be an occurrence of three events at sender A and
three events at the receiver B.

It is important to note that in the above figure the data frames are shown with the help of
boxes.

The height of the box mainly indicates the transmission time difference between the first bit
and the last bit of the frame.

4.5.1 Stop and wait ARQ

The Stop-and-Wait Protocol is used in the data link layer for transmission over noiseless channels. Let us first understand why there is a need for this protocol; then we will cover it in detail.

We studied the Simplest Protocol above. Now suppose there is a scenario in which data frames arrive at the receiver's site faster than they can be processed, that is, the rate of transmission is higher than the processing rate of the frames. It is also normal that the receiver does not have enough space and that data is coming from multiple sources. Due to all this, frames may be discarded or service may be denied.

In order to prevent the receiver from being overwhelmed, there is a need to tell the sender to slow down the transmission of frames. We can make use of feedback from the receiver to the sender.

We now cover the concept of the Stop-and-Wait Protocol.

As the name suggests, when this protocol is used, the sender sends one frame and then stops until it receives confirmation from the receiver; only after receiving the confirmation does it send the next frame.

 Communication for the data frames is unidirectional, but the acknowledgment (ACK) frames travel in the other direction. Thus flow control is added here.

 Thus stop-and-wait is a flow control protocol that makes use of the flow control service provided by the data link layer.

 For every frame sent, an acknowledgment is needed, and it takes the same amount of propagation time to get back to the sender.

 In order to end the transmission, the sender transmits an end-of-transmission (EOT) frame.

Design of the Stop-and-Wait protocol

The data link layer at the sender side waits for its network layer to hand it a data packet. The data link layer then checks whether it can send the frame. On receiving a positive notification from the physical layer, it makes a frame out of the data provided by the network layer and sends it to the physical layer. After sending the data, it waits for the acknowledgment before sending the next frame.

The data link layer on the receiver side waits for a frame to arrive. When the frame arrives, the receiver processes it and delivers it to the network layer. After that, it sends an acknowledgment (ACK) frame back to the sender.

Fig 4.11 Stop And Wait Protocol

The algorithm used at the sender site for the stop-and-wait protocol

This is the algorithm used at the sender site for the stop-and-wait protocol. Applications can implement it in their own programming language.

canSend = true                     // Allow the first frame to go
while(true)                        // Repeat forever
{
    WaitForEvent();                // Sleep until the occurrence of an event
    if(Event(RequestToSend) AND canSend)
    {
        GetData();
        MakeFrame();
        SendFrame();               // Send the data frame
        canSend = false;           // Cannot send until the acknowledgement arrives
    }
    WaitForEvent();                // Sleep until the occurrence of an event
    if(Event(ArrivalNotification)) // Indicates the arrival of the acknowledgement
    {
        ReceiveFrame();            // The ACK frame is received
        canSend = true;
    }
}

Algorithm At the Receiver Side

This is the algorithm used at the receiver side for the stop-and-wait protocol. Applications can implement it in their own programming language.

while(true)                        // Repeat forever
{
    WaitForEvent();                // Sleep until the occurrence of an event
    if(Event(ArrivalNotification)) // Indicates the arrival of a data frame
    {
        ReceiveFrame();
        ExtractData();
        Deliver(data);             // Deliver the data to the network layer
        SendFrame();               // Send the ACK frame
    }
}

Flow diagram of the stop-and-wait protocol

Given below is the flow diagram of the stop-and-wait protocol:

Fig 4.12 Flow Diagram

Advantages

One of the main advantages of the stop-and-wait protocol is the accuracy it provides: the next frame is transmitted only after the acknowledgment of the previous frame has been received, so there is little chance of data loss.

Disadvantages

Given below are some of the drawbacks of using the stop-and-wait Protocol:

 Using this protocol only one frame can be transmitted at a time.

 Suppose a frame sent by the sender gets lost during transmission; the receiver can then neither receive it nor send an acknowledgment back to the sender. Not receiving the acknowledgment, the sender will not send the next frame. Two situations then arise: the receiver waits an infinite amount of time for the data, and the sender waits an infinite amount of time to send the next frame.

 In the case of the transmission over a long distance, this is not suitable because the
propagation delay becomes much longer than the transmission delay.

 Suppose the sender sends the data and the receiver receives it. The receiver then sends the acknowledgment, but for some reason this acknowledgment reaches the sender after the timeout period. As this acknowledgment is received too late, it can wrongly be considered the acknowledgment of another data packet.

 The time spent waiting for the acknowledgment of each frame also adds to the total transmission time.
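
To see how this waiting time dominates on long links, a rough back-of-the-envelope calculation can be sketched in Python; all the numbers below are assumed purely for illustration:

# Rough illustration (all numbers assumed): time spent per frame in stop-and-wait
frame_bits  = 1000 * 8          # data frame size in bits
ack_bits    = 40 * 8            # ACK frame size in bits
bandwidth   = 1_000_000         # link rate in bits per second
prop_delay  = 0.020             # one-way propagation delay in seconds

t_frame = frame_bits / bandwidth                      # transmission time of the frame
t_ack   = ack_bits / bandwidth                        # transmission time of the ACK
cycle   = t_frame + prop_delay + t_ack + prop_delay   # one frame per full round trip
print(cycle, t_frame / cycle)   # cycle time and the fraction of it spent sending data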

4.5.2 Go-back-n ARQ

Go-Back-N ARQ is a specific instance of the Automatic Repeat Request (ARQ) protocol in which the sending process continues to send a number of frames, specified by the window size, without waiting for an acknowledgement (ACK) packet from the receiver.

The sender keeps a copy of each frame until its acknowledgement arrives.

This protocol is a practical approach to the sliding window.

 In Go-Back-N ARQ, the size of the sender window is N and the size of the receiver window is always 1.

 This protocol makes use of cumulative acknowledgements: the receiver maintains an acknowledgement timer; whenever the receiver receives a new frame from the sender, it starts a new acknowledgement timer. When the timer expires, the receiver sends a cumulative acknowledgement for all the frames that are unacknowledged at that moment.

 It is important to note that a new acknowledgement timer starts only after the receipt of a new frame; it does not start after the expiry of the old acknowledgement timer.

 If the receiver receives a corrupted frame, it silently discards it, and the correct frame is retransmitted by the sender after the timeout timer expires. By discarding silently we mean simply rejecting the frame and not taking any action on it.

 If, after the expiry of the acknowledgement timer, there is only one frame left to be acknowledged, the receiver sends an individual acknowledgement for that frame.

 If the receiver receives an out-of-order frame, it simply discards it and all the frames that follow until the expected frame arrives.

 If the sender does not receive any acknowledgement, then the entire window of frames is retransmitted.

 Using the Go-Back-N ARQ protocol leads to the retransmission of the lost frames
after the expiry of the timeout timer.

The Need for Go-Back-N ARQ

This protocol is used to send more than one frame at a time. With the help of Go-Back-N
ARQ, there is a reduction in the waiting time of the sender.

With the help of the Go-Back-N ARQ protocol, transmission efficiency increases.

Send (sliding) window for Go-Back-N ARQ

Basically, the range that concerns the sender is known as the send sliding window of Go-Back-N ARQ. It is an imaginary box that covers the sequence numbers of the data frames that can be in transit.

The maximum size of this imaginary box is 2^m − 1, and it has three variables: Sf (the first outstanding frame in the send window), Sn (the next frame to be sent), and Ssize (the size of the send window).

 The sender can transmit N frames before receiving the ACK frame.

 The size of the send sliding window is N.

 The copy of sent data is maintained in the sent buffer of the sender until all the sent
packets are acknowledged.

 If the timeout timer runs out then the sender will resend all the packets.

 Once the data get acknowledged by the receiver then that particular data will be
removed from the buffer.

Whenever a valid acknowledgement arrives, the send window can slide one or more slots.

Fig 4.13 Sliding

Sender Window Size

As already stated, the sender window size is N. The value of N must be greater than 1.

If the value of N is equal to 1, this protocol becomes the stop-and-wait protocol.

Receive (sliding) window for Go-Back-N ARQ

The range that concerns the receiver is called the receive sliding window.

 The receive window is mainly an abstract concept: an imaginary box whose size is 1 and which has a single variable, Rn.

 The window slides when a correct frame arrives, the sliding occurs one slot at a time.

 The receiver always looks for a specific frame to arrive in the specific order.

 Any frame that arrives out of order at the receiver side will be discarded and thus needs to be resent by the sender.

 If a frame arrives at the receiver safely and in the expected order, the receiver sends an ACK back to the sender.

 The silence of the receiver causes the timer of the unacknowledged frame to expire.

Fig 4.14 Sliding Window

Design of Go-Back-N ARQ

With Go-Back-N ARQ, multiple frames can be in transit in the forward direction and multiple acknowledgements can be in transit in the reverse direction. The idea of this protocol is similar to Stop-and-Wait ARQ, but with one difference: the window of Go-Back-N ARQ allows multiple frames to be in transit, as there are many slots in the send window.
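
A highly simplified toy simulation of this sender behaviour (assumed names and a toy "lossy channel"; not a real implementation) is sketched below. It shows how one lost frame causes the error-free frames that followed it to be retransmitted as well:

# Toy model (assumed): Go-Back-N sender; 'lost' holds (frame, attempt) pairs dropped
import collections

def go_back_n_send(num_frames, window_size, lost):
    wire, base = [], 0
    attempt = collections.defaultdict(int)
    while base < num_frames:
        first_missing = None
        for seq in range(base, min(base + window_size, num_frames)):
            wire.append(seq)                 # every frame in the window is transmitted
            attempt[seq] += 1
            if first_missing is None and (seq, attempt[seq]) in lost:
                first_missing = seq          # receiver discards this and all later frames
        # a cumulative ACK slides the window only up to the first missing frame
        base = first_missing if first_missing is not None else \
               min(base + window_size, num_frames)
    return wire

# frame 2 is lost once, so the error-free frame 3 goes back on the wire as well
print(go_back_n_send(num_frames=6, window_size=4, lost={(2, 1)}))
# -> [0, 1, 2, 3, 2, 3, 4, 5]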

Fig 4.15 GO-BACK-N ARQ

Window size for Go-Back-N ARQ

In Go-Back-N ARQ, the size of the send window must always be less than 2^m and the size of the receiver window is always 1.

Fig 4.16 Window size for Go-Back-N ARQ

Flow Diagram

Fig 4.17 Flow Control

Advantages

Given below are some of the benefits of using the Go-Back-N ARQ protocol:

 The efficiency of this protocol is higher.

 The waiting time is quite low in this protocol.

 With the help of this protocol, the timer can be set for many frames.

 Also, the sender can send many frames at a time.

 Only one ACK frame can acknowledge more than one frame.

Disadvantages

Given below are some drawbacks:

 The timeout timer is maintained only at the sender, for the oldest unacknowledged frame.

 The transmitter needs to store the last N packets.

 The retransmission of many error-free packets follows an erroneous packet.

4.5.3 Point to Point Protocol on Internet

PPP (Point-to-Point Protocol) is a protocol used at the data link layer. It is mainly used to establish a direct connection between two nodes.

The Point-To-Point protocol mainly provides connections over multiple links.

 This protocol defines how two devices can authenticate with each other.

 PPP protocol also defines the format of the frames that are to be exchanged between
the devices.

 This protocol also defines how the data of the network layer are encapsulated in the
data link frame.

 The PPP protocol defines how the two devices can negotiate the establishment of the
link and then can exchange the data.

 This protocol provides multiple services of the network layer and also supports
various network-layer protocols.

 This protocol also provides connection over multiple links.

Some services that are not offered by the PPP protocol are as follows:

1. This protocol does not provide a flow control mechanism: the sender can send any number of frames to the receiver, one after the other, without regard to overwhelming the receiver.

2. This protocol does not provide any mechanism for addressing in order to handle the
frames in the multipoint configuration.

3. The PPP protocol provides only a very simple mechanism for error control: a CRC field that detects errors. If a frame is corrupted, it is silently discarded.

In the PPP protocol, the framing is done using the byte-oriented technique.

PPP Frame Format

Given below figure shows the format of the PPP Frame:

Fig 4.18 PPP FORMAT

Let us discuss each field of the PPP frame format one by one:

1. Flag

The PPP frame mainly starts and ends with a 1-byte flag field that has the bit pattern:
01111110. It is important to note that this pattern is the same as the flag pattern used in
HDLC. But there is a difference too and that is PPP is a byte-oriented protocol whereas the
HDLC is a bit-oriented protocol.

2. Address

The value of this field in PPP protocol is constant and it is set to 11111111 which is a
broadcast address. The two parties can negotiate and can omit this byte.

3. Control

The value of this field is also constant: 11000000. As already noted, PPP does not provide flow control, and error control is limited to error detection. The two parties can negotiate to omit this byte.

4. Protocol

This field defines what is being carried in the data field. It can either be user information or
other information. By default, this field is 2 bytes long.

5. Payload field

This field carries the data from the network layer. The maximum length of this field is 1500
bytes. This can also be negotiated between the endpoints of communication.

6. FCS

It is simply a 2-byte or 4-byte standard CRC (cyclic redundancy check).
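
Putting the fields together, a PPP-style frame could be assembled roughly as sketched below. The CRC-32 call is only a stand-in for the real FCS computation, the protocol value 0x0021 is assumed here to mark IP data, and byte stuffing of the payload is omitted:

# Illustrative sketch (assumed): assembling a PPP-style frame; stuffing omitted,
# zlib.crc32 is only a stand-in for the real FCS polynomial
import struct, zlib

FLAG, ADDRESS, CONTROL = 0x7E, 0xFF, 0x03   # constant field values (assumed byte encodings)

def build_ppp_frame(protocol, payload):
    body = bytes([ADDRESS, CONTROL]) + struct.pack("!H", protocol) + payload
    fcs = struct.pack("!I", zlib.crc32(body))          # 4-byte checksum field
    return bytes([FLAG]) + body + fcs + bytes([FLAG])

frame = build_ppp_frame(0x0021, b"hello")   # 0x0021 assumed here to mean IP data
print(frame.hex())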

Byte Stuffing in PPP

As noted, a major difference between PPP and HDLC is that PPP is a byte-oriented protocol. This means that the flag in PPP is a byte and needs to be escaped wherever it appears in the data section of the frame.

The escape byte is 01111101, which means that whenever the flag-like pattern appears in the data, this extra byte is stuffed to tell the receiver that the next byte is not a flag.

Transition Phases in the PPP Protocol

The PPP protocol has to go through various phases and these are shown in the diagram given
below;

Fig 4.19 Transition Phases

Dead

In this phase, the link is not being used. There is no active carrier at the physical layer and the line is simply quiet.

Establish

If one of the nodes starts the communication, the connection goes into the establish phase. In this phase, options are negotiated between the two parties. If the negotiation is successful, the system goes into the authenticate phase (if authentication is required); otherwise it goes into the network phase.

Several packets are exchanged here.

Authenticate

This is an optional phase. During the establish phase, the two nodes may decide whether or not to skip this phase. If the two nodes decide to proceed with authentication, they exchange several authentication packets.

If the result is successful, the connection goes into the network phase; otherwise it goes into the terminate phase.

Network

In this phase, negotiation of the network layer protocols takes place. PPP specifies that the two nodes establish a network-layer agreement before network-layer data can be exchanged. The reason is that PPP supports multiple protocols at the network layer.

If a node is running multiple network-layer protocols simultaneously, the receiving node needs to know which protocol will receive the data.

Open

In this phase the transfer of data takes place. When a connection reaches this phase, the exchange of data packets can start. The connection remains in this phase until one of the endpoints terminates the connection.

Terminate

In this phase, the connection is terminated. Several packets are exchanged between the two ends for housekeeping, and then the link is closed.

Components of PPP/ PPP stack

Basically, PPP is a layered protocol. There are three components of the PPP protocol and
these are as follows:

 Link Control Protocol

 Authentication Protocol

 Network Control Protocol

Fig 4.20 Network Layer

Link Control protocol

This protocol is mainly responsible for establishing, maintaining, configuring, and terminating the links. The Link Control Protocol provides the negotiation mechanism in order to set the options between the two endpoints.

Both endpoints of the link must reach an agreement about the options before the link can be established.

Authentication protocol

This protocol plays a very important role in the PPP protocol because the PPP is designed for
use over the dial-up links where the verification of user identity is necessary. Thus this
protocol is mainly used to authenticate the endpoints for the use of other services.

There are two protocols for authentication:

 Password Authentication Protocol

 Challenge Handshake Authentication Protocol

Network Control Protocol

The Network Control Protocol is mainly used for negotiating the parameters and facilities for
the network layer.

For every higher-layer protocol supported by PPP, there is one Network Control Protocol.

Some of the Network Control Protocols of PPP are as follows:

1. Internet Protocol Control Protocol (IPCP)

2. Internetwork Packet Exchange Control Protocol (IPXCP)

3. DECnet Phase IV Control Protocol (DNCP)

4. NetBIOS Frames Control Protocol (NBFCP)

5. IPv6 Control Protocol (IPV6CP)

4.6 ROUTING

Routing is the process of forwarding a packet in a network so that it reaches its intended destination. The main goals of routing are:

1. Correctness: The routing should be done properly and correctly so that the packets
may reach their proper destination.

2. Simplicity: The routing should be done in a simple manner so that the overhead is as
low as possible. With increasing complexity of the routing algorithms the overhead
also increases.

3. Robustness: Once a major network becomes operative, it may be expected to run


continuously for years without any failures. The algorithms designed for routing
should be robust enough to handle hardware and software failures and should be able
to cope with changes in the topology and traffic without requiring all jobs in all hosts
to be aborted and the network rebooted every time some router goes down.

4. Stability: The routing algorithms should be stable under all possible circumstances.

5. Fairness: Every node connected to the network should get a fair chance of transmitting its packets. This is generally done on a first-come, first-served basis.

6. Optimality: The routing algorithms should be optimal in terms of throughput and minimizing mean packet delays. Here there is a trade-off, and one has to choose depending on the requirements of the application.

Classification of Routing Algorithms

The routing algorithms may be classified as follows:

1. Adaptive Routing Algorithms: These algorithms change their routing decisions to reflect changes in the topology and in the traffic as well. They get their routing information from adjacent routers or from all routers. The optimization parameters are the distance, the number of hops, and the estimated transit time. These algorithms can be further classified as follows:

1. Centralized: In this type some central node in the network gets entire
information about the network topology, about the traffic and about other
nodes. This then transmits this information to the respective routers. The
advantage of this is that only one node is required to keep the information. The
disadvantage is that if the central node goes down the entire network is down,
i.e. single point of failure.

2. Isolated: In this method the node decides the routing without seeking information from other nodes. The sending node does not know about the status of a particular link. The disadvantage is that the packet may be sent through a congested route, resulting in a delay. Some examples of this type of routing algorithm are:

 Hot Potato: When a packet comes to a node, the node tries to get rid of it as fast as it can by putting it on the shortest output queue, without regard to where that link leads. A variation of this algorithm is to combine static routing with the hot potato algorithm: when a packet arrives, the routing algorithm takes into account both the static weights of the links and the queue lengths.

 Backward Learning: In this method the routing table at each node gets modified by information from incoming packets. One way to implement backward learning is to include the identity of the source node in each packet, together with a hop counter that is incremented on each hop. When a node receives a packet on a particular line, it notes down the number of hops the packet has taken to reach it from the source node. If the previous value of the hop count stored in the node is better than the current one, nothing is done, but if the current value is better, the value is updated for future use. The problem with this is that when the best route goes down, the node cannot recall the second best route to a particular node. Hence all the nodes have to forget the stored information periodically and start all over again.

3. Distributed: In this method the node receives information from its neighbouring nodes and then decides which way to send the packet. The disadvantage is that if something changes in the interval between receiving the information and sending the packet, the packet may be delayed.

2. Non-Adaptive Routing Algorithms: These algorithms do not base their routing decisions on measurements or estimates of the current traffic and topology. Instead, the route to be taken in going from one node to the other is computed in advance, off-line, and downloaded to the routers when the network is booted. This is also known as static routing. It can be further classified as:

1. Flooding: In flooding, every incoming packet is sent out on every outgoing line except the one on which it arrived. One problem with this method is that packets may go around in a loop; as a result, a node may receive several copies of a particular packet, which is undesirable. Some techniques adopted to overcome these problems are as follows:

 Sequence Numbers: Every packet is given a sequence number. When a node receives a packet, it looks at its source address and sequence number. If the node finds that it has already forwarded the same packet earlier, it does not transmit the packet again and just discards it.

 Hop Count: Every packet has a hop count associated with it. This is decremented (or incremented) by one by each node which sees it. When the hop count becomes zero (or a maximum possible value) the packet is dropped.

 Spanning Tree: The packet is sent only on those links that lead to the destination, by constructing a spanning tree rooted at the source. This avoids loops in transmission, but it is possible only when all the intermediate nodes have knowledge of the network topology.

Flooding is not practical for general kinds of applications. But in cases where a high degree of robustness is desired, such as in military applications, flooding is of great help.

2. Random Walk: In this method a packet is sent by the node to one of its
neighbours randomly. This algorithm is highly robust. When the network is
highly interconnected, this algorithm has the property of making excellent use
of alternative routes. It is usually implemented by sending the packet onto the
least queued link.

Delta Routing

Delta routing is a hybrid of the centralized and isolated routing algorithms. Here each node computes the cost of each line (i.e. some function of the delay, queue length, utilization, bandwidth, etc.) and periodically sends a packet to the central node giving it these values; the central node then computes the k best paths from node i to node j. Let Cij1 be the cost of the best i-j path, Cij2 the cost of the next best path, and so on. If Cijn - Cij1 < delta (where Cijn is the cost of the n'th best i-j path and delta is some constant), then path n is regarded as equivalent to the best i-j path, since their costs differ by so little. When delta -> 0 this algorithm becomes centralized routing, and when delta -> infinity all the paths become equivalent.

Multipath Routing

In the above algorithms it has been assumed that there is a single best path between any pair
of nodes and that all traffic between them should use it. In many networks however there are
several paths between pairs of nodes that are almost equally good. Sometimes, in order to improve performance, multiple paths between a single pair of nodes are used. This
technique is called multipath routing or bifurcated routing. In this each node maintains a table
with one row for each possible destination node. A row gives the best, second best, third best,
etc outgoing line for that destination, together with a relative weight. Before forwarding a
packet, the node generates a random number and then chooses among the alternatives, using
the weights as probabilities. The tables are worked out manually and loaded into the nodes
before the network is brought up and not changed thereafter.

Hierarchical Routing

In this method of routing the nodes are divided into regions based on a hierarchy. A particular node can communicate with nodes at the same hierarchical level or with nodes at a lower level directly under it. Here, the path from any source to a destination is fixed, and there is exactly one such path if the hierarchy is a tree.

4.6.1 Routing algorithms;

Distance Vector Routing Algorithm

 The Distance vector algorithm is iterative, asynchronous and distributed.

 Distributed: It is distributed in that each node receives information from one or more of its directly attached neighbors, performs a calculation, and then distributes the result back to its neighbors.

 Iterative: It is iterative in that the process continues until no more information is available to be exchanged between neighbors.

 Asynchronous: It does not require that all of its nodes operate in lock step with each other.

 The Distance vector algorithm is a dynamic algorithm.

 It is mainly used in ARPANET and RIP.

 Each router maintains a distance table known as a vector.

Three Keys to understand the working of Distance Vector Routing Algorithm:

 Knowledge about the whole network: Each router shares its knowledge about the entire network. The router sends its collected knowledge about the network to its neighbors.

 Routing only to neighbors: The router sends its knowledge about the network only to those routers to which it has direct links. The router sends whatever it has about the network through its ports. The receiving router uses this information to update its own routing table.

 Information sharing at regular intervals: Every 30 seconds, the router sends its information to the neighboring routers.

Distance Vector Routing Algorithm

Let dx(y) be the cost of the least-cost path from node x to node y. The least costs are related by the Bellman-Ford equation:

dx(y) = min_v { c(x,v) + dv(y) }

where the minimum is taken over all neighbors v of x. After traveling from x to v, if we then take the least-cost path from v to y, the total path cost will be c(x,v) + dv(y). The least cost from x to y is the minimum of c(x,v) + dv(y) taken over all neighbors v.

With the Distance Vector Routing algorithm, the node x contains the following routing
information:

 For each neighbor v, the cost c(x,v) is the path cost from x to directly attached
neighbor, v.

 Node x's own distance vector, i.e., Dx = [ Dx(y) : y in N ], containing its cost to all destinations y in N.

 The distance vector of each of its neighbors, i.e., Dv = [ Dv(y) : y in N ] for each
neighbor v of x.

Distance vector routing is an asynchronous algorithm in which node x sends a copy of its distance vector to all its neighbors. When node x receives a new distance vector from one of its neighbors, v, it saves the distance vector of v and uses the Bellman-Ford equation to update its own distance vector. The equation is given below:

dx(y) = min_v { c(x,v) + dv(y) } for each node y in N

Node x updates its own distance vector table using the above equation and sends the updated table to all its neighbors so that they can update their own distance vectors.

Algorithm

At each node x:

Initialization:
    for all destinations y in N:
        Dx(y) = c(x,y)                        // if y is not a neighbor, then c(x,y) = ∞
    for each neighbor w:
        Dw(y) = ? for all destinations y in N
    for each neighbor w:
        send distance vector Dx = [ Dx(y) : y in N ] to w

loop
    wait (until a distance vector arrives from some neighbor w)
    for each y in N:
        Dx(y) = min_v { c(x,v) + Dv(y) }
    if Dx(y) has changed for any destination y:
        send distance vector Dx = [ Dx(y) : y in N ] to all neighbors
forever

Note: In the distance vector algorithm, node x updates its table when it either sees a cost change on one of its directly linked nodes or receives a distance vector update from some neighbor.
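
The update rule can be exercised on a small graph in a few lines of Python; the three-node graph below is our own assumption and the code is only an illustrative sketch of synchronous Bellman-Ford style updates:

# Illustrative sketch (assumed three-node graph): distance-vector updates
cost = {                         # c(x, v): direct link costs between neighbors
    "x": {"x": 0, "y": 2, "z": 7},
    "y": {"x": 2, "y": 0, "z": 1},
    "z": {"x": 7, "y": 1, "z": 0},
}
nodes = list(cost)
D = {x: dict(cost[x]) for x in nodes}      # each node starts with its direct costs

changed = True
while changed:                             # repeat until no distance vector changes
    changed = False
    for x in nodes:
        for y in nodes:
            best = min(cost[x][v] + D[v][y] for v in nodes)   # dx(y) = min_v{c(x,v)+dv(y)}
            if best < D[x][y]:
                D[x][y], changed = best, True

print(D["x"]["z"])    # 3: the path x-y-z (2 + 1) beats the direct link of cost 7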

Let's understand through an example: Sharing Information

Fig 4.21 Cloud represents the network

 In the above figure, each cloud represents the network, and the number inside the
cloud represents the network ID.

 All the LANs are connected by routers, and they are represented in boxes labeled as
A, B, C, D, E, F.

 The distance vector routing algorithm simplifies the routing process by assuming that the cost of every link is one unit. Therefore, the efficiency of transmission can be measured by the number of links needed to reach the destination. In distance vector routing, the cost is based on hop count.

Fig 4.22 Distance Vector Routing

In the above figure, we observe that each router sends its knowledge to its immediate neighbors. The neighbors add this knowledge to their own knowledge and send the updated table to their own neighbors. In this way, each router ends up with its own information plus new information about its neighbors.

Routing Table

Two processes occur:

 Creating the Table

 Updating the Table

Creating the Table

Initially, a routing table is created for each router; it contains at least three types of information: the network ID, the cost, and the next hop.

 NET ID: The Network ID defines the final destination of the packet.

 Cost: The cost is the number of hops that packet must take to get there.

 Next hop: It is the router to which the packet must be delivered.

Fig 4.23 Original Routing Tables Are Shown Of All The Routers

 In the above figure, the original routing tables of all the routers are shown. In a routing table, the first column represents the network ID, the second column represents the cost of the link, and the third column is empty.

 These routing tables are sent to all the neighbors.

For Example:

1. A sends its routing table to B, F & E.

2. B sends its routing table to A & C.

3. C sends its routing table to B & D.

4. D sends its routing table to E & C.

5. E sends its routing table to A & D.

6. F sends its routing table to A.

Updating the Table

 When A receives a routing table from B, then it uses its information to update the
table.

 The routing table of B shows how the packets can move to the networks 1 and 4.

 Since B is a neighbor of router A, packets from A can reach B in one hop. So 1 is added to all the costs given in B's table, and the sum is the cost of reaching a particular network.

 After adjustment, A then combines this table with its own table to create a combined
table.

 The combined table may contain some duplicate data. In the above figure, the combined table of router A contains duplicate data, so it keeps only the entries with the lowest cost. For example, A can send data to network 1 in two ways. The first uses no next router, so it costs one hop. The second requires two hops (A to B, then B to network 1). The first option has the lowest cost, so it is kept and the second one is dropped.

 The process of creating the routing table continues for all routers. Every router receives the information from its neighbors and updates its routing table.

Final routing tables of all the routers are given below:

Fig 4.24 Routing Tables Of All The Routers

Link State Routing

Link state routing is a technique in which each router shares the knowledge of its
neighborhood with every other router in the internetwork.

The three keys to understand the Link State Routing algorithm:

 Knowledge about the neighborhood: Instead of sending its entire routing table, a router sends information about its neighborhood only. A router broadcasts the identities and costs of its directly attached links to the other routers.

 Flooding: Each router sends its information to every other router on the internetwork, not only to its neighbors. This process is known as flooding. Every router that receives the packet sends copies to all its neighbors, so finally each and every router receives a copy of the same information.

 Information sharing: A router sends the information to every other router only when a change occurs in its information.

Link State Routing has two phases:

Reliable Flooding

 Initial state: Each node knows the cost of its neighbors.

 Final state: Each node knows the entire graph.

Route Calculation

Each node uses Dijkstra's algorithm on the graph to calculate the optimal routes to all
nodes.

 The Link state routing algorithm is also known as Dijkstra's algorithm which is used
to find the shortest path from one node to every other node in the network.

 Dijkstra's algorithm is iterative, and it has the property that after the kth iteration of the algorithm, the least-cost paths are known for k destination nodes.

Let's describe some notations:

 c(i,j): Link cost from node i to node j. If nodes i and j are not directly linked, then c(i,j) = ∞.

 D(v): The cost of the currently known least-cost path from the source node to destination v.

 P(v): The previous node (a neighbor of v) along the current least-cost path from the source to v.

 N: The set of nodes whose least-cost path has already been determined; the algorithm stops when N contains all the nodes in the network.

Algorithm

Initialization:
    N = {A}                                   // A is the source (root) node
    for all nodes v:
        if v is adjacent to A then D(v) = c(A,v)
        else D(v) = infinity

Loop:
    find w not in N such that D(w) is a minimum
    add w to N
    update D(v) for all v adjacent to w and not in N:
        D(v) = min( D(v), D(w) + c(w,v) )
until all nodes are in N

In the above algorithm, an initialization step is followed by the loop. The number of times the
loop is executed is equal to the total number of nodes available in the network.
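
A compact runnable version of the same procedure is sketched below in Python (our own code); the edge costs are inferred from the worked example that follows and are therefore an assumption:

# Illustrative sketch: Dijkstra's algorithm; edge costs inferred from the example below
import heapq

def dijkstra(graph, source):
    D = {v: float("inf") for v in graph}   # D(v): least cost known so far
    D[source] = 0
    heap = [(0, source)]
    while heap:
        d, w = heapq.heappop(heap)         # w: unsettled node with minimum D(w)
        if d > D[w]:
            continue                       # stale heap entry, already improved
        for v, c in graph[w].items():      # update D(v) for every v adjacent to w
            if d + c < D[v]:
                D[v] = d + c
                heapq.heappush(heap, (D[v], v))
    return D

graph = {                                  # assumed costs matching the steps below
    "A": {"B": 2, "C": 5, "D": 1},
    "B": {"A": 2, "C": 3, "D": 2},
    "C": {"A": 5, "B": 3, "D": 3, "E": 1, "F": 5},
    "D": {"A": 1, "B": 2, "C": 3, "E": 1},
    "E": {"C": 1, "D": 1, "F": 2},
    "F": {"C": 5, "E": 2},
}
print(dijkstra(graph, "A"))   # {'A': 0, 'B': 2, 'C': 3, 'D': 1, 'E': 2, 'F': 4}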

Let's understand through an example:

Fig 4.25 Algorithm Initialization Example

In the above figure, source vertex is A.

Step 1:

The first step is the initialization step. The currently known least-cost paths from A to its directly attached neighbors B, C, and D are 2, 5, and 1 respectively. The cost from A to B is set to 2, from A to D is set to 1, and from A to C is set to 5. The costs from A to E and F are set to infinity as they are not directly linked to A.

Step N D(B),P(B) D(C),P(C) D(D),P(D) D(E),P(E) D(F),P(F)

1 A 2,A 5,A 1,A ∞ ∞

Step 2:

In the above table, we observe that vertex D has the least-cost path in step 1. Therefore, it is added to N. Now we need to determine the least-cost paths through vertex D.

a) Calculating shortest path from A to B

1. v = B, w = D

2. D(B) = min( D(B) , D(D) + c(D,B) )

3. = min( 2, 1+2)

4. = min( 2, 3)

5. The minimum value is 2. Therefore, the currently shortest path from A to B is 2.

b) Calculating shortest path from A to C

1. v = C, w = D

2. D(C) = min( D(C) , D(D) + c(D,C) )

3. = min( 5, 1+3)

4. = min( 5, 4)

5. The minimum value is 4. Therefore, the currently shortest path from A to C is 4.

c) Calculating shortest path from A to E

1. v = E, w = D

2. D(E) = min( D(E) , D(D) + c(D,E) )

3. = min( ∞, 1+1)

4. = min(∞, 2)

5. The minimum value is 2. Therefore, the currently shortest path from A to E is 2.

Step N D(B),P(B) D(C),P(C) D(D),P(D) D(E),P(E) D(F),P(F)

1 A 2,A 5,A 1,A ∞ ∞

2 AD 2,A 4,D 2,D ∞

Step 3:

In the above table, we observe that both E and B have the least cost path in step 2. Let's
consider the E vertex. Now, we determine the least cost path of remaining vertices through E.

a) Calculating the shortest path from A to B.

1. v = B, w = E

2. D(B) = min( D(B) , D(E) + c(E,B) )

3. = min( 2 , 2+ ∞ )

4. = min( 2, ∞)

5. The minimum value is 2. Therefore, the currently shortest path from A to B is 2.

b) Calculating the shortest path from A to C.

1. v = C, w = E

2. D(C) = min( D(C) , D(E) + c(E,C) )

3. = min( 4 , 2+1 )

4. = min( 4,3)

5. The minimum value is 3. Therefore, the currently shortest path from A to C is 3.

c) Calculating the shortest path from A to F.

1. v = F, w = E

2. D(F) = min( D(F) , D(E) + c(E,F) )

3. = min( ∞ , 2+2 )

4. = min(∞ ,4)

5. The minimum value is 4. Therefore, the currently shortest path from A to F is 4.

Step N    D(B),P(B)  D(C),P(C)  D(D),P(D)  D(E),P(E)  D(F),P(F)

1    A    2,A        5,A        1,A        ∞          ∞

2    AD   2,A        4,D        -          2,D        ∞

3    ADE  2,A        3,E        -          -          4,E

Step 4:

In the above table, we observe that B vertex has the least cost path in step 3. Therefore, it is
added in N. Now, we determine the least cost path of remaining vertices through B.

a) Calculating the shortest path from A to C.

1. v = C, w = B

2. D(C) = min( D(C) , D(B) + c(B,C) )

3. = min( 3 , 2+3 )

4. = min( 3,5)

5. The minimum value is 3. Therefore, the currently shortest path from A to C is 3.

b) Calculating the shortest path from A to F.

1. v = F, w = B

2. D(F) = min( D(F) , D(B) + c(B,F) )

3. = min( 4, 2 + ∞ )

4. = min( 4, ∞ )

5. The minimum value is 4. Therefore, the currently shortest path from A to F is 4.

Step N     D(B),P(B)  D(C),P(C)  D(D),P(D)  D(E),P(E)  D(F),P(F)

1    A     2,A        5,A        1,A        ∞          ∞

2    AD    2,A        4,D        -          2,D        ∞

3    ADE   2,A        3,E        -          -          4,E

4    ADEB  -          3,E        -          -          4,E

Step 5:

In the above table, we observe that C vertex has the least cost path in step 4. Therefore, it is
added in N. Now, we determine the least cost path of remaining vertices through C.

a) Calculating the shortest path from A to F.

1. v = F, w = C

2. D(F) = min( D(F) , D(C) + c(C,F) )

3. = min( 4, 3+5)

4. = min(4,8)

5. The minimum value is 4. Therefore, the currently shortest path from A to F is 4.

Step N      D(B),P(B)  D(C),P(C)  D(D),P(D)  D(E),P(E)  D(F),P(F)

1    A      2,A        5,A        1,A        ∞          ∞

2    AD     2,A        4,D        -          2,D        ∞

3    ADE    2,A        3,E        -          -          4,E

4    ADEB   -          3,E        -          -          4,E

5    ADEBC  -          -          -          -          4,E

Final table:

Step N       D(B),P(B)  D(C),P(C)  D(D),P(D)  D(E),P(E)  D(F),P(F)

1    A       2,A        5,A        1,A        ∞          ∞

2    AD      2,A        4,D        -          2,D        ∞

3    ADE     2,A        3,E        -          -          4,E

4    ADEB    -          3,E        -          -          4,E

5    ADEBC   -          -          -          -          4,E

6    ADEBCF  -          -          -          -          -
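For completeness, the dijkstra() sketch given earlier reproduces this final table. The edge costs below are reconstructed from the worked steps above (for example c(A,B) = 2, c(D,C) = 3, c(E,F) = 2); links not mentioned in the steps are assumed to be absent.

```python
# Edge costs reconstructed from the worked example (Fig 4.25); assumed symmetric.
graph = {
    'A': {'B': 2, 'C': 5, 'D': 1},
    'B': {'A': 2, 'C': 3, 'D': 2},
    'C': {'A': 5, 'B': 3, 'D': 3, 'E': 1, 'F': 5},
    'D': {'A': 1, 'B': 2, 'C': 3, 'E': 1},
    'E': {'C': 1, 'D': 1, 'F': 2},
    'F': {'C': 5, 'E': 2},
}

D, P = dijkstra(graph, 'A')
# D == {'A': 0, 'B': 2, 'C': 3, 'D': 1, 'E': 2, 'F': 4}   (matches the final table)
# P gives the previous hop on each least-cost path, e.g. P['F'] == 'E'
```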

Disadvantage:

Heavy traffic is created in link state routing due to flooding. Flooding can cause infinite looping of packets; this problem can be solved by using the Time-to-Live (TTL) field.

What is congestion?

Congestion is a state occurring in the network layer when the message traffic is so heavy that it slows down network response time.

Effects of Congestion

 As delay increases, performance decreases.

 If delay increases, retransmission occurs, making situation worse.

Congestion control algorithms

 Leaky Bucket Algorithm

Let us consider an example to understand

Imagine a bucket with a small hole in the bottom. No matter at what rate water enters the bucket, the outflow is at a constant rate. When the bucket is full, any additional water entering spills over the sides and is lost.

Fig 4.26 Leaky Bucket Algorithm

Similarly, each network interface contains a leaky bucket, and the following steps are involved in the leaky bucket algorithm (a small code sketch follows the list):

1. When host wants to send packet, packet is thrown into the bucket.

2. The bucket leaks at a constant rate, meaning the network interface transmits
packets at a constant rate.

3. Bursty traffic is converted to a uniform traffic by the leaky bucket.

4. In practice the bucket is a finite queue that outputs at a finite rate.
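The steps above can be modelled with a short Python sketch (an illustration only; the class and parameter names are not part of any standard API). Bursty arrivals are queued in a finite bucket and transmitted at a constant rate per clock tick; packets that arrive when the bucket is full are lost.

```python
from collections import deque

class LeakyBucket:
    """A minimal leaky-bucket traffic shaper (illustrative sketch, not a real driver)."""

    def __init__(self, capacity, rate):
        self.queue = deque()          # the bucket: a finite FIFO queue
        self.capacity = capacity      # maximum number of packets the bucket can hold
        self.rate = rate              # packets transmitted per clock tick

    def arrive(self, packet):
        """A host throws a packet into the bucket; it is lost if the bucket is full."""
        if len(self.queue) < self.capacity:
            self.queue.append(packet)
            return True
        return False                  # bucket full: the packet "spills over"

    def tick(self):
        """Called once per clock tick: transmit at most `rate` packets (constant outflow)."""
        sent = []
        for _ in range(min(self.rate, len(self.queue))):
            sent.append(self.queue.popleft())
        return sent
```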

 Token Bucket Algorithm

Need of token bucket algorithm:

The leaky bucket algorithm enforces output pattern at the average rate, no matter how
bursty the traffic is. So in order to deal with the bursty traffic we need a flexible algorithm
so that the data is not lost. One such algorithm is token bucket algorithm.

Steps of this algorithm can be described as follows:

1. In regular intervals, tokens are thrown into the bucket.

2. The bucket has a maximum capacity.

3. If there is a ready packet, a token is removed from the bucket, and the packet is
sent.

4. If there is no token in the bucket, the packet cannot be sent.

Let's understand with an example:

In figure (A) we see a bucket holding three tokens, with five packets waiting to be
transmitted. For a packet to be transmitted, it must capture and destroy one token. In figure
(B) We see that three of the five packets have gotten through, but the other two are stuck
waiting for more tokens to be generated.

Ways in which token bucket is superior to leaky bucket:

The leaky bucket algorithm controls the rate at which packets are introduced into the network, but it is very conservative in nature. Some flexibility is introduced in the token bucket algorithm. In the token bucket algorithm, tokens are generated at each tick (up to a certain limit). For an incoming packet to be transmitted, it must capture a token, and the transmission then takes place at the same rate. Hence some of the bursty packets are transmitted at that rate if tokens are available, which introduces some flexibility into the system.

Formula: M * S = C + ρ * S

where

S – burst time (the time for which the flow can transmit at the maximum rate)

M – maximum output rate

ρ – token arrival rate

C – capacity of the token bucket in bytes

Let's understand with an example:

Fig 4.27 Token bucket
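The behaviour described above can be sketched in Python as follows (illustrative names only, not a standard API); the helper at the end rearranges M * S = C + ρ * S to compute the maximum burst time S = C / (M - ρ).

```python
class TokenBucket:
    """A minimal token-bucket shaper (illustrative sketch).

    Tokens are added at rate `rho` per tick up to capacity `C`; a packet may be
    sent only if a token is available, so short bursts of up to C packets can go
    out back-to-back while the long-term rate stays bounded by rho.
    """

    def __init__(self, C, rho):
        self.C = C            # bucket capacity (maximum stored tokens)
        self.rho = rho        # token arrival rate (tokens per tick)
        self.tokens = 0

    def tick(self):
        self.tokens = min(self.C, self.tokens + self.rho)

    def try_send(self):
        if self.tokens >= 1:
            self.tokens -= 1  # capture and destroy one token
            return True
        return False          # no token: the packet must wait


def max_burst_time(M, rho, C):
    """Longest time S a flow can transmit at the full output rate M,
    obtained by rearranging M * S = C + rho * S."""
    return C / (M - rho)
```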

Network Layer

 The Network Layer is the third layer of the OSI model.

 It handles the service requests from the transport layer and further forwards the
service request to the data link layer.

 The network layer translates the logical addresses into physical addresses

 It determines the route from the source to the destination and also manages the traffic
problems such as switching, routing and controls the congestion of data packets.

 The main role of the network layer is to move the packets from sending host to the
receiving host.

The main functions performed by the network layer are:

 Routing: When a packet reaches the router's input link, the router will move the
packets to the router's output link. For example, a packet from S1 to R1 must be
forwarded to the next router on the path to S2.

 Logical Addressing: The data link layer implements the physical addressing and
network layer implements the logical addressing. Logical addressing is also used to
distinguish between source and destination system. The network layer adds a header
to the packet which includes the logical addresses of both the sender and the receiver.

 Internetworking: This is the main role of the network layer that it provides the
logical connection between different types of networks.

 Fragmentation: The fragmentation is a process of breaking the packets into the


smallest individual data units that travel through different networks.

Forwarding & Routing

In Network layer, a router is used to forward the packets. Every router has a forwarding table.
A router forwards a packet by examining a packet's header field and then using the header
field value to index into the forwarding table. The value stored in the forwarding table
corresponding to the header field value indicates the router's outgoing interface link to which
the packet is to be forwarded.

For example, suppose a packet with a header field value of 0111 arrives at a router. The router indexes this header value into the forwarding table, which determines that the output link interface is interface 2, so the router forwards the packet to interface 2. The routing algorithm determines the values that are inserted in the forwarding table. The routing algorithm can be centralized or decentralized.
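To make the forwarding step concrete, here is a toy Python lookup table; only the 0111 -> interface 2 entry comes from the example above, the other entries are invented for illustration.

```python
# Toy forwarding table: header field value -> outgoing interface number.
# Only the '0111' -> 2 entry is taken from the example; the rest are made up.
forwarding_table = {
    '0100': 3,
    '0110': 2,
    '0111': 2,
    '1001': 1,
}

def forward(header_value):
    """Return the outgoing interface for a packet's header field value, or None
    if it is absent (a real router matches destination-address prefixes instead)."""
    return forwarding_table.get(header_value)

print(forward('0111'))   # 2
```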

Fig 4.28 Routing Algorithm

Services Provided by the Network Layer

 Guaranteed delivery: This layer provides the service which guarantees that the
packet will arrive at its destination.

 Guaranteed delivery with bounded delay: This service guarantees that the packet
will be delivered within a specified host-to-host delay bound.

 In-Order packets: This service ensures that the packet arrives at the destination in
the order in which they are sent.

 Guaranteed max jitter: This service ensures that the amount of time taken between
two successive transmissions at the sender is equal to the time between their receipt at
the destination.

 Security services: The network layer provides security by using a session key
between the source and destination host. The network layer in the source host
encrypts the payloads of datagrams being sent to the destination host. The network
layer in the destination host would then decrypt the payload. In such a way, the
network layer maintains the data integrity and source authentication services.

TCP/IP Model

The OSI Model we just looked at is just a reference/logical model. It was designed to
describe the functions of the communication system by dividing the communication
procedure into smaller and simpler components. But when we talk about the TCP/IP model, it
was designed and developed by Department of Defense (DoD) in 1960s and is based on
standard protocols. It stands for Transmission Control Protocol/Internet Protocol. The
TCP/IP model is a concise version of the OSI model. It contains four layers, unlike seven
layers in the OSI model. The layers are:

1. Process/Application Layer

2. Host-to-Host/Transport Layer

3. Internet Layer

4. Network Access/Link Layer

The diagrammatic comparison of the TCP/IP and OSI model is as follows :

Network Addressing

 Network Addressing is one of the major responsibilities of the network layer.

 Network addresses are always logical, i.e., software-based addresses.

 A host is also known as end system that has one link to the network. The boundary
between the host and link is known as an interface. Therefore, the host can have only
one interface.

 A router is different from the host in that it has two or more links that connect to it.
When a router forwards the datagram, then it forwards the packet to one of the links.
The boundary between the router and link is known as an interface, and the router can
have multiple interfaces, one for each of its links. Each interface is capable of sending
and receiving the IP packets, so IP requires each interface to have an address.

 Each IP address is 32 bits long and is represented in "dot-decimal notation", where each byte is written in decimal form and the bytes are separated by periods. An IP address would look like 193.32.216.9, where 193 is the decimal value of the first 8 bits of the address, 32 is the decimal value of the second 8 bits, and so on.

Internet Protocol being a layer-3 protocol (OSI) takes data Segments from layer-4 (Transport)
and divides it into packets. IP packet encapsulates data unit received from above layer and
add to its own header information.

The encapsulated data is referred to as IP Payload. IP header contains all the necessary
information to deliver the packet at the other end.

Fig 4.29 IP Header

The IP header includes much relevant information, including the Version Number, which in this context is 4. The other fields are as follows (a short decoding sketch follows the list) −

 Version − Version no. of Internet Protocol used (e.g. IPv4).

 IHL − Internet Header Length; Length of entire IP header.

 DSCP − Differentiated Services Code Point; this is Type of Service.

 ECN − Explicit Congestion Notification; It carries information about the congestion


seen in the route.

 Total Length − Length of entire IP Packet (including IP header and IP Payload).

 Identification − If an IP packet is fragmented during transmission, all the fragments contain the same identification number, which is used to identify the original IP packet they belong to.

 Flags − As required by the network resources, if the IP packet is too large to handle, these 'flags' tell whether it can be fragmented or not. In this 3-bit field, the MSB is always set to '0'.

 Fragment Offset − This offset tells the exact position of the fragment in the original
IP Packet.

 Time to Live − To avoid looping in the network, every packet is sent with some TTL
value set, which tells the network how many routers (hops) this packet can cross. At
each hop, its value is decremented by one and when the value reaches zero, the packet
is discarded.

 Protocol − Tells the Network layer at the destination host, to which Protocol this
packet belongs to, i.e. the next level Protocol. For example protocol number of ICMP
is 1, TCP is 6 and UDP is 17.

 Header Checksum − This field is used to keep checksum value of entire header
which is then used to check if the packet is received error-free.

 Source Address − 32-bit address of the Sender (or source) of the packet.

 Destination Address − 32-bit address of the Receiver (or destination) of the packet.

 Options − This is an optional field, which is used if the value of IHL is greater than 5. It may contain values for options such as Security, Record Route, Time Stamp, etc.
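As an illustrative sketch (not an authoritative parser), the following Python snippet unpacks the fixed 20-byte part of an IPv4 header into the fields listed above; option bytes, present when IHL is greater than 5, are not decoded.

```python
import struct

def parse_ipv4_header(raw):
    """Decode the fixed 20-byte part of an IPv4 header from raw bytes (sketch only)."""
    ver_ihl, dscp_ecn, total_len, ident, flags_frag, ttl, proto, checksum, src, dst = \
        struct.unpack('!BBHHHBBH4s4s', raw[:20])
    return {
        'version': ver_ihl >> 4,
        'ihl': ver_ihl & 0x0F,                    # header length in 32-bit words
        'total_length': total_len,
        'identification': ident,
        'flags': flags_frag >> 13,                # top 3 bits
        'fragment_offset': flags_frag & 0x1FFF,   # remaining 13 bits
        'ttl': ttl,
        'protocol': proto,                        # e.g. 1 = ICMP, 6 = TCP, 17 = UDP
        'header_checksum': checksum,
        'source': '.'.join(str(b) for b in src),
        'destination': '.'.join(str(b) for b in dst),
    }
```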

Let's understand through a simple example.

Fig 4.30 A router has three interfaces

 In the above figure, a router has three interfaces labeled as 1, 2 & 3 and each router
interface contains its own IP address.

 Each host contains its own interface and IP address.

 All the interfaces attached to LAN 1 have IP addresses of the form 223.1.1.xxx, and the interfaces attached to LAN 2 and LAN 3 have IP addresses of the form 223.1.2.xxx and 223.1.3.xxx respectively.

 Each IP address consists of two parts. The first part (first three bytes in IP address)
specifies the network and second part (last byte of an IP address) specifies the host in
the network.

Classful Addressing

An IP address is 32-bit long. An IP address is divided into sub-classes:

 Class A

 Class B

 Class C

 Class D

 Class E

An IP address is divided into two parts:

 Network ID: It identifies the network to which the host belongs.

 Host ID: It identifies the host within that network.

Fig 4.31 Each class have a specific range of IP addresses

In the above diagram, we observe that each class has a specific range of IP addresses. The class of an IP address determines the number of bits used for the network and host parts, and the number of networks and hosts available in that class.

Class A

In Class A, an IP address is assigned to those networks that contain a large number of hosts.

 The network ID is 8 bits long.

 The host ID is 24 bits long.

In Class A, the first bit in higher order bits of the first octet is always set to 0 and the
remaining 7 bits determine the network ID. The 24 bits determine the host ID in any network.

The total number of networks in Class A = 2^7 = 128 network addresses. The total number of hosts per network in Class A = 2^24 - 2 = 16,777,214 host addresses.

Class B

In Class B, an IP address is assigned to those networks that range from small-sized to large-
sized networks.

 The Network ID is 16 bits long.

 The Host ID is 16 bits long.

In Class B, the higher-order bits of the first octet are always set to 10, and the remaining 14 bits determine the network ID. The other 16 bits determine the host ID.

The total number of networks in Class B = 2^14 = 16,384 network addresses. The total number of hosts per network in Class B = 2^16 - 2 = 65,534 host addresses.

Class C

In Class C, an IP address is assigned to only small-sized networks.

 The Network ID is 24 bits long.

 The host ID is 8 bits long.

In Class C, the higher order bits of the first octet is always set to 110, and the remaining 21
bits determine the network ID. The 8 bits of the host ID determine the host in a network.

The total number of networks in Class C = 2^21 = 2,097,152 network addresses. The total number of hosts per network in Class C = 2^8 - 2 = 254 host addresses.

Class D

In Class D, an IP address is reserved for multicast addresses. It does not possess subnetting.
The higher order bits of the first octet is always set to 1110, and the remaining bits
determines the host ID in any network.

Class E

In Class E, an IP address is used for the future use or for the research and development
purposes. It does not possess any subnetting. The higher order bits of the first octet is always
set to 1111, and the remaining bits determines the host ID in any network.
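Following the leading-bit rules described above, a small Python helper (purely illustrative) can determine the class of a dotted-decimal address from its first octet.

```python
def ip_class(address):
    """Classify a dotted-decimal IPv4 address by its leading bits (classful rules)."""
    first_octet = int(address.split('.')[0])
    if first_octet < 128:       # leading bit 0
        return 'A'
    elif first_octet < 192:     # leading bits 10
        return 'B'
    elif first_octet < 224:     # leading bits 110
        return 'C'
    elif first_octet < 240:     # leading bits 1110 (multicast)
        return 'D'
    else:                       # leading bits 1111 (reserved)
        return 'E'

print(ip_class('193.32.216.9'))   # C
```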

Rules for assigning Host ID:

The Host ID is used to determine the host within any network. The Host ID is assigned based
on the following rules:

 The Host ID must be unique within any network.

 The Host ID in which all the bits are set to 0 cannot be assigned as it is used to
represent the network ID of the IP address.

 The Host ID in which all the bits are set to 1 cannot be assigned as it is reserved as the broadcast address of the network.

Rules for assigning Network ID:

If the hosts are located within the same local network, then they are assigned with the same
network ID. The following are the rules for assigning Network ID:

 The network ID cannot start with 127 because 127 belongs to Class A and is reserved for internal loopback functions.

 The Network ID in which all the bits are set to 0 cannot be assigned as it is used to
specify a particular host on the local network.

 The Network ID in which all the bits are set to 1 cannot be assigned as it is reserved
for the multicast address.

Classful Network Architecture

Class  Higher-order bits  Net ID bits  Host ID bits  No. of networks  No. of hosts per network  Range

A      0                  8            24            2^7              2^24                      0.0.0.0 to 127.255.255.255

B      10                 16           16            2^14             2^16                      128.0.0.0 to 191.255.255.255

C      110                24           8             2^21             2^8                       192.0.0.0 to 223.255.255.255

D      1110               Not defined  Not defined   Not defined      Not defined               224.0.0.0 to 239.255.255.255

E      1111               Not defined  Not defined   Not defined      Not defined               240.0.0.0 to 255.255.255.255

Network Layer Protocols

TCP/IP supports the following protocols:

ARP

 ARP stands for Address Resolution Protocol.

 It is used to associate an IP address with the MAC address.

 Each device on the network is recognized by the MAC address imprinted on its NIC. Therefore, devices need the MAC address for communication on a local area network. A MAC address can change easily; for example, if the NIC on a particular machine is replaced, the MAC address changes but the IP address does not. ARP is used to find the MAC address of a node when its internet (IP) address is known.

Note: MAC address: The MAC address is used to identify the actual
device. IP address: It is an address used to locate a device on the network.

How ARP works

If a host wants to know the physical address of another host on its network, it sends an ARP query packet that includes the IP address and broadcasts it over the network. Every host on the network receives and processes the ARP packet, but only the intended recipient recognizes its IP address and sends back its physical address. The host holding the datagram adds the physical address to its cache memory and to the datagram header, and then sends the datagram to the destination.

Fig 4.32 How ARP works

Steps taken by ARP protocol

If a device wants to communicate with another device, the following steps are taken by the
device:

 The device will first look at its internal list, called the ARP cache, to check whether an IP address already has a matching MAC address or not.

 It will check the ARP cache at the command prompt by using the command arp -a.

 If the ARP cache is empty, the device broadcasts a message to the entire network asking each device for a matching MAC address.

 The device that has the matching IP address will then respond back to the sender with
its MAC address

 Once the MAC address is received by the device, then the communication can take
place between two devices.

 If the device receives the MAC address, then the MAC address gets stored in the ARP
cache. We can check the ARP cache in command prompt by using a command arp -a.

Note: ARP cache is used to make a network more efficient.

In the output of the arp -a command, we can observe the association of IP addresses with MAC addresses. There are two types of ARP entries (a cache sketch follows the list):

 Dynamic entry: It is an entry which is created automatically when the sender


broadcast its message to the entire network. Dynamic entries are not permanent, and
they are removed periodically.

 Static entry: It is an entry where someone manually enters the IP to MAC address
association by using the ARP command utility.
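The behaviour described above, checking the cache first and otherwise broadcasting a query and learning the reply, can be sketched in a few lines of Python; the class and function names are illustrative assumptions only.

```python
class ArpCache:
    """A toy ARP cache: maps IP addresses to MAC addresses (illustrative only)."""

    def __init__(self):
        self.table = {}   # ip -> (mac, 'dynamic' or 'static')

    def lookup(self, ip):
        entry = self.table.get(ip)
        return entry[0] if entry else None

    def learn(self, ip, mac):
        """Dynamic entry, created when a reply to a broadcast ARP query arrives."""
        self.table[ip] = (mac, 'dynamic')

    def add_static(self, ip, mac):
        """Static entry, added manually by an administrator."""
        self.table[ip] = (mac, 'static')


def resolve(cache, ip, broadcast_query):
    """Return the MAC for `ip`, consulting the cache first and falling back to an
    ARP broadcast; broadcast_query is a caller-supplied function (an assumption)."""
    mac = cache.lookup(ip)
    if mac is None:
        mac = broadcast_query(ip)   # ask the whole LAN; only the owner replies
        if mac is not None:
            cache.learn(ip, mac)
    return mac
```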

RARP

 RARP stands for Reverse Address Resolution Protocol.

 If a host wants to know its IP address, it broadcasts a RARP query packet that contains its physical address to the entire network. A RARP server on the network recognizes the RARP packet and responds with the host's IP address.

 The protocol which is used to obtain the IP address from a server is known as Reverse
Address Resolution Protocol.

 The message format of the RARP protocol is similar to the ARP protocol.

 Like ARP frame, RARP frame is sent from one machine to another encapsulated in
the data portion of a frame.

Fig 4.33 RARP

ICMP

 ICMP stands for Internet Control Message Protocol.

 The ICMP is a network layer protocol used by hosts and routers to send the
notifications of IP datagram problems back to the sender.

 ICMP uses echo test/reply to check whether the destination is reachable and
responding.

 ICMP handles both control and error messages, but its main function is to report the
error but not to correct them.

 An IP datagram contains the addresses of both source and destination, but it does not
know the address of the previous router through which it has been passed. Due to this
reason, ICMP can only send the messages to the source, but not to the immediate
routers.

 ICMP protocol communicates the error messages to the sender. ICMP messages cause
the errors to be returned back to the user processes.

 ICMP messages are transmitted within IP datagram.

The Format of an ICMP message

 The first field specifies the type of the message.

 The second field specifies the reason for a particular message type.

 The checksum field covers the entire ICMP message.

Error Reporting

ICMP protocol reports the error messages to the sender.

Five types of errors are handled by the ICMP protocol:

 Destination unreachable

 Source Quench

 Time Exceeded

 Parameter problems

 Redirection

 Destination unreachable: A "Destination Unreachable" message is sent back to the sender when the destination cannot be reached and the packet has to be discarded.

 Source Quench: The purpose of the source quench message is congestion control.
The message sent from the congested router to the source host to reduce the
transmission rate. ICMP will take the IP of the discarded packet and then add the
source quench message to the IP datagram to inform the source host to reduce its
transmission rate. The source host will reduce the transmission rate so that the router
will be free from congestion.

 Time Exceeded: The Time Exceeded message is related to the "Time-To-Live" (TTL) field, a parameter that defines how long a packet should live before it is discarded.

There are two ways when Time Exceeded message can be generated:

Sometimes packet discarded due to some bad routing implementation, and this causes the
looping issue and network congestion. Due to the looping issue, the value of TTL keeps on
decrementing, and when it reaches zero, the router discards the datagram. However, when the
datagram is discarded by the router, the time exceeded message will be sent by the router to
the source host.

When destination host does not receive all the fragments in a certain time limit, then the
received fragments are also discarded, and the destination host sends time Exceeded message
to the source host.

 Parameter problems: When a router or host discovers any missing value in the IP
datagram, the router discards the datagram, and the "parameter problem" message is
sent back to the source host.

 Redirection: A redirection message is generated when a host has only a small routing table. Because the host has a limited number of entries, it may send a datagram to the wrong router. The router that receives such a datagram forwards it to the correct router and also sends a "Redirection message" to the host so that it can update its routing table.

IGMP

 IGMP stands for Internet Group Message Protocol.

 The IP protocol supports two types of communication:

 Unicasting: It is a communication between one sender and one receiver.


Therefore, we can say that it is one-to-one communication.

 Multicasting: Sometimes the sender wants to send the same message to a


large number of receivers simultaneously. This process is known as
multicasting which has one-to-many communication.

 The IGMP protocol is used by the hosts and router to support multicasting.

 The IGMP protocol is used by the hosts and router to identify the hosts in a LAN that
are the members of a group.

 IGMP is a part of the IP layer, and IGMP has a fixed-size message.

Fig 4.34 IGMP Protocol

 The IGMP message is encapsulated within an IP datagram.

The Format of IGMP message

Where,

Type: It determines the type of IGMP message. There are three types of IGMP message:
Membership Query, Membership Report and Leave Report.

Maximum Response Time: This field is used only by the Membership Query message. It specifies the maximum time within which the host may send the Membership Report message in response to the Membership Query message.

Checksum: It is used for error detection; it is calculated over the entire payload of the IP datagram in which the IGMP message is encapsulated.

Group Address: The behavior of this field depends on the type of the message sent.

 For Membership Query, the group address is set to zero for General Query and set
to multicast group address for a specific query.

 For Membership Report, the group address is set to the multicast group address.

 For Leave Group, it is set to the multicast group address.

IGMP Messages

Fig 4.35 IGMP Messages

Membership Query message

 This message is sent by a router to all hosts on a local area network to determine the
set of all the multicast groups that have been joined by the host.

 It also determines whether a specific multicast group has been joined by the hosts on an attached interface.

 The group address in the query is zero since the router expects one response from a
host for every group that contains one or more members on that host.

Membership Report message

 The host responds to the membership query message with a membership report
message.

 Membership report messages can also be generated by the host when a host wants to
join the multicast group without waiting for a membership query message from the
router.

 Membership report messages are received by a router as well as all the hosts on an
attached interface.

 Each membership report message includes the multicast address of a single group that
the host wants to join.

 IGMP protocol does not care which host has joined the group or how many hosts are
present in a single group. It only cares whether one or more attached hosts belong to a
single multicast group.

 The Membership Query message sent by a router also includes a "Maximum Response Time". After receiving a membership query message and before sending the membership report message, the host waits for a random amount of time between 0 and the maximum response time. If the host observes that some other attached host has already sent a "Membership Report message", then it discards its own "Membership Report message", as it knows that the attached router already knows that one or more hosts have joined that multicast group. This process is known as feedback suppression. It provides a performance optimization by avoiding the unnecessary transmission of "Membership Report messages".

 Leave Report

When the host does not send the "Membership Report message", it means that the host has
left the group. The host knows that there are no members in the group, so even when it
receives the next query, it would not report the group.

Internet Protocol version 6 (IPv6)

IPv6 was developed by the Internet Engineering Task Force (IETF) to deal with the problem of IPv4 address exhaustion. An IPv6 address is 128 bits long, giving an address space of 2^128, which is far bigger than that of IPv4. In IPv6 we use colon-hexadecimal representation: there are 8 groups, and each group represents 2 bytes.

In IPv6 representation, we have three addressing methods :

Unicast, Multicast and Anycast.

Unicast Address: Unicast Address identifies a single network interface. A packet sent to
unicast address is delivered to the interface identified by that address.

Multicast Address: A multicast address is used by multiple hosts, called a group, which acquire a multicast destination address. These hosts need not be geographically together. If any packet is sent to this multicast address, it will be distributed to all interfaces corresponding to that multicast address.

Anycast Address: Anycast Address is assigned to a group of interfaces. Any packet sent to
anycast address will be delivered to only one member interface (mostly nearest host
possible).

Note : Broadcast is not defined in IPv6.
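For illustration, Python's standard ipaddress module can be used to work with the colon-hexadecimal representation described above (the address used here is taken from the 2001:db8::/32 documentation range).

```python
import ipaddress

addr = ipaddress.IPv6Address('2001:0db8:0000:0000:0000:ff00:0042:8329')
print(addr.compressed)     # 2001:db8::ff00:42:8329  (runs of zero groups collapsed)
print(addr.exploded)       # all eight 16-bit groups written out in full
print(addr.is_multicast)   # False; multicast addresses fall under ff00::/8
```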

Types of IPv6 address:

We have 128 bits in IPv6 address but by looking at first few bits we can identify what type of
address it is.

PREFIX ALLOCATION FRACTION OF ADDRESS SPACE

0000 0000 Reserved 1/256

0000 0001 Unassigned (UA) 1/256

0000 001 Reserved for NSAP 1/128

0000 01 UA 1/64

0000 1 UA 1/32

0001 UA 1/16

001 Global Unicast 1/8

010 UA 1/8

011 UA 1/8

100 UA 1/8

101 UA 1/8

110 UA 1/8

1110 UA 1/16

1111 0 UA 1/32

1111 10 UA 1/64

1111 110 UA 1/128

1111 1110 0 UA 1/512

1111 1110 10 Link-Local Unicast Addresses 1/1024

1111 1110 11 Site-Local Unicast Addresses 1/1024

1111 1111 Multicast Address 1/256

Note : In IPv6, all 0's and all 1's can be assigned to any host; there is no restriction as there is in IPv4.

Provider based Unicast address :

These are used for global communication.

First 3 bits identifies it as of this type.

Registry Id (5-bits) : Registry Id identifies the region to which it belongs. Out of 32 (i.e.
2^5), only 4 registry id’s are being used.

Provider Id : Depending on the number of service providers that operates under a region,
certain bits will be allocated to Provider Id field. This field need not be fixed. Let’s say if
Provider Id = 10 bits then Subscriber Id will be 56 – 10 = 46 bits.

Subscriber Id : After Provider Id is fixed, remaining part can be used by ISP as normal IP
address.

Intra Subscriber : This part can be modified as per need of organization that is using the
service.

Geography based Unicast address :

Global routing prefix : Global routing prefix contains all the details of Latitude and
Longitude. As of now, it is not being used. In Geography based Unicast address routing will
be based on location.

Interface Id : In IPv6, instead of using Host Id, we use the term Interface Id.

Some special addresses:

Unspecified –

Loopback –

IPv4 Compatible –

IPv4 mapped –

Local Unicast Addresses :

There are two types of Local Unicast addresses defined- Link local and Site Local.

Link local address:

Link local address is used for addressing on a single link. It can also be used to communicate
with nodes on the same link. Link local address always begins with 1111111010 (i.e. FE80).
Router will not forward any packet with Link local address.

Site local address:

Site local addresses are equivalent to private IP addresses in IPv4. Similarly, some address space is reserved which can only be routed within an organization. The first 10 bits are set to 1111111011, which is why site local addresses always begin with FEC0. The following 32 bits are the Subnet ID, which can be used to create subnets within the organization. The node address uniquely identifies the interface on the link; therefore the 48-bit MAC address is used here.

4.7 NETWORK LAYER PROTOCOL OF INTERNET

In the OSI (Open System Interconnection) model, the Network layer is the third layer. This layer is mostly associated with the movement of data, which is achieved by means of addressing and routing.

This layer directs the flow of data from a source to a destination, even when the communicating machines are not connected by the same physical medium. This is achieved by finding an appropriate path from one communicating machine to the other. For the purpose of transmitting data, if necessary, this layer can break the data into smaller chunks; sometimes this breaking of data becomes necessary. At the end, this layer is responsible for reassembling those smaller pieces of data into the original data after the data has reached its destination.

In other words, the network layers help in establishing communication with devices. These
devices, connected over the internet, might be located on logically separate networks. The
network layer uses various routing algorithms to guide data packets from a source to a
destination network. A key element of this layer is that each network in the whole web of

networks is assigned a network address; and such addresses are used to route packets
(which is covered under the topics of Addressing and Switching, explained later on).


When data is passed down to the next layer, the lower layer performs some services for the higher layer. In order for these services to be performed, the lower layer adds some information to the existing header or trailer. For instance, the transport layer (the higher layer) passes its data and header down to the network layer (the lower layer). The network layer then adds a header with the correct destination network layer address in order to facilitate delivery of the packet to the recipient machine.

Also, this layer translates the logical address into physical address. This layer decides the
route from source to destination. It also manages the traffic problems such as switching,
routing and controlling the congestion of data packets.

Besides this, the main functions which are performed by the network layer are:

1. Routing: This can be seen as a three step process: First, sending data from source
computer to some nearby router. Second, delivering data from the router near the
source to a router near a destination. Third, delivering the data from the router near
the destination to the end destination computer.

Routing involves Route Selection and Route Discovery.

2. Logical Addressing: The network layer implements the logical addressing of data packets, just as the data link layer implements physical addressing. Logical addressing is used to distinguish between the source and destination systems. The network layer adds a header to the data packet; this header contains the logical addresses of both the sender and the receiver.

3. Switching: It is the method of moving data through a network. There exist multiple
redundant paths between the source and destination. The three major types are:
Circuit switching, Message switching, and Packet switching.

In Circuit switching, the path for communication remains fixed during the duration of
connection; this enables a well defined bandwidth and dedicated paths.

In Message Switching, each message is treated as an independent entity carrying its own address information and destination details. This information is used at each switch to transfer the message to the next switch in the route. The benefits of this mode are relatively low-cost devices, data channel sharing, and efficient use of bandwidth.

In Packet Switching, messages are divided into smaller packets. Each packet contains
source and destination address information. Packets could be routed through the network
independently, without the need to be stored temporarily anywhere. This switching mode
routes the data through the network more rapidly and efficiently.


4. Internetworking: In a network, there might be different networks of various configurations, i.e. the interconnected networks might not be similar. Here, this layer provides a way to make logical connections between different types of networks.

5. Fragmentation: As explained earlier, sometimes breaking the data up into smaller packets becomes necessary for transmission. The smallest individual data units might travel through different networks to reach the destination, where they are reassembled at this layer.

Services which are provided by the Network Layer:

 Guaranteed delivery of packets is a service provided by this layer. There is an


assurance that a packet would arrive at its destination.

 Guaranteed delivery, with bounded delay. This service guarantees that the packet will
be delivered within a specified host-to-host delay bound.

 Ordered delivery of packets: It is ensured by this layer that each packet arrives at the
destination in the order which they are sent.

 Guaranteed max jitter:

In networking, Jitter is the variation in the latency of packet flow between sender and
receiver systems. This happens when some packets take longer time to travel from one
system to the other. Jitter in a network might be due to network congestion, time drift and
change in routes.


Network layer service ensures that there is guaranteed maximum jitter. This means that
the amount of time taken between two successive transmissions at the sender side is
equal to the amount of time taken between two successive receipts at the destination side.

 Security services: The network layer uses a session key to provide security between
the source and destination. At the source side encryption of the payloads of datagrams
being sent takes place with the help of this layer. Then, at the destination side, this
layer again helps in the decryption of the received payload. Through this, the network
layer is able to maintain data integrity and source authentication services.

In order to achieve its goal, the network layer must take into consideration the topology of the communication subnet (i.e. the set of all routers) and choose appropriate paths through it. At the same time, it should choose routes in such a way as to avoid overloading some of the communication lines and routers while leaving others idle. Finally, when the source and destination are in different networks, it is incumbent on the network layer to deal with the differences between the networks and the problems arising out of such differences.

The third layer of the Open Systems Interconnection (OSI) is called the network layer.
While the Data Link Layer functions mostly inside Wide Area Network (WAN) and Local
Area Network (LAN), Network Layer handles the responsibility of the transmission of data
in different networks.

The network layer has no role when two computers are connected directly on the same path or link. It has the ability to route signals through different channels and, because of that, it is considered a network controller. Through this layer, data is sent in the form of packets.


The primary responsibilities handled by the Layer are the Logical connection of a setup,
routing, delivery error reporting, and data forwarding.

Fig 4.36 Network layer.

Because of its functionality and responsibilities, the Network Layer is often seen as the
backbone of the entire OSI Model. Hardware devices such as routers, bridges, firewalls, and
switches are a part of it with which it creates a logical image of the communication route that
can be implemented with a physical medium.

The protocols needed for the functionality of the Network Layer are present in every router and host, making it one of the most useful of all the layers.

Most known among these protocols are IP (internet protocol), Internetwork Packet Exchange
(IPX) and Sequenced Packet Exchange (SPX) or as they are collectively known; IPX/SPX.
IPX protocol is also used by the Transport Layer, which alongside the Data Link Layer works
with Network Layer as they are placed above and below this layer respectively.


The functions of the Network layer are as follow:

1. Translation of logical network address into a physical address.

2. Service is provided by this layer to the transport layer for sending the data packets to
the destination of the request.

3. As it includes the Internet Protocol (IP), a connectionless protocol that does not need acknowledgments to transmit data packets, this layer is capable of connectionless communication, making it the only layer in the model able to do so. It is also capable of supporting connection-oriented communication like other layers, but only one kind of communication can be established at one time.

4. It also works as a locator of the IP address from where the data packets were
requested from and it also works as a host for that address.

5. It is common for two different subnets to have different addresses and protocols. Because protocols of the Network Layer are found in every router and host, this layer can resolve these differences and provide a common ground for them to form a connection.

4.7.1 IP protocol

Internet Protocol (IP) is the method or protocol by which data is sent from one computer to
another on the internet. Each computer -- known as a host -- on the internet has at least one IP
address that uniquely identifies it from all other computers on the internet.

IP is the defining set of protocols that enable the modern internet. It was initially defined in
May 1974 in a paper titled, "A Protocol for Packet Network Intercommunication," published
by the Institute of Electrical and Electronics Engineers and authored by Vinton Cerf and
Robert Kahn.

At the core of what is commonly referred to as IP are additional transport protocols that
enable the actual communication between different hosts. One of the core protocols that runs
on top of IP is the Transmission Control Protocol (TCP), which is often why IP is also
referred to as TCP/IP. However, TCP isn't the only protocol that is part of IP.

How does IP routing work?

When data is received or sent -- such as an email or a webpage -- the message is divided into
chunks called packets. Each packet contains both the sender's internet address and the
receiver's address. Any packet is sent first to a gateway computer that understands a small
part of the internet. The gateway computer reads the destination address and forwards the
packet to an adjacent gateway that in turn reads the destination address and so forth until one

gateway recognizes the packet as belonging to a computer within its immediate neighborhood
-- or domain. That gateway then forwards the packet directly to the computer whose address
is specified.

Because a message is divided into a number of packets, each packet can, if necessary, be sent
by a different route across the internet. Packets can arrive in a different order than the order
they were sent. The Internet Protocol just delivers them. It's up to another protocol -- the
Transmission Control Protocol -- to put them back in the right order.

IP packets

While IP defines the protocol by which data moves around the internet, the unit that does the
actual moving is the IP packet.

An IP packet is like a physical parcel or a letter with an envelope indicating address


information and the data contained within.

An IP packet's envelope is called the header. The packet header provides the information needed to route the packet to its destination. An IPv4 packet header is between 20 and 60 bytes long and includes the source IP address, the destination IP address and information about the size of the whole packet.

The other key part of an IP packet is the data component, which can vary in size. Data inside
an IP packet is the content that is being transmitted.

What is an IP address?

IP provides mechanisms that enable different systems to connect to each other to transfer
data. Identifying each machine in an IP network is enabled with an IP address.

Similar to the way a street address identifies the location of a home or business, an IP
address provides an address that identifies a specific system so data can be sent to it or
received from it.

An IP address is typically assigned via the DHCP (Dynamic Host Configuration Protocol).
DHCP can be run at an internet service provider, which will assign a public IP address to a
particular device. A public IP address is one that is accessible via the public internet.

A local IP address can be generated via DHCP running on a local network router, providing
an address that can only be accessed by users on the same local area network.

Differences between IPv4 and IPv6

The most widely used version of IP for most of the internet's existence has been Internet
Protocol Version 4 (IPv4).

IPv4 provides a 32-bit IP addressing system that has four sections. For example, a sample
IPv4 address might look like 192.168.0.1, which coincidentally is also commonly the
default IPv4 address for a consumer router. IPv4 supports a total of 4,294,967,296
addresses.

A key benefit of IPv4 is its ease of deployment and its ubiquity, so it is the default protocol.
A drawback of IPv4 is the limited address space and a problem commonly referred to as
IPv4 address exhaustion. There aren't enough IPv4 addresses available for all IP use cases.
Since 2011, IANA (Internet Assigned Numbers Authority) hasn't had any new IPv4 address
blocks to allocate. As such, Regional Internet Registries (RIRs) have had limited ability to
provide new public IPv4 addresses.

In contrast, IPv6 defines a 128-bit address space, which provides substantially more space than IPv4: approximately 3.4 × 10^38 (340 undecillion) addresses. An IPv6 address has eight sections. The text form of an IPv6 address is xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx, where each x is a hexadecimal digit representing 4 bits.

The massive availability of address space is the primary benefit of IPv6 and its most
obvious impact. The challenges of IPv6, however, are that it is complex due to its large
address space and is often challenging for network administrators to monitor and manage.

IP network protocols

IP is a connectionless protocol, which means that there is no continuing connection between


the end points that are communicating. Each packet that travels through the internet is
treated as an independent unit of data without any relation to any other unit of data. The
reason the packets are reassembled in the right order is because of TCP, the connection-
oriented protocol that keeps track of the packet sequence in a message.

In the OSI model (Open Systems Interconnection), IP is in layer 3, the networking layer.

There are several commonly used network protocols that run on top of IP, including:

TCP. Transmission Control Protocol enables the flow of data across IP address connections.

UDP. User Datagram Protocol provides a way to transfer low-latency process


communication that is widely used on the internet for DNS lookup and voice over Internet
Protocol.

FTP. File Transfer Protocol is a specification that is purpose-built for accessing, managing,
loading, copying and deleting files across connected IP hosts.

HTTP. Hypertext Transfer Protocol is the specification that enables the modern web. HTTP
enables websites and web browsers to view content. It typically runs over port 80.

HTTPS. Hypertext Transfer Protocol Secure is HTTP that runs with encryption via Secure
Sockets Layer or Transport Layer Security. HTTPS typically is served over port 443.

4.8 INTERNET CONTROL PROTOCOLS

ICMP stands for Internet Control Message Protocol. It is a network layer protocol used for error handling in the network layer, and it is generally used on network devices, including routers. The IP protocol is a best-effort delivery service that delivers a datagram from its original source to its final destination. It has two deficiencies −

 Lack of Error Control

 Lack of assistance mechanisms

The IP protocol also lacks a mechanism for host and management queries. A host sometimes needs to determine whether a router or another host is alive, and sometimes a network manager needs information from another host or router.

ICMP has been created to compensate for these deficiencies. It is a partner to the IP protocol.

Fig 4.37 (a) ICMP

ICMP is a network layer protocol, but its messages are not passed directly to the data link layer. Instead, the messages are first encapsulated inside IP datagrams before going to the lower layer. The value of the protocol field in the IP datagram is 1, to indicate that the IP payload is an ICMP message.

The error-reporting messages report issues that a router or a host (destination) may encounter when it processes an IP packet.

The query messages, which appear in pairs, help a host or a network manager to get specific
data from a router or another host.

ICMP Message Format

An ICMP message includes an 8-byte header and a variable-size data section. The header fields are listed below, followed by a sketch of the checksum computation.

Fig 4.37 (b) ICMP.

 Type: It is an 8-bit field that represents the ICMP message type. In ICMPv6, the values from 0 to 127 are error messages, and the values from 128 to 255 are informational messages.

 Code: It is an 8-bit field that represents the subtype of the ICMP message.

 Checksum: It is a 16-bit field to recognize whether the error exists in the message or
not.
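As an illustrative sketch of how the checksum field can be computed (the standard Internet checksum of RFC 1071, taken over the entire ICMP message), consider:

```python
def internet_checksum(data):
    """16-bit one's-complement checksum used by ICMP (and the IP header)."""
    if len(data) % 2:
        data += b'\x00'                               # pad to an even number of bytes
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]         # 16-bit big-endian words
        total = (total & 0xFFFF) + (total >> 16)      # fold any carry back in
    return (~total) & 0xFFFF
```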

4.9 SUMMARY

The network layer provides services to the transport layer. It can be based on either virtual
circuits or datagrams. In both cases, its main job is routing packets from the source to the
destination. In virtual-circuit subnets, a routing decision is made when the virtual circuit is set
up. In datagram subnets, it is made on every packet. Many routing algorithms are used in
computer networks. Static algorithms include shortest path routing and flooding. Dynamic
algorithms include distance vector routing and link state routing. Most actual networks use
one of these. Subnets can easily become congested, increasing the delay and lowering the
throughput for packets. Network designers attempt to avoid congestion by proper design.
Networks differ in various ways, so when multiple networks are interconnected problems can
occur. Sometimes the problems can be finessed by tunneling a packet through a hostile
network, but if the source and destination networks are different, this approach fails.
Protocols described are IP, a new version of IP i.e. IPv6.

4.10 KEYWORDS

 Dataspeed : An AT&T marketing term used to describe a variety of data


communications devices.

 Data Stream : The transmission of characters and data bits through a channel

 Data Switch : A device used to connect data processing equipment to network lines,
offering flexibility in line /device selection.

 Data Terminal Equipment (DTE) : A term used to describe numerous data


processing equipment such as computers, terminals, controllers and printers.

 Data Transfer Rate, Data Rate : The measure of the speed of data transmission,
usually expressed in bits per second. Synonymous with speed, the data rate is often
incorrectly expressed in baud.

4.11 LEARNING ACTIVITY

1 A sender sends a series of packets to the same destination using 5-bit sequence numbers.
If the sequence number starts with 0, what is the sequence number after sending 100
packets?

___________________________________________________________________________
___________________________________________________________________________

2 Using 5-bit sequence numbers, what is the maximum size of the send and receive
windows for each of the following protocols?

a. Stop-and-Wait ARQ

b. Go-Back-N ARQ

c. Selective-Repeat ARQ

___________________________________________________________________________
___________________________________________________________________________

4.12 UNIT END QUESTIONS

A. Descriptive Questions

Short Questions:

1. What are the responsibilities of network layer?

2. What is Token Bus?

3. What is Network Topology?

4. Define Error detection and correction

5. What is the use of two-dimensional parity in error detection?

Long Questions:

1. What do you mean by ARP?

2. What do you mean by RARP?

3. What are the functions of MAC?

4. What is ALOHA. Differentiate between pure and slotted ALOHA.

5. Explain switching concept in depth.

B. Multiple Choice Questions

1. The physical layer concerns with

a. process to process delivery

b. bit-by-bit delivery

c. application to application delivery

d. none of the mentioned

2. The term that refers to a physical layer technique is called

a. FDMA

b. CDMA

c. TDMA

d. TDM

3. A message travels over a physical path is called___.

a. Signals

b. Medium

c. Protocols

d. All the above

4. The _____ is the portion of the physical layer that interfaces with the media access
control sublayer

a. physical address sublayer

b. physical data sublayer

c. physical signalling sublayer

d. none of the mentioned

5. The physical layer is responsible for movements of individual

a. Frames

b. Bit

c. Packet

d. Bytes

Answers:

1-b,2-d,3-b,4-c,5-b

4.13 REFERENCES

 Computer Networks, A. S. Tanenbaum, 4th Edition, Prentice Hall of India, New Delhi,
2003.

 Introduction to Data Communication & Networking, 3rd Edition, Behrouz Forouzan,


Tata McGraw Hill.

 Computer Networking, J.F. Kurose & K.W. Ross, A Top-Down Approach Featuring
the Internet, Pearson Edition, 2003.

 Communications Networks, Leon Garcia, and Widjaja, Tata McGraw Hill, 2000.

 Data and Computer Communications, William Stallings, 6th Edition, Pearson


Education, New Delhi.

 www.wikipedia.org

 Larry L. Peterson, Computer Networks: A Systems Approach, 3rd Edition (The


Morgan Kaufmann Series in Networking).

