Jamia BCA-II Data Communication and Computer Networks Basics: Unit IV
ACCESS PROTOCOLS
STRUCTURE
4.1 Introduction
4.3 Data-Link Control
4.5 Error Recovery Protocols
4.6 Routing
4.7.1 IP protocol
4.9 Summary
4.10 Keywords
4.13 References
4.0 LEARNING OBJECTIVES
Understand error recovery protocols, stop-and-wait ARQ, go-back-N ARQ and the Point-to-Point Protocol used on the Internet.
4.1 INTRODUCTION
Error detection and correction techniques are implemented either at the data link layer or at the transport layer of the OSI model.
Data can be corrupted during transmission. For reliable communication, errors must be detected and corrected. Corruption can take the form of:
Bits lost
Bits changed
Bits added
Types of Error
Single bit errors
Burst errors
SINGLE BIT ERROR – The term single bit error means that only 1 bit of a given data unit
(such as a byte, character, data unit, or packet) is changed from 1 to 0 or from 0 to 1.
Fig 4.1 Single Bit Error
BURST ERROR – The term burst error means two or more bits within the data unit have
changed from 1 to 0 or from 0 to 1.
ERROR DETECTION
Some undetected errors may still remain, but the goal is to minimize them.
To detect and correct errors, sufficient redundancy bits need to be sent with the data. Redundancy bits are extra bits sent by the source along with the data so that the destination can verify that the data arrived correctly.
Common error detection techniques include:
Parity Check
Checksum
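As a rough illustration of these two techniques, the Python sketch below computes an even-parity bit for a byte and a simple 16-bit one's-complement checksum over a few 16-bit words; the function names and sample values are only illustrative, not part of any standard.

# Even parity: the parity bit makes the total number of 1s in the byte even.
def even_parity_bit(byte):
    return bin(byte).count("1") % 2

# Simple 16-bit one's-complement checksum over 16-bit words
# (similar in spirit to the Internet checksum).
def checksum16(words):
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)   # wrap the carry around
    return (~total) & 0xFFFF

print(even_parity_bit(0b1011001))                  # -> 0 (this byte already has an even number of 1s)
print(hex(checksum16([0x4500, 0x0073, 0x0000, 0x4000])))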
ERROR CORRECTION
Error correction can be handled in two ways: backward error correction and forward error correction.
Backward Error Correction
The receiver device sends a request to the source device to re-send the data after detecting an error. This can be done using acknowledgments in two ways:
1 Positive acknowledgment
The receiver returns a confirmation for every block received correctly. The transmitter re-sends any block that is not acknowledged.
2 Negative acknowledgment
The receiver returns an identification of the blocks received in error, and the transmitter re-sends only those blocks.
Forward error correction
This technique allows the receiver to detect and correct errors without asking the sender for retransmission. The bandwidth requirement is higher, but a return channel is not needed. Each redundancy bit is a function of many parts of the original data; the code can also be non-systematic.
1. BLOCK CODING: REED-SOLOMON CODING
The encoder takes k data symbols of s bits each and adds parity symbols to make an n-symbol codeword. There are n-k parity symbols of s bits each. A Reed-Solomon decoder can correct up to t symbols that contain errors in a codeword, where 2t = n-k.
EXAMPLE OF REED-SOLOMON
n = 255, k = 223, s = 8
2t = 32, t = 16
The decoder can correct any 16 symbol errors within the code word
2. CONVOLUTIONAL CODING
One of the common metrics used by the Viterbi algorithm for path comparison is the Hamming distance, which is a bit-wise comparison between the received codeword and the allowable codewords.
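For instance, the Hamming distance between a received codeword and an allowable codeword can be computed with a bit-wise comparison, as in this small illustrative Python sketch (the codeword values are made up):

def hamming_distance(a, b):
    # XOR leaves a 1 in every bit position where the two codewords differ;
    # counting those 1s gives the Hamming distance.
    return bin(a ^ b).count("1")

received = 0b1011010
allowed  = 0b1001011
print(hamming_distance(received, allowed))   # -> 2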
4.3 DATA-LINK CONTROL
There are two main functions of the Data link layer and these are Data Link Control and
Media Access control. Data link control mainly deals with the design and procedure of
communication between two adjacent nodes: node-to-node communication.
Media access control is another main function of the Data Link layer which mainly
specifies how the link is shared.
Let us first cover Data link control in this tutorial and then in the next tutorial we will move
on to Media access control.
Framing
In the Physical layer, data transmission means moving bits in the form of a signal from the source to the destination. The Physical layer also provides synchronization, which ensures that the sender and the receiver use the same bit durations and timings.
The bits are packed into frames by the data link layer, so that each frame is distinguishable from another.
Framing in the data link layer separates a message from one source to a destination, or from other messages to other destinations, by adding a sender address and a destination address; the destination address specifies where the packet has to go, and the sender address helps the recipient acknowledge the receipt.
Frames can be either of fixed size or of variable size. By using frames, the data is broken up into recoverable chunks, and these chunks can easily be checked for corruption during transmission.
Problems in Framing
Given below are some of the problems caused due to the framing:
1. Detecting the start of the frame: Whenever a frame is transmitted, every station must be able to detect it. A station detects the frame by looking for a special sequence of bits marked at the beginning of the frame, the Starting Frame Delimiter (SFD).
2. How any station detects a frame: Every station in the network listens to the link for the SFD pattern through a sequential circuit. If an SFD is detected, the sequential circuit alerts the station. The station then checks the destination address in order to accept or reject the frame.
3. Detecting the end of the frame: that is, knowing when to stop reading the frame.
Parts of a frame
1. Flag A flag is used to mark the beginning and end of the frame.
2. Header The frame header mainly contains the address of the source and the destination of
the frame.
3. Trailer The frame trailer mainly contains the error detection and error correction bits.
Types of Framing
Fixed-size Framing
Variable-size Framing
Let us cover the above given two types one-by-one;
Fixed-size framing
In the fixed-size framing, there is no need for defining the boundaries of the frame. The size
or length of the frame itself can be used as a delimiter.
One drawback of fixed size framing is that it will suffer from Internal fragmentation if the
size of data is less than the size of the frame.
Variable-size framing
In variable-size framing, the size of each frame is different. Thus there is a need for a way to define the end of one frame and the beginning of the next.
Character-Oriented Protocols
In the Character-Oriented protocol, data to be carried are 8-bit characters from a coding
system such as ASCII.
1.Frame Header
The header of the frame contains the address of the source and destination in the form of
bytes.
2.Payload Field
The Payload field mainly contains the message that is to be delivered. In this case, it is a
variable sequence of data bytes.
3.Frame trailer
The trailer of the frame contains the bytes for error correction and error detection.
4.Flag
In order to separate one frame from the next, an 8-bit (1-byte) flag is added at the beginning and end of the frame.
This technique was popular when the data was in the form of text that was exchanged by the
data link layers. The flag selected could be any character that is not used for text
communication. But there is a need to send other types of information like graphs, audio, and
video.
Now any pattern that is used for the flag could also be a part of the Information. If this
happens then the receiver encounters this pattern in the middle of the data and then thinks that
it has reached the end of the frame.
In order to fix the above problem, the byte-stuffing strategy was added to the character-
oriented framing.
Byte-stuffing
It is a process of adding 1 special byte whenever there is a character with the same pattern as
the flag.
The data section is stuffed with an extra byte and this byte is usually called the Escape
character(ESC) and it has a predefined bit pattern.
Whenever the receiver encounters an ESC character, then it removes it from the data
section and then treats the next character as the data.
Byte stuffing and unstuffing
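A minimal Python sketch of byte stuffing and unstuffing is shown below. The FLAG and ESC byte values are placeholders chosen only for illustration; a real protocol defines its own values.

FLAG = 0x7E   # frame delimiter (illustrative value)
ESC  = 0x7D   # escape character (illustrative value)

def byte_stuff(payload):
    stuffed = []
    for b in payload:
        if b in (FLAG, ESC):
            stuffed.append(ESC)    # insert the escape byte before any flag-like byte
        stuffed.append(b)
    return stuffed

def byte_unstuff(stuffed):
    out, escaped = [], False
    for b in stuffed:
        if escaped:                # previous byte was ESC: keep this byte as plain data
            out.append(b)
            escaped = False
        elif b == ESC:
            escaped = True         # drop the ESC itself
        else:
            out.append(b)
    return out

data = [0x41, FLAG, 0x42, ESC, 0x43]
assert byte_unstuff(byte_stuff(data)) == data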
The disadvantage of character-oriented framing is that it adds considerable overhead to the message, which increases the total size of the frame. Another drawback is that modern coding systems use 16-bit or 32-bit characters, which conflict with 8-bit character-based framing.
Bit-Oriented Protocols
In Bit-oriented framing mainly the data section of the frame is a sequence of bits that are to
be interpreted by the upper layer as text, graphics, audio, video, etc.
In this, there is also a need for a delimiter in order to separate one frame from the other.
Bit-Stuffing
The process by which an extra 0 is added whenever five consecutive 1s follow a 0 in the data
so that the receiver does not mistake the pattern 01111110 for a flag is commonly referred to
as Bit stuffing.
The above figure shows the bit stuffing at the sender and the bit removal at the receiver. It is
important to note that even if we have a 0 after five 1s, we will still stuff a 0. The removal of
0 is done by the receiver.
It simply means that whenever the flag-like pattern 01111110 appears in the data, it is changed to 011111010 (stuffed), so that it is not mistaken for a flag by the receiver.
The real flag 01111110 is not stuffed by the sender and thus is recognized by the receiver.
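The rule described above can be sketched in a few lines of Python that operate on a string of '0'/'1' characters; this is only an illustration of the stuffing logic, not an implementation of any particular protocol.

def bit_stuff(bits):
    out, run = "", 0
    for b in bits:
        out += b
        run = run + 1 if b == "1" else 0
        if run == 5:      # five consecutive 1s: stuff a 0 so the data
            out += "0"    # can never imitate the flag 01111110
            run = 0
    return out

def bit_unstuff(bits):
    out, run, i = "", 0, 0
    while i < len(bits):
        out += bits[i]
        run = run + 1 if bits[i] == "1" else 0
        i += 1
        if run == 5:
            i += 1        # the bit after five 1s is a stuffed 0: drop it
            run = 0
    return out

data = "01111110011111"   # contains a flag-like run of 1s
assert bit_unstuff(bit_stuff(data)) == data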
Flow Control
Flow control coordinates the amount of data that can be sent before receiving an acknowledgment from the receiver, and it is one of the major duties of the data link layer.
For most of the protocols, flow control is a set of procedures that mainly tells the
sender how much data the sender can send before it must wait for an
acknowledgment from the receiver.
The data flow must not be allowed to overwhelm the receiver, because any receiving device has a limited speed at which it can process incoming data and a limited amount of memory to store it.
The processing rate is often slower than the transmission rate; for this reason each receiving device has a block of memory, commonly known as a buffer, that is used to store incoming data until it is processed. If the buffer begins to fill up, the receiver must be able to tell the sender to halt transmission until the receiver is once again able to receive.
Thus flow control makes the sender wait for an acknowledgment from the receiver before continuing to send more data.
Some of the common flow control techniques are: Stop-and-Wait and sliding window
technique.
Another important design issue that occurs in the data link layer is what to do with a sender
that systematically wants to transmit frames faster than the receiver can accept them. This
situation can easily occur when the sender is running on a fast computer and the receiver is
running on a slow machine. The sender keeps pumping the frames out at a high rate until the
receiver is completely swamped. Even if the transmission is error free, at a certain point the
receiver will simply be unable to handle the frames as they arrive and will start to lose some.
Clearly, something has to be done to prevent this situation. Two approaches are commonly
used. In the first one, feedback-based flow control, the receiver sends back information to the
sender giving it permission to send more data or at least telling the sender how the receiver is
doing. In the second one, rate-based flow control, the protocol has a built-in mechanism that
limits the rate at which senders may transmit data, without using feedback from the receiver
4.5 ERROR RECOVERY PROTOCOLS
Protocols
Protocols are normally implemented in software using one of the common programming languages. The classification of protocols is mainly done on the basis of where they are used.
Protocols can be used for noiseless channels(that is error-free) and also used for noisy
channels(that is error-creating). The protocols used for noiseless channels mainly cannot be
used in real-life and are mainly used to serve as the basis for the protocols used for noisy
channels.
These protocols are unidirectional in the sense that the data frames travel from one node, the sender, to the other node, the receiver.
The special frames called acknowledgment (ACK) and negative acknowledgment (NAK)
both can flow in opposite direction for flow and error control purposes and the data can flow
in only one direction.
But in the real-life network, the protocols of the data link layer are implemented as
bidirectional which means the flow of the data is in both directions. And in these protocols,
the flow control and error control information such as ACKs and NAKs are included in the
data frames in a technique that is commonly known as piggybacking.
Also, bidirectional protocols are more complex than the unidirectional protocol.
In our further tutorials we will be covering the above mentioned protocols in detail.
Simplest Protocol
The Simplest Protocol lies under the category of noiseless channels in the data link layer. It has neither flow control nor error control.
In this protocol, the receiver can immediately handle the frame it receives; the processing time is small enough to be considered negligible.
Basically, the data link layer of the receiver immediately removes the header from the frame and then hands over the data packet to the network layer, which also accepts the data packet immediately.
We can also say that in the case of this protocol the receiver never gets overwhelmed
with the incoming frames from the sender.
The flow control is not needed by the Simplest Protocol. The data link layer at the sender side
mainly gets the data from the network layer and then makes the frame out of data and sends
it. On the Receiver site, the data link layer receives the frame from the physical layer and
then extracts the data from the frame, and then delivers the data to its network layer.
Fig 4.9 Data Frame
The Datalink layers of both sender and receiver mainly provide transmission services for
their Network layers. The data link layer also uses the services provided by the physical layer
such as signaling, multiplexing, etc for the physical transmission of the bits.
Let us now take a look at the procedure used by the data link layer at both sides(sender as
well as the receiver).
No frame is sent by the data link layer of the sender site until its network layer has a data packet to send.
Similarly, the receiver site cannot deliver a data packet to its network layer until a
frame arrives.
The procedure at the sender site runs constantly; there is no action until there is a
request from the network layer.
Also, the procedure at the receiver site runs constantly; there is no action until there is
a notification from the physical layer.
Both procedures run continuously because neither of them knows when the corresponding events will occur.
Sender-site procedure (fragment):
GetData();
MakeFrame();

Receiver-site procedure (fragment):
ReceiveFrame();
ExtractData();
Flow Diagram for Simplest Protocol
Using the simplest protocol the sender A sends a sequence of frames without even thinking
about receiver B.
In order to send the three frames, there will be an occurrence of three events at sender A and
three events at the receiver B.
It is important to note that in the above figure the data frames are shown with the help of
boxes.
The height of the box mainly indicates the transmission time difference between the first bit
and the last bit of the frame.
Stop-and-wait Protocol is used in the data link layer for the transmission in the noiseless
channels. Let us first understand why there is a need to use this protocol then we will cover
this protocol in detail.
We have studied the simplest protocol in the previous tutorial. Suppose there is a scenario in which the data frames arrive at the receiver's site faster than they can be processed, that is, the rate of transmission is more than the processing rate of the frames. Also, it is normal that the receiver does not have enough space, and the data may be coming from multiple sources. Due to all this, frames may be discarded or service may be denied.
In order to prevent the receiver from being overwhelmed, there is a need to tell the sender to slow down the transmission of frames. We can make use of feedback from the receiver to the sender.
Now from the next section, we will cover the concept of the Stop-and-wait protocol.
As the name suggests, when we use this protocol during transmission, then the sender sends
one frame, then stops until it receives the confirmation from the receiver, after receiving the
confirmation sender sends the next frame.
There is unidirectional communication for the data frames, but the acknowledgment
or ACK frames travel from the other direction. Thus the flow control is added here.
Thus stop-and-wait is a flow control protocol that makes use of the flow control service provided by the data link layer.
For every sent frame, an acknowledgment is needed, and it takes the same amount of propagation time to get back to the sender.
To end the transmission, the sender transmits an end-of-transmission (EOT) frame.
The data link layer at the sender side waits for its network layer to have a data packet to send. It then checks whether it can send the frame. On receiving a positive notification from the physical layer, the data link layer makes the frame out of the data provided by the network layer and sends it to the physical layer. After sending the data, it waits for the acknowledgment before sending the next frame.
The data link layer on the receiver side waits for the frame to arrive. When the frame arrives
then the receiver processes the frame and then delivers it to the network layer. After that, it
will send the acknowledgment or we can say that ACK frame back to the sender.
Fig 4.11 Stop And Wait Protocol
The algorithm used at the sender site for the stop-and-wait protocol
This is the algorithm used at the sender site for the stop-and-wait protocol. Applications can have their implementation in their own programming language.
GetData();     //take a data packet from the network layer
MakeFrame();   //build a frame around the data
canSend=false; //cannot send until the acknowledgement arrives.
canSend=true;  //acknowledgement received; the next frame may be sent
This is an algorithm used at the receiver side for the stop-and-wait protocol. Applications
can have their implementation in their own programming language.
ReceiveFrame(); //receive the frame from the physical layer
ExtractData();  //remove the header and deliver the data to the network layer
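The fragments above only name the events. As a rough single-process illustration of the same idea, the Python sketch below uses a canSend flag: the sender builds a frame, blocks further sending until an acknowledgment is "received", and only then continues (all names and the in-memory "channel" are assumptions of this sketch):

# A single-process simulation: each frame must be acknowledged
# before the next one is sent.
frames_to_send = ["frame0", "frame1", "frame2"]

can_send = True
for data in frames_to_send:
    assert can_send                 # the sender transmits only when allowed to
    frame = {"payload": data}       # MakeFrame()
    can_send = False                # cannot send until the acknowledgement arrives

    # --- receiver side ---
    payload = frame["payload"]      # ReceiveFrame() + ExtractData()
    ack = "ACK"                     # the receiver returns an acknowledgement

    # --- back at the sender ---
    if ack == "ACK":
        can_send = True             # the next frame may now be sent
    print("delivered:", payload)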
Flow diagram of the stop-and-wait protocol
Advantages
One of the main advantages of the stop-and-wait protocol is its accuracy: the next frame is transmitted only after the acknowledgment of the previous frame has been received, so there is no chance of data loss.
Disadvantages
Given below are some of the drawbacks of using the stop-and-wait Protocol:
Suppose the frame is sent by the sender but it gets lost during transmission; then the receiver can neither get it nor send an acknowledgment back to the sender. Upon not receiving the acknowledgment, the sender will not send the next frame. Thus two situations arise: the receiver waits an infinite amount of time for the data, and the sender waits an infinite amount of time to send the next frame.
In the case of the transmission over a long distance, this is not suitable because the
propagation delay becomes much longer than the transmission delay.
Suppose the sender sends the data and the data is received by the receiver, who then sends the acknowledgment; but for some reason this acknowledgment reaches the sender after the timeout period. As the acknowledgment is received too late, it can be wrongly considered as the acknowledgment of another data packet.
The time spent waiting for the acknowledgment for each frame also adds up in the
total transmission time.
The sender keeps a copy of each frame until the arrival of acknowledgement.
Go-Back-N ARQ
In Go-Back-N ARQ, the size of the sender window is N and the size of the receiver window is always 1.
This protocol makes use of cumulative acknowledgements: the receiver maintains an acknowledgement timer; whenever the receiver receives a new frame from the sender, it starts a new acknowledgement timer. When the timer expires, the receiver sends a cumulative acknowledgement for all the frames that are unacknowledged at that moment.
It is important to note that the new acknowledgement timer only starts after the
receiving of a new frame, it does not start after the expiry of the old
acknowledgement timer.
If the receiver receives a corrupted frame, then it silently discards that corrupted frame, and the correct frame is retransmitted by the sender after the timeout timer expires. By discarding silently we mean simply rejecting the frame and not taking any action on it.
In case after the expiry of the acknowledgement timer, suppose there is only one
frame that is left to be acknowledged. In that case, the receiver sends the independent
acknowledgement for that frame.
If the receiver receives an out-of-order frame, then it simply discards that frame.
If the sender does not receive any acknowledgement, then the entire window of frames is retransmitted.
Using the Go-Back-N ARQ protocol leads to the retransmission of the lost frames
after the expiry of the timeout timer.
This protocol is used to send more than one frame at a time. With the help of Go-Back-N
ARQ, there is a reduction in the waiting time of the sender.
With the help of the Go-Back-N ARQ protocol the efficiency in the transmission increases.
Basically, the range which is in the concern of the sender is known as the send sliding
window for the Go-Back-N ARQ. It is an imaginary box that covers the sequence numbers of
the data frame which can be in transit.
The size of this imaginary box is at most 2^m - 1, and it has three variables: Sf (the first outstanding frame in the send window), Sn (the next frame to be sent), and Ssize (the size of the send window).
The sender can transmit N frames before receiving the ACK frame.
The copy of sent data is maintained in the sent buffer of the sender until all the sent
packets are acknowledged.
If the timeout timer runs out then the sender will resend all the packets.
Once the data get acknowledged by the receiver then that particular data will be
removed from the buffer.
Whenever a valid acknowledgement arrives then the send window can slide one or more
slots.
As we have already told you, the sender window size is N. The value of N must be greater than 1; if N is equal to 1, this protocol becomes the stop-and-wait protocol.
The range that is in the concern of the receiver is called the receiver sliding window.
The window slides when a correct frame arrives, the sliding occurs one slot at a time.
The receiver always looks for a specific frame to arrive in the specific order.
Any frame that arrives out of order at the receiver side will be discarded and thus needs to be resent by the sender.
If a frame arrives at the receiver safely and in the proper order, then the receiver sends an ACK back to the sender.
The silence of the receiver causes the timer of the unacknowledged frame to expire.
Fig 4.14 Sliding Window
With the help of Go-Back-N ARQ, multiple frames can be in transit in the forward direction and multiple acknowledgements can be in transit in the reverse direction. The idea is similar to Stop-and-Wait ARQ, with one difference: the window of Go-Back-N ARQ allows several frames to be in transit at the same time, as there are many slots in the send window.
Fig 4.15 GO-BACK-N ARQ
In Go-Back-N ARQ, the size of the send window must always be less than 2^m and the size of the receiver window is always 1.
Fig 4.16 Window size for Go-Back-N ARQ
Flow Diagram
Advantages
Given below are some of the benefits of using the Go-Back-N ARQ protocol:
A single timer can be set for many frames.
A single ACK frame can acknowledge more than one frame.
Disadvantages
If a single frame is lost or corrupted, the sender may have to retransmit the whole window of frames, even those that were received correctly, which wastes bandwidth. A sketch of this behaviour is given below.
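As a rough illustration, the Python sketch below simulates a Go-Back-N sender with a window of four frames over a randomly lossy channel; the loss probability, window size and frame counts are invented for the example and are not part of the protocol definition:

import random
random.seed(1)                       # fixed seed so the run is repeatable

TOTAL_FRAMES = 8
WINDOW = 4                           # sender window size N

base = 0                             # Sf: first outstanding (unacknowledged) frame
expected = 0                         # the only frame the receiver will accept

while base < TOTAL_FRAMES:
    window = list(range(base, min(base + WINDOW, TOTAL_FRAMES)))
    print("sender transmits:", window)
    for seq in window:
        if random.random() < 0.2:    # simulate a lost or corrupted frame
            print("  frame", seq, "lost - receiver silently discards it and every later frame")
            break
        print("  receiver accepts frame", seq, "and sends ACK", seq)
        expected += 1
    # cumulative ACK: everything before `expected` is acknowledged;
    # on timeout the sender goes back and resends from the first unacknowledged frame
    base = expected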
Point-to-Point Protocol (PPP)
PPP (Point-to-Point Protocol) is a protocol used in the data link layer. It is mainly used to establish a direct connection between two nodes.
This protocol defines how two devices can authenticate with each other.
PPP protocol also defines the format of the frames that are to be exchanged between
the devices.
This protocol also defines how the data of the network layer are encapsulated in the
data link frame.
The PPP protocol defines how the two devices can negotiate the establishment of the
link and then can exchange the data.
This protocol provides multiple services of the network layer and also supports
various network-layer protocols.
Some services that are not offered by the PPP protocol are as follows:
1. This protocol does not provide a flow control mechanism: when using this protocol, the sender can send any number of frames to the receiver one after the other, without any consideration of overwhelming the receiver.
2. This protocol does not provide any mechanism for addressing in order to handle the
frames in the multipoint configuration.
3. The PPP protocol provides a very simple mechanism for error control. There is a CRC
field that detects the errors. In case if there is a corrupted frame then it is discarded silently.
In the PPP protocol, the framing is done using the byte-oriented technique.
Let us discuss each field of the PPP frame format one by one:
1. Flag
The PPP frame mainly starts and ends with a 1-byte flag field that has the bit pattern:
01111110. It is important to note that this pattern is the same as the flag pattern used in
HDLC. But there is a difference too and that is PPP is a byte-oriented protocol whereas the
HDLC is a bit-oriented protocol.
2. Address
The value of this field in PPP protocol is constant and it is set to 11111111 which is a
broadcast address. The two parties can negotiate and can omit this byte.
3. Control
The value of this field is also a constant value of 11000000. We have already told you that
PPP does not provide any flow control and also error control is limited to error detection. The
two parties can negotiate and can omit this byte.
4. Protocol
This field defines what is being carried in the data field. It can either be user information or
other information. By default, this field is 2 bytes long.
5. Payload field
This field carries the data from the network layer. The maximum length of this field is 1500
bytes. This can also be negotiated between the endpoints of communication.
6. FCS
The frame check sequence (FCS) is simply a 2-byte or 4-byte standard CRC used for error detection.
As we have told you, a major difference between PPP and HDLC is that PPP is a byte-oriented protocol. This means that the flag in PPP is a byte and needs to be escaped wherever it appears in the data section of the frame.
The escape byte is 01111101, which means that whenever a flag-like pattern appears in the data, an extra byte is stuffed to tell the receiver that the next byte is not a flag.
The PPP protocol has to go through various phases and these are shown in the diagram given
below;
Fig 4.19 Transition Phases
Dead
In this phase, the link is not being used. There is no active carrier at the physical layer and the line is simply quiet.
Establish
If one of the nodes starts the communication, then the connection goes into the establish phase. In this phase, options are negotiated between the two parties. If the negotiation is successful, the system goes into the authenticate phase (if authentication is required) or directly into the network phase.
Authenticate
This is an optional phase. During the establishment phase, the two nodes may decide whether or not to skip this phase. If the two nodes decide to proceed with authentication, they exchange several authentication packets.
If the result of this is successful then the connection goes into the networking phase otherwise
goes into the termination phase.
Network
In this phase, the negotiation of the protocols of the network layer takes place. The PPP
protocol specifies that the two nodes establish an agreement of the network layer before the
data at the network layer can be exchanged. The reason behind this is PPP supports multiple
protocols at the network layer.
If any node is running multiple protocols at the network layer simultaneously, then the receiving node needs to know which protocol will receive the data.
Open
In this phase the transfer of the data takes place. Whenever a connection reaches this phase,
then the exchange of data packets can be started. The Connection remains in this phase until
one of the endpoints in the communication terminates the connection.
Terminate
In this phase, the connection is terminated. There is an exchange of several packets between
two ends for house cleaning and then closing the link.
Basically, PPP is a layered protocol. There are three components of the PPP protocol and these are as follows:
Link Control Protocol
Authentication Protocol
Network Control Protocol
Fig 4.20 Network Layer
Link Control Protocol
The Link Control Protocol is responsible for establishing, maintaining, configuring, and terminating the link. Both endpoints of the link must reach an agreement about the options before the link can be established.
Authentication protocol
This protocol plays a very important role in the PPP protocol because the PPP is designed for
use over the dial-up links where the verification of user identity is necessary. Thus this
protocol is mainly used to authenticate the endpoints for the use of other services.
Network Control Protocol
The Network Control Protocol is mainly used for negotiating the parameters and facilities for
the network layer.
For every higher-layer protocol supported by PPP protocol; there is one Network control
protocol.
4.6 ROUTING
Routing is the process of forwarding of a packet in a network so that it reaches its intended
destination. The main goals of routing are:
1. Correctness: The routing should be done properly and correctly so that the packets
may reach their proper destination.
2. Simplicity: The routing should be done in a simple manner so that the overhead is as
low as possible. With increasing complexity of the routing algorithms the overhead
also increases.
3. Robustness: The routing algorithm should be able to cope with changes in topology and traffic without requiring all activity in the network to be restarted.
4. Stability: The routing algorithms should be stable under all possible circumstances.
5. Fairness: Every node connected to the network should get a fair chance of
transmitting their packets. This is generally done on a first come first serve basis.
6. Optimality: The routing algorithms should be optimal in terms of throughput and
minimizing mean packet delays. Here there is a trade-off and one has to choose
depending on his suitability.
1. Centralized: In this type some central node in the network gets entire
information about the network topology, about the traffic and about other
nodes. This then transmits this information to the respective routers. The
advantage of this is that only one node is required to keep the information. The
disadvantage is that if the central node goes down the entire network is down,
i.e. single point of failure.
2. Isolated: In this method the node decides the routing without seeking information from other nodes. The sending node does not know about the status of a particular link. The disadvantage is that the packet may be sent through a congested route, resulting in a delay. Some examples of this type of routing algorithm are:
Backward Learning: Each node notes down the number of hops a packet has taken to reach it from the source node. If the previously stored hop count for that node is better than the current one then nothing is done, but if the current value is better then the value is updated for future use. The problem with this is that when the best route goes down, the node cannot recall the second best route to a particular node. Hence all the nodes have to forget the stored information periodically and start all over again.
Flooding: In this method every incoming packet is sent out on every outgoing line except the one it arrived on. Flooding generates vast numbers of duplicate packets, so some measures are needed to limit them:
Hop Count: Every packet has a hop count associated with it. This is decremented (or incremented) by one by each node which sees it. When the hop count becomes zero (or reaches a maximum possible value) the packet is dropped.
Spanning Tree: The packet is sent only on those links that lead to the destination by constructing a spanning tree rooted at the source. This avoids loops in transmission but is possible only when all the intermediate nodes have knowledge of the network topology.
Flooding is not practical for general kinds of applications. But in cases where high degree of
robustness is desired such as in military applications, flooding is of great help.
2. Random Walk: In this method a packet is sent by the node to one of its
neighbours randomly. This algorithm is highly robust. When the network is
highly interconnected, this algorithm has the property of making excellent use
of alternative routes. It is usually implemented by sending the packet onto the
least queued link.
Delta Routing
Delta routing is a hybrid of the centralized and isolated routing algorithms. Here each node
computes the cost of each line (i.e some functions of the delay, queue length, utilization,
bandwidth etc) and periodically sends a packet to the central node giving it these values
which then computes the k best paths from node i to node j. Let Cij1 be the cost of the best i-j path, Cij2 the cost of the next best path, and so on. If Cijn - Cij1 < delta (where Cijn is the cost of the n'th best i-j path and delta is some constant), then path n is regarded as equivalent to the best i-j path, since their costs differ by so little. When delta -> 0 this algorithm becomes centralized routing, and when delta -> infinity all the paths become equivalent.
Multipath Routing
In the above algorithms it has been assumed that there is a single best path between any pair
of nodes and that all traffic between them should use it. In many networks however there are
several paths between pairs of nodes that are almost equally good. Sometimes in order to
improve the performance multiple paths between single pair of nodes are used. This
technique is called multipath routing or bifurcated routing. In this each node maintains a table
with one row for each possible destination node. A row gives the best, second best, third best,
etc outgoing line for that destination, together with a relative weight. Before forwarding a
packet, the node generates a random number and then chooses among the alternatives, using
the weights as probabilities. The tables are worked out manually and loaded into the nodes
before the network is brought up and not changed thereafter.
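As a small illustration of that last step, the Python fragment below picks an outgoing line for one hypothetical destination using the stored weights as probabilities (the line names and weights are made up):

import random

# hypothetical routing-table row for one destination: outgoing line -> relative weight
alternatives = {"line-1 (best)": 0.63, "line-2": 0.21, "line-3": 0.16}

lines = list(alternatives.keys())
weights = list(alternatives.values())

# before forwarding a packet, choose a line at random using the weights as probabilities
chosen = random.choices(lines, weights=weights, k=1)[0]
print("forward the packet on", chosen)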
Hierarchical Routing
In this method of routing the nodes are divided into regions based on hierarchy. A particular node can communicate with nodes at the same hierarchical level or with the nodes at a lower level directly under it. Here, the path from any source to a destination is fixed and is exactly one if the hierarchy is a tree.
Distance Vector Routing
In distance vector routing, each router periodically shares its knowledge about the network with its neighbors. Its key characteristics are:
Asynchronous: It does not require that all of its nodes operate in lock step with each other.
Knowledge about the whole network: Each router shares its knowledge about the entire network. The router sends its collected knowledge about the network to its neighbors.
Routing only to neighbors: The router sends its knowledge about the network to
only those routers which have direct links. The router sends whatever it has about the
network through the ports. The information is received by the router and uses the
information to update its own routing table.
Information sharing at regular intervals: Every 30 seconds, the router sends the information to its neighboring routers.
Distance Vector Routing Algorithm
Let dx(y) be the cost of the least-cost path from node x to node y. The least costs are related by the Bellman-Ford equation:
dx(y) = minv{ c(x,v) + dv(y) }
where the minimum is taken over all neighbors v of x. After traveling from x to a neighbor v, if we consider the least-cost path from v to y, the path cost will be c(x,v) + dv(y). The least cost from x to y is the minimum of c(x,v) + dv(y) taken over all neighbors.
With the Distance Vector Routing algorithm, the node x contains the following routing
information:
For each neighbor v, the cost c(x,v) is the path cost from x to directly attached
neighbor, v.
The distance vector of each of its neighbors, i.e., Dv = [ Dv(y) : y in N ] for each
neighbor v of x.
Distance vector routing is an asynchronous algorithm in which node x sends a copy of its distance vector to all its neighbors. When node x receives a new distance vector from one of its neighbors v, it saves the distance vector of v and uses the Bellman-Ford equation to update its own distance vector:
Dx(y) = minv{ c(x,v) + Dv(y) }   for each node y in N
After node x has updated its own distance vector table using the above equation, it sends the updated table to all its neighbors so that they can update their own distance vectors.
Algorithm
At each node x:
Initialization:
  for all destinations y in N:
    Dx(y) = c(x,y)    /* infinity if y is not a neighbor of x */
  send distance vector Dx = [Dx(y) : y in N] to each neighbor
loop
  wait (until I receive a distance vector from some neighbor w)
  for each y in N:
    Dx(y) = minv{ c(x,v) + Dv(y) }
  if Dx(y) changed for any y, send the updated Dx to all neighbors
forever
Note: In the distance vector algorithm, node x updates its table when it either sees a cost change in one of its directly linked nodes or receives a vector update from some neighbor.
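The Bellman-Ford update can be written directly in code. The Python sketch below updates node x's distance vector from the vectors received from its two neighbours; the topology and link costs are invented purely for illustration:

INF = float("inf")

# link costs from x to its directly attached neighbours (an invented topology)
c = {"v": 2, "w": 7}

# distance vectors most recently received from those neighbours
D = {
    "v": {"x": 2, "v": 0, "w": 4, "y": 3},
    "w": {"x": 7, "v": 4, "w": 0, "y": 1},
}

nodes = ["x", "v", "w", "y"]
Dx = {}
for y in nodes:
    if y == "x":
        Dx[y] = 0
    else:
        # Bellman-Ford: Dx(y) = min over every neighbour n of c(x,n) + Dn(y)
        Dx[y] = min(c[n] + D[n][y] for n in c)

print(Dx)   # Dx["y"] = min(2+3, 7+1) = 5, so y is best reached via v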
Fig 4.21 Cloud represents the network
In the above figure, each cloud represents the network, and the number inside the
cloud represents the network ID.
All the LANs are connected by routers, and they are represented in boxes labeled as
A, B, C, D, E, F.
Distance vector routing algorithm simplifies the routing process by assuming the cost
of every link is one unit. Therefore, the efficiency of transmission can be measured by
the number of links needed to reach the destination. In distance vector routing, the cost is based on hop count.
Fig 4.22 Distance Vector Routing
In the above figure, we observe that each router sends its knowledge to its immediate neighbors. The neighbors add this knowledge to their own knowledge and send the updated table to their own neighbors. In this way, each router gets its own information plus new information about its neighbors.
Routing Table
Initially, a routing table is created for each router that contains at least three types of information: the network ID, the cost and the next hop.
NET ID: The network ID defines the final destination of the packet.
Cost: The cost is the number of hops the packet must take to get there.
Next hop: The next hop is the router to which the packet must be delivered next.
Fig 4.23 Original Routing Tables Are Shown Of All The Routers
In the above figure, the original routing tables are shown of all the routers. In a
routing table, the first column represents the network ID, the second column
represents the cost of the link, and the third column is empty.
For Example:
When A receives a routing table from B, then it uses its information to update the
table.
The routing table of B shows how the packets can move to the networks 1 and 4.
B is a neighbor of router A, so packets from A to B can travel in one hop. Therefore, 1 is added to all the costs given in B's table, and the sum is the cost to reach a particular network.
After adjustment, A then combines this table with its own table to create a combined
table.
The combined table may contain some duplicate data. In the above figure, the combined table of router A contains duplicate data, so it keeps only the entries with the lowest cost. For example, A can send data to network 1 in two ways. The first, which uses no next router, costs one hop. The second requires two hops (A to B, then B to network 1). The first option has the lowest cost, therefore it is kept and the second one is dropped.
The process of creating the routing table continues for all routers. Every router receives the information from its neighbors and updates its routing table.
Final routing tables of all the routers are given below:
Link State Routing
Link state routing is a technique in which each router shares the knowledge of its neighborhood with every other router in the internetwork.
Knowledge about the neighborhood: Instead of sending its routing table, a router
sends the information about its neighborhood only. A router broadcast its identities
and cost of the directly attached links to other routers.
Flooding: Each router sends the information to every other router on the internetwork, not only to its neighbors. This process is known as flooding. Every router that receives the packet sends copies to all its neighbors. Finally, each and every router receives a copy of the same information.
Information sharing: A router sends the information to every other router only when
the change occurs in the information.
Reliable Flooding
Route Calculation
Each node uses Dijkstra's algorithm on the graph to calculate the optimal routes to all
nodes.
The link state routing algorithm uses Dijkstra's algorithm, which finds the shortest path from one node to every other node in the network.
Dijkstra's algorithm is iterative, and it has the property that after the kth iteration of the algorithm, the least-cost paths are known for k destination nodes.
c(i, j): Link cost from node i to node j. If i and j are not directly linked, then c(i, j) = ∞.
D(v): The cost of the path from the source node to destination v that currently has the least cost.
P(v): The previous node (neighbor of v) along the current least-cost path from the source to v.
Algorithm
Initialization: N = {A}; for every node v, D(v) = c(A, v) if v is a neighbor of A, otherwise D(v) = ∞.
Loop: find w not in N such that D(w) is a minimum and add w to N; then for each neighbor v of w not in N, set D(v) = min( D(v), D(w) + c(w, v) ). Repeat until all nodes are in N.
In the above algorithm, an initialization step is followed by the loop. The number of times the loop is executed is equal to the total number of nodes available in the network.
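A straightforward Python version of this loop is sketched below for the six-node example worked out in the following steps; the link costs are read off those steps (for instance c(A,B)=2, c(A,D)=1, c(D,E)=1, c(E,F)=2) and are therefore assumptions tied to this particular example:

INF = float("inf")

# undirected link costs taken from the worked example below
cost = {
    ("A", "B"): 2, ("A", "C"): 5, ("A", "D"): 1,
    ("B", "C"): 3, ("B", "D"): 2,
    ("C", "D"): 3, ("C", "E"): 1, ("C", "F"): 5,
    ("D", "E"): 1, ("E", "F"): 2,
}

def c(i, j):
    # cost of the direct link between i and j, or infinity if they are not directly linked
    return cost.get((i, j), cost.get((j, i), INF))

nodes = ["A", "B", "C", "D", "E", "F"]
source = "A"

N = {source}                                   # nodes whose least-cost path is already known
D = {v: c(source, v) for v in nodes}           # current least-cost estimates
D[source] = 0
P = {v: source for v in nodes if c(source, v) < INF}   # previous node on the best path

while len(N) < len(nodes):
    # pick the node w not yet in N with the smallest D(w) and add it to N
    w = min((v for v in nodes if v not in N), key=lambda v: D[v])
    N.add(w)
    # relax every node not yet in N through w
    for v in nodes:
        if v not in N and D[w] + c(w, v) < D[v]:
            D[v] = D[w] + c(w, v)
            P[v] = w

print(D)   # expected: {'A': 0, 'B': 2, 'C': 3, 'D': 1, 'E': 2, 'F': 4}
print(P)   # expected previous-hop table, e.g. F is reached via E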
Step 1:
The first step is an initialization step. The currently known least cost path from A to its
directly attached neighbors, B, C, D are 2,5,1 respectively. The cost from A to B is set to 2,
from A to D is set to 1 and from A to C is set to 5. The cost from A to E and F are set to
infinity as they are not directly linked to A.
Step    N        D(B),P(B)   D(C),P(C)   D(D),P(D)   D(E),P(E)   D(F),P(F)
1       A        2,A         5,A         1,A         ∞           ∞
Step 2:
In the above table, we observe that vertex D has the least-cost path in step 1. Therefore, it is added to N. Now, we need to determine the least-cost paths through vertex D.
a) Calculating the shortest path from A to B
1. v = B, w = D
2. D(B) = min( D(B), D(D) + c(D,B) )
3.      = min( 2, 1+2 )
4.      = min( 2, 3 )
b) Calculating the shortest path from A to C
1. v = C, w = D
2. D(C) = min( D(C), D(D) + c(D,C) )
3.      = min( 5, 1+3 )
4.      = min( 5, 4 )
5. The minimum value is 4. Therefore, the currently shortest path from A to C is 4.
c) Calculating the shortest path from A to E
1. v = E, w = D
2. D(E) = min( D(E), D(D) + c(D,E) )
3.      = min( ∞, 1+1 )
4.      = min( ∞, 2 )
5. The minimum value is 2. Therefore, the currently shortest path from A to E is 2.
Step 3:
In the above table, we observe that both E and B have the least cost path in step 2. Let's
consider the E vertex. Now, we determine the least cost path of remaining vertices through E.
a) Calculating the shortest path from A to B
1. v = B, w = E
2. D(B) = min( D(B), D(E) + c(E,B) )
3.      = min( 2, 2+∞ )
4.      = min( 2, ∞ )
b) Calculating the shortest path from A to C
1. v = C, w = E
2. D(C) = min( D(C), D(E) + c(E,C) )
3.      = min( 4, 2+1 )
4.      = min( 4, 3 )
c) Calculating the shortest path from A to F.
1. v = F, w = E
2. D(F) = min( D(F), D(E) + c(E,F) )
3.      = min( ∞, 2+2 )
4.      = min( ∞, 4 )
Step 4:
In the above table, we observe that vertex B has the least-cost path in step 3. Therefore, it is added to N. Now, we determine the least-cost paths of the remaining vertices through B.
a) Calculating the shortest path from A to C
1. v = C, w = B
2. D(C) = min( D(C), D(B) + c(B,C) )
3.      = min( 3, 2+3 )
4.      = min( 3, 5 )
b) Calculating the shortest path from A to F.
1. v = F, w = B
2. D(F) = min( D(F), D(B) + c(B,F) )
3.      = min( 4, 2+∞ )
4.      = min( 4, ∞ )
Step 5:
In the above table, we observe that vertex C has the least-cost path in step 4. Therefore, it is added to N. Now, we determine the least-cost path of the remaining vertex through C.
1. v = F, w = C
2. D(F) = min( D(F), D(C) + c(C,F) )
3.      = min( 4, 3+5 )
4.      = min( 4, 8 )
Final table:
Step    N        D(B),P(B)   D(C),P(C)   D(D),P(D)   D(E),P(E)   D(F),P(F)
1       A        2,A         5,A         1,A         ∞           ∞
2       AD       2,A         4,D                     2,D         ∞
3       ADE      2,A         3,E                                 4,E
4       ADEB                 3,E                                 4,E
5       ADEBC                                                    4,E
6       ADEBCF
Disadvantage:
Heavy traffic is created in link state routing due to flooding. Flooding can cause infinite looping; this problem can be solved by using the Time-to-live field.
What is congestion?
Congestion is a state occurring in the network layer when the message traffic is so heavy that it slows down the network response time.
Effects of Congestion
As delay increases, performance decreases. If the delay increases, retransmissions occur, making the situation worse.
Leaky Bucket Algorithm
Imagine a bucket with a small hole in the bottom. No matter at what rate water enters the
bucket, the outflow is at constant rate. When the bucket is full with water additional water
entering spills over the sides and is lost.
Similarly, each network interface contains a leaky bucket and the following steps are
involved in leaky bucket algorithm:
1. When host wants to send packet, packet is thrown into the bucket.
2. The bucket leaks at a constant rate, meaning the network interface transmits
packets at a constant rate.
The leaky bucket algorithm enforces output pattern at the average rate, no matter how
bursty the traffic is. So in order to deal with the bursty traffic we need a flexible algorithm
so that the data is not lost. One such algorithm is token bucket algorithm.
Token Bucket Algorithm
The token bucket algorithm works as follows:
1. At regular intervals, tokens are thrown into the bucket.
2. The bucket has a maximum capacity.
3. If there is a ready packet, a token is removed from the bucket, and the packet is sent.
4. If there is no token in the bucket, the packet cannot be sent.
Let us understand this with an example.
In figure (A) we see a bucket holding three tokens, with five packets waiting to be
transmitted. For a packet to be transmitted, it must capture and destroy one token. In figure
(B) We see that three of the five packets have gotten through, but the other two are stuck
waiting for more tokens to be generated.
The leaky bucket algorithm controls the rate at which packets are introduced into the network, but it is very conservative in nature. Some flexibility is introduced in the token bucket algorithm. In the token bucket algorithm, tokens are generated at each tick (up to a certain limit). For an incoming packet to be transmitted, it must capture a token, and the transmission then takes place at the same rate. Hence some of the bursty packets are transmitted at the same rate if tokens are available, which introduces some flexibility into the system.
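The Python sketch below simulates the token bucket over a few ticks: the bucket starts with three tokens and five packets waiting (as in the figure described above), packets consume one token each, and one new token is generated per tick. The capacities and rates are invented numbers for illustration only:

CAPACITY = 3          # maximum number of tokens the bucket can hold
TOKEN_RATE = 1        # tokens generated per tick

tokens = 3                                   # figure (A): the bucket holds three tokens
waiting = ["p1", "p2", "p3", "p4", "p5"]     # five packets waiting to be transmitted

for tick in range(4):
    sent = []
    while waiting and tokens > 0:
        tokens -= 1                          # each packet must capture and destroy one token
        sent.append(waiting.pop(0))
    print(f"tick {tick}: sent {sent}, still waiting {waiting}")
    tokens = min(CAPACITY, tokens + TOKEN_RATE)   # a new token is generated at each tick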
Network Layer
It handles the service requests from the transport layer and further forwards the
service request to the data link layer.
The network layer translates the logical addresses into physical addresses
It determines the route from the source to the destination and also manages the traffic
problems such as switching, routing and controls the congestion of data packets.
The main role of the network layer is to move the packets from sending host to the
receiving host.
The main functions performed by the network layer are:
Routing: When a packet reaches the router's input link, the router moves the packet to the router's output link. For example, a packet from S1 to R1 must be
forwarded to the next router on the path to S2.
Logical Addressing: The data link layer implements the physical addressing and
network layer implements the logical addressing. Logical addressing is also used to
distinguish between source and destination system. The network layer adds a header
to the packet which includes the logical addresses of both the sender and the receiver.
Internetworking: This is the main role of the network layer that it provides the
logical connection between different types of networks.
In Network layer, a router is used to forward the packets. Every router has a forwarding table.
A router forwards a packet by examining a packet's header field and then using the header
field value to index into the forwarding table. The value stored in the forwarding table
corresponding to the header field value indicates the router's outgoing interface link to which
the packet is to be forwarded.
For example, suppose a packet with a header field value of 0111 arrives at a router; the router indexes this header value into the forwarding table, which indicates that the output link interface is 2. The router then forwards the packet to interface 2. The routing algorithm determines the values that are inserted in the forwarding table. The routing algorithm can be centralized or decentralized.
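A toy version of this lookup in Python is shown below; the prefixes and interface numbers are invented, except that a header value of 0111 maps to interface 2 to mirror the example above:

# forwarding table: header-field prefix -> outgoing interface (invented values)
forwarding_table = {"00": 0, "010": 1, "011": 2, "10": 2, "11": 3}

def lookup(header_bits):
    # longest-prefix match over the table entries
    best = None
    for prefix, interface in forwarding_table.items():
        if header_bits.startswith(prefix) and (best is None or len(prefix) > len(best[0])):
            best = (prefix, interface)
    return best[1] if best else None

print(lookup("0111"))   # -> 2, so the packet is forwarded on interface 2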
Guaranteed delivery: This layer provides the service which guarantees that the
packet will arrive at its destination.
Guaranteed delivery with bounded delay: This service guarantees that the packet
will be delivered within a specified host-to-host delay bound.
In-Order packets: This service ensures that the packet arrives at the destination in
the order in which they are sent.
Guaranteed max jitter: This service ensures that the amount of time taken between
two successive transmissions at the sender is equal to the time between their receipt at
the destination.
Security services: The network layer provides security by using a session key
between the source and destination host. The network layer in the source host
encrypts the payloads of datagrams being sent to the destination host. The network
layer in the destination host would then decrypt the payload. In such a way, the
network layer maintains the data integrity and source authentication services.
TCP/IP Model
The OSI Model we just looked at is just a reference/logical model. It was designed to
describe the functions of the communication system by dividing the communication
procedure into smaller and simpler components. But when we talk about the TCP/IP model, it
was designed and developed by Department of Defense (DoD) in 1960s and is based on
standard protocols. It stands for Transmission Control Protocol/Internet Protocol. The
TCP/IP model is a concise version of the OSI model. It contains four layers, unlike seven
layers in the OSI model. The layers are:
1. Process/Application Layer
2. Host-to-Host/Transport Layer
3. Internet Layer
4. Network Access/Link Layer
Network Addressing
A host is also known as end system that has one link to the network. The boundary
between the host and link is known as an interface. Therefore, the host can have only
one interface.
A router is different from the host in that it has two or more links that connect to it.
When a router forwards the datagram, then it forwards the packet to one of the links.
The boundary between the router and link is known as an interface, and the router can
have multiple interfaces, one for each of its links. Each interface is capable of sending
and receiving the IP packets, so IP requires each interface to have an address.
Each IP address is 32 bits long, and they are represented in the form of "dot-decimal
notation" where each byte is written in the decimal form, and they are separated by
the period. An IP address would look like 193.32.216.9 where 193 represents the
decimal notation of first 8 bits of an address, 32 represents the decimal notation of
second 8 bits of an address.
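The sketch below converts between the dotted-decimal form and the underlying 32-bit value for the address used above (a simple illustration, ignoring validation):

def to_int(dotted):
    # "193.32.216.9" -> a 32-bit integer: each byte contributes 8 bits
    b1, b2, b3, b4 = (int(x) for x in dotted.split("."))
    return (b1 << 24) | (b2 << 16) | (b3 << 8) | b4

def to_dotted(value):
    return ".".join(str((value >> shift) & 0xFF) for shift in (24, 16, 8, 0))

addr = to_int("193.32.216.9")
print(addr, format(addr, "032b"))   # the 32-bit value and its binary form
print(to_dotted(addr))              # -> 193.32.216.9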
Internet Protocol, being a layer-3 (network layer) protocol in the OSI model, takes data segments from layer 4 (transport) and divides them into packets. An IP packet encapsulates the data unit received from the layer above and adds its own header information.
The encapsulated data is referred to as IP Payload. IP header contains all the necessary
information to deliver the packet at the other end.
Fig 4.29 IP Header
The IP header includes many relevant fields, including the Version Number, which in this context is 4. Other details are as follows:
Fragment Offset − This offset tells the exact position of the fragment in the original
IP Packet.
Time to Live − To avoid looping in the network, every packet is sent with some TTL
value set, which tells the network how many routers (hops) this packet can cross. At
each hop, its value is decremented by one and when the value reaches zero, the packet
is discarded.
Protocol − Tells the Network layer at the destination host, to which Protocol this
packet belongs to, i.e. the next level Protocol. For example protocol number of ICMP
is 1, TCP is 6 and UDP is 17.
Header Checksum − This field is used to keep checksum value of entire header
which is then used to check if the packet is received error-free.
Source Address − 32-bit address of the Sender (or source) of the packet.
Destination Address − 32-bit address of the Receiver (or destination) of the packet.
Options − This is an optional field, which is used if the value of IHL is greater than 5. It may contain values for options such as Security, Record Route, Time Stamp, etc.
Let's understand through a simple example.
In the above figure, a router has three interfaces labeled as 1, 2 & 3 and each router
interface contains its own IP address.
All the interfaces attached to LAN 1 have an IP address of the form 223.1.1.xxx, and the interfaces attached to LAN 2 and LAN 3 have IP addresses of the form 223.1.2.xxx and 223.1.3.xxx respectively.
Each IP address consists of two parts. The first part (first three bytes in IP address)
specifies the network and second part (last byte of an IP address) specifies the host in
the network.
Classful Addressing
Class A
Class B
Class C
Class D
Class E
In the above diagram, we observe that each class has a specific range of IP addresses. The class of an IP address determines the number of bits used for the network part and the number of networks and hosts available in that class.
Class A
In Class A, an IP address is assigned to those networks that contain a large number of hosts.
In Class A, the first bit of the higher-order bits of the first octet is always set to 0, and the remaining 7 bits determine the network ID. The remaining 24 bits determine the host ID in any network.
The total number of networks in Class A = 2^7 = 128 network addresses. The total number of hosts in Class A = 2^24 - 2 = 16,777,214 host addresses.
Class B
In Class B, an IP address is assigned to those networks that range from small-sized to large-
sized networks.
In Class B, the higher-order bits of the first octet are always set to 10, and the remaining 14 bits determine the network ID. The other 16 bits determine the host ID.
The total number of networks in Class B = 2^14 = 16,384 network addresses. The total number of hosts in Class B = 2^16 - 2 = 65,534 host addresses.
Class C
In Class C, the higher-order bits of the first octet are always set to 110, and the remaining 21 bits determine the network ID. The 8 bits of the host ID determine the host in a network.
The total number of networks = 2^21 = 2,097,152 network addresses. The total number of hosts = 2^8 - 2 = 254 host addresses.
Class D
In Class D, an IP address is reserved for multicast addresses. It does not possess subnetting.
The higher-order bits of the first octet are always set to 1110, and the remaining bits determine the host ID in any network.
Class E
In Class E, an IP address is reserved for future use or for research and development purposes. It does not possess any subnetting. The higher-order bits of the first octet are always set to 1111, and the remaining bits determine the host ID in any network.
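A small helper that classifies an address by the higher-order bits of its first octet, following the rules above, is sketched below (an illustration only, not a production parser):

def ip_class(dotted):
    o = int(dotted.split(".")[0])        # the first octet decides the class
    if (o >> 7) == 0b0:     return "A"   # leading bit 0
    if (o >> 6) == 0b10:    return "B"   # leading bits 10
    if (o >> 5) == 0b110:   return "C"   # leading bits 110
    if (o >> 4) == 0b1110:  return "D"   # leading bits 1110
    return "E"                           # leading bits 1111

for addr in ["10.0.0.1", "172.16.5.4", "192.168.1.1", "224.0.0.5", "250.1.2.3"]:
    print(addr, "-> Class", ip_class(addr))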
Rules for assigning Host ID:
The Host ID is used to determine the host within any network. The Host ID is assigned based
on the following rules:
The Host ID in which all the bits are set to 0 cannot be assigned as it is used to
represent the network ID of the IP address.
The Host ID in which all the bits are set to 1 cannot be assigned as it is reserved for the broadcast address of the network.
If the hosts are located within the same local network, then they are assigned with the same
network ID. The following are the rules for assigning Network ID:
The Network ID in which all the bits are set to 0 cannot be assigned as it is used to
specify a particular host on the local network.
The Network ID in which all the bits are set to 1 cannot be assigned as it is reserved
for the multicast address.
Class   Higher-order bits   Network ID bits   Host ID bits   Networks   Hosts   Address range
A       0                   8                 24             2^7        2^24    0.0.0.0 to 127.255.255.255
B       10                  16                16             2^14       2^16    128.0.0.0 to 191.255.255.255
ARP
Each device on the network is recognized by the MAC address imprinted on the NIC.
Therefore, we can say that devices need the MAC address for communication on a
local area network. The MAC address can change; for example, if the NIC on a particular machine fails and is replaced, the MAC address changes but the IP address does not.
ARP is used to find the MAC address of the node when an internet address is known.
Note: MAC address: The MAC address is used to identify the actual
device. IP address: It is an address used to locate a device on the network.
If a host wants to know the physical address of another host on its network, it sends an ARP query packet that includes the IP address and broadcasts it over the network. Every host on the network receives and processes the ARP packet, but only the intended recipient recognizes its IP address and sends back its physical address. The host holding the datagram adds the physical address to its cache memory and to the datagram header, and then sends the datagram.
If a device wants to communicate with another device, the following steps are taken by the
device:
The device will first look at its internally maintained list, called the ARP cache, to check whether the IP address already has a matching MAC address.
The ARP cache can be checked at the command prompt by using the command arp -a.
If the ARP cache is empty, then the device broadcasts a message to the entire network, asking each device for a matching MAC address.
The device that has the matching IP address then responds to the sender with its MAC address.
Once the MAC address is received, communication between the two devices can take place.
When the device receives the MAC address, the address is stored in the ARP cache. We can check the ARP cache at the command prompt with the command arp -a (a small simulation of this lookup-and-cache logic follows these steps).
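The lookup-then-broadcast behaviour described in these steps can be sketched as a small cache simulation. Everything here (the table contents and the helper names) is hypothetical and only mirrors the logic above; a real resolver would send an actual ARP broadcast frame on the LAN.

```python
# Toy simulation of the ARP cache logic described above.
arp_cache = {}   # maps IP address -> MAC address (like the output of "arp -a")

def broadcast_arp_request(ip):
    """Stand-in for an ARP broadcast; a real host would send a frame to
    FF:FF:FF:FF:FF:FF and wait for the owner of `ip` to reply."""
    known_hosts = {"192.168.1.10": "aa:bb:cc:dd:ee:01"}   # pretend reply
    return known_hosts.get(ip)

def resolve(ip):
    # Step 1: check the ARP cache first.
    if ip in arp_cache:
        return arp_cache[ip]
    # Step 2: cache miss -> broadcast a request to the whole LAN.
    mac = broadcast_arp_request(ip)
    if mac is not None:
        arp_cache[ip] = mac        # Step 3: store the answer for next time
    return mac

print(resolve("192.168.1.10"))     # triggers the broadcast, then caches
print(resolve("192.168.1.10"))     # answered straight from the cache
```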
The output of arp -a shows the association of IP addresses with MAC addresses. There are two types of ARP entries:
Static entry: an entry where someone manually enters the IP-to-MAC address association using the ARP command utility.
Dynamic entry: an entry that is learned automatically when the device resolves an address with ARP; such entries age out and are refreshed as needed.
RARP
If a host wants to know its IP address, it broadcasts a RARP query packet that contains its physical address to the entire network. A RARP server on the network recognizes the RARP packet and responds with the host's IP address.
The protocol used to obtain an IP address from a server in this way is known as the Reverse Address Resolution Protocol.
The message format of the RARP protocol is similar to that of the ARP protocol.
Like an ARP message, a RARP message is sent from one machine to another encapsulated in the data portion of a frame.
Fig 4.33 RARP
ICMP
The ICMP is a network layer protocol used by hosts and routers to send the
notifications of IP datagram problems back to the sender.
ICMP uses echo test/reply to check whether the destination is reachable and
responding.
ICMP handles both control and error messages, but its main function is to report errors, not to correct them.
An IP datagram contains the addresses of both source and destination, but it does not
know the address of the previous router through which it has been passed. Due to this
reason, ICMP can only send the messages to the source, but not to the immediate
routers.
The ICMP protocol communicates error messages to the sender; these errors are then passed on to the user processes.
The first field of an ICMP message identifies the message type, and the second field (the code) specifies the reason for that particular message type.
The checksum field covers the entire ICMP message.
Error Reporting
Destination unreachable
Source Quench
Time Exceeded
Parameter problems
Redirection
Destination unreachable: A "Destination Unreachable" message is sent back to the sender when a router or the destination host discards a packet because the destination cannot be reached.
Source Quench: The purpose of the source quench message is congestion control. The message is sent from a congested router to the source host to ask it to reduce its transmission rate. When a router discards a datagram because of congestion, it takes the source IP address from the discarded packet and sends a source quench message to that host; the source host then reduces its transmission rate so that the router can recover from congestion.
There are two situations in which a Time Exceeded message is generated:
Sometimes a packet is discarded because of a bad routing configuration that causes a loop and network congestion. Because of the loop, the TTL value keeps decrementing, and when it reaches zero the router discards the datagram; the router then sends a Time Exceeded message to the source host.
When the destination host does not receive all the fragments of a datagram within a certain time limit, the fragments already received are discarded, and the destination host sends a Time Exceeded message to the source host.
Parameter problems: When a router or host discovers a missing or invalid value in the IP datagram header, it discards the datagram and sends a "Parameter Problem" message back to the source host.
Redirection: A Redirection message is generated when a host has only a small routing table. Because the host holds a limited number of entries, it may send a datagram to the wrong router. The router that receives such a datagram forwards it to the correct router and also sends a "Redirection" message to the host so that the host can update its routing table.
IGMP
The IGMP protocol is used by hosts and routers to support multicasting.
The IGMP protocol is used by hosts and routers to identify the hosts in a LAN that are members of a multicast group.
The IGMP message is encapsulated within an IP datagram. Its fields are as follows:
Type: It determines the type of IGMP message. There are three types of IGMP message:
Membership Query, Membership Report and Leave Report.
Maximum Response Time: This field is used only by the Membership Query message. It specifies the maximum time within which the host may send a Membership Report message in response to the Membership Query message.
Checksum: It is calculated over the entire IGMP message (not over the whole IP datagram in which the message is encapsulated).
Group Address: The behavior of this field depends on the type of the message sent.
For Membership Query, the group address is set to zero for General Query and set
to multicast group address for a specific query.
For Membership Report, the group address is set to the multicast group address.
IGMP Messages
Membership Query
This message is sent by a router to all hosts on a local area network to determine the set of all the multicast groups that have been joined by the hosts.
It also determines whether a specific multicast group has been joined by the hosts on an attached interface.
The group address in the query is zero since the router expects one response from a
host for every group that contains one or more members on that host.
Membership Report
The host responds to the Membership Query message with a Membership Report message.
Membership report messages can also be generated by the host when a host wants to
join the multicast group without waiting for a membership query message from the
router.
Membership report messages are received by a router as well as all the hosts on an
attached interface.
Each membership report message includes the multicast address of a single group that
the host wants to join.
IGMP protocol does not care which host has joined the group or how many hosts are
present in a single group. It only cares whether one or more attached hosts belong to a
single multicast group.
Leave Report
When a host stops sending Membership Report messages for a group, it has effectively left that group. Having left, the host simply does not report that group in response to subsequent queries, and the router eventually concludes that the group has no remaining members on that network.
IPv6 was developed by the Internet Engineering Task Force (IETF) to deal with the problem of IPv4 address exhaustion. IPv6 addresses are 128 bits long, giving an address space of 2^128, which is far larger than that of IPv4. IPv6 addresses are written in colon-hexadecimal representation: there are 8 groups, and each group represents 2 bytes.
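The colon-hexadecimal notation (eight groups of 16 bits each) can be checked with Python's standard ipaddress module; the specific address below is only an example value.

```python
import ipaddress

# Example IPv6 address (illustrative value only).
addr = ipaddress.IPv6Address("2001:db8::8a2e:370:7334")

print(addr.exploded)       # 2001:0db8:0000:0000:0000:8a2e:0370:7334
print(addr.compressed)     # 2001:db8::8a2e:370:7334
print(addr.max_prefixlen)  # 128 -> total address length in bits
```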
In IPv6 representation, we have three addressing methods :
Unicast Address: Unicast Address identifies a single network interface. A packet sent to
unicast address is delivered to the interface identified by that address.
Multicast Address: A multicast address is used by multiple hosts, which together form a group and acquire a multicast destination address. These hosts need not be geographically together. Any packet sent to this multicast address will be distributed to all interfaces corresponding to that multicast address.
Anycast Address: An anycast address is assigned to a group of interfaces. Any packet sent to an anycast address will be delivered to only one member interface (usually the nearest one).
An IPv6 address has 128 bits, but by looking at the first few bits we can identify what type of address it is:
Binary prefix   Allocation   Fraction of address space
0000 01         UA           1/64
0000 1          UA           1/32
0001            UA           1/16
010             UA           1/8
011             UA           1/8
100             UA           1/8
101             UA           1/8
110             UA           1/8
1110            UA           1/16
1111 0          UA           1/32
1111 10         UA           1/64
Note: In IPv6, the all-0s and all-1s patterns can be assigned to any host; there is no restriction as there is in IPv4.
Registry Id (5 bits): identifies the region (registry) to which the address belongs. Out of 32 (i.e. 2^5) possible values, only 4 registry IDs are currently in use.
Provider Id: Depending on the number of service providers that operate in a region, a certain number of bits is allocated to the Provider Id field; its size is not fixed. For example, if the Provider Id is 10 bits, then the Subscriber Id will be 56 - 10 = 46 bits (this arithmetic is worked through in the short sketch after this list).
Subscriber Id: Once the Provider Id is fixed, the remaining part can be used by the ISP like a normal IP address.
Intra Subscriber: This part can be structured as needed by the organization that is using the service.
Global routing prefix : The global routing prefix is meant to carry location (latitude/longitude) details. As of now, it is not being used; in geography-based unicast addressing, routing would be based on location.
Interface Id : In IPv6, instead of using the term Host Id, we use the term Interface Id.
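The bit budget of a provider-based unicast address can be worked out with simple arithmetic. Below is a sketch assuming the split used in the text: a 3-bit format prefix (010), a 5-bit Registry Id, a 64-bit intra-subscriber part (an assumption stated here, not given explicitly above), and the Provider Id and Subscriber Id sharing what remains.

```python
# Bit budget of a provider-based unicast address.
# The 64-bit intra-subscriber part is an assumption for this sketch.
TOTAL         = 128
FORMAT_PREFIX = 3     # binary 010
REGISTRY_ID   = 5
INTRA_SUBSCR  = 64    # subnet + interface identifier, managed by the subscriber

provider_plus_subscriber = TOTAL - FORMAT_PREFIX - REGISTRY_ID - INTRA_SUBSCR
print(provider_plus_subscriber)            # 56 bits to share

provider_id = 10                           # example split from the text
subscriber_id = provider_plus_subscriber - provider_id
print(subscriber_id)                       # 46 bits left for the subscriber
```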
Some special addresses:
Unspecified – all 128 bits set to 0, written as :: ; used as a source address by a host that does not yet know its own address.
Loopback – ::1 ; used by a node to send a packet to itself.
IPv4 Compatible – the first 96 bits are 0 and the last 32 bits carry an IPv4 address (written ::a.b.c.d); originally defined for tunnelling IPv6 over IPv4 and now deprecated.
IPv4 mapped – the first 80 bits are 0, the next 16 bits are all 1, and the last 32 bits carry an IPv4 address (written ::ffff:a.b.c.d); used to represent an IPv4 address as an IPv6 address.
There are two types of Local Unicast addresses defined- Link local and Site Local.
A link-local address is used for addressing on a single link and for communicating with nodes on the same link. A link-local address always begins with 1111111010 (i.e. FE80). A router will not forward any packet carrying a link-local address.
Site-local addresses are equivalent to private IP addresses in IPv4. Likewise, some address space is reserved that can only be routed within an organization. The first 10 bits are set to 1111111011, which is why site-local addresses always begin with FEC0. The following 32 bits are the Subnet ID, which can be used to create subnets within the organization. The node address uniquely identifies the node on the link; the 48-bit MAC address is therefore used here.
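Whether an address falls in one of these reserved ranges can be verified with the ipaddress module; the sample addresses below are hypothetical values chosen only to illustrate the prefixes.

```python
import ipaddress

samples = [
    "fe80::1",        # link-local (prefix FE80::/10)
    "fec0::1234",     # site-local (deprecated prefix FEC0::/10)
    "2001:db8::1",    # ordinary documentation address, neither of the above
]

for s in samples:
    a = ipaddress.IPv6Address(s)
    print(s, "link-local:", a.is_link_local, "site-local:", a.is_site_local)
```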
In the OSI (Open System Interconnection) model, the Network layer is the third layer. This layer is mostly associated with the movement of data, which is achieved by means of addressing and routing.
This layer directs the flow of data from a source to a destination, even when the communicating machines are not connected to the same physical medium. It achieves this by finding an appropriate path from one communicating machine to the other. For the purpose of transmission, this layer can, if necessary, break the data into smaller chunks. At the end, this layer is responsible for reassembling those smaller pieces into the original data after the data has reached its destination.
In other words, the network layers help in establishing communication with devices. These
devices, connected over the internet, might be located on logically separate networks. The
network layer uses various routing algorithms to guide data packets from a source to a
destination network. A key element of this layer is that each network in the whole web of
networks is assigned a network address; and such addresses are used to route packets
(which is covered under the topics of Addressing and Switching, explained later on).
When data is passed down to the next layer, the lower layer performs some services for the higher layer. To perform these services, the lower layer adds some information as a header or trailer. For instance, the transport layer (the higher layer) hands its data and header down to the network layer (the lower layer); the network layer then adds a header containing the correct destination network-layer address in order to deliver the packet to the recipient machine.
This layer also translates the logical address into a physical address. It decides the route from source to destination and manages traffic problems such as switching, routing, and congestion control of data packets.
Besides this, the main functions which are performed by the network layer are:
1. Routing: This can be seen as a three step process: First, sending data from source
computer to some nearby router. Second, delivering data from the router near the
source to a router near a destination. Third, delivering the data from the router near
the destination to the end destination computer.
2. Logical Addressing: The network layer implements the logical addressing of data packets, just as the data link layer implements physical addressing. Logical addressing is used to distinguish between the source and destination systems. The network layer adds a header to the data packet; this header contains the logical addresses of both the sender and the receiver.
3. Switching: It is the method of moving data through a network. There exist multiple
redundant paths between the source and destination. The three major types are:
Circuit switching, Message switching, and Packet switching.
In Circuit switching, the path for communication remains fixed during the duration of
connection; this enables a well defined bandwidth and dedicated paths.
In Message Switching, each message is treated as an independent entity carrying its own addressing and destination details. This information is used at each switch to transfer the message to the next switch in the route. The benefits of this mode are relatively low-cost devices, data-channel sharing, and efficient use of bandwidth.
In Packet Switching, messages are divided into smaller packets. Each packet contains source and destination address information. Packets can be routed through the network independently, and no switch needs to store an entire message before forwarding it. This switching mode routes data through the network more rapidly and efficiently.
Among the services a network layer can offer are:
Guaranteed delivery with bounded delay: this service guarantees that the packet will be delivered within a specified host-to-host delay bound.
Ordered delivery of packets: this layer can ensure that packets arrive at the destination in the order in which they were sent.
In networking, jitter is the variation in the latency of packet flow between the sender and receiver systems. It occurs when some packets take longer than others to travel from one system to the other. Jitter in a network may be caused by network congestion, timing drift, and changes in routes.
The network layer can also offer a guaranteed maximum jitter, bounding how much the spacing between two successive receipts at the destination may differ from the spacing between the corresponding transmissions at the sender.
Security services: The network layer uses a session key to provide security between
the source and destination. At the source side encryption of the payloads of datagrams
being sent takes place with the help of this layer. Then, at the destination side, this
layer again helps in the decryption of the received payload. Through this, the network
layer is able to maintain data integrity and source authentication services.
In order to achieve its goal, the network layer must take into account the topology of the communication subnet (i.e. the set of all routers) and choose appropriate paths through it. At the same time, it should choose routes in such a way as to avoid overloading some of the communication lines and routers while leaving others idle. Finally, when the source and destination are in different networks, it is incumbent on the network layer to deal with the differences between the networks and the problems arising out of those differences.
The third layer of the Open Systems Interconnection (OSI) is called the network layer.
While the Data Link Layer functions mostly inside Wide Area Network (WAN) and Local
Area Network (LAN), Network Layer handles the responsibility of the transmission of data
in different networks.
It is not needed where two computers are connected on the same link. Because it can route signals through different channels, it is considered a network controller. Through this layer, data is sent in the form of packets.
The primary responsibilities handled by this layer are logical connection setup, routing, delivery error reporting, and data forwarding.
Fig 4.36 Network layer.
Because of its functionality and responsibilities, the Network Layer is often seen as the
backbone of the entire OSI Model. Hardware devices such as routers, bridges, firewalls, and
switches are a part of it with which it creates a logical image of the communication route that
can be implemented with a physical medium.
The protocols needed for the functionality of the Network Layer are present in every router and host, making it one of the most useful of all the layers.
The best known among these protocols are IP (Internet Protocol) and Internetwork Packet Exchange (IPX) together with Sequenced Packet Exchange (SPX), collectively known as IPX/SPX. SPX is the transport-layer companion of IPX; the transport layer and the data link layer work with the network layer, sitting above and below it respectively.
This is an overview of the Network Layer Protocols PDF, if you want to read full article in
pdf, we have provided download link below
2. Service is provided by this layer to the transport layer for sending data packets to the requested destination.
3. It is also capable of supporting connection-oriented communication like other layers, but only one kind of communication can be established at a time.
4. It also works as a locator of the IP address from which the data packets were requested, and it acts as a host for that address.
5. It is common for two different subnets to have different addresses and protocols. Because protocols of the network layer are found in every router and host, this layer can resolve those differences and provide a common ground for them to form a connection.
4.7.1 IP protocol
Internet Protocol (IP) is the method or protocol by which data is sent from one computer to
another on the internet. Each computer -- known as a host -- on the internet has at least one IP
address that uniquely identifies it from all other computers on the internet.
IP is the defining set of protocols that enable the modern internet. It was initially defined in
May 1974 in a paper titled, "A Protocol for Packet Network Intercommunication," published
by the Institute of Electrical and Electronics Engineers and authored by Vinton Cerf and
Robert Kahn.
At the core of what is commonly referred to as IP are additional transport protocols that
enable the actual communication between different hosts. One of the core protocols that runs
on top of IP is the Transmission Control Protocol (TCP), which is often why IP is also
referred to as TCP/IP. However, TCP isn't the only protocol that is part of IP.
When data is sent or received -- such as an email or a webpage -- the message is divided into chunks called packets. Each packet contains both the sender's internet address and the receiver's address. Each packet is sent first to a gateway computer that understands a small part of the internet. The gateway computer reads the destination address and forwards the packet to an adjacent gateway, which in turn reads the destination address, and so forth until one
gateway recognizes the packet as belonging to a computer within its immediate neighborhood
-- or domain. That gateway then forwards the packet directly to the computer whose address
is specified.
Because a message is divided into a number of packets, each packet can, if necessary, be sent
by a different route across the internet. Packets can arrive in a different order than the order
they were sent. The Internet Protocol just delivers them. It's up to another protocol -- the
Transmission Control Protocol -- to put them back in the right order.
IP packets
While IP defines the protocol by which data moves around the internet, the unit that does the
actual moving is the IP packet.
An IP packet's envelope is called the header. The packet header provides the information needed to route the packet to its destination. An IPv4 packet header is between 20 and 60 bytes long (20 bytes when no options are present) and includes the source IP address, the destination IP address and information about the size of the whole packet.
The other key part of an IP packet is the data component, which can vary in size. Data inside
an IP packet is the content that is being transmitted.
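To make the header concrete, the sketch below hand-builds the fixed 20-byte part of an IPv4 header and unpacks it with Python's struct module. The byte string is constructed for illustration only, not captured from a real network, and the checksum is left at zero.

```python
import socket
import struct

# Hand-built 20-byte IPv4 header (no options) used purely for illustration.
header = struct.pack(
    "!BBHHHBBH4s4s",
    (4 << 4) | 5,                        # version 4, IHL = 5 (5 * 4 = 20 bytes)
    0,                                   # type of service
    40,                                  # total length of the whole packet
    0x1C46, 0,                           # identification, flags/fragment offset
    64, 6, 0,                            # TTL, protocol (6 = TCP), checksum (0 here)
    socket.inet_aton("192.168.0.1"),     # source IP address
    socket.inet_aton("10.0.0.7"),        # destination IP address
)

# Unpack and show the addressing fields a router actually looks at.
fields = struct.unpack("!BBHHHBBH4s4s", header)
print("source:", socket.inet_ntoa(fields[8]))
print("destination:", socket.inet_ntoa(fields[9]))
print("total length:", fields[2], "bytes")
```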
What is an IP address?
IP provides mechanisms that enable different systems to connect to each other to transfer
data. Identifying each machine in an IP network is enabled with an IP address.
Similar to the way a street address identifies the location of a home or business, an IP
address provides an address that identifies a specific system so data can be sent to it or
received from it.
An IP address is typically assigned via the DHCP (Dynamic Host Configuration Protocol).
DHCP can be run at an internet service provider, which will assign a public IP address to a
particular device. A public IP address is one that is accessible via the public internet.
A local IP address can be generated via DHCP running on a local network router, providing
an address that can only be accessed by users on the same local area network.
The most widely used version of IP for most of the internet's existence has been Internet
Protocol Version 4 (IPv4).
IPv4 provides a 32-bit IP addressing system that has four sections. For example, a sample
IPv4 address might look like 192.168.0.1, which coincidentally is also commonly the
default IPv4 address for a consumer router. IPv4 supports a total of 4,294,967,296
addresses.
A key benefit of IPv4 is its ease of deployment and its ubiquity, so it is the default protocol.
A drawback of IPv4 is the limited address space and a problem commonly referred to as
IPv4 address exhaustion. There aren't enough IPv4 addresses available for all IP use cases.
Since 2011, IANA (Internet Assigned Numbers Authority) hasn't had any new IPv4 address
blocks to allocate. As such, Regional Internet Registries (RIRs) have had limited ability to
provide new public IPv4 addresses.
In contrast, IPv6 defines a 128-bit address space, which provides substantially more space than IPv4: roughly 3.4 x 10^38 addresses (about 340 undecillion). An IPv6 address has eight sections. The text form of the IPv6 address is xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx, where each x is a hexadecimal digit representing 4 bits.
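The difference in address space follows directly from the bit widths; the figures quoted above are exact powers of two:

```python
ipv4_addresses = 2 ** 32
ipv6_addresses = 2 ** 128

print(ipv4_addresses)                      # 4294967296 (about 4.3 billion)
print(ipv6_addresses)                      # about 3.4 x 10**38
print(ipv6_addresses // ipv4_addresses)    # 2**96 times more addresses than IPv4
```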
The massive availability of address space is the primary benefit of IPv6 and its most obvious impact. Its main drawback is complexity: the very large address space can be harder for network administrators to monitor and manage.
IP network protocols
In the OSI model (Open Systems Interconnection), IP sits in layer 3, the network layer.
There are several commonly used network protocols that run on top of IP, including the following (a minimal socket-level sketch follows this list):
TCP. Transmission Control Protocol enables the flow of data across IP address connections.
FTP. File Transfer Protocol is a specification that is purpose-built for accessing, managing,
loading, copying and deleting files across connected IP hosts.
HTTP. Hypertext Transfer Protocol is the specification that enables the modern web. HTTP
enables websites and web browsers to view content. It typically runs over port 80.
HTTPS. Hypertext Transfer Protocol Secure is HTTP that runs with encryption via Secure
Sockets Layer or Transport Layer Security. HTTPS typically is served over port 443.
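To make the "runs on top of IP" relationship concrete, the sketch below opens a TCP connection to port 80 and issues a minimal HTTP request. The host name example.com is only a placeholder, and a live network connection is assumed.

```python
import socket

# Minimal HTTP/1.1 request over a TCP connection (TCP itself rides on IP).
host = "example.com"           # placeholder host used only for illustration
with socket.create_connection((host, 80), timeout=5) as sock:
    request = f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
    sock.sendall(request.encode("ascii"))
    response = sock.recv(4096)

print(response.split(b"\r\n", 1)[0])   # e.g. b'HTTP/1.1 200 OK'
```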
ICMP stands for Internet Control Message Protocol. It is a network-layer protocol used for error handling in the network layer, and it is generally implemented on network devices, including routers. The IP protocol is a best-effort delivery service that delivers a datagram from its original source to its final destination. It has two deficiencies:
The IP protocol has no error-reporting or error-correcting mechanism of its own.
The IP protocol also lacks a mechanism for host and management queries. A host sometimes needs to determine whether a router or another host is alive, and a network manager sometimes needs information from another host or router.
ICMP has been created to compensate for these deficiencies. It is a partner to the IP protocol.
ICMP is a network layer protocol. But, its messages are not passed directly to the data link
layer. Instead, the messages are first encapsulated inside the IP datagrams before going to the
lower layer.
The value of the protocol field in the IP datagram is 1, to indicate that the IP data is an ICMP message.
The error-reporting messages report issues that a router or a host (destination) may encounter when it processes an IP packet.
The query messages, which appear in pairs, help a host or a network manager to get specific
data from a router or another host.
An ICMP message consists of an 8-byte header and a variable-size data section.
Fig 4.37 (b) ICMP.
Type: It is an 8-bit field that represents the ICMP message type. (In ICMPv6, the values from 0 to 127 are reserved for error messages and the values from 128 to 255 for informational messages.)
Code: It is an 8-bit field that represents the subtype of the ICMP message.
Checksum: It is a 16-bit field used to detect whether an error exists in the message or not (a sketch of building such a header, including the checksum calculation, follows this list).
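As a concrete illustration of the type/code/checksum layout, the sketch below builds the header of a classic IPv4 ICMP Echo Request (type 8, code 0) and computes the standard Internet checksum over it. Actually sending it would require a raw socket and administrator privileges, so the sketch stops at constructing the bytes.

```python
import struct

def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement sum used by ICMP (and by the IP header)."""
    if len(data) % 2:
        data += b"\x00"                              # pad to an even length
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    total = (total & 0xFFFF) + (total >> 16)         # fold the carries back in
    total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

ICMP_ECHO_REQUEST = 8            # type 8, code 0 = "echo request" (ping)
identifier, sequence = 0x1234, 1
payload = b"hello"

# First build the header with checksum = 0, then fill in the real value.
header = struct.pack("!BBHHH", ICMP_ECHO_REQUEST, 0, 0, identifier, sequence)
checksum = internet_checksum(header + payload)
header = struct.pack("!BBHHH", ICMP_ECHO_REQUEST, 0, checksum, identifier, sequence)

print(header.hex(), checksum)
```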
4.9 SUMMARY
The network layer provides services to the transport layer. It can be based on either virtual
circuits or datagrams. In both cases, its main job is routing packets from the source to the
destination. In virtual-circuit subnets, a routing decision is made when the virtual circuit is set
up. In datagram subnets, it is made on every packet. Many routing algorithms are used in
computer networks. Static algorithms include shortest path routing and flooding. Dynamic
algorithms include distance vector routing and link state routing. Most actual networks use
one of these. Subnets can easily become congested, increasing the delay and lowering the
throughput for packets. Network designers attempt to avoid congestion by proper design.
Networks differ in various ways, so when multiple networks are interconnected problems can
occur. Sometimes the problems can be finessed by tunneling a packet through a hostile
network, but if the source and destination networks are different, this approach fails.
The protocols described include IP and its newer version, IPv6.
4.10 KEYWORDS
Data Stream : The transmission of characters and data bits through a channel
Data Switch : A device used to connect data processing equipment to network lines, offering flexibility in line/device selection.
Data Transfer Rate, Data Rate : The measure of the speed of data transmission, usually expressed in bits per second. Synonymous with speed, the data rate is often incorrectly expressed in baud.
1 A sender sends a series of packets to the same destination using 5-bit sequence numbers.
If the sequence number starts with 0, what is the sequence number after sending 100
packets?
___________________________________________________________________________
___________________________________________________________________________
2 Using 5-bit sequence numbers, what is the maximum size of the send and receive
windows for each of the following protocols?
a. Stop-and-Wait ARQ
b. Go-Back-N ARQ
c. Selective-Repeat ARQ
___________________________________________________________________________
___________________________________________________________________________
4.12 UNIT END QUESTIONS
A. Descriptive Questions
Short Questions:
Long Questions:
b. bit-by-bit delivery
a. FDMA
b. CDMA
c. TDMA
d. TDM
3. The physical path over which a message travels is called ___.
a. Signals
b. Medium
c. Protocols
4. The _____ is the portion of the physical layer that interfaces with the media access
control sublayer
a. Frames
b. Bit
c. Packet
d. Bytes
Answers:
1-b,2-d,3-b,4-c,5-b
4.13 REFERENCES
Computer Networks, A. S. Tanenbaum, 4th Edition, Prentice Hall of India, New Delhi, 2003.
Computer Networking: A Top-Down Approach Featuring the Internet, J. F. Kurose & K. W. Ross, Pearson Education, 2003.
Communication Networks, Leon-Garcia and Widjaja, Tata McGraw Hill, 2000.
www.wikipedia.org