0% found this document useful (0 votes)
382 views

Data Communication and Computer Networks NOTES

The document discusses different types of communication including digital, analog, simplex, half-duplex, and full-duplex. It explains the differences between analog and digital communication, such as digital using discrete binary values while analog uses continuous values. Transmission modes like simplex allow unidirectional data flow while half and full duplex enable bidirectional communication, either not simultaneously or simultaneously, respectively.

Uploaded by

praveen kumar
Copyright
© © All Rights Reserved
Available Formats
Download as DOCX, PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
382 views

Data Communication and Computer Networks NOTES

The document discusses different types of communication including digital, analog, simplex, half-duplex, and full-duplex. It explains the differences between analog and digital communication, such as digital using discrete binary values while analog uses continuous values. Transmission modes like simplex allow unidirectional data flow while half and full duplex enable bidirectional communication, either not simultaneously or simultaneously, respectively.

Uploaded by

praveen kumar
Copyright
© © All Rights Reserved
Available Formats
Download as DOCX, PDF, TXT or read online on Scribd
You are on page 1/ 137

1

DATA COMMUNICATION

AND

COMPUTER

NETWORK

UNIT -1

Data Communication:-

Digital and analog communication:-

Digital Communication: 
In digital communication digital signal is used rather than analog signal for
communication in between the source and destination. They digital signal
consists of discrete values rather than continuous values. In digital
communication physical transfer of data occurs in the form of digital bit stream
i.e 0 or 1 over a point-to-point or point-to-multipoint transmission medium. In
digital communication the digital transmission data can be broken into packets as
discrete messages which is not allowed in analog communication. 

The below figure illustrates the Digital Communication System: 

1
2

 Analog Communication: 
In analog communication the data is transferred with the help of analog signal in
between transmitter and receiver. Any type of data is transferred in analog signal.
Any data is converted into electric form first and after that it is passed through
communication channel. Analog communication uses a continuous signal which
varies in amplitude, phase, or some other property with time in proportion to that
of a variable. 

The below figure illustrates the Analog Communication System: 

2
3

Difference between Analog Communication and Digital Communication: 

         ANALOG             DIGITAL
S .No.                     COMMUNICATION COMMUNICATION

In analog communication In digital communication


analog signal is used for digital signal is used for
01. information transmission. information transmission.

3
4

         ANALOG             DIGITAL
S .No.                     COMMUNICATION COMMUNICATION

Analog communication Digital communication uses


uses analog signal whose digital signal whose
amplitude varies amplitude is of two levels
continuously with time either Low i.e., 0 or either
02. from 0 to 100. High i.e., 1.

It gets affected by noise


highly during transmission It gets affected by noise less
through communication during transmission through
03. channel. communication channel.

In analog communication
only limited number of
channels can be It can broadcast large
broadcasted number of channels
04. simultaneously. simultaneously.

In analog communication In digital communication


05. error Probability is high. error Probability is low.

In analog communication In digital communication


06. noise immunity is poor. noise immunity is good.

07. In analog communication In digital communication


coding is not possible. coding is possible. Different
coding techniques can be
used to detect and correct

4
5

         ANALOG             DIGITAL
S .No.                     COMMUNICATION COMMUNICATION

errors.

Separating out noise and


signal in analog Separating out noise and
communication is not signal in digital
08. possible. communication is possible.

Digital communication
Analog communication system is having less
system is having complex complex hardware and
09. hardware and less flexible. more flexible.

In analog communication
for multiplexing  In Digital communication
for multiplexing 
Frequency Division
Multiplexing (FDM) is Time Division Multiplexing
used.  (TDM) is used. 

10.    

Analog communication Digital communication


11. system is low cost. system is high cost.

12. It requires low bandwidth. It requires high bandwidth.

13. Power consumption is Power consumption is low.

5
6

         ANALOG             DIGITAL
S .No.                     COMMUNICATION COMMUNICATION

high.

14. It is less portable. Portability is high.

No privacy or privacy is Privacy is high so it is


15. less so not highly secured. highly secured.

Not assures an accurate It assures a more accurate


16. data transmission. data transmission.

Synchronization problem is
17. Synchronization problem. easier.

 Transmission modes

The way in which data is transmitted from one device to another device is known
as transmission mode.

The transmission mode is also known as the communication mode.

Each communication channel has a direction associated with it, and transmission
media provide the direction. Therefore, the transmission mode is also known as a
directional mode.

The transmission mode is defined in the physical layer.

6
7

The Transmission mode is divided into three categories:

Simplex mode

Half-duplex mode

Full-duplex mode

Simplex mode

In Simplex mode, the communication is unidirectional, i.e., the data flow in one
direction.

A device can only send the data but cannot receive it or it can receive the data but
cannot send the data.

This transmission mode is not very popular as mainly communications require the
two-way exchange of data. The simplex mode is used in the business field as in
sales that do not require any corresponding reply.

7
8

The radio station is a simplex channel as it transmits the signal to the listeners but
never allows them to transmit back.

Keyboard and Monitor are the examples of the simplex mode as a keyboard can
only accept the data from the user and monitor can only be used to display the data
on the screen.

The main advantage of the simplex mode is that the full capacity of the
communication channel can be utilized during transmission.

Advantage of Simplex mode:

In simplex mode, the station can utilize the entire bandwidth of the communication
channel, so that more data can be transmitted at a time.

Disadvantage of Simplex mode:

Communication is unidirectional, so it has no inter-communication between


devices.

Half-Duplex mode

8
9

In a Half-duplex channel, direction can be reversed, i.e., the station can transmit
and receive the data as well.

Messages flow in both the directions, but not at the same time.

The entire bandwidth of the communication channel is utilized in one direction at a


time.

In half-duplex mode, it is possible to perform the error detection, and if any error
occurs, then the receiver requests the sender to retransmit the data.

A Walkie-talkie is an example of the Half-duplex mode. In Walkie-talkie, one


party speaks, and another party listens. After a pause, the other speaks and first
party listens. Speaking simultaneously will create the distorted sound which cannot
be understood.

Advantage of Half-duplex mode:

In half-duplex mode, both the devices can send and receive the data and also can
utilize the entire bandwidth of the communication channel during the transmission
of data.

Disadvantage of Half-Duplex mode:

In half-duplex mode, when one device is sending the data, then another has to wait,
this causes the delay in sending the data at the right time.

Full-duplex mode

9
10

In Full duplex mode, the communication is bi-directional, i.e., the data flow in both
the directions.

Both the stations can send and receive the message simultaneously.

Full-duplex mode has two simplex channels. One channel has traffic moving in
one direction, and another channel has traffic flowing in the opposite direction.

The Full-duplex mode is the fastest mode of communication between devices.

The most common example of the full-duplex mode is a telephone network. When
two people are communicating with each other by a telephone line, both can talk
and listen at the same time.

Advantage of Full-duplex mode:

Both the stations can send and receive the data at the same time.

Disadvantage of Full-duplex mode:

If there is no dedicated path exists between the devices, then the capacity of the
communication channel is divided into two parts.

Serial and Parallel Communication:-

Serial Communication
In serial communication the data bits are transmitted serially over a common
communication link one after the other. Basically it does not allow simultaneous
transmission of data because only a single channel is utilized. Thereby allowing
sequential transfer rather than simultaneous transfer.

The figure below shows the serial data transmission:

10
11

It is highly suitable for long distance signal transmission as only a single wire or
bus is used. So, it can be connected between two points that are separated at a large
distance with respect to each other. But as only a single data bit is transmitted
per clock pulse thus the transmission of data is a quiet time taking process.

Parallel Communication

In parallel communication the various data bits are simultaneously transmitted


using multiple communication links between sender and receiver. Here, despite
using a single channel between sender and receiver, various links are used and
each bit of data is transmitted separately over all the communication link.

11
12

The figure below shows the transmission of 8 byte data using parallel
communication technique:

Here, as we can see that for the transmission of 8-bit of data, 8 separate
communication links are utilized. And so rather following a sequential data
transmission, simultaneous transmission of data is allowed. This leads to a faster
communication between the sender and receiver.

But for connecting multiple lines between sender and receiver multiple connecting
unit are to be present between a pair of sender and receiver. And this is the reason
why parallel communication is not suitable for long distance transmission, because
connecting multiple lines to large distances is very difficult and expensive.

Key Differences between Serial and Parallel Communication

1. Due to the presence of single communication link the speed of data transmission
is slow. While multiple links in case of parallel communication allows data
transmission at comparatively faster rate.
2. Whenever there exists a need for system up-gradation then upgrading a system
that uses serial communication is quite an easy task as compared to upgrading a
parallel communication system.
3. In serial communication, the all data bits are transmitted over a common channel
thus proper spacing is required to be maintained in order to avoid interference.

12
13

While in parallel communication, the utilization of multiple link reduces the


chances of interference between the transmitted bits.
4. Serial communication supports higher bandwidth while parallel communication
supports comparatively lower bandwidth.
5. Serial communication is efficient for high frequency operation. However, parallel
communication shows its suitability more in case of low frequency operations.
6. Due to existence of single link, the problem of crosstalk is not present in serial
communication. But multiple links increase the chances of crosstalk in parallel
communication.
7. Serial communication is suitable for long distance transmission of data as against
parallel communication is suitable for short distance transmission of data.

Conclusion
So it is clear that utilizing multiple lines for data transmission in case of parallel
communication is advantageous as it offers faster data transmission. But as the
same time it is disadvantageous when considered in case of cost and transmission
distance.

Packet Switching:-

Packet switching is a method of transferring the data to a network in form of


packets. In order to transfer the file fast and efficient manner over the network
and minimize the transmission latency, the data is broken into small pieces of
variable length, called Packet. At the destination, all these small-parts (packets)
has to be reassembled, belonging to the same file. A packet composes of payload
and various control information. No pre-setup or reservation of resources is
needed.
Packet Switching uses Store and Forward technique while switching the
packets; while forwarding the packet each hop first store that packet then
forward. This technique is very beneficial because packets may get discarded at
any hop due to some reason. More than one path is possible between a pair of
source and destination. Each packet contains Source and destination address
using which they independently travel through the network. In other words,
packets belonging to the same file may or may not travel through the same path.
If there is congestion at some path, packets are allowed to choose different path
possible over existing network.
13
14

Packet-Switched networks were designed to overcome the weaknesses of Circuit-


Switched networks since circuit-switched networks were not very effective for
small messages.

Process
Each packet in a packet switching technique has two parts: a header and a payload.
The header contains the addressing information of the packet and is used by the
intermediate routers to direct it towards its destination. The payload carries the
actual data.
A packet is transmitted as soon as it is available in a node, based upon its header
information. The packets of a message are not routed via the same path. So, the
packets in the message arrives in the destination out of order. It is the responsibility
of the destination to reorder the packets in order to retrieve the original message.
The process is diagrammatically represented in the following figure. Here the
message comprises of four packets, A, B, C and D, which may follow different
routes from the sender to the receiver.

14
15

Modes of Packet Switching :-


1. Connection-oriented Packet Switching (Virtual Circuit):- Before starting
the transmission, it establishes a logical path or virtual connection using
signalling protocol, between sender and receiver and all packets belongs to
this flow will follow this predefined route. Virtual Circuit ID is provided by
switches/routers to uniquely identify this virtual connection. Data is divided
into small units and all these small units are appended with help of sequence
number. Overall, three phases takes place here- Setup, data transfer and tear
down phase.

All address information is only transferred during setup phase. Once the route
to destination is discovered, entry is added to switching table of each
intermediate node. During data transfer, packet header (local header) may
contain information such as length, timestamp, sequence number etc.
Connection-oriented switching is very useful in switched WAN. Some popular
protocols which use Virtual Circuit Switching approach are X.25, Frame-
Relay, ATM and MPLS(Multi-Protocol Label Switching).

2. Connectionless Packet Switching (Datagram) :- Unlike Connection-oriented


packet switching, In Connectionless Packet Switching each packet contains all
necessary addressing information such as source address, destination address
and port numbers etc. In Datagram Packet Switching, each packet is treated
independently. Packets belonging to one flow may take different routes
because routing decisions are made dynamically, so the packets arrived at
destination might be out of order. It has no connection setup and teardown

15
16

phase, like Virtual Circuits.


Packet delivery is not guaranteed in connectionless packet switching, so the
reliable delivery must be provided by end systems using additional protocols.

3. A---R1---R2---B
4.
5. A is the sender (start)
6. R1, R2 are two routers that store and forward data
7. B is receiver(destination)
To send a packet from A to B there are delays since this is a Store and
Forward network.

Delays in Packet switching :

1. Transmission Delay
2. Propagation Delay
3. Queuing Delay
4. Processing Delay

Transmission Delay :
Time taken to put a packet onto link. In other words, it is simply time required
to put data bits on the wire/communication medium. It depends on length of
packet and bandwidth of network.
Transmission Delay = Data size / bandwidth = (L/B) second

16
17

Propagation delay :
Time taken by the first bit to travel from sender to receiver end of the link. In
other words, it is simply the time required for bits to reach the destination
from the start point. Factors on which Propagation delay depends are Distance
and propagation speed.
Propagation delay = distance/transmission speed = d/s
Queuing Delay :
Queuing delay is the time a job waits in a queue until it can be executed. It
depends on congestion. It is the time difference between when the packet
arrived Destination and when the packet data was processed or executed. It
may be caused by mainly three reasons i.e. originating switches, intermediate

17
18

switches or call receiver servicing switches.

Average Queuing delay = (N-1)L/(2*R)


where N = no. of packets
L=size of packet
R=bandwidth
Processing Delay :
Processing delay is the time it takes routers to process the packet header.
Processing of packets helps in detecting bit-level errors that occur during
transmission of a packet to the destination. Processing delays in high-speed

18
19

routers are typically on the order of microseconds or less.


In simple words, it is just the time taken to process packets.

Total time or End-to-End time


= Transmission delay + Propagation delay+ Queuing delay
+ Processing delay

For M hops and N packets –


Total delay
= M*(Transmission delay + propagation delay)+
(M-1)*(Processing delay + Queuing delay) +
(N-1)*(Transmission delay)
For N connecting link in the circuit –
Transmission delay = N*L/R
Propagation delay = N*(d/s)

Question : How much time will it take to send a packet of size L bits from A
to B in given setup if Bandwidth is R bps, propagation speed is t meter/sec and
distance b/w any two points is d meters (ignore processing and queuing
delay) ?
A---R1---R2---B
Ans:
N = no. of links = no. of hops = no. of routers +1 = 3
File size = L bits
Bandwidth = R bps
Propagation speed = t meter/sec
Distance = d meters
Transmission delay = (N*L)/R = (3*L)/R sec
Propagation delay = N*(d/t) = (3*d)/t sec
Total time = 3*(L/R + d/t) sec

Advantages and Disadvantages of Packet Switching

19
20

Advantages
 Delay in delivery of packets is less, since packets are sent as soon as they are
available.
 Switching devices don’t require massive storage, since they don’t have to
store the entire messages before forwarding them to the next node.
 Data delivery can continue even if some parts of the network faces link
failure. Packets can be routed via other paths.
 It allows simultaneous usage of the same channel by multiple users.
 It ensures better bandwidth usage as a number of packets from multiple
sources can be transferred via the same link.
Disadvantages
 They are unsuitable for applications that cannot afford delays in
communication like high quality voice calls.
 Packet switching high installation costs.
 They require complex protocols for delivery.
 Network problems may introduce errors in packets, delay in delivery of
packets or loss of packets. If not properly handled, this may lead to loss of
critical information.

Circuit Switching:-
Circuit switching is a connection-oriented network switching technique. Here, a
dedicated route is established between the source and the destination and the entire
message is transferred through it.
Phases of Circuit Switch Connection
 Circuit Establishment: In this phase, a dedicated circuit is established from
the source to the destination through a number of intermediate switching
centres. The sender and receiver transmits communication signals to request
and acknowledge establishment of circuits.
 Data Transfer: Once the circuit has been established, data and voice are
transferred from the source to the destination. The dedicated connection
remains as long as the end parties communicate.

20
21

 Circuit Disconnection: When data transfer is complete, the connection is


relinquished. The disconnection is initiated by any one of the user.
Disconnection involves removal of all intermediate links from the sender to
the receiver.
Diagrammatic Representation of Circuit Switching in Telephone
The following diagram represents circuit established between two telephones
connected by circuit switched connection. The blue boxes represent the switching
offices and their connection with other switching offices. The black lines
connecting the switching offices represents the permanent link between the offices.
When a connection is requested, links are established within the switching offices
as denoted by white dotted lines, in a manner so that a dedicated circuit is
established between the communicating parties. The links remains as long as
communication continues.

Telephone system network is the one of example of Circuit switching. TDM


(Time Division Multiplexing) and FDM (Frequency Division
Multiplexing) are two methods of multiplexing multiple signals into a single
carrier.
 Frequency Division Multiplexing : Divides into multiple bands
Frequency Division Multiplexing or FDM is used when multiple data signals
are combined for simultaneous transmission via a shared communication
medium.It is a technique by which the total bandwidth is divided into a series
of non-overlapping frequency sub-bands,where each sub-band carry different

21
22

signal. Practical use in radio spectrum & optical fiber to share multiple
independent signals.
 Time Division Multiplexing : Divides into frames
Time-division multiplexing (TDM) is a method of transmitting and receiving
independent signals over a common signal path by means of synchronized
switches at each end of the transmission line. TDM is used for long-distance
communication links and bears heavy data traffic loads from end user.
Time division multiplexing (TDM) is also known as a digital circuit switched.

Formulas in Circuit Switching :-


Transmission rate = Link Rate or Bit rate /
no. of slots = R/h bps
Transmission time = size of file /
transmission rate
= x / (R/h) = (x*h)/R second
Total time to send packet to destination =
Transmission time + circuit setup time

Example 1 : How long it takes to send a file of ‘x bits’ from host A to host B
over a circuit switched network that uses TDM with ‘h slots’ and have a bit rate
of ‘R Mbps’, circuit establish time is k seconds.Find total time?
Explanation:
Transmission rate = Link Rate or Bit rate / no. of slots = R/h bps
Transmission time = size of file/ transmission rate = x / (R/h) = (x*h)/R
Total time = transmission time + circuit setup time = (x*h)/R secs + k secs
Advantages and Disadvantages of Circuit Switching
Advantages
 It is suitable for long continuous transmission, since a continuous
transmission route is established, that remains throughout the conversation.
 The dedicated path ensures a steady data rate of communication.
 No intermediate delays are found once the circuit is established. So, they are
suitable for real time communication of both voice and data transmission.
Disadvantages

22
23

 Circuit switching establishes a dedicated connection between the end parties.


This dedicated connection cannot be used for transmitting any other data,
even if the data load is very low.
 Bandwidth requirement is high even in cases of low data volume.
 There is underutilization of system resources. Once resources are allocated
to a particular connection, they cannot be used for other connections.
 Time required to establish connection may be high.

Message Switching –
Message switching was a technique developed as an alternate to circuit switching,
before packet switching was introduced. In message switching, end users
communicate by sending and receiving messages that included the entire data to
be shared. Messages are the smallest individual unit.
Also, the sender and receiver are not directly connected. There are a number of
intermediate nodes transfer data and ensure that the message reaches its
destination. Message switched data networks are hence called hop-by-hop
systems.
They provide 2 distinct and important characteristics:
1. Store and forward – The intermediate nodes have the responsibility of
transferring the entire message to the next node. Hence, each node must have
storage capacity. A message will only be delivered if the next hop and the link
connecting it are both available, otherwise it’ll be stored indefinitely. A store-
and-forward switch forwards a message only if sufficient resources are
available and the next hop is accepting data. This is called the store-and-
forward property.
2. Message delivery – This implies wrapping the entire information in a single
message and transferring it from the source to the destination node. Each
message must have a header that contains the message routing information,
including the source and destination.
Message switching network consists of transmission links (channels), store-and-
forward switch nodes and end stations as shown in the following picture:

23
24

Characteristics of message switching –


Message switching is advantageous as it enables efficient usage of network
resources. Also, because of the store-and-forward capability of intermediary
nodes, traffic can be efficiently regulated and controlled. Message delivery as one
unit, rather than in pieces, is another benefit.

However, message switching has certain disadvantages as well. Since messages


are stored indefinitely at each intermediate node, switches require large storage
capacity. Also, these are pretty slow. This is because at each node, first there us
wait till the entire message is received, then it must be stored and transmitted
after processing the next node and links to it depending on availability and
channel traffic. Hence, message switching cannot be used for real time or
interactive applications like video conference.

Advantages of Message Switching –


Message switching has the following advantages:
1. As message switching is able to store the message for which communication
channel is not available, it helps in reducing the traffic congestion in network.

24
25

2. In message switching, the data channels are shared by the network devices.
3. It makes the traffic management efficient by assigning priorities to the
messages.
Disadvantages of Message Switching –
Message switching has the following disadvantages:
1. Message switching cannot be used for real time applications as storing of
messages causes delay.
2. In message switching, message has to be stored for which every intermediate
devices in the network requires a large storing capacity.
Applications –
The store-and-forward method was implemented in telegraph message switching
centres. Today, although many major networks and systems are packet-switched
or circuit switched networks, their delivery processes can be based on message
switching. For example, in most electronic mail systems the delivery process is
based on message switching, while the network is in fact either circuit-switched
or packet-switched.

NETWORK MODELS:-
OSI Model:-
OSI stands for Open Systems Interconnection. It has been developed by ISO –
‘International Organization of Standardization‘, in the year 1984. It is a 7
layer architecture with each layer having specific functionality to perform. All
these 7 layers work collaboratively to transmit the data from one person to
another across the globe.

25
26

1. Physical Layer (Layer 1) :

The lowest layer of the OSI reference model is the physical layer. It is
responsible for the actual physical connection between the devices. The physical
layer contains information in the form of bits. It is responsible for transmitting
individual bits from one node to the next. When receiving data, this layer will get
the signal received and convert it into 0s and 1s and send them to the Data Link
layer, which will put the frame back together.

The functions of the physical layer are :


1. Bit synchronization: The physical layer provides the synchronization of the
bits by providing a clock. This clock controls both sender and receiver thus
providing synchronization at bit level.
2. Bit rate control: The Physical layer also defines the transmission rate i.e. the
number of bits sent per second.

26
27

3. Physical topologies: Physical layer specifies the way in which the different,


devices/nodes are arranged in a network i.e. bus, star or mesh topolgy.
4. Transmission mode: Physical layer also defines the way in which the data
flows between the two connected devices. The various transmission modes
possible are: Simplex, half-duplex and full-duplex.
* Hub, Repeater, Modem, Cables are Physical Layer devices.
** Network Layer, Data Link Layer and Physical Layer are also known as Lower
Layers or Hardware Layers.

2. Data Link Layer (DLL) (Layer 2) :

The data link layer is responsible for the node to node delivery of the message.
The main function of this layer is to make sure data transfer is error-free from one
node to another, over the physical layer. When a packet arrives in a network, it is
the responsibility of DLL to transmit it to the Host using its MAC address.
Data Link Layer is divided into two sub layers :
1. Logical Link Control (LLC)
2. Media Access Control (MAC)
The packet received from Network layer is further divided into frames depending
on the frame size of NIC(Network Interface Card). DLL also encapsulates Sender
and Receiver’s MAC address in the header.
The Receiver’s MAC address is obtained by placing an ARP(Address Resolution
Protocol) request onto the wire asking “Who has that IP address?” and the
destination host will reply with its MAC address.

The functions of the data Link layer are :


1. Framing: Framing is a function of the data link layer. It provides a way for a
sender to transmit a set of bits that are meaningful to the receiver. This can be
accomplished by attaching special bit patterns to the beginning and end of the
frame.
2. Physical addressing: After creating frames, Data link layer adds physical
addresses (MAC address) of sender and/or receiver in the header of each
frame.
3. Error control: Data link layer provides the mechanism of error control in
which it detects and retransmits damaged or lost frames.
4. Flow Control: The data rate must be constant on both sides else the data may
get corrupted thus , flow control coordinates that amount of data that can be
sent before receiving acknowledgement.

27
28

5. Access control: When a single communication channel is shared by multiple


devices, MAC sub-layer of data link layer helps to determine which device has
control over the channel at a given time.
* Packet in Data Link layer is referred as  Frame.
** Data Link layer is handled by the NIC (Network Interface Card) and device
drivers of host machines.
*** Switch & Bridge are Data Link Layer devices.

3. Network Layer (Layer 3) :

Network layer works for the transmission of data from one host to the other
located in different networks. It also takes care of packet routing i.e. selection of
the shortest path to transmit the packet, from the number of routes available. The
sender & receiver’s IP address are placed in the header by the network layer.
The functions of the Network layer are :
1. Routing: The network layer protocols determine which route is suitable from
source to destination. This function of network layer is known as routing.
2. Logical Addressing: In order to identify each device on internetwork
uniquely, network layer defines an addressing scheme. The sender &
receiver’s IP address are placed in the header by network layer. Such an
address distinguishes each device uniquely and universally.
* Segment  in Network layer is referred as Packet.

** Network layer is implemented by networking devices such as routers.

4. Transport Layer (Layer 4) :

Transport layer provides services to application layer and takes services from
network layer. The data in the transport layer is referred to as Segments. It is
responsible for the End to End Delivery of the complete message. The transport
layer also provides the acknowledgement of the successful data transmission and
re-transmits the data if an error is found.
• At sender’s side:
Transport layer receives the formatted data from the upper layers,
performs Segmentation and also implements Flow & Error control to ensure
proper data transmission. It also adds Source and Destination port number in its

28
29

header and forwards the segmented data to the Network Layer.


Note: The sender need to know the port number associated with the receiver’s
application.
Generally, this destination port number is configured, either by default or
manually. For example, when a web application makes a request to a web server,
it typically uses port number 80, because this is the default port assigned to web
applications. Many applications have default port assigned.
• At receiver’s side:
Transport Layer reads the port number from its header and forwards the Data
which it has received to the respective application. It also performs sequencing
and reassembling of the segmented data.
The functions of the transport layer are :
1. Segmentation and Reassembly: This layer accepts the message from the
(session) layer , breaks the message into smaller units . Each of the segment
produced has a header associated with it. The transport layer at the destination
station reassembles the message.
2. Service Point Addressing: In order to deliver the message to correct process,
transport layer header includes a type of address called service point address
or port address. Thus by specifying this address, transport layer makes sure
that the message is delivered to the correct process.
The services provided by the transport layer :
1. Connection Oriented Service: It is a three-phase process which include
– Connection Establishment
– Data Transfer
– Termination / disconnection
In this type of transmission, the receiving device sends an acknowledgement,
back to the source after a packet or group of packet is received. This type of
transmission is reliable and secure.
2. Connection less service: It is a one-phase process and includes Data Transfer.
In this type of transmission, the receiver does not acknowledge receipt of a
packet. This approach allows for much faster communication between devices.
Connection-oriented service is more reliable than connectionless Service.
* Data in the Transport Layer is called as Segments.
** Transport layer is operated by the Operating System. It is a part of the OS and
communicates with the Application Layer by making system calls.
Transport Layer is called as  Heart of OSI  model.

5. Session Layer (Layer 5) :

29
30

This layer is responsible for establishment of connection, maintenance of


sessions, authentication and also ensures security.
The functions of the session layer are :
1. Session establishment, maintenance and termination: The layer allows the
two processes to establish, use and terminate a connection.
2. Synchronization : This layer allows a process to add checkpoints which are
considered as synchronization points into the data. These synchronization
point help to identify the error so that the data is re-synchronized properly, and
ends of the messages are not cut prematurely and data loss is avoided.
3. Dialog Controller : The session layer allows two systems to start
communication with each other in half-duplex or full-duplex.
**All the below 3 layers(including Session Layer) are integrated as a single
layer in the TCP/IP model as “Application Layer”.
**Implementation of these 3 layers is done by the network application itself.
These are also known as  Upper Layers  or  Software Layers.

SCENARIO:
Let’s consider a scenario where a user wants to send a message through some
Messenger application running in his browser. The “Messenger” here acts as the
application layer which provides the user with an interface to create the data. This
message or so-called Data is compressed, encrypted (if any secure data) and
converted into bits (0’s and 1’s) so that it can be transmitted.

6. Presentation Layer (Layer 6) :

Presentation layer is also called the Translation layer.The data from the


application layer is extracted here and manipulated as per the required format to
transmit over the network.
The functions of the presentation layer are :
30
31

1. Translation : For example, ASCII to EBCDIC.


2. Encryption/ Decryption : Data encryption translates the data into another
form or code. The encrypted data is known as the cipher text and the
decrypted data is known as plain text. A key value is used for encrypting as
well as decrypting data.
3. Compression: Reduces the number of bits that need to be transmitted on the
network.

7. Application Layer (Layer 7) :

At the very top of the OSI Reference Model stack of layers, we find Application
layer which is implemented by the network applications. These applications
produce the data, which has to be transferred over the network. This layer also
serves as a window for the application services to access the network and for
displaying the received information to the user.
Ex: Application – Browsers, Skype Messenger etc.
**Application Layer is also called as Desktop Layer.

The functions of the Application layer are :


1. Network Virtual Terminal
2. FTAM-File transfer access and management
3. Mail Services
4. Directory Services
OSI model acts as a reference model and is not implemented in the Internet
because of its late invention. Current model being used is the TCP/IP model.
TCP\IP Model:-
The OSI Model we just looked at is just a reference/logical model. It was
designed to describe the functions of the communication system by dividing the
communication procedure into smaller and simpler components. But when we
talk about the TCP/IP model, it was designed and developed by Department of
Defense (DoD) in 1960s and is based on standard protocols. It stands for
Transmission Control Protocol/Internet Protocol. The TCP/IP model is a concise
version of the OSI model. It contains four layers, unlike seven layers in the OSI
model. The layers are:
1. Process/Application Layer
2. Host-to-Host/Transport Layer
3. Internet Layer
4. Network Access/Link Layer

31
32

1. Network Access Layer –

This layer corresponds to the combination of Data Link Layer and Physical Layer
of the OSI model. It looks out for hardware addressing and the protocols present in
this layer allows for the physical transmission of data.
We just talked about ARP being a protocol of Internet layer, but there is a conflict
about declaring it as a protocol of Internet Layer or Network access layer. It is
described as residing in layer 3, being encapsulated by layer 2 protocols.

2. Internet Layer –

This layer parallels the functions of OSI’s Network layer. It defines the protocols
which are responsible for logical transmission of data over the entire network. The
main protocols residing at this layer are :
1. IP – stands for Internet Protocol and it is responsible for delivering packets
from the source host to the destination host by looking at the IP addresses in the
packet headers. IP has 2 versions:
IPv4 and IPv6. IPv4 is the one that most of the websites are using currently. But
IPv6 is growing as the number of IPv4 addresses are limited in number when
compared to the number of users.
2. ICMP – stands for Internet Control Message Protocol. It is encapsulated within
IP datagrams and is responsible for providing hosts with information about
network problems.
3. ARP – stands for Address Resolution Protocol. Its job is to find the hardware
address of a host from a known IP address. ARP has several types: Reverse
ARP, Proxy ARP, Gratuitous ARP and Inverse ARP.

3. Host-to-Host Layer –

This layer is analogous to the transport layer of the OSI model. It is responsible for
end-to-end communication and error-free delivery of data. It shields the upper-
layer applications from the complexities of data. The two main protocols present in
this layer are :
1. Transmission Control Protocol (TCP) – It is known to provide reliable and
error-free communication between end systems. It performs sequencing and
segmentation of data. It also has acknowledgment feature and controls the flow
of the data through flow control mechanism. It is a very effective protocol but

32
33

has a lot of overhead due to such features. Increased overhead leads to


increased cost.
2. User Datagram Protocol (UDP) – On the other hand does not provide any
such features. It is the go-to protocol if your application does not require
reliable transport as it is very cost-effective. Unlike TCP, which is connection-
oriented protocol, UDP is connectionless.

4. Application Layer –

This layer performs the functions of top three layers of the OSI model:
Application, Presentation and Session Layer. It is responsible for node-to-node
communication and controls user-interface specifications. Some of the protocols
present in this layer are: HTTP, HTTPS, FTP, TFTP, Telnet, SSH, SMTP, SNMP,
NTP, DNS, DHCP, NFS, X Window, LPD. Have a look at Protocols in
Application Layer for some information about these protocols. Protocols other than
those present in the linked article are :
1. HTTP and HTTPS – HTTP stands for Hypertext transfer protocol. It is used
by the World Wide Web to manage communications between web browsers and
servers. HTTPS stands for HTTP-Secure. It is a combination of HTTP with
SSL(Secure Socket Layer). It is efficient in cases where the browser need to fill
out forms, sign in, authenticate and carry out bank transactions.
2. SSH – SSH stands for Secure Shell. It is a terminal emulations software similar
to Telnet. The reason SSH is more preferred is because of its ability to maintain
the encrypted connection. It sets up a secure session over a TCP/IP connection.
3. NTP – NTP stands for Network Time Protocol. It is used to synchronize the
clocks on our computer to one standard time source. It is very useful in
situations like bank transactions. Assume the following situation without the
presence of NTP. Suppose you carry out a transaction, where your computer
reads the time at 2:30 PM while the server records it at 2:28 PM. The server can
crash very badly if it’s out of sync.
3. The diagrammatic comparison of the TCP/IP and OSI model is as follows :

33
34

Difference between TCP/IP and OSI Model:

TCP/IP OSI
TCP refers to Transmission OSI refers to Open Systems
Control Protocol. Interconnection.

TCP/IP has 4 layers. OSI has 7 layers.

TCP/IP is more reliable OSI is less reliable

TCP/IP does not have very


strict boundaries. OSI has strict boundaries

TCP/IP follow a horizontal


approach. OSI follows a vertical approach.

TCP/IP uses both session


and presentation layer in the OSI uses different session and
application layer itself. presentation layers.

TCP/IP developed protocols


then model. OSI developed model then protocol.

34
35

Transport layer in TCP/IP In OSI model, transport layer


does not provide assurance provides assurance delivery of
delivery of packets. packets.

TCP/IP model network Connection less and connection


layer only provides oriented both services are provided
connection less services. by network layer in OSI model.

Protocols cannot be While in OSI model, Protocols are


replaced easily in TCP/IP better covered and is easy to replace
model. with the change in technology.

MAC:-Multiple Access Control


The Data Link Layer is responsible for transmission of data between two nodes.
Its main functions are-
 Data Link Control
 Multiple Access Control

Data Link control –


The data link control is responsible for reliable transmission of message over
transmission channel by using techniques like framing, error control and flow
control. For Data link control refer to – Stop and Wait ARQ
Multiple Access Control –
If there is a dedicated link between the sender and the receiver then data link
control layer is sufficient, however if there is no dedicated link present then
multiple stations can access the channel simultaneously. Hence multiple access
protocols are required to decrease collision and avoid crosstalk. For example, in a
classroom full of students, when a teacher asks a question and all the students (or
stations) start answering simultaneously (send data at same time) then a lot of
chaos is created( data overlap or data lost) then it is the job of the teacher

35
36

(multiple access protocols) to manage the students and make them answer one at
a time.
Thus, protocols are required for sharing data on non dedicated channels. Multiple
access protocols can be subdivided further as –

1. Random Access Protocol: In this, all stations have same superiority that is no
station has more priority than another station. Any station can send data
depending on medium’s state( idle or busy). It has two features:
1. There is no fixed time for sending data
2. There is no fixed sequence of stations sending data
The Random access protocols are further subdivided as:
(a) ALOHA – It was designed for wireless LAN but is also applicable for shared
medium. In this, multiple stations can transmit data at the same time and can
hence lead to collision and data being garbled.
 Pure Aloha:
When a station sends data it waits for an acknowledgement. If the

36
37

acknowledgement doesn’t come within the allotted time then the station waits
for a random amount of time called back-off time (Tb) and re-sends the data.
Since different stations wait for different amount of time, the probability of
further collision decreases.
 Vulnerable Time = 2* Frame transmission time
 Throughput = G exp{-2*G}
Maximum throughput = 0.184 for G=0.5
 Slotted Aloha:
It is similar to pure aloha, except that we divide time into slots and sending of
data is allowed only at the beginning of these slots. If a station misses out the
allowed time, it must wait for the next slot. This reduces the probability of
collision.
 Vulnerable Time = Frame transmission time
 Throughput = G exp{-*G}
Maximum throughput = 0.368 for G=1
For more information on ALOHA refer – LAN Technologies
(b) CSMA – Carrier Sense Multiple Access ensures fewer collisions as the
station is required to first sense the medium (for idle or busy) before transmitting
data. If it is idle then it sends data, otherwise it waits till the channel becomes
idle. However there is still chance of collision in CSMA due to propagation
delay. For example, if station A wants to send data, it will first sense the
medium.If it finds the channel idle, it will start sending data. However, by the
time the first bit of data is transmitted (delayed due to propagation delay) from
station A, if station B requests to send data and senses the medium it will also
find it idle and will also send data. This will result in collision of data from
station A and B.

CSMA access modes-


 1-persistent: The node senses the channel, if idle it sends the data, otherwise
it continuously keeps on checking the medium for being idle and transmits
unconditionally(with 1 probability) as soon as the channel gets idle.
 Non-Persistent: The node senses the channel, if idle it sends the data,
otherwise it checks the medium after a random amount of time (not
continuously) and transmits when found idle.
37
38

 P-persistent: The node senses the medium, if idle it sends the data with p
probability. If the data is not transmitted ((1-p) probability) then it waits for
some time and checks the medium again, now if it is found idle then it send
with p probability. This repeat continues until the frame is sent. It is used in
Wifi and packet radio systems.
 O-persistent: Superiority of nodes is decided beforehand and transmission
occurs in that order. If the medium is idle, node waits for its time slot to send
data.
(c) CSMA/CD – Carrier sense multiple access with collision detection. Stations
can terminate transmission of data if collision is detected.
Carrier Sense Multiple Access with Collision Detection (CSMA/CD) is a network
protocol for carrier transmission that operates in the Medium Access Control
(MAC) layer. It senses or listens whether the shared channel for transmission is
busy or not, and defers transmissions until the channel is free. The collision
detection technology detects collisions by sensing transmissions from other
stations. On detection of a collision, the station stops transmitting, sends a jam
signal, and then waits for a random time interval before retransmission.
Algorithms
The algorithm of CSMA/CD is:
 When a frame is ready, the transmitting station checks whether the channel
is idle or busy.
 If the channel is busy, the station waits until the channel becomes idle.
 If the channel is idle, the station starts transmitting and continually monitors
the channel to detect collision.
 If a collision is detected, the station starts the collision resolution algorithm.
 The station resets the retransmission counters and completes frame
transmission.
The algorithm of Collision Resolution is:
 The station continues transmission of the current frame for a specified time
along with a jam signal, to ensure that all the other stations detect collision.
 The station increments the retransmission counter.
 If the maximum number of retransmission attempts is reached, then the
station aborts transmission.
 Otherwise, the station waits for a backoff period which is generally a
function of the number of collisions and restart main algorithm.
The following flowchart summarizes the algorithms:

38
39

 Though this algorithm detects collisions, it does not reduce the number of
collisions.
 It is not appropriate for large networks performance degrades exponentially
when more stations are added.

(d) CSMA/CA – Carrier sense multiple access with collision avoidance. The
process of collisions detection involves sender receiving acknowledgement
signals. If there is just one signal(its own) then the data is successfully sent but if
there are two signals(its own and the one with which it has collided) then it
means a collision has occurred. To distinguish between these two cases, collision
must have a lot of impact on received signal. However it is not so in wired
networks, so CSMA/CA is used in this case.
CSMA/CA avoids collision by:

39
40

1. Interframe space – Station waits for medium to become idle and if found idle
it does not immediately send data (to avoid collision due to propagation delay)
rather it waits for a period of time called Interframe space or IFS. After this
time it again checks the medium for being idle. The IFS duration depends on
the priority of station.
2. Contention Window – It is the amount of time divided into slots. If the
sender is ready to send data, it chooses a random number of slots as wait time
which doubles every time medium is not found idle. If the medium is found
busy it does not restart the entire process, rather it restarts the timer when the
channel is found idle again.
3. Acknowledgement – The sender re-transmits the data if acknowledgement is
not received before time-out.
2. Controlled Access:
In this, the data is sent by that station which is approved by all other stations. For
further details refer – Controlled Access Protocols
3. Channelization:
In this, the available bandwidth of the link is shared in time, frequency and code
to multiple stations to access channel simultaneously.
 Frequency Division Multiple Access (FDMA) – The available bandwidth is
divided into equal bands so that each station can be allocated its own band.
Guard bands are also added so that no to bands overlap to avoid crosstalk and
noise.
 Time Division Multiple Access (TDMA) – In this, the bandwidth is shared
between multiple stations. To avoid collision time is divided into slots and
stations are allotted these slots to transmit data. However there is a overhead
of synchronization as each station needs to know its time slot. This is resolved
by adding synchronization bits to each slot. Another issue with TDMA is
propagation delay which is resolved by addition of guard bands.
For more details refer – Circuit Switching
 Code Division Multiple Access (CDMA) – One channel carries all
transmissions simultaneously. There is neither division of bandwidth nor
division of time. For example, if there are many people in a room all speaking
at the same time, then also perfect reception of data is possible if only two
person speak the same language. Similarly data from different stations can be
transmitted simultaneously in different code languages.

40
41

UNIT:-2

NETWORK LAYER:-

ARP:- Address Resolution Protocol (ARP) is a communication protocol used to


find the MAC (Media Access Control) address of a device from its IP address. This
protocol is used when a device wants to communicate with another device on a
Local Area Network or Ethernet.

Types of ARP

There are four types of Address Resolution Protocol, which is given below:

o Proxy ARP
o Gratuitous ARP
o Reverse ARP (RARP)
o Inverse ARP

41
42

Proxy ARP - Proxy ARP is a method through which a Layer 3 devices may
respond to ARP requests for a target that is in a different network from the sender.
The Proxy ARP configured router responds to the ARP and map the MAC address
of the router with the target IP address and fool the sender that it is reached at its
destination.

At the backend, the proxy router sends its packets to the appropriate destination
because the packets contain the necessary information.

Example - If Host A wants to transmit data to Host B, which is on the different
network, then Host A sends an ARP request message to receive a MAC address for
Host B. The router responds to Host A with its own MAC address pretend itself as
a destination. When the data is transmitted to the destination by Host A, it will
send to the gateway so that it sends to Host B. This is known as proxy ARP.

Gratuitous ARP - Gratuitous ARP is an ARP request of the host that helps to
identify the duplicate IP address. It is a broadcast request for the IP address of the

42
43

router. If an ARP request is sent by a switch or router to get its IP address and no
ARP responses are received, so all other nodes cannot use the IP address allocated
to that switch or router. Yet if a router or switch sends an ARP request for its IP
address and receives an ARP response, another node uses the IP address allocated
to the switch or router.

There are some primary use cases of gratuitous ARP that are given below:

o The gratuitous ARP is used to update the ARP table of other devices.
o It also checks whether the host is using the original IP address or a duplicate
one.

Reverse ARP (RARP) - It is a networking protocol used by the client system in a
local area network (LAN) to request its IPv4 address from the ARP gateway router
table. A table is created by the network administrator in the gateway-router that is
used to find out the MAC address to the corresponding IP address.

When a new system is set up or any machine that has no memory to store the IP
address, then the user has to find the IP address of the device. The device sends a
RARP broadcast packet, including its own MAC address in the address field of
both the sender and the receiver hardware. A host installed inside of the local
network called the RARP-server is prepared to respond to such type of broadcast
packet. The RARP server is then trying to locate a mapping table entry in the IP to
MAC address. If any entry matches the item in the table, then the RARP server
sends the response packet along with the IP address to the requesting computer.

Inverse ARP (InARP) - Inverse ARP is the inverse of ARP: it is used to find the IP (Layer 3) addresses of nodes from their data link layer (Layer 2) addresses. It is mainly used in Frame Relay and ATM networks, where Layer 2 virtual circuit addresses are obtained from Layer 2 signalling and the corresponding Layer 3 addresses must then be discovered.

ARP resolves Layer 3 addresses to Layer 2 addresses, whereas InARP resolves addresses in the opposite direction. The InARP packet format is similar to ARP, but different operation codes are used.

How Address Resolution Protocol (ARP) works?

Most computer programs and applications use the logical address (IP address) to send and receive messages, but the actual communication happens over the physical address (MAC address), i.e. at layer 2 of the OSI model. The task, therefore, is to obtain the destination MAC address so that communication with the other device is possible. This is where ARP comes into the picture: its function is to translate an IP address into a physical address.
 

The acronym ARP stands for Address Resolution Protocol which is one of the


most important protocols of the Network layer in the OSI model. 
Note: ARP finds the hardware address, also known as Media Access Control
(MAC) address, of a host from its known IP address. 
 
 


Let’s look at how ARP works.

Imagine a device wants to communicate with another device over the network. ARP broadcasts a request packet to all the devices on the source network.
Each device strips the data link layer header from the protocol data unit (PDU), called a frame, and passes the packet to the network layer (layer 3 of the OSI model), where the network ID of the packet is compared with the network ID of the destination IP address. If they match, the device responds to the source with the MAC address of the destination; otherwise the packet reaches the gateway of the network, which broadcasts the packet to the devices it is connected to and validates their network IDs.
This process continues until the second-to-last network device on the path to the destination is reached, where the request is validated and ARP, in turn, responds with the destination MAC address.
The important terms associated with ARP are :

1. ARP Cache: After a MAC address is resolved, ARP sends it to the source, where it is stored in a table for future reference. Subsequent communications can use the MAC address from this table.
2. ARP Cache Timeout: It indicates the time for which a MAC address can remain in the ARP cache.
3. ARP request: A packet broadcast over the network to discover the destination MAC address. The request carries:
   1. The physical address of the sender.
   2. The IP address of the sender.
   3. The physical address of the receiver, set to FF:FF:FF:FF:FF:FF (all 1’s).
   4. The IP address of the receiver.
4. ARP response/reply: The MAC address response that the source receives from the destination, which aids in further communication of the data.
 
 
 


 CASE-1: The sender is a host and wants to send a packet to another host on the same network.
 Use ARP to find the other host’s physical address.
 CASE-2: The sender is a host and wants to send a packet to another host on another network.
 The sender looks at its routing table.
 It finds the IP address of the next hop (router) for this destination.
 It uses ARP to find the router’s physical address.
 CASE-3: The sender is a router that has received a datagram destined for a host on another network.
 The router checks its routing table.
 It finds the IP address of the next router.
 It uses ARP to find the next router’s physical address.
 CASE-4: The sender is a router that has received a datagram destined for a host on the same network.
 Use ARP to find this host’s physical address.
NOTE: An ARP request is a broadcast, and an ARP response is a Unicast.
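The ARP cache and cache timeout described above can be pictured with a minimal Python sketch. The table structure, timeout value and addresses below are illustrative assumptions, not part of any real ARP implementation:

    import time

    ARP_CACHE_TIMEOUT = 120            # assumed cache lifetime in seconds
    arp_cache = {}                     # maps IP address -> (MAC address, time learned)

    def arp_learn(ip, mac):
        # Store the MAC address carried in a (unicast) ARP reply.
        arp_cache[ip] = (mac, time.time())

    def arp_lookup(ip):
        # Return the cached MAC for ip, or None if unknown or expired.
        # A real stack would broadcast an ARP request when this returns None.
        entry = arp_cache.get(ip)
        if entry and time.time() - entry[1] < ARP_CACHE_TIMEOUT:
            return entry[0]
        return None

    arp_learn("192.168.1.1", "aa:bb:cc:dd:ee:ff")
    print(arp_lookup("192.168.1.1"))   # aa:bb:cc:dd:ee:ff
    print(arp_lookup("192.168.1.50"))  # None -> would trigger an ARP broadcast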

RARP:- RARP is an abbreviation of Reverse Address Resolution Protocol, a networking protocol employed by a client computer to request its IP address from a gateway server's Address Resolution Protocol table or cache. The network administrator creates a table in the gateway router which maps MAC addresses to their corresponding IP addresses.
The client does not need prior knowledge of which servers are capable of serving its request. Media Access Control (MAC) addresses require individual configuration on the servers, done by an administrator. RARP is limited to serving IP addresses only.
When a new machine is set up, it may or may not have an attached disk that can permanently store its IP address, so the RARP client program requests the IP address from the RARP server on the router. The RARP server returns the IP address to the machine, provided an entry has been set up in the router table.


History of RARP :
RARP was proposed in 1984 by researchers at Stanford University. The protocol provided an IP address to a workstation; such diskless workstations were also the platform for the first workstations from Sun Microsystems.
Working of RARP :
RARP operates at the network access layer and is employed to send data between two points in a network.
Each network participant has two unique addresses: an IP address (a logical address) and a MAC address (the physical address).

The IP address is assigned by software, whereas the MAC address is built into the hardware.
The RARP server that responds to RARP requests can be any ordinary computer in the network, but it must hold the data of all the MAC addresses with their assigned IP addresses. Only these RARP servers can reply to a RARP request received on the network. The request packet has to be sent on the lowest layer of the network, which means the packet is transferred to all the participants at the same time.
The client broadcasts a RARP request using the Ethernet broadcast address and including its own physical address. The server responds by informing the client of its IP address.
How is RARP different from ARP ?
RARP ARP

RARP stands for Reverse Address ARP stands for Address Resolution
Resolution Protocol Protocol

In RARP, we find our own IP In ARP, we find the IP address of a


address remote machine

The MAC address is known and the The IP address is known, and the MAC
IP address is requested address is being requested

It uses the value 3 for requests and It uses the value 1 for requests and 2
4 for responses for responses

Uses of RARP :
RARP is used to convert an Ethernet (MAC) address to an IP address.
It is also available for LAN technologies such as FDDI, Token Ring, etc.
Disadvantages of RARP :
The Reverse Address Resolution Protocol had a few disadvantages which eventually led to its replacement by BOOTP and DHCP. Some of the disadvantages are listed below:
 The RARP server must be located within the same physical network.
 The computer sends the RARP request on the lowest layer of the network, so a router cannot forward the packet beyond that network.
 RARP cannot handle subnetting because no subnet masks are sent. If the network is split into multiple subnets, a RARP server must be available in each of them.
 It is not possible to fully configure a PC in a modern network with RARP alone.
 It does not fully utilize the potential of a network technology like Ethernet.
RARP has now become an obsolete protocol since it operates at a very low level. Because of this, it requires direct access to the network, which makes it difficult to build a server.

ICMP:- Since IP does not have an inbuilt mechanism for sending error and control messages, it depends on the Internet Control Message Protocol (ICMP) to provide error control. ICMP is used for reporting errors and for management queries. It is a supporting protocol used by network devices such as routers to send error messages and operational information,
e.g. that a requested service is not available or that a host or router could not be reached.
Source quench message :
A source quench message is a request to decrease the traffic rate of messages being sent to a host (destination). In other words, when the receiving host detects that packets are arriving too fast, it sends a source quench message to the source asking it to slow down, so that no packets are lost.

ICMP takes the source IP from the discarded packet and informs the source by sending a source quench message. The source then reduces its transmission speed so that the router is relieved from congestion.


When the congested router is far away from the source, ICMP sends hop-by-hop source quench messages so that every router along the path reduces its transmission speed.
Parameter problem :
Whenever a packet arrives at a router, the checksum calculated over the header must equal the checksum received in the header; only then is the packet accepted by the router.

If there is a mismatch, the packet is dropped by the router.


ICMP takes the source IP from the discarded packet and informs the source by sending a parameter problem message.
Time exceeded message :

When some fragments of a datagram are lost in the network, the fragments held by the router are eventually dropped. Likewise, a packet is discarded when its time-to-live field reaches zero. In both cases ICMP takes the source IP from the discarded packet and informs the source by sending a time exceeded message.

Destination un-reachable :
Destination unreachable is generated by the host or its inbound gateway to inform
the client that the destination is unreachable for some reason.


It is not only routers that generate ICMP error messages; the destination host may also send an ICMP error message when any type of failure (link failure, hardware failure, port failure, etc.) occurs in the network.
Redirection message :
A redirect requests that data packets be sent on an alternate route. The message informs a host to update its routing information (so that it sends packets on the alternate route).
Example: suppose a host sends data through router R1, R1 forwards the data to router R2, and there is a direct path from the host to R2. R1 then sends a redirect message to inform the host that the better route to the destination is directly through R2. The host then sends data packets for that destination directly to R2, and R2 forwards the original datagram to the intended destination.
However, if the datagram itself contains routing information, this message is not sent even if a better route is available, because redirects should only be sent by gateways and not by Internet hosts.


Whenever a packet is forwarded in a wrong direction and is later redirected onto the correct path, ICMP sends a redirect message.

IGMP:- IGMP is an acronym for Internet Group Management Protocol. IGMP is a communication protocol used by hosts and adjacent routers for multicast communication on IP networks; it uses network resources efficiently to transmit message/data packets. Multicast communication can have single or multiple senders and receivers, so IGMP can be used for streaming video, gaming or web conferencing tools. The protocol is used on IPv4 networks; on IPv6, multicasting is managed by Multicast Listener Discovery (MLD). Like other network protocols, IGMP operates at the network layer. MLDv1 is almost the same in functioning as IGMPv2, and MLDv2 is almost similar to IGMPv3.
IGMPv1 was developed in 1989 at Stanford University. It was updated to IGMPv2 in 1997 and again to IGMPv3 in 2002.
Applications:
 Streaming –
Multicast routing protocol are used for audio and video streaming over the
network i.e., either one-to-many or many-to-many.


 Gaming –
Internet group management protocol is often used in simulation games which
has multiple users over the network such as online games.
 Web Conferencing tools –
Video conferencing is a new method to meet people from your own
convenience and IGMP connects to the users for conferencing and transfers
the message/data packets efficiently.
Types:
There are 3 versions of IGMP. These versions are backward compatible.
Following are the versions of IGMP:
1. IGMPv1 :
This version of the IGMP communication protocol allows all supporting hosts to join multicast groups using a membership request and includes some basic features. However, hosts cannot leave a group on their own and have to wait for a timeout to leave the group.
The message packet format in IGMPv1:

 Version –
Set to 1.
 Type –
1 for Host Membership Query and 2 for Host Membership Report.
 Unused –
8-bits of zero which are of no use.
 Checksum –
It is the 16-bit one’s complement of the one’s complement sum of the whole IGMP message (a small sketch of this calculation follows the field list).
 Group Address –
The group address field is zero when sent and ignored when received in a membership query message. In a membership report message, the group address field carries the IP host group address of the group being reported.
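As referenced above, here is a minimal Python sketch of this one's complement checksum calculation (the same style of checksum is used by IP, ICMP and IGMP). The 4-byte sample message is a made-up value for illustration only:

    def internet_checksum(data: bytes) -> int:
        # 16-bit one's complement of the one's complement sum of the data.
        if len(data) % 2:
            data += b"\x00"                           # pad to an even length
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]     # add 16-bit words
            total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
        return ~total & 0xFFFF

    # Example: checksum over a 4-byte message with the checksum field zeroed.
    print(hex(internet_checksum(bytes([0x11, 0x64, 0x00, 0x00]))))   # 0xee9b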

2. IGMPv2 :
IGMPv2 is the revised version of IGMPv1 communication protocol. It has added
functionality of leaving the multicast group using group membership.
The message packet format in IGMPv2:

Type –
0x11 for Membership Query
0x12 for IGMPv1 Membership Report
0x16 for IGMPv2 Membership Report
0x22 for IGMPv3 Membership Report
0x17 for Leave Group

 Max Response Time –


This field is ignored for message types other than membership query. For
membership query type, it is the maximum time allowed before sending a
response report. The value is in units of 0.1 seconds.
 Checksum –
It is the 16-bit one’s complement of the one’s complement sum of the whole IGMP message.

 Group Address –
It is set as 0 when sending a general query. Otherwise, multicast address for
group-specific or source-specific queries.


3. IGMPv3 :
IGMPv2 was revised to IGMPv3 and added source-specific multicast and
membership report aggregation. These reports are sent to 224.0.0.22.
The message packet format in IGMPv3:

 Max Response Time –


This field is ignored for message types other than membership query. For
membership query type, it is the maximum time allowed before sending a
response report. The value is in units of 0.1 seconds.
 Checksum –
It is the 16-bit one’s complement of the one’s complement sum of the whole IGMP message.
 Group Address –
It is set as 0 when sending a general query. Otherwise, multicast address for
group-specific or source-specific queries.
 Resv –
It is set to zero when sent and ignored when received.
 S flag –
It represents Suppress Router-side Processing flag. When the flag is set, it
indicates to suppress the timer updates that multicast routers perform upon
receiving any query.
 QRV –
It represents the Querier’s Robustness Variable. Routers keep adopting the QRV value from the most recently received query as their own value, unless the most recently received QRV is zero.

 QQIC –
It represents Querier’s Query Interval Code.
 Number of sources –
It represents the number of source addresses present in the query. For general
query or group-specific query, this field is zero and for group-and-source-
specific query, this field is non-zero.
 Source Address[i] –
It represents the IP unicast address for N fields.
Working:
IGMP works on devices that are capable of handling multicast groups and dynamic multicasting. These devices allow a host to join or leave membership of a multicast group, and allow clients to be added to and removed from the group. The protocol operates between a host and its local multicast router. When a multicast group is created, the multicast group address is in the range of class D (224-239) IP addresses and is used as the destination IP address in the packet.


L2 or Layer-2 devices such as switches are used between the host and the multicast router for IGMP snooping. IGMP snooping is a process of listening to IGMP network traffic in a controlled manner. The switch receives the message from the host and forwards the membership report to the local multicast router. The multicast traffic is then forwarded from local multicast routers to remote routers using PIM (Protocol Independent Multicast) so that clients can receive the message/data packets. A client wishing to join a group sends a join message, and the switch intercepts the message and adds the client's port to its multicast routing table.
Advantages:
 The IGMP communication protocol transmits multicast data to the receivers efficiently, so no junk packets are delivered to hosts, giving optimized performance.
 Bandwidth is fully utilized as all the shared links are connected.
 Hosts can leave a multicast group and join another.
Disadvantages:
 It does not provide good efficiency in filtering and security.
 Due to lack of TCP, network congestion can occur.
 IGMP is vulnerable to some attacks such as DOS attack (Denial-Of-Service).

IPV4:- Internet Protocol Version 4 (IPv4)

Internet Protocol is one of the major protocols in the TCP/IP protocol suite. It works at the network layer of the OSI model and at the Internet layer of the TCP/IP model. Thus this protocol has the responsibility of identifying hosts based upon their logical addresses and of routing data among them over the underlying network.
IP provides a mechanism to uniquely identify hosts by an IP addressing scheme. IP uses best-effort delivery, i.e. it does not guarantee that packets will be delivered to the destined host, but it will do its best to reach the destination. Internet Protocol version 4 uses a 32-bit logical address.
IPv4 - Packet Structure
Internet Protocol, being a layer-3 (OSI) protocol, takes data segments from layer-4 (Transport) and divides them into packets. An IP packet encapsulates the data unit received from the layer above and adds its own header information.


The encapsulated data is referred to as the IP payload. The IP header contains all the information necessary to deliver the packet at the other end.

The IP header includes many relevant fields, including the Version Number, which, in this context, is 4. Other details are as follows −
 Version − Version no. of Internet Protocol used (e.g. IPv4).
 IHL − Internet Header Length; Length of entire IP header.
 DSCP − Differentiated Services Code Point; this is Type of Service.
 ECN − Explicit Congestion Notification; It carries information about the
congestion seen in the route.
 Total Length − Length of entire IP Packet (including IP header and IP
Payload).
 Identification − If an IP packet is fragmented during transmission, all the fragments carry the same identification number, which identifies the original IP packet they belong to.
 Flags − As required by the network resources, if the IP packet is too large to handle, these ‘flags’ tell whether it can be fragmented or not. In this 3-bit field, the MSB is always set to ‘0’.
 Fragment Offset − This offset tells the exact position of the fragment in the
original IP Packet.
 Time to Live − To avoid looping in the network, every packet is sent with
some TTL value set, which tells the network how many routers (hops) this
packet can cross. At each hop, its value is decremented by one and when
the value reaches zero, the packet is discarded.
 Protocol − Tells the Network layer at the destination host, to which
Protocol this packet belongs to, i.e. the next level Protocol. For example
protocol number of ICMP is 1, TCP is 6 and UDP is 17.
 Header Checksum − This field is used to keep checksum value of entire
header which is then used to check if the packet is received error-free.
 Source Address − 32-bit address of the Sender (or source) of the packet.
 Destination Address − 32-bit address of the Receiver (or destination) of the
packet.
 Options − This is optional field, which is used if the value of IHL is greater
than 5. These options may contain values for options such as Security,
Record Route, Time Stamp, etc.
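To see how these fields sit in the 20-byte fixed header, here is a minimal Python sketch that unpacks them from raw bytes (an illustration only; the sample header values are made up):

    import struct
    import socket

    def parse_ipv4_header(raw: bytes) -> dict:
        # Unpack the fixed 20-byte IPv4 header (options, if any, follow it).
        ver_ihl, tos, total_len, ident, flags_frag, ttl, proto, checksum, src, dst = \
            struct.unpack("!BBHHHBBH4s4s", raw[:20])
        return {
            "version": ver_ihl >> 4,                 # upper 4 bits
            "ihl": ver_ihl & 0x0F,                   # header length in 32-bit words
            "total_length": total_len,
            "identification": ident,
            "flags": flags_frag >> 13,               # upper 3 bits
            "fragment_offset": flags_frag & 0x1FFF,  # lower 13 bits
            "ttl": ttl,
            "protocol": proto,                       # 1 = ICMP, 6 = TCP, 17 = UDP
            "header_checksum": checksum,
            "source": socket.inet_ntoa(src),
            "destination": socket.inet_ntoa(dst),
        }

    sample = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 20, 1, 0, 64, 6, 0,
                         socket.inet_aton("192.168.1.100"),
                         socket.inet_aton("192.168.1.1"))
    fields = parse_ipv4_header(sample)
    print(fields["version"], fields["ttl"], fields["destination"])   # 4 64 192.168.1.1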
IPv4 - Addressing
IPv4 supports three different types of addressing modes. −

Unicast Addressing Mode

In this mode, data is sent only to one destined host. The Destination Address field
contains 32- bit IP address of the destination host. Here the client sends data to the
targeted server −


Broadcast Addressing Mode

In this mode, the packet is addressed to all the hosts in a network segment. The
Destination Address field contains a special broadcast address,
i.e. 255.255.255.255. When a host sees this packet on the network, it is bound to
process it. Here the client sends a packet, which is entertained by all the Servers −

Multicast Addressing Mode


This mode is a mix of the previous two modes, i.e. the packet sent is destined neither to a single host nor to all the hosts on the segment. In this packet, the Destination Address contains a special address which starts with 224.x.x.x and can be processed by more than one host.

Here a server sends packets which are received by more than one host. Every network has one IP address reserved for the Network Number, which represents the network itself, and one IP address reserved for the Broadcast Address, which represents all the hosts in that network.

Hierarchical Addressing Scheme

IPv4 uses hierarchical addressing scheme. An IP address, which is 32-bits in


length, is divided into two or three parts as depicted −

A single IP address can contain information about the network and its sub-
network and ultimately the host. This scheme enables the IP Address to be
hierarchical where a network can have many sub-networks which in turn can have
many hosts.

Subnet Mask


The 32-bit IP address contains information about the host and its network. It is
very necessary to distinguish both. For this, routers use Subnet Mask, which is as
long as the size of the network address in the IP address. Subnet Mask is also 32
bits long. If the IP address in binary is ANDed with its Subnet Mask, the result
yields the Network address. For example, say the IP Address is 192.168.1.152 and
the Subnet Mask is 255.255.255.0 then −

This way the Subnet Mask helps extract the Network ID and the Host from an IP
Address. It can be identified now that 192.168.1.0 is the Network number and
192.168.1.152 is the host on that network.
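A minimal Python sketch of this AND operation, using the same example address and mask:

    import ipaddress

    ip   = ipaddress.IPv4Address("192.168.1.152")
    mask = ipaddress.IPv4Address("255.255.255.0")

    # Bitwise AND of the 32-bit address and mask yields the network number.
    network = ipaddress.IPv4Address(int(ip) & int(mask))
    print(network)                                  # 192.168.1.0

    # The bits cleared by the mask identify the host on that network.
    host_id = int(ip) & ~int(mask) & 0xFFFFFFFF
    print(host_id)                                  # 152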

Binary Representation

The positional value method is the simplest way of converting between binary and decimal values. An IP address is a 32-bit value divided into 4 octets. A binary octet contains 8 bits, and the value of each bit is determined by the position of the bit value '1' in the octet.

The positional value of a bit is 2 raised to the power (position − 1); for example, the value of a 1 bit at position 6 (counting from the right) is 2^(6−1) = 2^5 = 32. The total value of the octet is obtained by adding up the positional values of its bits: the value of 11000000 is 128 + 64 = 192. A few examples:
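A minimal Python sketch of this octet-by-octet conversion:

    def to_binary(ip: str) -> str:
        # Write each octet of a dotted-decimal IPv4 address as 8 binary digits.
        return ".".join(f"{int(octet):08b}" for octet in ip.split("."))

    print(to_binary("192.168.1.152"))   # 11000000.10101000.00000001.10011000
    print(int("11000000", 2))           # 192  (128 + 64)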


IPV6:- Internet Protocol version 6 (IPv6) is the latest revision of the Internet Protocol (IP) and the first version of the protocol to be widely deployed. IPv6 was developed by the Internet Engineering Task Force (IETF) to deal with the long-anticipated problem of IPv4 address exhaustion. The following sections explain IPv6 and its associated terminology with appropriate examples.

IPv6 - Features
The successor of IPv4 is not designed to be backward compatible. Trying to keep
the basic functionalities of IP addressing, IPv6 is redesigned entirely. It offers the
following features:
 Larger Address Space
In contrast to IPv4, IPv6 uses 4 times more bits to address a device on the
Internet. This much of extra bits can provide approximately
3.4×1038 different combinations of addresses. This address can accumulate
the aggressive requirement of address allotment for almost everything in
this world. According to an estimate, 1564 addresses can be allocated to
every square meter of this earth.
 Simplified Header
IPv6’s header has been simplified by moving all unnecessary information and options (which are present in the IPv4 header) to the end of the IPv6 header. The IPv6 header is only twice as big as the IPv4 header, even though an IPv6 address is four times longer.
 End-to-end Connectivity
Every system now has unique IP address and can traverse through the
Internet without using NAT or other translating components. After IPv6 is
fully implemented, every host can directly reach other hosts on the Internet,
with some limitations involved like Firewall, organization policies, etc.
 Auto-configuration
IPv6 supports both stateful and stateless auto configuration mode of its host
devices. This way, absence of a DHCP server does not put a halt on inter
segment communication.
 Faster Forwarding/Routing
Simplified header puts all unnecessary information at the end of the header.
The information contained in the first part of the header is adequate for a
Router to take routing decisions, thus making routing decision as quickly as
looking at the mandatory header.
 IPSec


Initially it was decided that IPv6 must have IPSec security, making it more
secure than IPv4. This feature has now been made optional.
 No Broadcast
Though Ethernet/Token Ring are considered as broadcast network because
they support Broadcasting, IPv6 does not have any broadcast support any
more. It uses multicast to communicate with multiple hosts.
 Anycast Support
This is another characteristic of IPv6. IPv6 has introduced Anycast mode of
packet routing. In this mode, multiple interfaces over the Internet are
assigned same Anycast IP address. Routers, while routing, send the packet
to the nearest destination.
 Mobility
IPv6 was designed keeping mobility in mind. This feature enables hosts
(such as mobile phone) to roam around in different geographical area and
remain connected with the same IP address. The mobility feature of IPv6
takes advantage of auto IP configuration and Extension headers.
 Enhanced Priority Support
IPv4 used 6 bits DSCP (Differential Service Code Point) and 2 bits ECN
(Explicit Congestion Notification) to provide Quality of Service but it could
only be used if the end-to-end devices support it, that is, the source and
destination device and underlying network must support it.
In IPv6, Traffic class and Flow label are used to tell the underlying routers
how to efficiently process the packet and route it.
 Smooth Transition
Large IP address scheme in IPv6 enables to allocate devices with globally
unique IP addresses. This mechanism saves IP addresses and NAT is not
required. So devices can send/receive data among each other, for example,
VoIP and/or any streaming media can be used much efficiently.
Other fact is, the header is less loaded, so routers can take forwarding
decisions and forward them as quickly as they arrive.
 Extensibility
One of the major advantages of IPv6 header is that it is extensible to add
more information in the option part. IPv4 provides only 40-bytes for


options, whereas options in IPv6 can be as much as the size of IPv6 packet
itself.
IPv6 - Addressing Modes
In computer networking, addressing mode refers to the mechanism of hosting an
address on the network. IPv6 offers several types of modes by which a single host
can be addressed. More than one host can be addressed at once or the host at the
closest distance can be addressed.

Unicast

In unicast mode of addressing, an IPv6 interface (host) is uniquely identified in a


network segment. The IPv6 packet contains both source and destination IP
addresses. A host interface is equipped with an IP address which is unique in that
network segment.When a network switch or a router receives a unicast IP packet,
destined to a single host, it sends out one of its outgoing interface which connects
to that particular host.

Multicast

The IPv6 multicast mode is same as that of IPv4. The packet destined to multiple
hosts is sent on a special multicast address. All the hosts interested in that
multicast information, need to join that multicast group first. All the interfaces
that joined the group receive the multicast packet and process it, while other hosts
not interested in multicast packets ignore the multicast information.


Anycast

IPv6 has introduced a new type of addressing, which is called Anycast addressing.
In this addressing mode, multiple interfaces (hosts) are assigned same Anycast IP
address. When a host wishes to communicate with a host equipped with an
Anycast IP address, it sends a Unicast message. With the help of complex routing
mechanism, that Unicast message is delivered to the host closest to the Sender in
terms of Routing cost.


Let’s take an example of TutorialPoints.com Web Servers, located in all


continents. Assume that all the Web Servers are assigned a single IPv6 Anycast IP
Address. Now when a user from Europe wants to reach TutorialsPoint.com the
DNS points to the server that is physically located in Europe itself. If a user from
India tries to reach Tutorialspoint.com, the DNS will then point to the Web Server
physically located in Asia. Nearest or Closest terms are used in terms of Routing
Cost.
In the above picture, when a client computer tries to reach a server, the request is
forwarded to the server with the lowest Routing Cost.
IPv6 - Headers
The wonder of IPv6 lies in its header. An IPv6 address is 4 times longer than an IPv4 address, but surprisingly, the IPv6 header is only 2 times larger than the IPv4 header. IPv6 headers have one Fixed Header and zero or more Optional (Extension) Headers. All the information that is essential for a router is kept in the Fixed Header. The Extension Headers contain optional information that helps routers understand how to handle a packet/flow.

Fixed Header


[Image:
IPv6 Fixed Header]

IPv6 fixed header is 40 bytes long and contains the following information.

S.N.  Field & Description

1 Version (4-bits): It represents the version of Internet Protocol, i.e. 0110.

2 Traffic Class (8-bits): These 8 bits are divided into two parts. The most
significant 6 bits are used for Type of Service to let the Router Known what
services should be provided to this packet. The least significant 2 bits are used
for Explicit Congestion Notification (ECN).

3 Flow Label (20-bits): This label is used to maintain the sequential flow of the
packets belonging to a communication. The source labels the sequence to help
the router identify that a particular packet belongs to a specific flow of
information. This field helps avoid re-ordering of data packets. It is designed
for streaming/real-time media.

4 Payload Length (16-bits): This field is used to tell the routers how much
information a particular packet contains in its payload. Payload is composed of
Extension Headers and Upper Layer data. With 16 bits, up to 65535 bytes can
be indicated; but if the Extension Headers contain Hop-by-Hop Extension
Header, then the payload may exceed 65535 bytes and this field is set to 0.


5 Next Header (8-bits): This field is used to indicate either the type of


Extension Header, or if the Extension Header is not present then it indicates
the Upper Layer PDU. The values for the type of Upper Layer PDU are same
as IPv4’s.

6 Hop Limit (8-bits): This field is used to stop packet to loop in the network
infinitely. This is same as TTL in IPv4. The value of Hop Limit field is
decremented by 1 as it passes a link (router/hop). When the field reaches 0 the
packet is discarded.

7 Source Address (128-bits): This field indicates the address of originator of


the packet.

8 Destination Address (128-bits): This field provides the address of intended


recipient of the packet.

Extension Headers

In IPv6, the Fixed Header contains only that much information which is
necessary, avoiding those information which is either not required or is rarely
used. All such information is put between the Fixed Header and the Upper layer
header in the form of Extension Headers. Each Extension Header is identified by a
distinct value.
When Extension Headers are used, IPv6 Fixed Header’s Next Header field points
to the first Extension Header. If there is one more Extension Header, then the first
Extension Header’s ‘Next-Header’ field points to the second one, and so on. The
last Extension Header’s ‘Next-Header’ field points to the Upper Layer Header.
Thus, all the headers points to the next one in a linked list manner.
If the Next Header field contains the value 59, it indicates that there are no
headers after this header, not even Upper Layer Header.
The following Extension Headers must be supported as per RFC 2460:


The sequence of Extension Headers should follow the order given in RFC 2460. Of these headers:
 1. some should be processed by the first and subsequent destinations;
 2. others should be processed only by the final destination.
Extension Headers are arranged one after another in a linked list manner, as depicted in the following diagram:

[Image:
Extension Headers Connected Format]

Difference between IPv4 and IPv6:


 IPv4 has a 32-bit address length, whereas IPv6 has a 128-bit address length.
 IPv4 supports manual and DHCP address configuration; IPv6 supports auto-configuration and address renumbering.
 In IPv4, end-to-end connection integrity is unachievable; in IPv6 it is achievable.
 IPv4 can generate about 4.29×10^9 addresses; the IPv6 address space is far larger, about 3.4×10^38 addresses.
 In IPv4, security depends on the application; in IPv6, IPSec is an inbuilt security feature.
 IPv4 addresses are represented in decimal; IPv6 addresses are represented in hexadecimal.
 In IPv4, fragmentation is performed by the sender and by forwarding routers; in IPv6, fragmentation is performed only by the sender.
 IPv4 has no packet flow identification; IPv6 identifies packet flows using the Flow Label field in the header.
 IPv4 has a checksum field in its header; IPv6 does not.
 IPv4 uses a broadcast message transmission scheme; IPv6 uses multicast and anycast transmission.
 IPv4 does not provide encryption and authentication facilities; IPv6 does.
 The IPv4 header is 20-60 bytes long; the IPv6 header is fixed at 40 bytes.

Classful Addressing:- The 32 bit IP address is divided into five sub-classes. These
are:
 Class A
 Class B
 Class C
 Class D
 Class E
Each of these classes has a valid range of IP addresses. Classes D and E are
reserved for multicast and experimental purposes respectively. The order of bits in
the first octet determine the classes of IP address.
IPv4 address is divided into two parts:
 Network ID
 Host ID
The class of IP address is used to determine the bits used for network ID and host
ID and the number of total networks and hosts possible in that particular class.
Each ISP or network administrator assigns IP address to each device that is
connected to its network.


Note: IP addresses are globally managed by Internet Assigned Numbers


Authority(IANA) and regional Internet registries(RIR).
Note: While finding the total number of host IP addresses, 2 IP addresses are not
counted and are therefore, decreased from the total count because the first IP
address of any network is the network number and whereas the last IP address is
reserved for broadcast IP.
Class A:
IP address belonging to class A are assigned to the networks that contain a large
number of hosts.
 The network ID is 8 bits long.
 The host ID is 24 bits long.
The higher-order bit of the first octet in class A is always set to 0. The remaining 7 bits of the first octet are used to determine the network ID. The 24 bits of host ID are used to determine the host within a network. The default subnet mask for class A is 255.0.0.0. Therefore, class A has a total of:

 2^7 − 2 = 126 network IDs (2 addresses are subtracted because 0.x.x.x and 127.x.y.z are special addresses)
 2^24 − 2 = 16,777,214 host IDs per network
IP addresses belonging to class A range from 1.x.x.x – 126.x.x.x

Class B:
IP address belonging to class B are assigned to the networks that ranges from
medium-sized to large-sized networks.
 The network ID is 16 bits long.
 The host ID is 16 bits long.
The higher-order bits of the first octet of class B IP addresses are always set to 10. The remaining 14 bits are used to determine the network ID. The 16 bits of host ID are used to determine the host within a network. The default subnet mask for class B is 255.255.0.0. Class B has a total of:
 2^14 = 16,384 network addresses
 2^16 − 2 = 65,534 host addresses per network
IP addresses belonging to class B range from 128.0.x.x – 191.255.x.x.

Class C:
IP address belonging to class C are assigned to small-sized networks.
 The network ID is 24 bits long.
 The host ID is 8 bits long.
The higher-order bits of the first octet of class C IP addresses are always set to 110. The remaining 21 bits are used to determine the network ID. The 8 bits of host ID are used to determine the host within a network. The default subnet mask for class C is 255.255.255.0. Class C has a total of:
 2^21 = 2,097,152 network addresses
 2^8 − 2 = 254 host addresses per network
IP addresses belonging to class C range from 192.0.0.x – 223.255.255.x.

Class D:

IP addresses belonging to class D are reserved for multicasting. The higher-order bits of the first octet of class D addresses are always set to 1110. The remaining bits form the multicast group address that interested hosts recognize.
Class D does not possess any subnet mask. IP addresses belonging to class D range from 224.0.0.0 – 239.255.255.255.


Class E:
IP addresses belonging to class E are reserved for experimental and research purposes. They range from 240.0.0.0 – 255.255.255.254. This class does not have any subnet mask. The higher-order bits of the first octet of class E are always set to 1111.
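The class of an address can be determined from the first octet alone, as the following minimal Python sketch illustrates (the sample addresses are arbitrary):

    def ipv4_class(ip: str) -> str:
        # Classify an IPv4 address by the value of its first octet (classful rules).
        first = int(ip.split(".")[0])
        if first == 0 or first == 127:
            return "Special (reserved / loop-back)"
        if first <= 126:
            return "Class A"
        if first <= 191:
            return "Class B"
        if first <= 223:
            return "Class C"
        if first <= 239:
            return "Class D (multicast)"
        return "Class E (experimental)"

    print(ipv4_class("10.0.0.1"))      # Class A
    print(ipv4_class("172.16.37.5"))   # Class B
    print(ipv4_class("224.0.0.22"))    # Class D (multicast)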

Range of special IP addresses:

169.254.0.0 – 169.254.255.255 (169.254.0.0/16) : link-local addresses
127.0.0.0 – 127.255.255.255 (127.0.0.0/8) : loop-back addresses
0.0.0.0 – 0.255.255.255 (0.0.0.0/8) : used to communicate within the current network.
Rules for assigning Host ID:
Host ID’s are used to identify a host within a network. The host ID are assigned
based on the following rules:
 Within any network, the host ID must be unique to that network.
 Host ID in which all bits are set to 0 cannot be assigned because this host ID is
used to represent the network ID of the IP address.
 Host ID in which all bits are set to 1 cannot be assigned because this host ID is
reserved as a broadcast address to send packets to all the hosts present on that
particular network.
Rules for assigning Network ID:
Hosts that are located on the same physical network are identified by the network
ID, as all host on the same physical network is assigned the same network ID. The
network ID is assigned based on the following rules:

 The network ID cannot start with 127 because 127 belongs to class A address and
is reserved for internal loop-back functions.
 All bits of network ID set to 1 are reserved for use as an IP broadcast address and
therefore, cannot be used.
 All bits of network ID set to 0 are used to denote a specific host on the local
network and are not routed and therefore, aren’t used.
Summary of Classful addressing :


Problems with Classful Addressing:

The problem with the classful addressing method is that millions of class A addresses are wasted, many class B addresses are wasted, whereas the number of addresses available in a class C network is so small that it cannot cater to the needs of many organizations. Class D addresses are used for multicast routing and are therefore available as a single block only. Class E addresses are reserved.

Subnets:- A subnet, or subnetwork, is a segmented piece of a larger network.


More specifically, subnets are a logical partition of an IP network into multiple,
smaller network segments. The Internet Protocol (IP) is the method for sending
data from one computer to another over the internet. Each computer, or host, on the
internet has at least one IP address as a unique identifier.


Organizations will use a subnet to subdivide large networks into smaller, more efficient subnetworks. One goal of a subnet is to split a large network into a grouping of smaller, interconnected networks to help minimize traffic. This way, traffic doesn't have to flow through unnecessary routes, which increases network speeds.

Subnetting, the segmentation of a network address space, improves address


allocation efficiency. It is described in the formal document, Request for
Comments 950, and is tightly linked to IP addresses, subnet masks and Classless
Inter-Domain Routing (CIDR) notation. 

How subnets work


Each subnet allows its connected devices to communicate with each other, while
routers are used to communicate between subnets. The size of a subnet depends on
the connectivity requirements and the network technology employed. A point-to-
point subnet allows two devices to connect, while a data center subnet might be
designed to connect many more devices.

Each organization is responsible for determining the number and size of the
subnets it creates, within the limits of the address space available for its use.
Additionally, the details of subnet segmentation within an organization remain
local to that organization.

An IP address is divided into two fields: a Network Prefix (also called the Network
ID) and a Host ID. What separates the Network Prefix and the Host ID depends on
whether the address is a Class A, B or C address. Figure 1 shows an IPv4 Class B
address, 172.16.37.5. Its Network Prefix is 172.16.0.0, and the Host ID is 37.5.


[Image: Figure 1 - A Class B IP address]

The subnet mechanism uses a portion of the Host ID field to identify individual subnets. Figure 2, for example, shows the third octet of the 172.16.0.0 network being used as a Subnet ID. A subnet mask is used to identify the part of the address that should be used as the Subnet ID. The subnet mask is applied to the full network address using a binary AND operation: an AND produces an output of 1 only when both input bits are 1; otherwise the output is 0. Applying the mask in this way yields the Subnet ID.

Figure 2 shows the AND of the IP address, as well as the mask producing the
Subnet ID. Any remaining address bits identify the Host ID. The subnet in Figure 2
is identified as 172.16.2.0, and the Host ID is 5. In practice, network staff will
typically refer to a subnet by just the Subnet ID. It would be common to hear
someone say, "Subnet 2 is having a problem today," or, "There is a problem with
the dot-two subnet."


[Image: Figure 2 - Subnet ID]

The Subnet ID is used by routers to determine the best route between subnetworks.
Figure 3 shows the 172.16.0.0 network, with the third grouping as the Subnet ID.
Four of the 256 possible subnets are shown connected to one router. Each subnet is
identified either by its Subnet ID or the subnet address with the Host ID set to .0.
The router interfaces are assigned the Host ID of .1 -- e.g., 172.16.2.1.

When the router receives a packet addressed to a host on a different subnet than the
sender -- host A to host C, for example -- it knows the subnet mask and uses it to
determine the Subnet ID of host C. It examines its routing table to find the
interface connected to host C's subnet and forwards the packet on that interface.

Subnet segmentation
A subnet itself also may be segmented into smaller subnets, giving organizations
the flexibility to create smaller subnets for things like point-to-point links or for
subnetworks that support a few devices. The example below uses an 8-bit Subnet
ID. The number of bits in the subnet mask depends on the organization's


requirements for subnet size and the number of subnets. Other subnet mask lengths
are common. While this adds some complexity to network addressing, it
significantly improves the efficiency of network address utilization.

[Image: Subnet segmentation]

A subnet can be delegated to a suborganization, which itself may apply the


subnetting process to create additional subnets, as long as sufficient address space
is available. Subnetting performed by a delegated organization is hidden from
other organizations. As a result, the Subnet ID field length and where subnets are
assigned can be hidden from the parent (delegating) organization, a key
characteristic that allows networks to be scaled up to large sizes.

In modern routing architectures, routing protocols distribute the subnet mask with
routes and provide mechanisms to summarize groups of subnets as a single routing


table entry. Older routing architectures relied on the default Class A, B and C IP address classification to determine the mask to use. CIDR notation is used to identify the Network Prefix and mask, where the number after the slash indicates the number of one bits in the mask (e.g., 172.16.2.0/24). This approach is known as Classless Inter-Domain Routing (CIDR) with Variable-Length Subnet Masking (VLSM). Subnets and subnetting are used in both IPv4 and IPv6 networks, based on the same principles.
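A minimal Python sketch of this kind of subnetting, reusing the 172.16.0.0 example with an assumed 8-bit Subnet ID:

    import ipaddress

    # Split 172.16.0.0/16 into /24 subnets (an 8-bit Subnet ID).
    net = ipaddress.ip_network("172.16.0.0/16")
    subnets = list(net.subnets(new_prefix=24))

    print(len(subnets))               # 256 possible subnets
    print(subnets[2])                 # 172.16.2.0/24  -- "the dot-two subnet"
    print(subnets[2].netmask)         # 255.255.255.0

    # A router would match a destination host against the subnet prefix.
    host = ipaddress.ip_address("172.16.2.5")
    print(host in subnets[2])         # True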

Beneficial uses of subnets


 Reallocating IP addresses. Each class has a limited number of host
allocations; for example, networks with more than 254 devices need a Class B
allocation. If a network administrator is working with a Class B or C network
and needs to allocate 150 hosts for three physical networks located in three
different cities, they would need to either request more address blocks for each
network -- or divide a network into subnets that enable administrators to use
one block of addresses on multiple physical networks.
 Relieving network congestion. If much of an organization's traffic is meant to
be shared regularly between the same cluster of computers, placing them on the
same subnet can reduce network traffic. Without a subnet, all computers and
servers on the network would see data packets from every other computer.
 Improving network security. Subnetting allows network administrators to
reduce network-wide threats by quarantining compromised sections of the
network and by making it more difficult for trespassers to move around an
organization's network.
IPv6 - Address Types & Formats

Hexadecimal Number System

Before introducing IPv6 Address format, we shall look into Hexadecimal Number
System. Hexadecimal is a positional number system that uses radix (base) of 16.
To represent the values in readable format, this system uses 0-9 symbols to
represent values from zero to nine and A-F to represent values from ten to fifteen.
Every digit in Hexadecimal can represent values from 0 to 15.


[Image: Conversion Table]

Address Structure

An IPv6 address is made of 128 bits divided into eight 16-bits blocks. Each block
is then converted into 4-digit Hexadecimal numbers separated by colon symbols.
For example, given below is a 128 bit IPv6 address represented in binary format
and divided into eight 16-bits blocks:
0010000000000001 0000000000000000 0011001000111000 1101111111100001
0000000001100011 0000000000000000 0000000000000000 1111111011111011
Each block is then converted into Hexadecimal and separated by ‘:’ symbol:
2001:0000:3238:DFE1:0063:0000:0000:FEFB
Even after converting into Hexadecimal format, IPv6 address remains long. IPv6
provides some rules to shorten the address. The rules are as follows:
Rule.1: Discard leading Zero(es):
In Block 5, 0063, the leading two 0s can be omitted, such as (5th block):
2001:0000:3238:DFE1:63:0000:0000:FEFB
Rule.2: If two or more blocks contain consecutive zeroes, omit them all and replace them with a double colon sign ::, such as (6th and 7th block):

2001:0000:3238:DFE1:63::FEFB
Consecutive blocks of zeroes can be replaced only once by :: so if there are still
blocks of zeroes in the address, they can be shrunk down to a single zero, such as
(2nd block):
2001:0:3238:DFE1:63::FEFB
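Python's ipaddress module applies these same shortening rules, which gives a quick way to check the result (a minimal sketch; note that Python prints hexadecimal digits in lower case):

    import ipaddress

    addr = ipaddress.IPv6Address("2001:0000:3238:DFE1:0063:0000:0000:FEFB")

    print(addr.compressed)   # 2001:0:3238:dfe1:63::fefb   (rules 1 and 2 applied)
    print(addr.exploded)     # 2001:0000:3238:dfe1:0063:0000:0000:fefb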

Interface ID

IPv6 has three different types of Unicast Address scheme. The second half of the
address (last 64 bits) is always used for Interface ID. The MAC address of a
system is composed of 48-bits and represented in Hexadecimal. MAC addresses
are considered to be uniquely assigned worldwide. Interface ID takes advantage of
this uniqueness of MAC addresses. A host can auto-configure its Interface ID by
using IEEE’s Extended Unique Identifier (EUI-64) format. First, a host divides its
own MAC address into two 24-bits halves. Then 16-bit Hex value 0xFFFE is
sandwiched into those two halves of MAC address, resulting in EUI-64 Interface
ID.

[Image: EUI-64 Interface ID]


Conversion of EUI-64 ID into IPv6 Interface Identifier
To convert an EUI-64 ID into an IPv6 Interface Identifier, the 7th bit of the first byte (the universal/local bit) is complemented. For example:


[Image: IPV6 Interface ID]
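A minimal Python sketch of this EUI-64 construction, using the MAC address 60:33:4b:15:24:6f as a sample:

    def eui64_interface_id(mac: str) -> str:
        # Build a modified EUI-64 interface ID from a 48-bit MAC address.
        octets = [int(part, 16) for part in mac.split(":")]
        octets[0] ^= 0x02                               # complement the 7th bit (U/L bit)
        eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]  # sandwich FFFE between the halves
        return ":".join("{:02x}{:02x}".format(eui64[i], eui64[i + 1])
                        for i in range(0, 8, 2))

    print(eui64_interface_id("60:33:4b:15:24:6f"))      # 6233:4bff:fe15:246f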

Global Unicast Address

This address type is equivalent to IPv4’s public address. Global Unicast addresses
in IPv6 are globally identifiable and uniquely addressable.

[Image: Global Unicast


Address]
Global Routing Prefix: The most significant 48 bits are designated as the Global Routing Prefix, which is assigned to a specific autonomous system. The three most significant bits of the Global Routing Prefix are always set to 001.

Link-Local Address

Auto-configured IPv6 address is known as Link-Local address. This address


always starts with FE80. The first 16 bits of link-local address is always set to
1111 1110 1000 0000 (FE80). The next 48-bits are set to 0, thus:

[Image:
Link-Local Address]
Link-local addresses are used for communication among IPv6 hosts on a link
(broadcast segment) only. These addresses are not routable, so a Router never
forwards these addresses outside the link.


Unique-Local Address

This type of IPv6 address is globally unique, but it should be used in local
communication. The second half of this address contain Interface ID and the first
half is divided among Prefix, Local Bit, Global ID and Subnet ID.

[Image:
Unique-Local Address]
The prefix is always set to 1111 110. The L bit is set to 1 if the address is locally assigned; so far, the meaning of the L bit set to 0 is not defined. Therefore, in practice a Unique Local IPv6 address always starts with ‘FD’.

Scope of IPv6 Unicast Addresses:

[Image: IPv6
Unicast Address Scope]
The scope of a link-local address is limited to its segment. Unique Local Addresses are globally unique but are not routed over the Internet, limiting their scope to an organization’s boundary. Global Unicast addresses are globally unique and recognizable; they form the essence of addressing in the IPv6 Internet.
Unicast Addresses


Figure 4-6 diagrams the three types of addresses: unicast, multicast, and anycast.
We begin by looking at unicast addresses. Don’t be intimidated by all the different
types of unicast addresses. The most significant types are global unicast addresses,
which are equivalent to IPv4 public addresses, and link-local addresses. These
address types are discussed in detail in Chapters 5 and 6.

Figure 4-6 IPv6 Address Types: Unicast Addresses

A unicast address uniquely identifies an interface on an IPv6 device. A packet sent


to a unicast address is received by the interface that is assigned to that address.
Similar to IPv4, a source IPv6 address must be a unicast address.

NOTE

Notice that there is no broadcast address shown in Figure 4-6. Remember that IPv6
does not include a broadcast address.

This section covers the different types of unicast addresses, as illustrated in Figure
4-6. The following is a quick preview of each type of unicast address discussed in
this section:

 Global unicast: A routable address in the IPv6 Internet, similar to a public


IPv4 address (covered in more detail in Chapter 5).
 Link-local: Used only to communicate with devices on the same local link
(covered in more detail in Chapter 6).
 Loopback: An address not assigned to any physical interface that can be
used for a host to send an IPv6 packet to itself.
 Unspecified address: Used only as a source address and indicates the
absence of an IPv6 address.
 Unique local: Similar to a private address in IPv4 (RFC 1918) and not
intended to be routable in the IPv6 Internet. However, unlike RFC 1918
addresses, these addresses are not intended to be statefully translated to a
global unicast address.


 IPv4 embedded: An IPv6 address that carries an IPv4 address in the low-
order 32 bits of the address.

Global Unicast Address

Global unicast addresses (GUAs), also known as aggregatable global unicast


addresses, are globally routable and reachable in the IPv6 Internet. They are
equivalent to public IPv4 addresses. They play a significant role in the IPv6
addressing architecture. One of the main motivations for transitioning to IPv6 is
the exhaustion of its IPv4 counterpart. As you can see in Figure 4-6, a GUA
address is only one of several types of IPv6 unicast addresses.

Figure 4-7 shows the generic structure of a GUA, which has three fields:

 Global Routing Prefix: The Global Routing Prefix is the prefix or network


portion of the address assigned by the provider, such as an ISP, to the
customer site.
 Subnet ID: The Subnet ID is a separate field for allocating subnets within
the customer site. Unlike with IPv4, it is not necessary to borrow bits from
the Interface ID (host portion) to create subnets. The number of bits in the
Subnet ID falls between where the Global Routing Prefix ends and where
the Interface ID begins. This makes subnetting simple and manageable.
 Interface ID: The Interface ID identifies the interface on the subnet,
equivalent to the host portion of an IPv4 address. The Interface ID in most
cases is 64 bits.

Figure 4-7 Structure of a GUA Address

Figure 4-7 illustrates the more general structure, without the specific sizes for any
of the three parts. The first 3 bits of a GUA address begin with the binary value
001, which results in the first hexadecimal digit becoming a 2 or a 3. (We look at
the structure of the GUA address more closely in Chapter 5.)

There are several ways a device can be configured with a global unicast address:

 Manually configured.

 Stateless Address Autoconfiguration.


 Stateful DHCPv6.

Example 4-1 demonstrates how to view the global unicast address on Windows
and Mac OS operating systems, using the ipconfig and ifconfig commands,
respectively. The ifconfig command is also used with the Linux operating system
and provides similar output.

NOTE

You may see multiple IPv6 global unicast addresses including one or more
temporary addresses. You’ll learn more about this in Chapter 9.

Example 4-1 Viewing IPv6 Addresses on Windows and Mac OS


Windows-OS> ipconfig
Ethernet adapter Local Area Connection:
   Connection-specific DNS Suffix  .  :
   ! IPv6 GUA
   IPv6 Address. . . . . . . . . . .  : 2001:db8:cafe:1:d0f8:9ff6:4201:7086  
   ! IPv6 Link-Local
   Link-local IPv6 Address . . . . .  : fe80::d0f8:9ff6:4201:7086%11         
   IPv4 Address. . . . . . . . . . .  : 192.168.1.100
   Subnet Mask . . . . . . . . . . .  : 255.255.255.0
   ! IPv6 Default Gateway
   Default Gateway . . . . . . . . .  : fe80::1%11          
                                        192.168.1.1
-----------------------------------------------------------------------------------
Mac-OS$ ifconfig
en1:
flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST>
mtu 1500
   ether 60:33:4b:15:24:6f
   ! IPv6 Link-Local
   inet6 fe80::6233:4bff:fe15:246f%en1 prefixlen 64 scopeid 0x5          
   inet 192.168.1.111 netmask 0xffffff00 broadcast 192.168.1.255
   ! IPv6 GUA
   inet6 2001:db8:cafe:1:4bff:fe15:246f prefixlen 64 autoconf            
   media: autoselect
   status: active


This section has provided just a brief introduction to global unicast addresses.
Remember that IPv6 introduced a lot of changes to IP. Devices may obtain more
than one GUA address for reasons such as privacy. For a network administrator
needing to manage and control access within a network, having these additional
addresses that are not administered through stateful DHCPv6 may be undesirable.
Chapter 11 discusses devices obtaining or creating multiple global unicast
addresses and various options to ensure that devices only obtain a GUA address
from a stateful DHCPv6 server.

ROUTING ALGORITHMS:-
Distance Vector Routing:- A distance-vector routing (DVR) protocol requires
that a router inform its neighbors of topology changes periodically. Historically
known as the old ARPANET routing algorithm (or known as Bellman-Ford
algorithm).
Bellman Ford Basics – Each router maintains a Distance Vector table containing
the distance between itself and ALL possible destination nodes. Distances,based on
a chosen metric, are computed using information from the neighbors’ distance
vectors.
Information kept by a DV router -
 Each router has an ID.
 Associated with each link connected to a router, there is a link cost (static or dynamic).
 Intermediate hops.

Distance Vector Table Initialization -


 Distance to itself = 0
 Distance to ALL other routers = infinity number.

Distance Vector Algorithm –


1. A router transmits its distance vector to each of its neighbors in a routing
packet.
2. Each router receives and saves the most recently received distance vector from
each of its neighbors.
3. A router recalculates its distance vector when:
 It receives a distance vector from a neighbor containing different
information than before.
 It discovers that a link to a neighbor has gone down.
The DV calculation is based on minimizing the cost to each destination

Dx(y) = estimate of the least cost from x to y
C(x,v) = cost from node x to its neighbor v (node x knows the cost to each of its neighbors)
Dx = [Dx(y): y ∈ N] = distance vector maintained by node x
Node x also maintains its neighbors' distance vectors –
for each neighbor v, x maintains Dv = [Dv(y): y ∈ N]
Note –

 From time to time, each node sends its own distance vector estimate to its
neighbors.
 When a node x receives a new DV estimate from any neighbor v, it saves v’s
distance vector and updates its own DV using the Bellman-Ford equation (a
small sketch of this update follows below):
 Dx(y) = min { C(x,v) + Dv(y), Dx(y) } for each node y ∈ N
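
For illustration only (not part of the original text), here is a minimal Python sketch of one such Bellman-Ford update at node x; the node names and link costs are assumed purely for the example:

INF = float("inf")

def update_distance_vector(own_dv, cost_to_neighbor, neighbor_dv):
    # One round of Dx(y) = min( Dx(y), min over neighbors v of C(x,v) + Dv(y) ).
    new_dv = dict(own_dv)
    for y in own_dv:
        for v, c_xv in cost_to_neighbor.items():
            candidate = c_xv + neighbor_dv.get(v, {}).get(y, INF)
            if candidate < new_dv[y]:
                new_dv[y] = candidate
    return new_dv

# Initialization: distance to itself = 0, to all other routers = infinity.
own_dv = {"X": 0, "Y": INF, "Z": INF}
cost_to_neighbor = {"Y": 2, "Z": 7}                  # C(X,Y) = 2, C(X,Z) = 7 (assumed)
neighbor_dv = {"Y": {"X": 2, "Y": 0, "Z": 1},
               "Z": {"X": 7, "Y": 1, "Z": 0}}
print(update_distance_vector(own_dv, cost_to_neighbor, neighbor_dv))
# {'X': 0, 'Y': 2, 'Z': 3} -> X reaches Z more cheaply via Y (2 + 1) than directly (7)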
Example – Consider three routers X, Y and Z as shown in the figure. Each router
has its own routing table, and every routing table contains the distance to the
destination nodes.


Consider router X. X will share its routing table with its neighbors, and the
neighbors will share their routing tables with X; the distance from node X to each
destination will then be calculated using the Bellman-Ford equation:
Dx(y) = min { C(x,v) + Dv(y) } for each node y ∈ N
As we can see, the distance from X to Z is smaller when Y is used as an intermediate
node (hop), so the routing table of X will be updated accordingly.


Similarly for Z also –


Finally the routing table for all –

Advantages of Distance Vector routing –


 It is simpler to configure and maintain than link state routing.
Disadvantages of Distance Vector routing –
 It is slower to converge than link state.
 It is at risk from the count-to-infinity problem.
 It creates more traffic than link state since a hop count change must be
propagated to all routers and processed on each router. Hop count
updates take place on a periodic basis, even if there are no changes in
the network topology, so bandwidth-wasting broadcasts still occur.
 For larger networks, distance vector routing results in larger routing
tables than link state since each router must know about all other
routers. This can also lead to congestion on WAN links.


Note – Distance Vector routing uses UDP (User Datagram Protocol) for transport.

Link State Routing

Link state routing is a technique in which each router shares the knowledge of its
neighborhood with every other router in the internetwork.

The three keys to understand the Link State Routing algorithm:

o Knowledge about the neighborhood: Instead of sending its entire routing table, a
router sends information about its neighborhood only. A router broadcasts its
identity and the cost of its directly attached links to the other routers.
o Flooding: Each router sends this information to every other router on the
internetwork, not just to its neighbors. This process is known as Flooding: every
router that receives the packet sends copies to all its neighbors, and finally
each and every router receives a copy of the same information.
o Information sharing: A router sends the information to every other router
only when a change occurs in the information.

Link State Routing has two phases:

Reliable Flooding

o Initial state: Each node knows the cost of its neighbors.


o Final state: Each node knows the entire graph.

Route Calculation

Each node uses Dijkstra's algorithm on the graph to calculate the optimal routes to
all nodes.

o The Link state routing algorithm is also known as Dijkstra's algorithm which
is used to find the shortest path from one node to every other node in the
network.


o Dijkstra's algorithm is iterative, and it has the property that after the
kth iteration of the algorithm, the least-cost paths are known for k
destination nodes.

Let's describe some notations:

o c(i , j): Link cost from node i to node j. If nodes i and j are not directly
linked, then c(i , j) = ∞.
o D(v): It defines the cost of the path from the source node to destination v that
currently has the least cost.
o P(v): It defines the previous node (neighbor of v) along the current least-cost
path from the source to v.
o N: It is the total number of nodes available in the network.

Algorithm

Initialization
N = {A} // A is a root node.
for all nodes v
if v adjacent to A
then D(v) = c(A,v)
else D(v) = infinity
loop
find w not in N such that D(w) is a minimum.
Add w to N
Update D(v) for all v adjacent to w and not in N:
D(v) = min(D(v) , D(w) + c(w,v))
Until all nodes in N

In the above algorithm, an initialization step is followed by the loop. The number
of times the loop is executed is equal to the total number of nodes available in the
network.
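
As an illustration only (not from the text), a compact Python version of this loop using a priority queue is sketched below; the example graph and its costs are assumed:

import heapq

def dijkstra(graph, source):
    # graph[i][j] = c(i, j); returns D(v), the least cost from source to every node v.
    dist = {node: float("inf") for node in graph}
    dist[source] = 0
    visited = set()                        # the set N from the pseudocode
    heap = [(0, source)]
    while heap:
        d, w = heapq.heappop(heap)         # w not in N with minimum D(w)
        if w in visited:
            continue
        visited.add(w)                     # add w to N
        for v, cost in graph[w].items():   # update D(v) for all v adjacent to w
            if d + cost < dist[v]:
                dist[v] = d + cost
                heapq.heappush(heap, (dist[v], v))
    return dist

graph = {"A": {"B": 2, "C": 5},
         "B": {"A": 2, "C": 1, "D": 4},
         "C": {"A": 5, "B": 1, "D": 1},
         "D": {"B": 4, "C": 1}}
print(dijkstra(graph, "A"))   # {'A': 0, 'B': 2, 'C': 3, 'D': 4}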

Disadvantage:


Heavy traffic is created in link state routing due to Flooding. Flooding can cause
infinite looping; this problem can be solved by using the Time-to-live (TTL) field.

Path Vector Routing:- A path-vector routing protocol is a network routing


protocol which maintains the path information that gets updated dynamically.
Updates that have looped through the network and returned to the same node are
easily detected and discarded. This algorithm is sometimes used in Bellman–Ford
routing algorithms to avoid "Count to Infinity" problems.
It is different from the distance vector routing and link state routing. Each entry in
the routing table contains the destination network, the next router and the path to
reach the destination.
Border Gateway Protocol (BGP) is an example of a path vector protocol. In BGP,
the autonomous system boundary routers (ASBR) send path-vector messages to
advertise the reachability of networks. Each router that receives a path vector
message must verify the advertised path according to its policy. If the message
complies with its policy, the router modifies its routing table and the message
before sending the message to the next neighbor. It modifies the routing table to
maintain the autonomous systems that are traversed in order to reach the
destination system. It modifies the message to add its AS number and to replace
the next router entry with its identification.
Exterior Gateway Protocol (EGP) does not use path vectors.
It has three phases:

 Initiation
 Sharing
 Updating
Hierarchical Routing Protocol :
Hierarchical Routing is a method of routing in networks that is based on
hierarchical addressing. Most Transmission Control Protocol/Internet Protocol
(TCP/IP) routing is based on a two-level hierarchy in which an IP address is
divided into a network portion and a host portion. Gateways use only the network
portion of an IP address until the datagram reaches a gateway that can deliver it
directly.
It addresses the growth of routing tables. Routers are further divided into
regions, and each router knows the routes of its own region only. It works like
telephone routing.


 Example –
City, State, Country, Continent.

RIP:- Routing Information Protocol (RIP) is a dynamic routing protocol which


uses hop count as a routing metric to find the best path between the source and
the destination network. It is a distance vector routing protocol which has AD
value 120 and works on the application layer of OSI model. RIP uses port number
520.
Hop Count :
Hop count is the number of routers occurring in between the source and
destination network. The path with the lowest hop count is considered as the best
route to reach a network and therefore placed in the routing table. RIP prevents
routing loops by limiting the number of hops allowed in a path from source to
destination. The maximum hop count allowed for RIP is 15, and a hop count of 16
is considered network unreachable.
Features of RIP :
1. Updates of the network are exchanged periodically.
2. Updates (routing information) are always broadcast.
3. Full routing tables are sent in updates.
4. Routers always trust routing information received from neighbor routers.
This is also known as routing on rumours.
RIP versions :
There are three versions of the Routing Information Protocol – RIP Version 1, RIP
Version 2 and RIPng.

RIP v1                          RIP v2                          RIPng

Sends updates as broadcast      Sends updates as multicast      Sends updates as multicast

Broadcast at 255.255.255.255    Multicast at 224.0.0.9          Multicast at FF02::9 (RIPng can
                                                                only run on IPv6 networks)

Doesn't support authentication  Supports authentication of      –
of update messages              RIPv2 update messages

Classful routing protocol       Classless protocol, supports    Classless updates are sent
                                classful
RIP v1 is known as Classful Routing Protocol because it doesn’t send
information of subnet mask in its routing update.
RIP v2 is known as Classless Routing Protocol because it sends information of
subnet mask in its routing update.
>> Use debug command to get the details :
# debug ip rip
>> Use this command to show all routes configured in router, say for router R1 :
R1# show ip route
>> Use this command to show all protocols configured in router, say for router
R1 :
R1# show ip protocols
 
Configuration :


Consider the above given topology which has 3-routers R1, R2, R3. R1 has IP
address 172.16.10.6/30 on s0/0/1, 192.168.20.1/24 on fa0/0. R2 has IP address
172.16.10.2/30 on s0/0/0, 192.168.10.1/24 on fa0/0. R3 has IP address
172.16.10.5/30 on s0/1, 172.16.10.1/30 on s0/0, 10.10.10.1/24 on fa0/0.
Configure RIP for R1 :
R1(config)# router rip
R1(config-router)# network 192.168.20.0
R1(config-router)# network 172.16.10.4
R1(config-router)# version 2
R1(config-router)# no auto-summary
Note : no auto-summary command disables the auto-summarisation. If we don’t
select no auto-summary, then subnet mask will be considered as classful in
Version 1.
Configure RIP for R2 :
R2(config)# router rip
R2(config-router)# network 192.168.10.0
R2(config-router)# network 172.16.10.0
R2(config-router)# version 2
R2(config-router)# no auto-summary
Similarly, Configure RIP for R3 :
R3(config)# router rip
R3(config-router)# network 10.10.10.0
R3(config-router)# network 172.16.10.4
R3(config-router)# network 172.16.10.0
R3(config-router)# version 2
R3(config-router)# no auto-summary
 
RIP timers :
 Update timer : The default timing for routing information being exchanged
by the routers operating RIP is 30 seconds. Using Update timer, the routers
exchange their routing table periodically.
 Invalid timer : If no update is received for 180 seconds, the destination route
is considered invalid. In this scenario, the router marks the hop count as 16
for that route.
 Hold down timer : This is the time for which the router waits for neighbour
router to respond. If the router isn’t able to respond within a given time then it
is declared dead. It is 180 seconds by default.


 Flush timer : It is the time after which the route entry is flushed from the
routing table if no update is received. It is 60 seconds by default. This timer
starts after the route has been declared invalid, so the entry is removed after
180 + 60 = 240 seconds in total.
Note that all these times are adjustable. Use this command to change the timers :
R1(config-router)# timers basic
R1(config-router)# timers basic 20 80 80 90

OSPF:- Open Shortest Path First (OSPF) is a link-state routing protocol that is
used to find the best path between the source and the destination router using its
own Shortest Path First (SPF) algorithm. OSPF was developed by the Internet
Engineering Task Force (IETF) as one of the Interior Gateway Protocols (IGPs),
i.e., protocols which aim at moving the packet within a large autonomous system
or routing domain.
It is a network layer protocol which works on the protocol number 89 and uses
AD value 110. OSPF uses multicast address 224.0.0.5 for normal communication
and 224.0.0.6 for update to designated router(DR)/Backup Designated Router
(BDR).
OSPF terms –
1. Router ID – It is the highest active IP address present on the router. First,
the highest loopback address is considered. If no loopback is configured, then
the highest active IP address on an interface of the router is considered.
2. Router priority – It is an 8-bit value assigned to a router operating OSPF, used
to elect the DR and BDR in a broadcast network.
3. Designated Router (DR) – It is elected to minimize the number of adjacencies
formed. The DR distributes the LSAs to all the other routers. The DR is elected
in a broadcast network, to which all the other routers share their DBDs. In a
broadcast network, a router requests an update from the DR, and the DR responds
to that request with the update.
4. Backup Designated Router (BDR) – BDR is backup to DR in a broadcast
network. When DR goes down, BDR becomes DR and performs its functions.
DR and BDR election – DR and BDR election takes place in broadcast network
or multi-access network. Here are the criteria for the election:
1. Router having the highest router priority will be declared as DR.
2. If there is a tie in router priority, then the highest router ID is considered.
First, the highest loopback address is considered. If no loopback is configured,
then the highest active IP address on an interface of the router is considered.
(A small sketch of this selection rule follows below.)
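
A hedged Python sketch of just these two criteria (highest priority wins, ties broken by the highest router ID) is shown below; the router names, priorities and IDs are assumed, and real OSPF elections involve additional rules not modelled here:

def elect_dr_bdr(routers):
    # routers: list of (name, priority, router_id); router_id is a dotted-decimal string.
    def key(router):
        name, priority, router_id = router
        return (priority, tuple(int(octet) for octet in router_id.split(".")))
    ordered = sorted(routers, key=key, reverse=True)
    return ordered[0][0], ordered[1][0]    # (DR, BDR)

routers = [("R1", 1, "1.1.1.1"), ("R2", 1, "2.2.2.2"), ("R3", 5, "0.0.0.3")]
print(elect_dr_bdr(routers))   # ('R3', 'R2') -- R3 wins on priority, R2 wins the ID tie-break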
OSPF states – The device operating OSPF goes through certain states. These
states are:

1. Down – In this state, no hello packets have been received on the interface.
Note – The Down state doesn’t mean that the interface is physically down.
Here, it means that the OSPF adjacency process has not started yet.
2. INIT – In this state, a hello packet has been received from the other router.
3. 2WAY – In the 2WAY state, both the routers have received the hello packets
from other routers. Bidirectional connectivity has been established.
Note – In between the 2WAY state and Exstart state, the DR and BDR
election takes place.
4. Exstart – In this state, NULL DBDs are exchanged. In this state, the master and
slave election takes place. The router having the higher router ID becomes the
master while the other becomes the slave. This election decides which router will
send its DBD first (routers that have formed a neighbourship take part in
this election).
5. Exchange – In this state, the actual DBDs are exchanged.
6. Loading – In this state, LSR, LSU and LSA (Link State Acknowledgement)
are exchanged.
Important – When a router receives a DBD from another router, it compares its
own DBD with the other router's DBD. If the received DBD is more updated
than its own DBD, then the router sends an LSR to the other router stating
which links are needed. The other router replies with an LSU containing the
updates that are needed. In return, the router replies with a Link State
Acknowledgement.
7. Full – In this state, synchronization of all the information takes place. OSPF
routing can begin only after the Full state.

BGP:- Border Gateway Protocol (BGP) is used to exchange routing information
for the internet and is the protocol used between ISPs, which belong to different
ASes. The protocol can connect together any internetwork of autonomous systems
using an arbitrary topology. The only requirement is that each AS has at least one
router that is able to run BGP and that this router connects to at least one other
AS’s BGP router. BGP’s main function is to exchange network reachability
information with other BGP systems. Border Gateway Protocol constructs an
autonomous systems’ graph based on the information exchanged between BGP
routers.
Characteristics of Border Gateway Protocol (BGP):
 Inter-Autonomous System Configuration: The main role of BGP is to
provide communication between two autonomous systems.
 BGP supports Next-Hop Paradigm.


 Coordination among multiple BGP speakers within the AS (Autonomous


System).
 Path Information: BGP advertisements also include path information, along
with the reachable destination and next destination pair.
 Policy Support: BGP can implement policies that can be configured by the
administrator. For example, a router running BGP can be configured to distinguish
between the routes that are known within the AS and those that are known
from outside the AS.
 Runs Over TCP.
 BGP conserve network Bandwidth.
 BGP supports CIDR.
 BGP also supports Security.
Functionality of Border Gateway Protocol (BGP):
BGP peers performs 3 functions, which are given below.
1. The first function consists of initial peer acquisition and authentication. Both
peers establish a TCP connection and perform a message exchange that
guarantees both sides have agreed to communicate.
2. The second function mainly focuses on sending negative or positive
reachability information.
3. The third function verifies that the peers and the network connection between
them are functioning correctly.
BGP Route Information Management Functions:
 Route Storage:
Each BGP stores information about how to reach other networks.
 Route Update:
In this task, Special techniques are used to determine when and how to use the
information received from peers to properly update the routes.
 Route Selection:
Each BGP uses the information in its route databases to select good routes to
each network on the internet network.
 Route advertisement:
Each BGP speaker regularly tells its peers what it knows about various
networks and the methods to reach them.

UNIT:-3
TRANSPORT LAYER:-


Transport Layer Services:- Transport Layer is the second layer of the TCP/IP
model. It is an end-to-end layer used to deliver messages to a host. It is termed as
an end-to-end layer because it provides a point-to-point connection, rather
than hop-to-hop, between the source host and the destination host to deliver the
services reliably. The unit of data encapsulation in the Transport Layer is a segment.
The standard protocols used by Transport Layer to enhance its functionalities are
TCP(Transmission Control Protocol), UDP( User Datagram Protocol),
DCCP( Datagram Congestion Control Protocol) etc.
Various responsibilities of a Transport Layer –
 Process to process delivery –
While Data Link Layer requires the MAC address (48 bits address contained
inside the Network Interface Card of every host machine) of source-
destination hosts to correctly deliver a frame and Network layer requires the
IP address for appropriate routing of packets , in a similar way Transport
Layer requires a Port number to correctly deliver the segments of data to the
correct process amongst the multiple processes running on a particular host.
A port number is a 16 bit address used to identify any client-server program
uniquely.
 End-to-end Connection between hosts –
The transport layer is also responsible for creating the end-to-end Connection
between hosts, for which it mainly uses TCP and UDP. TCP is a reliable,
connection-oriented protocol which uses a handshake protocol to establish a
robust connection between two end hosts. TCP ensures reliable delivery of
messages and is used in various applications. UDP, on the other hand, is a
stateless and unreliable protocol which ensures best-effort delivery. It is
suitable for applications which have little concern for flow or error
control and need to send bulk data, like video conferencing. It is
often used in multicasting protocols.
 Multiplexing and Demultiplexing –
Multiplexing allows simultaneous use of different applications over a network
which is running on a host. The transport layer provides this mechanism which
enables us to send packet streams from various applications simultaneously
over a network. The transport layer accepts these packets from different
processes differentiated by their port numbers and passes them to the network
layer after adding proper headers. Similarly, Demultiplexing is required at the
receiver side to obtain the data coming from various processes. Transport
receives the segments of data from the network layer and delivers it to the
appropriate process running on the receiver’s machine.


 Congestion Control –
Congestion is a situation in which too many sources over a network attempt to
send data and the router buffers start overflowing due to which loss of packets
occur. As a result retransmission of packets from the sources increases the
congestion further. In this situation, the Transport layer provides Congestion
Control in different ways. It uses open loop congestion control to prevent the
congestion and closed loop congestion control to remove the congestion in a
network once it occurred. TCP provides AIMD- additive increase
multiplicative decrease, leaky bucket technique for congestion control.
 Data integrity and Error correction –
The transport layer checks for errors in the messages coming from the application
layer by using error detection codes and computing checksums; it checks whether
the received data is corrupted, uses the ACK and NACK services to inform the
sender whether the data has arrived or not, and thus checks for the integrity of
the data.
 Flow control –
The transport layer provides a flow control mechanism between the adjacent
layers of the TCP/IP model. TCP also prevents data loss due to a fast sender
and slow receiver by imposing some flow control techniques. It uses the
method of sliding window protocol which is accomplished by the receiver by
sending a window back to the sender informing the size of data it can receive.

UDP:- User Datagram Protocol (UDP) is a Transport Layer protocol. UDP is a
part of the Internet Protocol suite, referred to as the UDP/IP suite. Unlike TCP, it
is an unreliable and connectionless protocol, so there is no need to establish a
connection prior to data transfer.
Though Transmission Control Protocol (TCP) is the dominant transport layer
protocol used with most Internet services and provides assured delivery, reliability
and much more, all these services cost us additional overhead and latency. Here,
UDP comes into the picture. For real-time services like computer gaming, voice or
video communication and live conferences, we need UDP. Since high performance
is needed, UDP permits packets to be dropped instead of processing delayed
packets. There is no error checking in UDP, so it also saves bandwidth.
User Datagram Protocol (UDP) is more efficient in terms of both latency and
bandwidth.
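
The following is a minimal, self-contained sketch (not from the text) of a connectionless UDP exchange using Python's standard socket module; the loopback address and port are assumed for the demo:

import socket

SERVER_ADDR = ("127.0.0.1", 9999)         # assumed address/port for this demo

# Receiver: bind to a port and wait for one datagram.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(SERVER_ADDR)

# Sender: no connection establishment is needed before sending data.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello over UDP", SERVER_ADDR)

data, peer = receiver.recvfrom(1024)      # blocks until the datagram arrives
print(data, "from", peer)

sender.close()
receiver.close()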
UDP Header – 
The UDP header is a fixed and simple 8-byte header, while the TCP header may
vary from 20 bytes to 60 bytes. The first 8 bytes contain all the necessary header
information and the remaining part consists of data. UDP port number fields are
each 16 bits long; therefore the range of port numbers is defined from 0 to 65535,
and port number 0 is reserved. Port numbers help to distinguish different user
requests or processes.
 

 
1. Source Port : Source Port is 2 Byte long field used to identify port number of
source.
2. Destination Port : It is 2 Byte long field, used to identify the port of destined
packet.
3. Length : It is the length of the UDP datagram, including the header and the
data. It is a 16-bit field.
4. Checksum : Checksum is a 2-byte field. It is the 16-bit one’s complement of
the one’s complement sum of the UDP header, a pseudo header of information
from the IP header, and the data, padded with zero octets at the end (if
necessary) to make a multiple of two octets. (A small sketch of this calculation
follows below.)
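
Purely as an illustration of the field layout and checksum rule above, the sketch below packs the 8-byte header with Python's struct module and computes a 16-bit one's complement sum; the pseudo header is omitted for brevity, so the resulting checksum is not what a real IP stack would place on the wire:

import struct

def ones_complement_sum16(data: bytes) -> int:
    if len(data) % 2:                       # pad with a zero octet if needed
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)    # wrap the carry around
    return (~total) & 0xFFFF

payload = b"hello"
src_port, dst_port = 12345, 53              # assumed port numbers
length = 8 + len(payload)                   # header (8 bytes) + data
header = struct.pack("!HHHH", src_port, dst_port, length, 0)    # checksum 0 at first
checksum = ones_complement_sum16(header + payload)
header = struct.pack("!HHHH", src_port, dst_port, length, checksum)
print(header.hex(), "checksum =", hex(checksum))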
Notes – Unlike TCP, Checksum calculation is not mandatory in UDP. No Error
control or flow control is provided by UDP. Hence UDP depends on IP and
ICMP for error reporting. 
Applications of UDP: 
 
 Used for simple request response communication when size of data is less and
hence there is lesser concern about flow and error control.
 It is suitable protocol for multicasting as UDP supports packet switching.
 UDP is used for some routing update protocols like RIP(Routing Information
Protocol).


 Normally used for real time applications which can not tolerate uneven delays
between sections of a received message.
 Following implementations uses UDP as a transport layer protocol: 
 NTP (Network Time Protocol)
 DNS (Domain Name Service)
 BOOTP, DHCP.
 NNP (Network News Protocol)
 Quote of the day protocol
 TFTP, RTSP, RIP.
 Application layer can do some of the tasks through UDP- 
 Trace Route
 Record Route
 Time stamp
 UDP takes datagram from Network Layer, attach its header and send it to the
user. So, it works fast.
 Actually, UDP is a null protocol if you remove the checksum field. UDP is
preferred when we need to:
1. Reduce the requirement of computer resources.
2. Use multicast or broadcast to transfer data.
3. Transmit real-time packets, mainly in multimedia applications.

TCP Protocol:- The transmission Control Protocol (TCP) is one of the most
important protocols of Internet Protocols suite. It is most widely used protocol for
data transmission in communication network such as internet.

Features

 TCP is a reliable protocol. That is, the receiver always sends either a positive or
negative acknowledgement about the data packet to the sender, so that the
sender always has a clear indication of whether the data packet reached the
destination or needs to be resent.
 TCP ensures that the data reaches intended destination in the same order it
was sent.
 TCP is connection oriented. TCP requires that connection between two
remote points be established before sending actual data.
 TCP provides error-checking and recovery mechanism.
 TCP provides end-to-end communication.
 TCP provides flow control and quality of service.
 TCP operates in Client/Server point-to-point mode.
 TCP provides a full duplex service, i.e. it can perform the roles of both
receiver and sender.

Header

The length of TCP header is minimum 20 bytes long and maximum 60 bytes.

 Source Port (16-bits)  - It identifies source port of the application process


on the sending device.
 Destination Port (16-bits) - It identifies destination port of the application
process on the receiving device.
 Sequence Number (32-bits) - Sequence number of data bytes of a segment
in a session.
 Acknowledgement Number (32-bits)  - When ACK flag is set, this number
contains the next sequence number of the data byte expected and works as
acknowledgement of the previous data received.
 Data Offset (4-bits)  - This field implies both, the size of TCP header (32-
bit words) and the offset of data in current packet in the whole TCP
segment.
 Reserved (3-bits)  - Reserved for future use and all are set zero by default.
 Flags (1-bit each)
o NS - Nonce Sum bit is used by Explicit Congestion Notification
signaling process.
o CWR - When a host receives packet with ECE bit set, it sets
Congestion Windows Reduced to acknowledge that ECE received.
o ECE -It has two meanings:
 If SYN bit is clear to 0, then ECE means that the IP packet has
its CE (congestion experience) bit set.
 If SYN bit is set to 1, ECE means that the device is ECT
capable.
o URG - It indicates that Urgent Pointer field has significant data and
should be processed.


o ACK - It indicates that Acknowledgement field has significance. If


ACK is cleared to 0, it indicates that packet does not contain any
acknowledgement.
o PSH - When set, it is a request to the receiving station to PUSH data
(as soon as it comes) to the receiving application without buffering it.
o RST - Reset flag has the following features:
 It is used to refuse an incoming connection.
 It is used to reject a segment.
 It is used to restart a connection.
o SYN - This flag is used to set up a connection between hosts.
o FIN - This flag is used to release a connection and no more data is
exchanged thereafter. Because packets with SYN and FIN flags have
sequence numbers, they are processed in correct order.
 Window Size  - This field is used for flow control between two stations
and indicates the amount of buffer (in bytes) the receiver has allocated for a
segment, i.e. how much data the receiver is expecting.
 Checksum - This field contains the checksum of Header, Data and Pseudo
Headers.
 Urgent Pointer  - It points to the urgent data byte if URG flag is set to 1.
 Options  - It facilitates additional options which are not covered by the
regular header. Option field is always described in 32-bit words. If this field
contains data less than 32-bit, padding is used to cover the remaining bits to
reach 32-bit boundary.

Addressing

TCP communication between two remote hosts is done by means of port numbers
(TSAPs). Port numbers can range from 0 to 65535 and are divided as:

 System Ports (0 – 1023)


 User Ports ( 1024 – 49151)
 Private/Dynamic Ports (49152 – 65535)

Connection Management

TCP communication works in Server/Client model. The client initiates the


connection and the server either accepts or rejects it. Three-way handshaking is
used for connection management.


Establishment

Client initiates the connection and sends the segment with a Sequence number.
Server acknowledges it back with its own Sequence number and ACK of client’s
segment which is one more than client’s Sequence number. Client after receiving
ACK of its segment sends an acknowledgement of Server’s response.
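
From the application's point of view, this handshake is carried out by the operating system when the client calls connect() and the server calls accept(). A minimal sketch (not from the text) using Python's standard socket module is shown below; the loopback address and port are assumed:

import socket, threading

ADDR = ("127.0.0.1", 8888)                  # assumed address/port for this demo

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(ADDR)
server.listen(1)                            # passive open: ready to accept SYNs

def client():
    c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    c.connect(ADDR)                         # active open: SYN, SYN+ACK, ACK happen here
    print(c.recv(1024))                     # b'hello over TCP'
    c.close()

threading.Thread(target=client).start()

conn, peer = server.accept()                # handshake completed by the OS
conn.sendall(b"hello over TCP")
conn.close()
server.close()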

Release

Either of server and client can send TCP segment with FIN flag set to 1. When the
receiving end responds it back by ACKnowledging FIN, that direction of TCP
communication is closed and connection is released.

Bandwidth Management


TCP uses the concept of window size to accommodate the need for bandwidth
management. The window size tells the sender at the remote end the number of data
byte segments the receiver at this end can receive. TCP uses a slow start phase,
beginning with window size 1 and increasing the window size exponentially after
each successful communication.
For example, the client uses window size 2 and sends 2 bytes of data. When the
acknowledgement of this segment is received, the window size is doubled to 4 and
the next segment sent will be 4 data bytes long. When the acknowledgement of the
4-byte data segment is received, the client sets the window size to 8, and so on.
If an acknowledgement is missed, i.e. data is lost in the transit network or a NACK
is received, then the window size is reduced to half and the slow start phase starts again.
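
A rough, illustrative sketch of this behaviour (doubling the window after each acknowledged round and halving it on a missed acknowledgement, with made-up round outcomes) could look like the following:

def simulate_window(rounds_with_loss):
    window = 1                              # slow start begins with window size 1
    history = []
    for rnd, lost in enumerate(rounds_with_loss, start=1):
        history.append((rnd, window))
        if lost:
            window = max(1, window // 2)    # missed ACK: cut the window in half
        else:
            window *= 2                     # successful round: double the window
    return history

# Rounds 1-4 succeed, round 5 loses a segment, rounds 6-7 succeed again.
for rnd, win in simulate_window([False, False, False, False, True, False, False]):
    print(f"round {rnd}: window = {win}")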

Error Control and Flow Control

TCP uses port numbers to know to which application process it needs to hand over
the data segment. Along with that, it uses sequence numbers to synchronize itself with
the remote host. All data segments are sent and received with sequence numbers.
The Sender knows which last data segment was received by the Receiver when it
gets ACK. The Receiver knows about the last segment sent by the Sender by
referring to the sequence number of recently received packet.
If the sequence number of a segment recently received does not match with the
sequence number the receiver was expecting, then it is discarded and NACK is
sent back. If two segments arrive with the same sequence number, the TCP
timestamp value is compared to make a decision.

Multiplexing

The technique to combine two or more data streams in one session is called
Multiplexing. When a TCP client initializes a connection with Server, it always
refers to a well-defined port number which indicates the application process. The
client itself uses a randomly generated port number from private port number
pools.
Using TCP multiplexing, a client can communicate with a number of different
application processes in a single session. For example, when a client requests a web
page which in turn contains different types of data (HTTP, SMTP, FTP, etc.), the
TCP session timeout is increased and the session is kept open for a longer time so
that the three-way handshake overhead can be avoided.


This enables the client system to receive multiple connections over a single virtual
connection. These virtual connections are not good for servers if the timeout is
too long.

Congestion Control

When a large amount of data is fed to a system which is not capable of handling it,
congestion occurs. TCP controls congestion by means of a window mechanism.
TCP sets a window size telling the other end how much data to send.
TCP may use three algorithms for congestion control:
 Additive increase, Multiplicative Decrease
 Slow Start
 Timeout React

Timer Management

TCP uses different types of timers to control and manage various tasks:

Keep-alive timer:

 This timer is used to check the integrity and validity of a connection.


 When keep-alive time expires, the host sends a probe to check if the
connection still exists.

Retransmission timer:

 This timer maintains a stateful session of the data sent.
 If the acknowledgement of sent data is not received within the
retransmission time, the data segment is sent again.

Persist timer:

 TCP session can be paused by either host by sending Window Size 0.


 To resume the session a host needs to send Window Size with some larger
value.
 If this segment never reaches the other end, both ends may wait for each
other for infinite time.
 When the Persist timer expires, the host re-sends its window size to let the
other end know.
 Persist Timer helps avoid deadlocks in communication.


Timed-Wait:

 After releasing a connection, either of the hosts waits for a Timed-Wait time
to terminate the connection completely.
 This is in order to make sure that the other end has received the
acknowledgement of its connection termination request.
 Timed-out can be a maximum of 240 seconds (4 minutes).

Crash Recovery

TCP is a very reliable protocol. It provides a sequence number to each byte sent in
a segment. It provides a feedback mechanism, i.e. when a host receives a packet, it
is bound to ACK that packet with the next sequence number expected (if it is
not the last segment).
When a TCP server crashes mid-way through communication and restarts its process,
it sends a TPDU broadcast to all its hosts. The hosts can then resend the last data
segment that was never acknowledged and carry on.
TCP Services:- The Transmission Control Protocol is the most common transport
layer protocol. It works together with IP and provides a reliable transport service
between processes using the network layer service provided by the IP protocol.
The various services provided by the TCP to the application layer are as follows:
1. Process-to-Process Communication –
TCP provides process to process communication, i.e, the transfer of data takes
place between individual processes executing on end systems. This is done
using port numbers or port addresses. Port numbers are 16 bit long that help
identify which process is sending or receiving data on a host.
2. Stream oriented –
This means that the data is sent and received as a stream of bytes(unlike UDP
or IP that divides the bits into datagrams or packets). However, the network
layer, that provides service for the TCP, sends packets of information not
streams of bytes. Hence, TCP groups a number of bytes together into
a segment and adds a header to each of these segments and then delivers these
segments to the network layer. At the network layer, each of these segments
are encapsulated in an IP packet for transmission. The TCP header has
information that is required for control purpose which will be discussed along
with the segment structure.
3. Full duplex service –
This means that the communication can take place in both directions at the
same time.


4. Connection oriented service –


Unlike UDP, TCP provides connection oriented service. It defines 3 different
phases:
 Connection establishment
 Data transfer
 Connection termination
(IMP: This is a virtual connection, not a physical connection, means during
the transmission the resources will not be reserved and the segments will not
follow the same path to reach the destination but it is a connection orientation
in the sense that segments will arrive in order by the help of sequence
number.)
5. Reliability –
TCP is reliable as it uses checksum for error detection, attempts to recover lost
or corrupted packets by re-transmission, acknowledgement policy and timers.
It uses features like byte number and sequence number and acknowledgement
number so as to ensure reliability. Also, it uses congestion control
mechanisms.
6. Multiplexing –
TCP does multiplexing and de-multiplexing at the sender and receiver ends
respectively as a number of logical connections can be established between
port numbers over a physical connection.
Byte number, Sequence number and Acknowledgement number:
All the data bytes that are to be transmitted are numbered and the beginning of
this numbering is arbitrary. Sequence numbers are given to the segments so as to
reassemble the bytes at the receiver end even if they arrive in a different order.
Sequence number of a segment is the byte number of the first byte that is being
sent. Acknowledgement number is required since TCP provides full duplex
service. Acknowledgement number is the next byte number that the receiver
expects to receive which also provides acknowledgement for receiving the
previous bytes.
Example:


In this example we see that A sends acknowledgement number 1001, which
means that it has received data bytes till byte number 1000 and expects to receive
1001 next; hence B next sends data bytes starting from 1001. Similarly, since B
has received data bytes till byte number 13001 after the first data transfer from A
to B, B sends acknowledgement number 13002, the byte number that it
expects to receive from A next.
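
The rule in this example can be summarised in one line of arithmetic: the acknowledgement number is the byte number of the first byte of the received segment plus the number of data bytes it carried. A tiny sketch (with an assumed starting byte number in the second case) is:

def ack_for(first_byte_number, data_length):
    # Next byte the receiver expects = first byte carried + number of bytes carried.
    return first_byte_number + data_length

print(ack_for(1, 1000))        # 1001: bytes 1..1000 received, 1001 expected next
print(ack_for(12001, 1001))    # 13002: bytes 12001..13001 received, 13002 expected next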

TCP Features:- 1. Connection oriented: An application requests a “connection” to a
destination and uses the connection to transfer data.
2. Stream Data transfer:- It is the duty of TCP to pack this byte stream to packets,
known as TCP segments, which are passed to the IP layer for transmission to the
destination device. 
3. Reliable:- It recovers data from Network layer if data is damaged, duplicated or
corrupted.
4. Point to Point:- TCP connection provides end to end delivery.
5. Interoperability:-  It eliminates the cross-platform boundaries.


6. Error and flow control:- It provides error-checking, flow-control and
acknowledgement functions.
7. Name resolution:- It helps in resolving human-readable names into IP addresses.
8. Routability:- TCP/IP is a routable protocol.
9. It helps in resolving logical addresses.
10. Full Duplex:- It provides connections in both directions.

SCTP:- SCTP stands for Stream Control Transmission Protocol.


It is a connection-oriented protocol in computer networks which provides a full-
duplex association, i.e., it can transmit multiple streams of data at the same time
between two end points that have established a connection in the network. It is
sometimes referred to as next generation TCP or TCPng. SCTP makes it easier to
support telephonic conversations on the Internet: a telephonic conversation requires
transmitting voice along with other data at the same time on both ends, and the
SCTP protocol makes it easier to establish a reliable connection.
SCTP is also intended to make it easier to establish connections over wireless
networks and to manage the transmission of multimedia data. SCTP is a standard
protocol (RFC 2960) and was developed by the Internet Engineering Task Force (IETF).

Characteristics of SCTP :
1. Unicast with Multiple properties –
It is a point-to-point protocol which can use different paths to reach end host.

2. Reliable Transmission –
It uses SACK and checksums to detect damaged, corrupted, discarded,
duplicate and reordered data. It is similar to TCP but SCTP is more efficient
when it comes to reordering of data.
3. Message oriented –
Each message can be framed and we can keep order of datastream and tabs on
structure. For this, In TCP, we need a different layer for abstraction.
4. Multi-homing –
It can establish multiple connection paths between two end points and does not
need to rely on IP layer for resilience.

Advantages of SCTP :

1. It is a full- duplex connection i.e. users can send and receive data
simultaneously.
2. It allows half- closed connections.
3. The message’s boundaries are maintained and application doesn’t have to split
messages.
4. It has properties of both TCP and UDP protocol.
5. It doesn’t rely on IP layer for resilience of paths.

Disadvantages of SCTP :
1. One of key challenges is that it requires changes in transport stack on node.
2. Applications need to be modified to use SCTP instead of TCP/UDP.
3. Applications need to be modified to handle multiple simultaneous streams.

SCTP Services:-
 Process-to-Process Communication: SCTP provides process-to-process
communication and uses all the well-known ports in the TCP space, along with
some extra port numbers.
 Multiple Streams: TCP is a stream-oriented protocol. ...
 Multi homing: ...
 Full-Duplex Communication: ...
 Connection-Oriented Service:

SCTP Features:-
 Unicast with Multicast properties. This means it is a point-to-point protocol but
with the ability to use several addresses at the same end host. ...
 Reliable transmission. ...
 Message oriented. ...
 Rate adaptive. ...
 Multi-homing. ...
 Multi-streaming. ...
 Initiation.

SCTP Association:- Two SCTP endpoints (servers) have an SCTP association


between them (rather than a TCP connection) and the SCTP service reliably
transfers user messages between the peers. An association has an association ID
and includes multiple streams (unidirectional logical channels).

An upper-layer SCTP protocol (such as Diameter, for example) initiates an SCTP


association, which starts a four-way handshake. The client (initiator) sends an
SCTP packet with an INIT chunk which provides the server with a list of the IP
addresses through which the client can be reached, a verification tag that must
appear in every packet the client sends in this association (validating the sender),
the number of outbound streams the client is requesting, the number of inbound
streams it can support, and an initial transmission sequence number.

The server replies with an INIT-ACK chunk containing its own list of IP
addresses, initial sequence number, verification tag (that must appear in every
packet it sends for this association), the number of outbound streams the server is
requesting, the number of inbound streams it can support, and a state cookie that
ensures the association is valid. The client then replies with a COOKIE-ECHO
chunk and the server validates the cookie and replies with a COOKIE-ACK chunk.
The COOKIE-ECHO and COOKIE-ACK messages can include user data (chunks)
for more efficiency.

When you Configure SCTP Security, you can set an SCTP INIT timeout to control
the maximum length of time after receiving an INIT chunk before the firewall
receives the INIT-ACK chunk. If that time is exceeded, then the firewall stops the
association initiation. You can also configure an SCTP COOKIE timeout to control
the maximum length of time after receiving an INIT-ACK chunk with the STATE
COOKIE before the firewall receives the COOKIE-ECHO chunk; if that time is
exceeded, that also causes the firewall to stop the association initiation.

You can also leverage the following SCTP timeouts as needed:

 SCTP timeout — Maximum length of time that can elapse without SCTP traffic
on an association before the firewall closes the association.
 Discard SCTP timeout — Maximum length of time that an SCTP association
remains open after the firewall denies the session based on Security policy rules.
 SCTP Shutdown timeout — Maximum length of time that the firewall waits
after a SHUTDOWN chunk to receive a SHUTDOWN-ACK chunk before the
firewall disregards the SHUTDOWN chunk.

An established SCTP association ends in one of three ways: when an endpoint


sends a SHUTDOWN chunk to gracefully end the association with its peer and
receives a SHUTDOWN-ACK; when an endpoint sends an ABORT chunk with or
without cause parameters to close the association; or when an SCTP timeout
occurs. When any of these events occur, the firewall brings down all SCTP
sessions for that association.

APPLICATION LAYER:-

SMTP:- Email is emerging as one of the most valuable services on the internet
today. Most of the internet systems use SMTP as a method to transfer mail from
one user to another. SMTP is a push protocol and is used to send the mail
whereas POP (post office protocol) or IMAP (internet message access protocol)
are used to retrieve those mails at the receiver’s side. 
SMTP Fundamentals 
SMTP is an application layer protocol. The client who wants to send the mail
opens a TCP connection to the SMTP server and then sends the mail across the
connection. The SMTP server is always on listening mode. As soon as it listens
for a TCP connection from any client, the SMTP process initiates a connection on
that port (25). After successfully establishing the TCP connection the client
process sends the mail instantly. 
SMTP Protocol
The SMTP model is of two types:
1. End-to-end method
2. Store-and-forward method
The end-to-end model is used to communicate between different organizations,
whereas the store-and-forward method is used within an organization. An SMTP
client who wants to send mail will contact the destination host's SMTP directly in
order to send the mail to the destination. The SMTP server will keep the mail to
itself until it is successfully copied to the receiver's SMTP.
The SMTP that initiates the session is called the client-SMTP, and the SMTP that
responds to the session request is called the receiver-SMTP. The client-SMTP will
start the session and the receiver-SMTP will respond to the request.

Model of SMTP system 


In the SMTP model the user deals with the user agent (UA), for example Microsoft
Outlook, Netscape, Mozilla, etc. In order to exchange the mail using TCP, an MTA
(Message Transfer Agent) is used. The users sending the mail do not have to deal
with the MTA; it is the responsibility of the system admin to set up the local MTA.
The MTA maintains a small queue of mails so that it can schedule repeat delivery
of mail in case the receiver is not available. The MTA delivers the mail to the
mailboxes, and the information can later be downloaded by the user agents.

Both the SMTP-client and SMTP-server should have 2 components:


1. User agent (UA)
2. Local MTA
Communication between the sender and the receiver :
The sender's user agent prepares the message and sends it to the MTA. The MTA's
function is to transfer the mail across the network to the receiver's MTA. To
send mail, a system must have the client MTA, and to receive mail, a system
must have a server MTA.
SENDING EMAIL: 
Mail is sent by a series of request and response messages between the client and a
server. The message which is sent across consists of a header and the body. A
null line is used to terminate the mail header. Everything after the null line is
considered the body of the message, which is a sequence of ASCII characters. The
message body contains the actual information to be read by the recipient.
RECEIVING EMAIL:
The user agent at the server side checks the mailboxes at particular intervals of
time. If any new mail is received, it informs the user about the mail. When
the user tries to read the mail, it displays a list of mails with a short description of
each mail in the mailbox. By selecting any of the mails, the user can view its
contents on the terminal.
Some SMTP Commands:
 HELO – Identifies the client to the server, fully qualified domain name, only
sent once per session
 MAIL – Initiate a message transfer, fully qualified domain of originator
 RCPT – Follows MAIL, identifies an addressee, typically the fully qualified
name of the addressee and for multiple addressees use one RCPT for each
addressee
 DATA – send data line by line.
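
As an illustration, Python's standard smtplib can drive such a session; it issues the HELO/EHLO, MAIL, RCPT and DATA commands on our behalf. The server name, port and addresses below are placeholders, not values from the text:

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "SMTP demo"
msg.set_content("Hello Bob,\nthis message was pushed to the server over SMTP.")

# smtplib opens the TCP connection to port 25 and performs the command dialogue.
with smtplib.SMTP("smtp.example.com", 25) as smtp:
    smtp.send_message(msg)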

POP Protocol

The POP protocol stands for Post Office Protocol. As we know, SMTP is used as a
message transfer agent: when a message is sent, SMTP is used to deliver the
message from the client to the sender's server and then on to the recipient's server.
The message is then delivered from the recipient's server to the actual recipient
with the help of the Message Access Agent. The Message Access Agent uses two
types of protocols, i.e., POP3 and IMAP.

How is mail transmitted?


Suppose sender wants to send the mail to receiver. First mail is transmitted to the
sender's mail server. Then, the mail is transmitted from the sender's mail server to
the receiver's mail server over the internet. On receiving the mail at the receiver's
mail server, the mail is then sent to the user. The whole process is done with the
help of Email protocols. The transmission of mail from the sender to the sender's
mail server and then to the receiver's mail server is done with the help of the SMTP
protocol. At the receiver's mail server, the POP or IMAP protocol takes the data
and transmits to the actual user.

Since SMTP is a push protocol, it pushes the message from the client to the server.
As we can observe in the above figure, SMTP pushes the message from the client
to the recipient's mail server. The third stage of email communication requires a
pull protocol, and POP is a pull protocol: when the mail is transmitted from the
recipient's mail server to the client, the client is pulling the mail from the server.

What is POP3?

POP3 is a simple protocol having very limited functionality. In the case of the
POP3 protocol, the POP3 client is installed on the recipient's system, while the
POP3 server is installed on the recipient's mail server.

History of POP3 protocol

The first version of post office protocol was first introduced in 1984 as RFC 918
by the internet engineering task force. The developers developed a simple and
effective email protocol known as the POP3 protocol, which is used for retrieving


the emails from the server. This provides the facility for accessing the mails offline
rather than having to access the mailbox online.

In 1985, the post office protocol version 2 was introduced in RFC 937, but it was
replaced with the post office protocol version 3 in 1988 with the publication of
RFC 1081. POP3 was then revised over the next 10 years; once it was refined
completely, it was published in its final form in 1996.

Although the POP3 protocol has undergone various enhancements, the developers
maintained a basic principle that it follows a three-stage process at the time of mail
retrieval between the client and the server. They tried to make this protocol very
simple, and this simplicity makes this protocol very popular today.

Let's understand the working of the POP3 protocol.

To establish the connection between the POP3 server and the POP3 client, the
POP3 server asks for the user name to the POP3 client. If the username is found in
the POP3 server, then it sends the ok message. It then asks for the password from
the POP3 client; then the POP3 client sends the password to the POP3 server. If
the password is matched, then the POP3 server sends the OK message, and the
connection gets established. After the establishment of a connection, the client can
see the list of mails on the POP3 mail server. In the list of mails, the user will get


the email numbers and sizes from the server. Out of this list, the user can start the
retrieval of mail.

Once the client retrieves all the emails from the server, all the emails from the
server are deleted. Therefore, we can say that the emails are restricted to a
particular machine, so it would not be possible to access the same mails on another
machine. This situation can be overcome by configuring the email settings to leave
a copy of mail on the mail server.
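
A hedged sketch of this dialogue using Python's standard poplib is shown below; the host name, user name and password are placeholders, and the delete step is left commented out so the mails stay on the server:

import poplib

pop = poplib.POP3("pop.example.com", 110)   # assumed POP3 server
pop.user("alice")                           # server answers +OK if the user exists
pop.pass_("secret")                         # server answers +OK and the session opens
count, total_size = pop.stat()              # number of mails and their total size
print(count, "messages,", total_size, "bytes on the server")

for n in range(1, count + 1):
    response, lines, octets = pop.retr(n)   # retrieve message number n
    text = b"\r\n".join(lines).decode("utf-8", errors="replace")
    print(text[:200])                       # show only the start of each message
    # pop.dele(n)                           # uncomment to delete the mail after download

pop.quit()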

Advantages of POP3 protocol

The following are the advantages of a POP3 protocol:

o It allows the users to read the email offline. It requires an internet connection
only at the time of downloading emails from the server. Once the mails are
downloaded from the server, then all the downloaded mails reside on our PC
or hard disk of our computer, which can be accessed without the internet.
Therefore, we can say that the POP3 protocol does not require permanent
internet connectivity.
o It provides easy and fast access to the emails as they are already stored on
our PC.
o There is no limit on the size of the email which we receive or send.
o It requires less server storage space as all the mails are stored on the local
machine.
o There is no fixed maximum size on the mailbox; it is limited only by the size
of the hard disk.
o It is a simple protocol so it is one of the most popular protocols used today.
o It is easy to configure and use.

Disadvantages of POP3 protocol

The following are the disadvantages of the POP3 protocol:

o If the emails are downloaded from the server, then all the mails are deleted
from the server by default. So, mails cannot be accessed from other


machines unless they are configured to leave a copy of the mail on the
server.
o Transferring the mail folder from the local machine to another machine can
be difficult.
o Since all the attachments are stored on your local machine, there is a high
risk of a virus attack if the virus scanner does not scan them. The virus
attack can harm the computer.
o The email folder which is downloaded from the mail server can also become
corrupted.
o The mails are stored on the local machine, so anyone who sits on your
machine can access the email folder.

IMAP:- IMAP stands for Internet Message Access Protocol. It is an application
layer protocol which is used to receive emails from the mail server. Like POP3,
it is one of the most commonly used protocols for retrieving emails.

It also follows the client/server model. On one side, we have an IMAP client,
which is a process running on a computer. On the other side, we have an IMAP
server, which is also a process running on another computer. Both computers are
connected through a network.

The IMAP protocol resides on top of the TCP/IP transport layer, which means that
it implicitly uses the reliability of that protocol. Once the TCP connection is
established between the IMAP client and the IMAP server, the IMAP server listens
on port 143 by default, but this port number can also be changed.

By default, there are two ports used by IMAP:

o Port 143: It is a non-encrypted IMAP port.


o Port 993: This port is used when IMAP client wants to connect through
IMAP securely.

Why should we use IMAP instead of POP3 protocol?

POP3 is becoming the most popular protocol for accessing the TCP/IP mailboxes.
It implements the offline mail access model, which means that the mails are
retrieved from the mail server on the local machine, and then deleted from the mail
server. Nowadays, millions of users use the POP3 protocol to access the incoming
mails. Due to the offline mail access model, it cannot be used as much. The online
model we would prefer in the ideal world. In the online model, we need to be
connected to the internet always. The biggest problem with the offline access using
POP3 is that the mails are permanently removed from the server, so multiple
computers cannot access the mails. The solution to this problem is to store the
mails at the remote server rather than on the local server. The POP3 also faces
another issue, i.e., data security and safety. The solution to this problem is to use
the disconnected access model, which provides the benefits of both online and
offline access. In the disconnected access model, the user can retrieve the mail for
local use as in the POP3 protocol, and the user does not need to be connected to the
internet continuously. However, the changes made to the mailboxes are
synchronized between the client and the server. The mail remains on the server so
different applications in the future can access it. When developers recognized these
benefits, they made some attempts to implement the disconnected access model.
This is implemented by using the POP3 commands that provide the option to leave
the mails on the server. This works, but only to a limited extent, for example,
keeping track of which messages are new or old become an issue when both are
retrieved and left on the server. So, the POP3 lacks some features which are
required for the proper disconnected access model.

In the mid-1980s, the development began at Stanford University on a new protocol


that would provide a more capable way of accessing the user mailboxes. The result
was the development of the interactive mail access protocol, which was later
renamed as Internet Message Access Protocol.

IMAP History and Standards

The first version of IMAP formally documented as an internet standard was IMAP
version 2, in RFC 1064, published in July 1988. It was updated in RFC 1176, in
August 1990, retaining the same version number. A new document describing
version 3, known as IMAP3, was created in RFC 1203, published in February 1991.
However, IMAP3 was never accepted by the marketplace, so people kept using
IMAP2. An extension to the protocol, called IMAPbis, was later created, which
added support for Multipurpose Internet Mail Extensions (MIME)
to IMAP. This was a very important development due to the usefulness of MIME.
Despite this, IMAPbis was never published as an RFC. This may be due to the
problems associated with the IMAP3. In December 1994, IMAP version 4, i.e.,
IMAP4 was published in two RFCs, i.e., RFC 1730 describing the main protocol
and RFC 1731 describing the authentication mechanism for IMAP 4. IMAP 4 is
the current version of IMAP, which is widely used today. It continues to be
refined, and its latest version is actually known as IMAP4rev1 and is defined in
RFC 2060. It is most recently updated in RFC 3501.

IMAP Features

IMAP was designed with a specific purpose: to give the user a more flexible way
of accessing the mailbox. It can operate in any of three modes, i.e., online,
offline, and disconnected; of these, the online and disconnected modes are of
most interest to users of the protocol.

The following are the features of the IMAP protocol (a brief client sketch after the list illustrates several of them):

o Access and retrieve mail from remote server: The user can access the mail
from the remote server while retaining the mails in the remote server.
o Set message flags: The message flag is set so that the user can keep track of
which message he has already seen.
o Manage multiple mailboxes: The user can manage multiple mailboxes and
transfer messages from one mailbox to another. The user can organize them
into various categories for those who are working on various projects.
o Determine information prior to downloading: The user can examine information
about a message and decide whether or not to download it from the mail server.
o Download a portion of a message: It allows you to download a portion of a
message, such as one body part of a MIME multipart message. This can be
useful when a message contains large multimedia files alongside a short
text part.
o Organize mails on the server: In the case of POP3, the user is not allowed to
manage the mails on the server. With IMAP, on the other hand, users can organize
the mails on the server according to their requirements, for example by
creating, deleting, or renaming mailboxes on the server.
o Search: Users can search for the contents of the emails.
o Check email-header: Users can also check the email-header prior to
downloading.
o Create hierarchy: Users can also create the folders to organize the mails in a
hierarchy.
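The following is a small, illustrative imaplib sketch of several of the features
above (searching on the server, checking a header before downloading, and setting
a message flag). The host, credentials, and mailbox name are assumed placeholders.

import imaplib

conn = imaplib.IMAP4_SSL("imap.example.com", 993)   # placeholder host
conn.login("user@example.com", "password")          # placeholder credentials

# Manage multiple mailboxes: list the folders and select one of them.
status, mailboxes = conn.list()
conn.select("INBOX")

# Search on the server without downloading every message.
status, data = conn.search(None, 'UNSEEN')
msg_nums = data[0].split()

if msg_nums:
    first = msg_nums[0].decode()
    # Check the header before deciding whether to download the full body.
    status, header = conn.fetch(first, "(BODY.PEEK[HEADER])")
    # Set a message flag so other clients also see the message as read.
    conn.store(first, "+FLAGS", "\\Seen")

conn.logout()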

IMAP General Operation

1. IMAP is a client-server protocol like POP3 and most other TCP/IP
application protocols. The IMAP4 server must reside on the machine where the
user mailboxes are located. In contrast, POP3 does not necessarily require
the same physical server that provides the SMTP services. Therefore, in the
case of the IMAP protocol, the mailbox must be accessible to both SMTP for
incoming mails and IMAP for retrieval and modifications.


2. IMAP uses the Transmission Control Protocol (TCP) for communication to
ensure that data is delivered and received in order.
3. The IMAP4 server listens on a well-known port, port number 143, for an
incoming connection request from the IMAP4 client.

Let's understand the IMAP protocol through a simple example.

The IMAP protocol synchronizes all the devices with the main server. Suppose we
have three devices: a desktop, a mobile, and a laptop. If all these devices access
the same mailbox, it will be synchronized across all of them. Here, synchronization
means that when a mail is opened on one device, it is marked as opened on all the
other devices, and if we delete a mail, it is deleted from all the other devices as
well. So we have synchronization between all the devices. In IMAP, we can see all
the folders, like spam, inbox, sent, etc. We can also create our own folder, known
as a custom folder, which will be visible on all the other devices.

MIME:- Multipurpose Internet Mail Extension (MIME) is a standard which was
proposed by Bell Communications in 1991 in order to expand the limited
capabilities of email.
MIME is a kind of add-on or supplementary protocol which allows non-ASCII data
to be sent through SMTP. It allows users to exchange different kinds of data
files on the Internet: audio, video, images, and application programs as well.
Why do we need MIME?
Limitations of Simple Mail Transfer Protocol (SMTP):
1. SMTP has a very simple structure.
2. Its simplicity, however, comes with a price, as it can only send messages in
NVT 7-bit ASCII format.
3. It cannot be used for languages whose characters are not covered by 7-bit
ASCII, such as French, German, Russian, Chinese, and Japanese, so text in
these languages cannot be transmitted using SMTP alone. To make SMTP more
broadly useful, we use MIME.
4. It cannot be used to send binary files or video or audio data.
Purpose and Functionality of MIME –
There is a growing demand for email, and people also want to express themselves
with multimedia. MIME was therefore introduced as a supplementary email standard,
since it is not restricted to textual data (a small encoding sketch follows below).
MIME transforms non-ASCII data at the sender side into NVT 7-bit data and delivers
it to the client SMTP. At the receiver side the message is transformed back into
the original data. We can also send video and audio data using MIME, because it
likewise transfers them as 7-bit ASCII data.
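To illustrate the idea of turning non-ASCII data into 7-bit-safe text, here is a
tiny sketch using base64, which is one of the transfer encodings MIME relies on.
The byte values are arbitrary examples.

import base64

# A few raw bytes that cannot be represented in 7-bit ASCII.
binary_data = bytes([0xFF, 0x00, 0x9C, 0x41, 0x7F, 0x80])

# Base64 turns arbitrary bytes into a 7-bit-safe ASCII string.
encoded = base64.b64encode(binary_data)
print(encoded)            # b'/wCcQX+A'

# The receiver reverses the transformation to recover the original bytes.
decoded = base64.b64decode(encoded)
assert decoded == binary_data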
Features of MIME –
1. It is able to send multiple attachments with a single message.
2. Unlimited message length.
3. Binary attachments (executables, images, audio, or video files), which may be
divided if needed.
4. MIME provides support for varying content types and multi-part messages.
Working of MIME –
Suppose a user wants to send an email through a user agent and the message is in
a non-ASCII format; the MIME protocol converts it into 7-bit NVT ASCII format.
The message is transferred through the email system to the other side in 7-bit
form, where the MIME protocol converts it back into non-ASCII code; the user
agent on the receiver side then reads it, and the information finally reaches
the receiver. A MIME header is inserted at the beginning of any such email
transfer.
MIME with SMTP and POP –
SMTP, acting as the message transfer agent, transfers the mail from the sender's
side to the receiver's mailbox and stores it; the MIME header is added to the
original header and provides additional information. POP, acting as the message
access agent, then moves the mails from the mail server to the receiver's
computer. POP allows the user agent to connect with the message transfer agent
(a short send-and-retrieve sketch follows below).
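As a hedged sketch of this flow, the snippet below builds a simple MIME text
message, hands it to an SMTP server, and then fetches mail with POP3. The host
names, port numbers, and credentials are placeholders and not taken from this
document.

import smtplib
import poplib
from email.mime.text import MIMEText

# Sender side: build a MIME message (MIME-Version and Content-Type headers
# are added automatically) and hand it to the SMTP message transfer agent.
msg = MIMEText("Bonjour! Привет!", "plain", "utf-8")   # non-ASCII body
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "MIME over SMTP"

with smtplib.SMTP("smtp.example.com", 25) as smtp:     # placeholder server
    smtp.send_message(msg)

# Receiver side: the message access agent (POP3) pulls the stored mail
# from the mail server to the receiver's computer.
pop = poplib.POP3("pop.example.com", 110)               # placeholder server
pop.user("bob@example.com")
pop.pass_("password")
count, _ = pop.stat()
if count:
    response, lines, octets = pop.retr(1)               # retrieve the first message
pop.quit()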
MIME Header:
It is added to the original email header section to define the transformation.
There are five headers that we add to the original header (a construction sketch
follows the list):
1. MIME-Version – Defines the version of the MIME protocol. It must have the
parameter value 1.0, which indicates that the message is formatted using MIME.
2. Content-Type – The type of data used in the body of the message, such as
text (plain, HTML), audio content, or video content.
3. Content-Transfer-Encoding – It defines the method used for encoding the
message, such as 7-bit encoding, 8-bit encoding, base64, etc.
4. Content-Id – It is used for uniquely identifying the message (or a body part).
5. Content-Description – A plain-text description of the body, for example
whether it is actually an image, video, or audio.
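The sketch below constructs a small multipart message with Python's email package
so the generated MIME headers can be inspected. The subject, text, and attachment
bytes are invented for illustration.

from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.application import MIMEApplication

msg = MIMEMultipart()                       # Content-Type: multipart/mixed
msg["Subject"] = "Report with attachment"   # hypothetical subject

# Plain-text body part.
msg.attach(MIMEText("Please find the report attached.", "plain"))

# Binary attachment: encoded as base64 so it survives 7-bit transport.
msg.attach(MIMEApplication(b"\x00\xff\x10fake-binary-bytes", Name="report.bin"))

print(msg.as_string())
# The printed output includes headers such as:
#   MIME-Version: 1.0
#   Content-Type: multipart/mixed; boundary="..."
# and, for the attachment part, Content-Transfer-Encoding: base64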
DHCP:- Dynamic Host Configuration Protocol (DHCP) is an application layer
protocol which is used to provide:
1. Subnet Mask (Option 1 – e.g., 255.255.255.0)
2. Router Address (Option 3 – e.g., 192.168.1.1)
3. DNS Address (Option 6 – e.g., 8.8.8.8)
4. Vendor-Specific Information (Option 43 – e.g., 'unifi' = 192.168.1.9, where
unifi = controller)
DHCP is based on a client-server model and on a discovery, offer, request, and
ACK exchange.
The DHCP port number for the server is 67 and for the client is 68. It is a
client-server protocol which uses UDP services. The IP address is assigned from
a pool of addresses. In DHCP, the client and the server exchange mainly 4 DHCP
messages in order to make a connection, also called the DORA process, but there
are 8 DHCP message types in total.
These messages are given as below (a packet-crafting sketch of the discover message follows the list):
1. DHCP discover message –
This is the first message generated in the communication process between the
server and the client. It is generated by the client host in order to discover
whether any DHCP server(s) are present in the network. The message is broadcast
to all devices present in the network to find the DHCP server. This message is
342 or 576 bytes long.


In this example, the source MAC address (client PC) is 08002B2EAF2A, the
destination MAC address is FFFFFFFFFFFF, the source IP address is 0.0.0.0
(because the PC has no IP address yet), and the destination IP address is
255.255.255.255 (the IP address used for broadcasting). Since the discover
message is broadcast to find the DHCP server or servers in the network, the
broadcast IP address and MAC address are used.
2. DHCP offer message –
The server responds to the host with this message, specifying an unleased IP
address and other TCP/IP configuration information. This message is broadcast
by the server. The size of the message is 342 bytes. If more than one DHCP
server is present in the network, the client host will accept the first DHCP
OFFER message it receives. A server ID is also specified in the packet in
order to identify the server.

For the offer message, the source IP address is 172.16.32.12 (the server's IP
address in the example), the destination IP address is 255.255.255.255 (the
broadcast IP address), the source MAC address is 00AA00123456, and the
destination MAC address is FFFFFFFFFFFF. Because the offer message is broadcast
by the DHCP server, the destination IP address is the broadcast IP address and
the destination MAC address is FFFFFFFFFFFF, while the source IP and MAC
addresses are those of the server.


The server has also provided the offered IP address 192.16.32.51 and a lease
time of 72 hours (after this time the entry for the host will be erased from
the server automatically). The client identifier is the PC MAC address
(08002B2EAF2A) in all the messages.
3. DHCP request message –
When a client receives an offer message, it responds by broadcasting a DHCP
request message. The client performs a gratuitous ARP to find out whether any
other host in the network has the same IP address. If no other host replies,
then no host with the same TCP/IP configuration exists in the network, and the
message is broadcast to the server showing the acceptance of the IP address.
A client ID is also added in this message.

The request message is broadcast by the client PC, so the source IP address is
0.0.0.0 (as the client has no IP address yet), the destination IP address is
255.255.255.255 (the broadcast IP address), the source MAC address is
08002B2EAF2A (the PC MAC address), and the destination MAC address is
FFFFFFFFFFFF.
Note – This message is broadcast after the ARP request broadcast by the PC to
find out whether any other host is using the offered IP. If there is no reply,
the client host broadcasts the DHCP request message to the server showing its
acceptance of the IP address and the other TCP/IP configuration.
4. DHCP acknowledgement message –
In response to the request message received, the server makes an entry with the
specified client ID and binds the offered IP address to the lease time. Now the
client has the IP address provided by the server.


The server now makes an entry for the client host with the offered IP address
and lease time. This IP address will not be offered by the server to any other
host. The destination MAC address is FFFFFFFFFFFF, the destination IP address
is 255.255.255.255, the source IP address is 172.16.32.12, and the source MAC
address is 00AA00123456 (the server MAC address).
5. DHCP negative acknowledgement (NAK) message –
Whenever a DHCP server receives a request for an IP address that is invalid
according to the scopes it is configured with, it sends a DHCP NAK message to
the client, e.g., when the server has no unused IP address or the pool is empty.
6. DHCP decline –
If the DHCP client determines that the offered configuration parameters are
different or invalid, it sends a DHCP decline message to the server. When any
host replies to the client's gratuitous ARP, the client sends a DHCP decline
message to the server indicating that the offered IP address is already in use.
7. DHCP release –
A DHCP client sends a DHCP release packet to the server to release its IP
address and cancel any remaining lease time.
8. DHCP inform –
If a client has obtained an IP address manually, it uses a DHCP inform message
to obtain other local configuration parameters, such as the domain name. In
reply to the DHCP inform message, the DHCP server generates a DHCP ACK message
with local configuration suitable for the client without allocating a new IP
address. This DHCP ACK message is unicast to the client.
Note – All the messages can also be unicast by a DHCP relay agent if the server
is present in a different network.
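As an illustrative (not authoritative) sketch, the snippet below builds a DHCP
discover packet with the third-party scapy library, reusing the client MAC
address from the example above. The interface name is an assumption, and
actually sending the frame requires administrative privileges.

from scapy.all import Ether, IP, UDP, BOOTP, DHCP, sendp

client_mac = "08:00:2b:2e:af:2a"   # client MAC from the example above

discover = (
    Ether(src=client_mac, dst="ff:ff:ff:ff:ff:ff") /        # layer-2 broadcast
    IP(src="0.0.0.0", dst="255.255.255.255") /              # client has no IP yet
    UDP(sport=68, dport=67) /                                # DHCP client/server ports
    BOOTP(chaddr=bytes.fromhex(client_mac.replace(":", ""))) /
    DHCP(options=[("message-type", "discover"), "end"])
)

discover.show()                     # print the layered packet for inspection
# sendp(discover, iface="eth0")     # broadcasting it would start the DORA exchange
#                                   # ("eth0" is a placeholder interface; needs root)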
Advantages – The advantages of using DHCP include:
 centralized management of IP addresses
 ease of adding new clients to a network
 reuse of IP addresses, reducing the total number of IP addresses that are
required
 simple reconfiguration of the IP address space on the DHCP server without
needing to reconfigure each client
The DHCP protocol gives the network administrator a method to configure the
network from a centralised area. With the help of DHCP, easy handling of new
users and reuse of IP addresses can be achieved.
Disadvantages – The main disadvantage of using DHCP is:
 IP conflicts can occur

DHCP Operations:- A DHCP server's most fundamental task is providing IP
addresses to clients. DHCP uses three different address allocation mechanisms
when assigning IP addresses (a toy allocation sketch follows below):
Manual Allocation: The administrator manually assigns a pre-allocated IP address
to the client, and DHCP only communicates the IP address to the device.
Automatic Allocation: DHCP automatically assigns a static IP address permanently
to a device, selecting it from a pool of available addresses. There is no lease,
and the address is permanently assigned to the device.
Dynamic Allocation: DHCP dynamically assigns, or leases, an IP address from a
pool of addresses for a limited period of time chosen by the server; the address
is returned to the pool when the lease expires or when the client tells the DHCP
server that it no longer needs the address.
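The following toy Python class is only a conceptual sketch of these three
mechanisms, not a real DHCP server: reserved entries model manual allocation,
a permanent assignment models automatic allocation, and a timed lease models
dynamic allocation. All addresses and MACs are made up.

import time

class AddressPool:
    """Toy illustration of DHCP address allocation (not a real server)."""

    def __init__(self, addresses, reservations=None, lease_seconds=3600):
        self.free = list(addresses)              # pool for dynamic allocation
        self.reservations = reservations or {}   # manual allocation: MAC -> IP
        self.leases = {}                         # MAC -> (IP, expiry or None)
        self.lease_seconds = lease_seconds

    def allocate(self, mac, permanent=False):
        if mac in self.reservations:             # manual allocation
            ip, expiry = self.reservations[mac], None
        else:
            ip = self.free.pop(0)
            # automatic allocation: permanent, no lease;
            # dynamic allocation: leased for a limited period of time.
            expiry = None if permanent else time.time() + self.lease_seconds
        self.leases[mac] = (ip, expiry)
        return ip, expiry

    def release(self, mac):
        ip, _ = self.leases.pop(mac)
        if ip not in self.reservations.values():
            self.free.append(ip)                 # the address can be reused

pool = AddressPool(["192.168.1.%d" % i for i in range(10, 20)],
                   reservations={"08:00:2b:2e:af:2a": "192.168.1.5"})
print(pool.allocate("08:00:2b:2e:af:2a"))        # manual: ('192.168.1.5', None)
print(pool.allocate("00:aa:00:12:34:56"))        # dynamic: leased from the pool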
 Configure FTP:-
1. Navigate to Start > Control Panel > Administrative Tools > Internet Information
Services (IIS) Manager.
2. Once the IIS console is open, expand the local server.
3. Right-click on Sites, and click on Add FTP Site.
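Once the FTP site has been created in IIS, one informal way to check it is to
connect with a client. The sketch below uses Python's ftplib with a placeholder
address and placeholder credentials.

from ftplib import FTP

ftp = FTP()
ftp.connect("192.0.2.10", 21)        # placeholder server address and port
ftp.login("ftpuser", "password")     # placeholder credentials
print(ftp.getwelcome())              # server banner
print(ftp.nlst())                    # list files in the root directory
ftp.quit()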

SSH:- SSH, or Secure Shell, is now the major protocol used to access network
devices and servers over the internet. SSH was developed by SSH Communications
Security Ltd.; it is a program to log into another computer over a network, to
execute commands on a remote machine, and to move files from one machine to
another (a short client sketch follows the list below).
 It provides strong authentication and secure communications over insecure
channels.
 SSH runs on port 22 by default; however it can be easily changed. SSH is a
very secure protocol because it shares and sends the information in encrypted
form which provides confidentiality and security of the data over an un-
secured network such as internet.
 Once the data for communication is encrypted using SSH, it is extremely
difficult to decrypt and read that data, so our passwords also become secure to
travel on a public network.
 SSH also uses a public key for the authentication of users accessing a server,
which is a good practice and provides strong security. SSH is available on
all popular operating systems such as Unix, Solaris, Red Hat Linux, CentOS,
Ubuntu, etc.
 SSH protects a network from attacks such as IP spoofing, IP source routing,
and DNS spoofing. An attacker who has managed to take over a network can
only force ssh to disconnect. He or she cannot play back the traffic or hijack
the connection when encryption is enabled.
 When using ssh’s slogin (instead of rlogin) the entire login session, including
transmission of password, is encrypted; therefore it is almost impossible for an
outsider to collect passwords.
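As a hedged illustration of these points, the sketch below uses the third-party
paramiko library to open an SSH connection with public-key authentication, run a
command on the remote machine, and copy a file over SFTP. The host, user name,
and key path are placeholders.

import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

# Public-key authentication over the (optionally non-default) SSH port.
client.connect("server.example.com", port=22,
               username="admin", key_filename="/home/admin/.ssh/id_rsa")

# Execute a command on the remote machine; the session is encrypted end to end.
stdin, stdout, stderr = client.exec_command("uname -a")
print(stdout.read().decode())

# Copy a file to the remote machine over the same secure transport (SFTP).
sftp = client.open_sftp()
sftp.put("report.txt", "/tmp/report.txt")
sftp.close()

client.close()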
