Chapter 5
5.1 Introduction
Wireless Sensor Networks (WSNs) have always attracted the interest of researchers because of the numerous applications they support. These networks are formed of hundreds of small sensors which sense data from the environment and forward it to a centralized base station, either directly or via relay nodes [148-152]. These networks provide support in various areas such as healthcare, agriculture, military applications, home applications, etc. However, the sensor nodes are powered by small batteries, which become a major constraint for these networks [153-155]. WSNs also provide backbone support to Internet of Things (IoT) applications these days. If the constraint of the sensor nodes, i.e. their limited battery, is not taken care of, the entire IoT structure built on top of them becomes of no use as well.
In order to increase the lifetime of the sensor nodes, clustering is one of the many well-known techniques practiced by researchers in the past. These clustering techniques have been successful in increasing the lifetime of the sensor nodes [156-160]. In such techniques, the entire network is divided into clusters, with each cluster having its own cluster head.
The concept of clustering originated from the Low Energy Adaptive Clustering Hierarchy (LEACH) protocol, and ever since, researchers have come forward with numerous variations of this protocol [162-165]. In these variations, the focus has been on optimal selection of the cluster head for the network. While some of the approaches select the cluster head according to the residual energy of the node or some other parameter of the node [166-168], others make use of a multi-hop data transmission approach [169-170] among cluster heads to prolong the lifetime of the WSN. Apart from this, many bio-inspired or swarm intelligence approaches, such as Ant Colony Optimization [171-173], the krill herd algorithm [174-175], the whale and grey wolf optimization algorithms [176-178] and the gravitational search algorithm (GSA) [179-182], have been used to optimize the performance of WSNs. In addition, the concept of the Mobile Agent (MA) has also attracted the interest of researchers. An MA is used to collect data from the cluster heads to improve the network lifetime. Such approaches have been described in [183-186], while work related to optimizing the itinerary of the MA has also been reported in the literature.
Although a lot of work has been done in the recent past on optimizing the cluster head selection process or the data transmission process, work involving emergency applications has not received much attention. This chapter presents a two-fold gravitational search algorithm based clustering protocol for IoT applications involving emergency data. The proposed clustering protocol optimizes the cluster head selection process using GSA, and data
transmission is carried out using an MA. Furthermore, the itinerary of the MA has also been optimized using GSA.
5.2 Related Work
The authors in [191] presented the concept of a threshold value of energy which is used to
avoid rotation of cluster heads in every round. Usually, the cluster heads are rotated every
round so as to balance the load among them. However, in the presented protocol, the cluster heads having energy more than the threshold value are not rotated and are kept the same for the next round. This saves the energy consumed in broadcasting advertisement packets and thereby increases the network lifetime. The same concept of saving energy by avoiding the re-broadcast of advertisement packets in each round was presented in [192]. In the presented protocol, nodes initially exchange certain lists of information in pilot rounds, which are then used for cluster head selection in subsequent rounds without re-broadcasting advertisement packets. Apart from saving energy by avoiding the rotation of cluster heads, optimization
strategies for cluster head selection have also been considered in past research works. The authors in [193] presented a hybrid cluster head selection method using harmony search and firefly optimization. While energy-efficient cluster heads were selected at the primary stage using the harmony search algorithm, the clusters formed by them were refined using firefly optimization. The cluster formation was optimized considering the density of the nodes, the compactness of the cluster formed and the energy required in cluster formation. On the other hand, the authors in [194] made use of the gravitational search algorithm for optimal selection of cluster heads. The parameters considered for optimal cluster head selection were inter-cluster distance, residual energy of the node and distance from the base station. Node spacing is one of the optimization parameters explored by the authors in [195], apart from remaining energy and total energy of the network, to optimize the cluster head selection process. The dolphin echolocation and crow search optimization algorithms were used in a hybrid manner by the authors in [196] for selecting the cluster heads.
Since the network lifetime can also be improved by optimizing the data transmission process from cluster heads to the base station, this can be seen in [197], where mobile agents were used to collect data from the cluster heads. The empower Hamilton loop based approach was used for itinerary planning of the MA. An approach of data collection using mobile data collectors has been presented in [198]. The itinerary of the mobile data collector was optimized using the Particle Swarm Optimization (PSO) algorithm [199], and the cluster heads forward the data to the mobile data collector, when it reaches the anchor points, using the space division multiple access technique. The optimization of the itinerary of the MA was also explored by the authors in [200] using a fuzzy based approach, in which the mobile agent chooses the next node to visit based on the outcome of the fuzzy rules.
5.3 Motivation
Wireless sensor networks provide support to Internet of Things applications, which is a very prominent field of research these days. WSNs consist of numerous sensors powered by small batteries, and researchers have used clustering approaches to increase their lifetime. Increasing the lifetime reduces the re-deployment cost of the sensors and makes IoT applications more successful. Clustering originated from LEACH, which defines a threshold value and a probability for each node to become cluster head. Ever since, research has been conducted to optimize the selection of the cluster head for better network performance. In some of the previous works [200-205], the threshold value of every node has been modified to obtain a better cluster head, or the probability of the node has been modified, to optimize the cluster head selection process. Apart from the cluster head selection process, the data transmission phase (sending data from cluster heads to the base station) has also seen modifications in the recent past. The authors in [206] proposed a clustering technique where the probability of the node to
become cluster head was modified using remaining energy of the node. Furthermore, the
authors used the Hamilton loop concept for MA to collect data from the cluster heads.
However, the authors have not focused on including other parameters, such as distance of the
node from the base station etc., to optimize the cluster head selection process. Authors in
[207] proposed the collection of data from the nodes using the MA and optimized the itinerary for it. However, the authors did not work on allocating the source nodes to the respective MAs in a distance-efficient way. It was also observed that researchers have not focused on specific applications of WSN or IoT that take into account emergency data sensed by the nodes; most of the clustering protocols designed had only considered normal data sensed by the nodes. Therefore, taking these shortcomings into consideration, the proposed clustering protocol is described in this chapter. The proposed protocol uses GSA to optimize the cluster head selection process and to optimize the itinerary of the MA used to collect data from the cluster heads.
In this chapter, a clustering protocol for WSNs has been designed which is expected to support an IoT application involving sensing of emergency data by the sensor nodes. For example, consider an IoT application backed by a WSN deployed for forest monitoring, consisting of sensors such as temperature and moisture sensors (to monitor the environmental conditions) and sound sensors (to monitor if someone is illegally cutting down the trees). The sound sensors have emergency data as compared to the other sensors and, upon sensing a sound of higher frequency (related to the tree cutting), can forward the data to the monitoring station, wherein a smart alarm system (IoT device) can receive the data from the sound sensors and raise an alert.
This chapter considers a homogeneous environment for the nodes in the context of their initial energy.
All the nodes are randomly deployed in the network and these nodes are not mobile.
Out of 'N' total nodes, 'em' percentage of the total nodes are assumed to have emergency data.
The nodes in the network consume energy according to the first order radio energy dissipation model [26]. The energy consumed in transmitting and receiving an L-bit packet is given by:

$$E_{tx}(L,d) = \begin{cases} L \cdot E_{elec} + L \cdot E_{fs} \cdot d^{2}, & d < d_{0} \\ L \cdot E_{elec} + L \cdot E_{amp} \cdot d^{4}, & d \ge d_{0} \end{cases}$$

$$E_{rx}(L) = L \cdot E_{elec}$$

where $E_{tx}$ is the energy consumed by the node which is sending the L-bit packet and $E_{rx}$ is the energy consumed by the node which is receiving it. $E_{elec}$ is the energy consumed per bit by the transmitter or the receiver, and $E_{amp}$ and $E_{fs}$ are the amplifier parameters of transmission corresponding to the multi-path fading model and the free-space model, respectively. $d$ is the Euclidean distance between the two communicating nodes and $d_{0}$ is the threshold distance, which is computed as $d_{0} = \sqrt{E_{fs}/E_{amp}}$.
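To make the energy terms concrete, the following is a minimal sketch of the first order radio energy model described above. The values of E_ELEC and E_FS mirror the simulation parameters listed later in this chapter, while E_AMP is an assumed placeholder value, since it is not listed here.

```python
import math

# First order radio energy dissipation model (energy in Joules, distance in metres).
E_ELEC = 50e-9       # energy per bit for transmitter/receiver electronics (50 nJ/bit)
E_FS = 10e-12        # free-space amplifier energy (10 pJ/bit/m^2)
E_AMP = 0.0013e-12   # multi-path amplifier energy (assumed value, not given in this chapter)

D0 = math.sqrt(E_FS / E_AMP)  # threshold distance separating the two propagation models

def tx_energy(bits: int, d: float) -> float:
    """Energy spent by a node to transmit a `bits`-long packet over distance d."""
    if d < D0:
        return bits * E_ELEC + bits * E_FS * d ** 2   # free-space model
    return bits * E_ELEC + bits * E_AMP * d ** 4      # multi-path fading model

def rx_energy(bits: int) -> float:
    """Energy spent by a node to receive a `bits`-long packet."""
    return bits * E_ELEC
```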
The various notations and symbols used in this chapter are shown in Table 5.1.
The proposed clustering protocol exploits the Gravitational Search Algorithm (GSA) twice (hence it is called Two-Fold GSA, TF-GSA) to optimize the performance of the wireless sensor network. The proposed protocol groups the nodes into clusters and selects an optimal cluster head for each of them, which then aggregates the data from the sensor nodes and forwards it to the server/base station with the help of Mobile Agents (MAs). In the Two-Fold GSA, the first use of GSA is made in optimizing the cluster head selection process, while the second is made in optimizing the itinerary of the MA for the data aggregation process. The protocol has two phases, namely the set-up phase and the steady phase. While the set-up phase deals with the selection of cluster heads and the formation of clusters, the steady phase deals with the data aggregation from cluster members to cluster heads and the data transmission from cluster heads to the base station using MAs.
Table 5.1: Symbols and notations used
Symbol   Meaning
G        Gravitational constant
d0       Threshold distance
L        Packet size
Initially, every node is eligible to become cluster head, such that their probability 'p' of becoming cluster head is equal. This equal probability is defined in the traditional clustering protocol LEACH. However, two nodes having equal probability may have very different characteristics. For example, one node may be located nearer to the base station while the other may be located far away; one node may have a dense neighborhood while the other may have a sparse neighborhood (this factor influences the size of the cluster formed by the node); one node may have emergency data to forward to the server/base station while the other may have sensed only normal data, which is not as important as emergency data. Taking such concerns into consideration, it is unfair to assign equal probability to the nodes to become cluster head. Therefore, the probability of a node must be adjusted in such a way that the node having better characteristics has a higher chance of becoming cluster head. Consequently, in the proposed protocol, the probability 'p' is adjusted according to the acceleration of the node computed using GSA.
According to GSA, every eligible node which can become cluster head acts as an agent. Therefore, in a network having a set of 'N' randomly distributed nodes in an area of 'M*M' sq. units, we have 'N' agents in the initialization stage of GSA. These agents have different sets of attributes, defined as follows:
$X_i^d = \{x_1^d, x_2^d, \dots, x_N^d\}$ represents their random locations in the $d$-dimension search space. Since this chapter focuses on a 2-D network, we have a set of X and Y coordinates for every node.
$E_{B_i} = \{E_{B_1}, E_{B_2}, \dots, E_{B_N}\}$ represents the bit value for the emergency data sensed by the nodes.
$Nei_i = \{Nei_1, Nei_2, \dots, Nei_k\}$ represents the set of 'k' neighbors of node 'Ni', such that at any point of time the Euclidean distance between the node and its neighbor does not exceed the communication range of the node.
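As a minimal sketch (not part of the protocol itself), the attributes above can be initialized as follows; the communication range RANGE and the emergency fraction EM are illustrative assumptions.

```python
import math
import random

# Hypothetical set-up of the node attributes described above (N nodes in an M x M area).
N, M = 200, 200
EM = 0.2        # assumed fraction of nodes carrying emergency data ('em')
RANGE = 30.0    # assumed communication range used to build the neighbor sets

positions = [(random.uniform(0, M), random.uniform(0, M)) for _ in range(N)]   # X_i, Y_i
emergency_bit = [1 if random.random() < EM else 0 for _ in range(N)]           # E_B_i

def distance(i: int, j: int) -> float:
    (x1, y1), (x2, y2) = positions[i], positions[j]
    return math.hypot(x1 - x2, y1 - y2)

# Nei_i: neighbors are the nodes lying within the communication range of node i.
neighbors = [[j for j in range(N) if j != i and distance(i, j) <= RANGE] for i in range(N)]
```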
GSA is based on Newton's law of gravity and law of motion. While the law of gravity states that each particle attracts every other particle with a certain force (known as the gravitational force), the law of motion defines the acceleration produced in the particle, which depends on the force applied to it and its mass.
Extending the law to the sensor network, each node 'Ni' attracts another node 'Nj' in its neighborhood. The force exerted between the nodes is directly proportional to the product of their gravitational masses and inversely proportional to the square of the distance between them. The force $F_{i,j}^d(t)$ acting in the $d$-th dimension, with which node 'Nj' pulls or pushes node 'Ni', is given by:

$$F_{i,j}^d(t) = G \cdot \frac{M_{p_i}(t) \cdot M_{a_j}(t)}{D_{i,j}^2(t)} \cdot \left(x_j^d(t) - x_i^d(t)\right)$$

where $G$ is the gravitational constant, $M_{p_i}$ is the passive gravitational mass of node 'Ni', $M_{a_j}$ is the active gravitational mass of node 'Nj', $x_j^d(t)$ and $x_i^d(t)$ represent the positions of nodes 'Nj' and 'Ni', and $D_{i,j}(t)$ is the Euclidean distance between the nodes. Since the network is two dimensional, the nodes exert force over their neighbors in two dimensions; thus we have $F_{i,j}^x(t)$ and $F_{i,j}^y(t)$, which represent the forces acting over the node in the X and Y dimensions. All the 'k' neighbors exert force over the node 'Ni', such that the total force exerted over the node 'Ni' is a randomly weighted sum of these forces:

$$F_i^d(t) = \sum_{j=1}^{k} rand_j \cdot F_{i,j}^d(t)$$

The passive and active gravitational masses of the nodes are computed from their fitness values as:

$$M_{p_i} = M_{a_j} = \frac{m_i(t)}{\sum_{j=1}^{k} m_j(t)} \qquad (5.9)$$
such that

$$m_i(t) = \frac{fit_i - worst(t)}{Best(t) - worst(t)} \qquad (5.10)$$

where $fit_i$ is the fitness function of the $i$-th node, $worst$ is the maximum value of the fitness function of some node in the neighborhood of 'k' neighbors when a minimization problem is considered, and vice-versa for a maximization problem. Similarly, $Best$ is the minimum value of the fitness function of some node in the neighborhood of 'k' neighbors in the case of a minimization problem, and vice-versa. Thus, for the maximization problem considered here, we have $Best(t) = \max_{j \in \{1,\dots,k\}} fit_j(t)$ and $worst(t) = \min_{j \in \{1,\dots,k\}} fit_j(t)$.
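A minimal sketch of the mass computation of Eqs. (5.9)-(5.10), assuming the fitness values of a node's 'k' neighbors have already been evaluated:

```python
def gravitational_masses(fitness: list[float]) -> list[float]:
    """Normalize raw fitness values into gravitational masses (Eqs. 5.9-5.10).

    For the maximization problem, Best is the largest fitness in the
    neighborhood and worst is the smallest.
    """
    best, worst = max(fitness), min(fitness)
    if best == worst:                      # degenerate case: all nodes equally fit
        return [1.0 / len(fitness)] * len(fitness)
    m = [(f - worst) / (best - worst) for f in fitness]   # Eq. (5.10)
    total = sum(m)
    return [mi / total for mi in m]                        # Eq. (5.9)
```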
The fitness function of the nodes depends upon the following three sub-routines or sub-fitness functions:
Number of emergency nodes in the neighborhood: The first sub-fitness function depends on the number of neighbors having emergency data to forward to the base station. If a node having a larger number of neighbors with emergency data is elected as cluster head, then the probability of emergency data getting lost from the network is reduced significantly. This is because, in such a cluster, more nodes with emergency data forward their data to the cluster head over a shorter distance, as compared to the scenario when they are in direct communication with the base station and have to transmit over a longer distance. This sub-fitness function is computed as:

$$fit_i^{emergency} = \frac{\sum \left(Nei_i \mid E_{B_{Nei}} = 1\right)}{em \cdot N} \qquad (5.13)$$
Remaining energy of the node: This is another factor that needs to be maximized to have optimal network performance. If a node (fulfilling the condition of the first sub-fitness function) has formed a cluster such that the majority of the cluster members have emergency data, then its remaining energy needs to be on the higher side too for reliable data transfer to the base station. Otherwise, all the data accumulated by a low-energy cluster head will be lost. Therefore, the sub-fitness function corresponding to the remaining energy of the node, $fit_i^{energy}$, is also to be maximized.
Energy cost of communication: This is the third parameter considered for optimal cluster head election. This parameter defines how much energy the cluster members have to spend to forward data to the cluster head, i.e. the intra-cluster communication cost, and how much the cluster head has to spend to forward data to the base station, i.e. the direct communication cost. Even if a node fulfills the first two sub-fitness functions, it cannot afford a high communication cost, which is again detrimental to the network's performance. This sub-fitness function is computed as:

$$fit_i^{cost} = \frac{\sum_{j=1}^{k}\left(E_{elec} \cdot L + E_{fs} \cdot L \cdot D_{i,j}^{2}\right)}{\sum_{j=1}^{k}\left(Nei_j \cdot E_j\right)} + \frac{E_{elec} \cdot L + E_{fs} \cdot L \cdot D_{i,BS}^{2}}{E_i} \qquad (5.15)$$
The final fitness function is the weighted sum of the three sub-fitness functions:

$$fit_i = \alpha \cdot fit_i^{emergency} + \beta \cdot fit_i^{energy} + \gamma \cdot \left(1 - fit_i^{cost}\right) \qquad (5.16)$$

where $\alpha$, $\beta$, $\gamma$ are constants whose sum equals 1. The optimal cluster head is the node that has formed a cluster with a larger number of neighbors having emergency data, has more remaining energy and has a lower energy cost of communication. Once the masses have been computed, the acceleration can be computed as:
$$acc_i^d(t) = \frac{F_i^d(t)}{M_i(t)} \qquad (5.17)$$

The nodes having a higher value of mass tend to experience the least acceleration and are considered fit. Therefore, the probability 'p' of a node to become cluster head is adjusted as:

$$p_{adj} = \frac{p}{acc_i^d} \qquad (5.18)$$
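The following is a minimal sketch of how the three sub-fitness functions, the combined fitness of Eq. (5.16) and the adjusted probability of Eq. (5.18) might be computed for one candidate node. The weights ALPHA, BETA and GAMMA, the initial probability P and the helper inputs (neighbor distances, energies, emergency bits) are illustrative assumptions, and the acceleration is assumed to have been obtained from the force and mass computations sketched earlier.

```python
ALPHA, BETA, GAMMA = 0.4, 0.3, 0.3   # assumed weights, summing to 1
P = 0.1                               # assumed initial cluster-head probability

def emergency_fitness(neigh_emergency_bits, em, n_total):
    """Eq. (5.13): share of the network's emergency nodes found in the neighborhood."""
    return sum(neigh_emergency_bits) / (em * n_total)

def cost_fitness(neigh_dists, neigh_energies, neigh_counts, d_bs, e_node, bits, e_elec, e_fs):
    """Eq. (5.15): intra-cluster cost plus direct-to-BS cost, each normalized by energy."""
    intra = sum(bits * e_elec + bits * e_fs * d ** 2 for d in neigh_dists)
    intra /= sum(nc * ej for nc, ej in zip(neigh_counts, neigh_energies))
    direct = (bits * e_elec + bits * e_fs * d_bs ** 2) / e_node
    return intra + direct

def combined_fitness(fit_emergency, fit_energy, fit_cost):
    """Eq. (5.16): weighted sum; the cost term is inverted so that lower cost is better."""
    return ALPHA * fit_emergency + BETA * fit_energy + GAMMA * (1 - fit_cost)

def adjusted_probability(acceleration):
    """Eq. (5.18): fitter nodes (lower acceleration) get a higher CH probability."""
    return P / acceleration
```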
After adjusting the probabilities of the nodes, each node generates a random number and compares it with a threshold value. If the random number is less than the threshold value, the node becomes cluster head for the current round, provided that it has not been cluster head in the last '1/p' rounds. The threshold is computed as:
$$Th(i) = \begin{cases} \dfrac{p_{adj}(r)}{1 - p_{adj}(r)\left(r \bmod \dfrac{1}{p_{adj}(r)}\right)}, & \text{if } node(i) \in G(r) \\[6pt] 0, & \text{otherwise} \end{cases} \qquad (5.19)$$
where 'r' is the current round and G(r) is the set of nodes which have not become cluster head in the last '1/p' rounds. All the elected cluster heads broadcast an advertisement packet within their communication range to their neighbors. All the neighboring nodes that receive the packet decide to join a cluster head and form a cluster with it. However, a node may receive the advertisement packet from more than one cluster head; in such a case, the node joins the cluster head for which the variance of the distance (between the node and the cluster head) is least. Let us assume that $CH_i = \{CH_1, CH_2, \dots, CH_q\}$ represents the set of 'q' cluster heads from which node 'i' has received the advertisement packet, such that $D_{i,CH} = \{D_{i,1}, D_{i,2}, \dots, D_{i,q}\}$ is the set of distances between them. The node selects its parent cluster head as:
$$CH_{parent} = CH_i \;\Big|\; Var_i = \sqrt{\frac{\sum_{i=1}^{q}\left(D_{i,CH} - A\right)^{2}}{q}} = \min\left(Var_i\right) \qquad (5.20)$$

where $A = \frac{\sum_{i=1}^{q} D_{i,CH}}{q}$ is the average value of the distance of the node from all the cluster heads and $Var_i$ represents the variance of the distance. Thus, the formation of the clusters marks the end of the set-up phase.
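A minimal sketch of one plausible reading of the joining rule in Eq. (5.20), in which a node computes the average distance A to the advertising cluster heads and joins the candidate whose distance deviates least from that average; this per-candidate interpretation is an assumption, since Eq. (5.20) does not spell it out explicitly.

```python
def choose_parent_ch(distances: dict[int, float]) -> int:
    """Pick a parent cluster head among the advertising CHs (keyed by CH id).

    distances maps CH id -> Euclidean distance between the node and that CH.
    The node joins the CH whose distance deviates least from the average
    distance A over all q candidates (one reading of Eq. 5.20).
    """
    a = sum(distances.values()) / len(distances)                  # average distance A
    return min(distances, key=lambda ch: (distances[ch] - a) ** 2)
```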
In the steady phase, the data transmission is executed between the base station, the cluster heads and the cluster
members. After the formation of the clusters, the cluster heads broadcast Time Division
Multiple Access (TDMA) schedule to the member nodes for the data transmission. All the
cluster members aggregate the sensed data at their respective cluster head in the assigned
time slots. After aggregating the data from cluster members, the cluster heads need to
forward the data to the base station. For this purpose, the MAs have been used to collect data
from the cluster heads. The most imperative step is the planning of the itinerary of the mobile agent, which decides the order in which the MA visits the cluster heads to collect data from them. The proposed protocol makes use of GSA a second time here to decide the optimal itinerary for the MA.
In order to plan the itinerary for the MA in an optimal way, various steps need to be carried out first, such as deciding the number of MAs for a particular number of source nodes (the nodes which have data to send to the base station, i.e. the cluster heads in our case) and allocating the source nodes to the MAs.
Deciding the number of MAs: In this step, the total data that needs to be collected and the free memory of the MA decide how many MAs are required for data collection:

$$R_{MA} = \left\lceil \frac{\sum_{i=1}^{p} L_{CH_i}}{FM_{MA}} \right\rceil$$

where $R_{MA}$ is the required number of mobile agents, $FM_{MA}$ is the free memory of the MA and $L_{CH_i}$ is the data packet size with the $i$-th cluster head (source node). Let $MA = \{MA_1, MA_2, \dots, MA_{R_{MA}}\}$ be the set of MAs that will be collecting data from the $p$ cluster heads. Therefore, $p/R_{MA}$ cluster heads will be allocated to each MA.
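As a small illustrative sketch of this step (the ceiling-division form is an assumption consistent with the description above):

```python
import math

def required_mobile_agents(packet_sizes_bits: list[int], free_memory_bits: int) -> int:
    """Number of MAs needed so that the collected data fits in each MA's free memory."""
    total_data = sum(packet_sizes_bits)   # sum of L_CHi over all p cluster heads
    return math.ceil(total_data / free_memory_bits)
```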
Allocating the source nodes to MAs: In traditional approaches, the allocation has been done on the basis of the amount of data held by the source nodes. According to this, the source node having the greatest (highest amount of) data to forward is allocated to the MA that has the greatest memory free/available with it. Since, in this approach, the source nodes are allocated based on the amount of data available with them, there is always an uncertainty about the distance among the allocated nodes. For instance, the source nodes allocated to a single MA may be located far away from each other, which increases the tour length of the MA and, in turn, its energy consumption.
In order to tackle this issue, a k-means clustering approach has been used with the objective of reducing the distance among the assigned source nodes. This is done by allocating source nodes that are nearer to each other to the same MA, instead of allocating them on the basis of the amount of data alone.
In the proposed allocation strategy, a random source node is first given as input to the k-means clustering algorithm with the intent of forming '$R_{MA}$' clusters. Then a cluster is formed with the node nearest to the randomly chosen source node. In the next iteration, another node is added to the cluster such that it is closest to the centroid of the cluster formed so far. The iterations continue until the number of members in the cluster reaches $p/R_{MA}$. Thus, using the k-means clustering algorithm, the source nodes can be allocated to the MAs in a distance-efficient way. At the end of this step, each MA will have to visit $p/R_{MA}$ source nodes, as sketched below.
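A minimal sketch of the distance-aware allocation described above, under the assumption that groups are grown greedily from a random seed node and capped at p/R_MA members; this is a simplified, k-means-inspired grouping rather than a full k-means implementation.

```python
import math
import random

def allocate_sources_to_mas(coords: list[tuple[float, float]], r_ma: int) -> list[list[int]]:
    """Group source-node indices into r_ma distance-compact groups (one group per MA)."""
    cap = math.ceil(len(coords) / r_ma)          # at most p / R_MA sources per MA
    unassigned = set(range(len(coords)))
    groups: list[list[int]] = []

    while unassigned:
        seed = random.choice(tuple(unassigned))  # a random source node starts a new group
        group = [seed]
        unassigned.remove(seed)
        while len(group) < cap and unassigned:
            # centroid of the group built so far
            cx = sum(coords[i][0] for i in group) / len(group)
            cy = sum(coords[i][1] for i in group) / len(group)
            # add the unassigned source node closest to the centroid
            nxt = min(unassigned,
                      key=lambda i: (coords[i][0] - cx) ** 2 + (coords[i][1] - cy) ** 2)
            group.append(nxt)
            unassigned.remove(nxt)
        groups.append(group)
    return groups
```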
Optimal itinerary planning for MA: For each set of $p/R_{MA}$ source nodes, the itinerary planning is done using the GSA algorithm. As defined earlier, the agent having a lower acceleration is considered more efficient. The fitness functions defined earlier for optimal cluster head election are changed in this step, while the rest of the computation of the masses and gravitational force remains the same. The fitness is again modelled as a combination of three sub-fitness functions:
o Type of data with the node: Since a node may have emergency data, it needs to be sent to the base station with higher priority as compared to other nodes having normal data. Therefore:

$$fit_i^{data\_type} = E_{B_i} \qquad (5.23)$$
o Energy of the node: The nodes having emergency data have higher priority than the nodes having normal data. However, among the nodes with emergency data, the ones having the minimum remaining energy left with them, or the ones having a higher energy cost of communication with the base station, have higher priority. Thus, this sub-fitness function, $fit_i^{energy}$, encompasses the remaining energy of the node as well as its energy cost of communication with the base station.
o Distance from the base station: A node having emergency data and minimum remaining energy, if located far away from the base station, is again a higher priority for the MA to visit as early as possible. Therefore, this sub-fitness function is computed as:

$$fit_i^{distance} = \sqrt{\left(X_1 - BX\right)^{2} + \left(Y_1 - BY\right)^{2}} \qquad (5.25)$$

where $X_1$ and $Y_1$ are the coordinates of the node and $BX$ and $BY$ are the coordinates of the base station. If the distance is greater, the priority to visit the node increases.
The final fitness function is again the weighted sum of the three sub-fitness functions:

$$fit_i = \alpha \cdot fit_i^{data\_type} + \beta \cdot fit_i^{energy} + \gamma \cdot fit_i^{distance} \qquad (5.26)$$

where $\alpha$, $\beta$, $\gamma$ are constants whose sum equals 1. After computing the fitness function for the nodes, the force exerted over the node, the masses of the node as well as the acceleration of the node are computed. The MA visits the node having the minimum acceleration first and the node having the maximum acceleration last. Therefore, if $acc_{CH_i}$ represents the computed acceleration, then the optimal itinerary can be represented as:
$$acc_{CH_1} \le acc_{CH_2} \le \dots \le acc_{CH_{p/R_{MA}}} \qquad (5.27)$$
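A minimal sketch of the itinerary ordering implied by Eqs. (5.23)-(5.27), under the simplifying assumption that a per-node acceleration value has already been obtained from the GSA force/mass computation; the MA then simply visits the cluster heads assigned to it in ascending order of acceleration.

```python
def plan_itinerary(accelerations: dict[int, float]) -> list[int]:
    """Order the cluster heads assigned to one MA by ascending GSA acceleration.

    accelerations maps cluster-head id -> acceleration derived from the
    itinerary fitness of Eq. (5.26); lower acceleration means higher priority,
    so those cluster heads are visited first (Eq. 5.27).
    """
    return sorted(accelerations, key=accelerations.get)
```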
Thus, when the MA visits the cluster heads, they aggregate their data at the MA, which carries it towards the base station. This marks the end of the steady phase as well as of one round. At the start of the next round, the same process is repeated and new cluster heads are elected.
For the performance evaluation, a network of 200 randomly deployed nodes was created in an area of 200*200 sq. units. The base station was considered to be located in the center of the network. The other simulation parameters used are defined in Table 5.2. The performance of the proposed clustering protocol was analyzed in terms of network lifetime and energy consumption. The comparison was done with the protocols described in [147] to check the effectiveness of the proposed protocol.
Table 5.2: Simulation parameters
Parameter   Value
BS location 100,100
Eelec       50 nJ/bit
Efs         10 pJ/bit/m2
Eda         5 nJ/bit
Figure 5.2 shows the number of alive nodes against the number of rounds for the existing and proposed clustering protocols. The number of alive nodes is an apt measure of the network lifetime. The round in which the first node dies defines the network stability period, while the round at which the last node dies defines the network lifetime. It was observed that the network stability period was least for the LEACH-GA clustering protocol (400
rounds for LEACH-GA), followed by CBRP (600 rounds) and EEUC (720 rounds). EHL had the second highest network stability period of 960 rounds, whereas the proposed clustering protocol had a network stability period of 1100 rounds. After the first node dies, the network experiences a steep decline in the number of alive nodes. However, the proposed protocol manages to achieve a more gradual decline in the number of alive nodes, which extends its network lifetime. EHL optimized the cluster head selection process taking into account the
remaining energy of the node only. It ignored the distance between the chosen cluster head and the base station, which can be an important factor in deciding the energy consumption of the cluster head in forwarding data to the base station. For the proposed clustering protocol, on the other hand, GSA was used to optimally select the cluster heads, considering parameters such as the energy cost of communication, the remaining energy of the node as well as the number of neighbors with emergency data. This enables the protocol to choose a cluster head that has higher energy and the least cost of communication. Also, the use of the MA enables multi-hop communication between the nodes, which is more energy efficient than single-hop communication.
Furthermore, a better network lifetime (because of the larger number of alive nodes) also implies that more data can be sent to the base station. This includes the normal data as well as the emergency data. Consequently, we can infer from the better results that the proposed clustering protocol provides more reliability in terms of sending more emergency data to the base station. Thus, for IoT applications involving emergency data, the proposed clustering protocol is better suited.
Figure 5.2: Comparison of Network Lifetime
Figure 5.3 shows the energy consumed in the network against the number of rounds. Initially, the network was supplied with 100 Joules of energy (0.5 Joules per node). As the rounds progress, the energy is consumed steadily, and when all the nodes die out, the energy of the network is fully exhausted. The proposed protocol, however, showed a more gradual increase in energy consumption as compared to the other protocols, which had steeper energy consumption rates. This is due to the better optimized selection of cluster heads using GSA and the multi-hop data communication process between cluster heads using the MA.
Figure 5.3: Comparison of Energy Consumption
Furthermore, a second set of simulation parameters was used to analyze and check the effectiveness of the proposed optimal itinerary planning technique against other state-of-the-art itinerary planning techniques for the MA defined in [99]. These simulation parameters are shown in Table 5.3. For this simulation, 800 nodes were randomly deployed in a bigger network having dimensions of 1000*500 sq. units. The number of source nodes which have data to forward to the base station was varied from 10 to 80, and the nodes were supplied with an energy of 2 Joules each. The performance of the network was analyzed in terms of the success rate of the MA trip and the energy consumed by the MA.
Table 5.3: Simulation parameters for the itinerary planning evaluation
Parameter     Value
BS location   500,250
MA parameters Value
Figure 5.4: Success rate of Mobile Agent Trip
Figure 5.4 shows the variation of the success rate of the mobile agent's data collection trip. The success rate is defined as the ratio of the number of MAs received back at the base station to the number of MAs dispatched by the base station for data collection. An MA is dispatched from the base station to collect data from the source nodes; if the MA is not received back at the base station, it is considered a failure. This happens when a node does not have enough energy to send the MA to the next node in the trip. It can be seen from the figure that when the number of source nodes was small, almost all of the techniques had a 100% success rate, meaning that all the mobile agents which were dispatched to gather data from the network came back
mobile agents which were dispatched to gather data from the network came back
successfully. However, as the number of nodes increase, the success rate reduces. This
happens because more number of source nodes would mean more data to be collected from
the network. It eventually increases the tour length as well resulting in the increased energy
consumption and reduced success rate of MA trip. The proposed GSA based scheme however
showed higher success rate than the other schemes even when the number of source nodes
was larger. The better success rate of TF-GSA is attributed to two factors. The first factor is the use of the k-means clustering algorithm to assign nearby source nodes to the same MA, which reduces the intra-cluster/intra-group tour length for the MA. The second factor is the use of GSA for
optimizing the itinerary of the MA, which considers the remaining energy of the nodes, their communication cost and their distance to the base station to determine the priority of the nodes to be visited by the MA. The nodes with higher priority, which include the nodes having emergency data, are visited first. The higher success rate of the MA as compared to the other schemes thus indicates more reliable data collection by the proposed scheme.
Figure 5.5 shows the energy consumed by the mobile agent in gathering the data from the source nodes, against a varying number of source nodes. The energy consumed by the mobile agent was least for the proposed scheme as compared to the other schemes, which indicates a better optimized itinerary for the mobile agent. Energy consumption is directly proportional to the square of the distance between two communicating entities. The distance between the source nodes is reduced by using a better strategy for allocating the source nodes to a particular MA (the k-means approach). Furthermore, GSA has been used to optimize the itinerary of the MA, taking into account the distance of the nodes from the base station, their remaining energy and their energy cost of communication with the base station. This leads to a better itinerary for the MA and thus to lower energy consumption for the data collection task.