Blockchain IoT Peer Device Storage Optimization Using An Advanced Time Variant Multi Objective Particle Swarm Optimization Algorithm
*Correspondence: [email protected]
Department of Computer Engineering, Kwame Nkrumah University of Science and Technology, Kumasi, Ghana. Full list of author information is available at the end of the article.

Abstract
The integration of Internet of Things devices onto the Blockchain implies an increase in the transactions that occur on the Blockchain, thus increasing the storage requirements. A solution approach is to leverage cloud resources for storing blocks within the chain. The paper, therefore, proposes two solutions to this problem: an improved hybrid architecture design which uses containerization to create a side chain on a fog node for the devices connected to it, and an Advanced Time-variant Multi-objective
Particle Swarm Optimization Algorithm (AT-MOPSO) for determining the optimal number
of blocks that should be transferred to the cloud for storage. This algorithm uses time-variant weights for the velocity of the particle swarm optimization and the non-dominated sorting and mutation schemes from NSGA-III. The proposed algorithm was compared with the original MOPSO algorithm, the Strength Pareto Evolutionary Algorithm (SPEA-II), the Pareto Envelope-based Selection Algorithm with region-based selection (PESA-II), and NSGA-III. The proposed AT-MOPSO showed better results than the aforementioned algorithms in cloud storage cost and query probability optimization. Importantly, AT-MOPSO achieved 52% energy efficiency compared to NSGA-III.
To show how this algorithm can be applied to a real-world Blockchain system, the BISS
industrial Blockchain architecture was adapted and modified to show how the AT-
MOPSO can be used with existing Blockchain systems and the benefits it provides.
Keywords: Blockchain, Internet of Things, Particle swarm optimization, Cloud storage,
Scalability
1 Introduction
Blockchain has gained tremendous traction over the past decade due to its remarkable
contribution to cryptocurrency. The emergence of Blockchain as a distributed ledger
technology led to its application in areas such as healthcare [1, 2], supply chain manage-
ment [3], education [4], real estate [5] as well as Internet of Things (IoT) [6].
© The Author(s), 2021. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits
use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original
author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third
party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the mate‑
rial. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or
exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://
creativecommons.org/licenses/by/4.0/.
Nartey et al. J Wireless Com Network (2022) 2022:5 Page 2 of 27
Blockchain varies from centralized digital databases and ledgers because it harnesses
the concept of community validation to synchronize the entries that go into the ledger.
It further works via a distributed model to replicate the updated ledger to all the nodes
and users involved in the network [7]. Blockchain is a very robust technology due to its
decentralized, transparent, secure, immutable and automated characteristics. However,
many researchers have highlighted scalability as a crucial concern affecting Blockchains
and this needs critical attention [8]. The scalability setback, which includes low through-
put, resource-intensive computations, and high latency, has dramatically hindered prac-
tical Blockchain-based applications.
Another critical concern that affects Blockchain adoption is the storage space require-
ments needed to run a Blockchain node. The space requirement is a Blockchain com-
ponent that increases daily due to the Blockchain ledger’s append-only nature [9]. The
more transactions completed, the larger the ledger size. According to a study by Statista [10], the Blockchain size of Bitcoin as of February 2021 stood at 321.32 GB, and it would take a new full node a bootstrap time of roughly four days to become part of the network. Ethereum [11], another popular Blockchain, suffers from a similar situation.
Several solutions have been proposed to handle the issue of scalability on Blockchains.
Some of these solutions include sharding (on-chain scaling) [12], state channels (off-
chain scaling), side chains with schemes as Plasma [13] for the Ethereum network, the
bloXroute technology [14] as well as Directed Acyclic Graph (DAG) [15].
These outlined issues about Blockchains become even more profound when IoT devices are integrated into Blockchains. IoT solutions require that many transactions be executed at any given time [16], but some Blockchain networks such as Bitcoin are only capable of performing 7–20 transactions per second [8]. IoT devices are limited when it comes
to the aspects of their storage and computational resources. On the other hand, rapid
improvement in computing technologies has led to the development of concepts and
systems such as Edge computing [17]. Edge computing is an extension of cloud comput-
ing [18], because it enables many devices to run applications at the edge of a network.
This is done by providing compute resources onto which devices can offload computationally intensive tasks, and by providing data storage, maintaining low latency, supporting heterogeneity, and improving the quality of service for applications that require low latency, such
as IoT applications [17]. Edge computing has been partly used to solve the scalability problem for Blockchain-IoT (BIoT) applications such that the IoT devices do not act as nodes on the Blockchain. Instead, they connect to edge computers, which act as the nodes on the Blockchain [19]. These edge/fog computing structures tend to have more computational
and storage resources, but the number of transactions produced by these IoT devices
still puts considerable pressure on them.
Some existing solutions [20] to this storage problem mainly include storing some of
the blocks produced by these edge nodes in the cloud. Most of the proposed solutions
based on this have been backed by the fact that there has been significant improvement
in the security and encryption schemes implemented on cloud platforms. To the best of
our knowledge, not much literature exists which addresses the cloud storage problem
by using optimization techniques. Xu et al. [21] formulated this problem into a multi-
objective block selection problem and solved it using a Non-dominated Sorting Genetic
Algorithm with Clustering (NSGA-C). Their approach had a few limitations, which this paper seeks to improve. First of all, when compared to NSGA-III, NSGA-C performed worse in 4 out of 5 objective functions and only outperformed NSGA-III in the last objective function. Secondly, NSGA-III has quite a long runtime, and NSGA-C has an even longer one. This is not ideal because, for just three peers, the block selection algorithm took upwards of 10 min to run, which also means that the energy consumed during the running of this algorithm was increased. When conducting this research, we set out with the quest to have an optimization scheme that performed better for all objective functions and also ran in less time than the current NSGA-C algorithm. In our investigations, we also realized that no research had been performed where MOPSO had been applied to this problem; thus, we propose this novel Advanced Time-Variant Multi-Objective Particle Swarm Optimization (AT-MOPSO).
The paper aims to take advantage of the improvements made in cloud technologies
and leverage the colossal storage availability on cloud systems to solve the storage prob-
lems faced by IoT nodes. We seek to accomplish this by storing some blocks in the cloud
and maintaining the Blockchain’s decentralized nature by leveraging different cloud pro-
viders and cloud deployment strategies such as containerization. The paper adapts our
improved hybrid IoT architecture design [22] to show how our Advanced Time-Variant
Multi-Objective Particle Swarm Optimization (AT-MOPSO) algorithm would help solve
the block selection cloud storage problem. We also applied it to the BISS architecture
[23] from one of our recent research works to improve it and further show how legacy
machines with limited capabilities can be brought closer to the Blockchain as nodes. The
main contributions of the paper are:

• An improved hybrid Blockchain-IoT architecture design that uses containerization to create a side chain on a fog node for the IoT devices connected to it.
• A novel Advanced Time-Variant Multi-Objective Particle Swarm Optimization (AT-MOPSO) algorithm for determining the optimal number of blocks that should be transferred to the cloud for storage.
• An adaptation of the BISS industrial Blockchain architecture to show how AT-MOPSO can be applied to existing Blockchain systems and the benefits it provides.

Our study primarily focuses on how the storage space on Blockchain peers can be optimized by sending some of the stored blocks to cloud storage; it does not deal directly with aspects of the network or the actual connection between devices.
The rest of this paper is organized as follows. Section 2 presents an overview of Block-
chain, edge computing, and multi-objective optimization techniques. Section 3 looks
at the proposed improvement to fog nodes and the mathematical formulation of the
multi-objective block selection problem. Section 4 outlines the details of our proposed Advanced Time-Variant Multi-Objective Particle Swarm Optimization (AT-MOPSO) algorithm, and the results of our scheme in comparison with other multi-objective optimization schemes are shown in Sect. 5 for all objective functions, as well as the energy-saving benefits. Section 6 shows a real-world application of the AT-MOPSO algorithm, and Sect. 7 concludes the paper and provides recommendations for future work.
2 Related work
2.1 Blockchain
To better understand the structure and nature of Blockchain systems, it is helpful to look at the decomposed layers of Blockchains, as shown in Fig. 1. The figure consists of the data, network, consensus, ledger topology, contract, and application layers [20].
The data layer performs the function of data encapsulation so that the data generated
from a Blockchain application or transaction is verified and hidden in a block. Such a
block is linked to the previous block by its header with a hash value. The process results
in an ordered chain of blocks replicated among all nodes on the Blockchain [12]. The
replication process occurs in the network layer, whereby the generated blocks are propa-
gated to all nodes in the Blockchain [24]. Since the Blockchain is seen as a decentralized
network of nodes, it is modeled as a P2P network where peers act as participants and
provide storage for the distributed ledger of blocks.
Consensus algorithms and schemes help maintain the integrity of the data on the Blockchain. The nodes on the Blockchain verify each propagated data block and ensure that it is a valid block before it is added to the ledgers on all nodes. The three primary consensus schemes that have seen widespread usage are Proof-of-Work (PoW) [10], Proof-of-Stake (PoS) [25, 26], and Practical Byzantine Fault Tolerance (PBFT) [27].
The last decade has also seen an increase in the research and development of cloud-based architectures and technology. The high availability of computing resources and storage, together with highly reliable, high-performance network infrastructure, has led to more applications being run on the cloud [28]. The trend also led to a second wave that seeks to move these high-performance systems from the cloud to the edge of networks [20].
The shift to the edge is mainly being done to support applications that require little to
no delay in their operations. Such applications include IoT implementations [17, 29] and
Virtual Reality applications [30], etc. The adoption of edge/fog computing to support
Blockchain and IoT applications’ integration is due to the resource limitations on IoT
devices and the low latency communication usually required in their operations.
From the research that we have conducted, we realized that not much work had been done concerning the ledger topology layer of the Blockchain system. This is essential because this layer is required to store the authenticated blocks produced by the consensus layer. Large storage capacities are needed for this. The issue is very profound
when Blockchain-IoT integrations and applications are considered, mainly because of
the storage limitations on these IoT devices. Fog Computing structures used for IoT
implementations take a lot of this load off IoT devices, but the large amounts of data
produced by the IoT devices can still put high demands on these fog nodes.
Palai et al. [31] proposed an approach where a block would summarize the transac-
tions of several consecutive blocks. Then this summary block would be used to make a
net change on those blocks, thereby reducing the storage footprint. The only problem
with this approach is that if only a few blocks are summarized, then the summary block's size is not any smaller than the set of consecutive blocks it replaces. The authors in [32] also proposed an architecture to solve Blockchains' storage problems using a class of erasure codes known as 'Fountain codes'. This architecture enables a full node on the Blockchain to encode
blocks that other nodes have validated into a more compressed structure made up of dif-
ferent blocks, reducing the storage space needed.
Yang et al. [33] proposed a dual storage solution that uses both an on-chain and off-
chain approach. Their approach was used for a fruit and vegetable traceability applica-
tion with IoT devices serving as input devices. The public information about products
was stored in a relational database, and the private information about the products was
sent over to the Blockchain. This approach is also efficient since less data would be sent over to the Blockchain. The downside of such an approach is that if there is data loss on the relational database side, that data cannot be retrieved, and the private information sent to the Blockchain would be left without context.
Kumari et al. [34] also proposed an off-chain approach intended for use in a smart city application with IoT devices acting as smart meters and sensors. The off-chain solution uses the InterPlanetary File System (IPFS) [34], a peer-to-peer hypermedia protocol, to
store only transactions related to the devices off-chain and links the off-chain records to
the main Ethereum Blockchain, thereby reducing storage requirements and costs.
2.2 Containerization
Another trend that has caught on with the popularization of cloud and edge/fog com-
puting is the concept of containers. Fog computing, which is known to solve the latency
issue for IoT devices, works by deploying a set of distributed servers and compute
resources at the edge of the network. The infrastructure helps avoid data transmission
over long distances, thus avoiding propagation and processing delays [29]. Complex configurations and different software environments have to be set up on fog servers to fulfill the demands of various applications. Containerization has been used in such cases to help solve the problems of software deployment, configuration, and migration [19]. The process is achieved by bundling all the relevant source codes with
their respective library requirements into encapsulations known as containers which can
be easily deployed on the fog servers.
Some researchers have proposed systems to use containerization orchestrations with
Blockchain and IoT applications, but not much literature exists on this topic. Cui et al.
[6] proposed a Blockchain-based containerization scheme for an aspect of IoTs known
as the Internet of Vehicles (IoV). Their implementations focused on the container sched-
uling policies for Directed Acyclic Graphs (DAGs), which determined how many con-
tainers should be running to effectively manage the resources on the fog servers that
the vehicles were utilizing. In [35], the authors proposed a Blockchain system based on
Distributed Hash Table (DHT) called LightChain. The Blockchain implementation was
deployed on a single machine using a docker container, and this was done to show its
lightweight nature. The individual nodes of the Blockchain were run as separate threads
in the container.
2.3 Multi‑objective optimization
Multi-objective optimization is defined as the process of obtaining a suitable vector of variables from a feasible region defined by a set of constraints. The goal is to find the vector of variables such that a vector of objective functions is minimized or maximized. It can be expressed as follows:

  min F(x) = [f_1(x), f_2(x), . . . , f_m(x)],  s.t. g(x) ≤ 0   (1)

In this case, x represents the vector of design variables, each value of f_i, for i (1 ≤ i ≤ m), representing an objective function that must be optimized; g(x) is the constraint vector, and m is the number of objective functions in question.
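As a minimal illustration of (1), the Python sketch below evaluates the objective vector F(x) and checks feasibility against g(x) ≤ 0. The particular objective functions and constraint here are invented for illustration; they are not taken from the paper.

```python
def F(x):
    """Objective vector F(x) = (f1(x), f2(x)) for an illustrative
    two-objective problem; both objectives are to be minimized."""
    f1 = sum(xi ** 2 for xi in x)            # e.g., a cost-like objective
    f2 = sum((xi - 2) ** 2 for xi in x)      # e.g., a conflicting objective
    return (f1, f2)

def feasible(x):
    """Constraint g(x) <= 0; here, illustratively, all variables must be >= 0."""
    return all(-xi <= 0 for xi in x)
```

The two objectives conflict (one is minimized at x = 0, the other at x = 2), which is exactly the situation where a single optimum does not exist and a Pareto set is needed.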
Evolutionary algorithms have been used widely in research to solve multi-objective
problems, and the general term used for these is Multi-Objective Evolutionary Algo-
rithms (MOEA) [36]. Such algorithms possess parallel efficiency, robustness, and
versatility when applied to complex optimization scenarios. Ridha et al. [37] used multi-
objective optimization to solve the problem of standalone photovoltaic system design.
Their research aimed to optimize the output power obtained from a given system using
storage battery capacity, mathematical models of the different types of PV modules,
and environmental criteria. Cai et al. [24] also proposed a sharding scheme based on
Fig. 2 Proposed system for running a side chain on a fog node using containers and microservices
not employ a traditional service-oriented monolithic architecture [39], and the nodes
of the Blockchain would not be the IoT peer devices. The way the side chain would
operate would be by using microservices [6]: multiple microservices would run as containers on the fog node and act as nodes on the side-chain. These containers would take up the mining activities from the IoT devices.
Each IoT device would be randomly assigned to a node in the side-chain (i.e., a container). Separate containers would not be created for each IoT device; rather, a pool of IoT devices would be assigned to a node at a time. The depiction of this proposed update is shown in Fig. 2.
Only a limited amount of delay is introduced for message propagation through the side-chain, ensuring high throughput of transactions as a result of the small number of container nodes. This architecture would be best suited for high-transaction, high-performance IoT implementations.
As mentioned earlier, the Blockchain's storage requirements on the fog node are quite intensive, because even a low-throughput Blockchain such as Bitcoin has a ledger size of about 321.32 GB, while a high-throughput Blockchain such as Ripple has a ledger size of up to 9 TB [32]. Our proposed system curtails this challenge by moving blocks from fog nodes to the cloud for storage, as shown in Fig. 2.
3.2 Problem statement
Based on our proposed model, the question that arises is which blocks should be moved to the cloud, and how many of those blocks should be moved, to ensure the smooth and efficient running of the IoT devices connected to it. This problem is formulated, and the appropriate mathematical models are presented, in this section of the paper.
Suppose we have several fog servers acting as fog nodes or peers for the IoT devices connected to them, S = {s_1, s_2, . . . , s_m}, where s_i ∈ S denotes a single server and m the total number of fog nodes being considered. For each fog node/peer, we need to select some blocks to be sent to the cloud to alleviate that peer's storage pressure. The blocks in each fog node can be represented by b_1, b_2, . . . , b_N, where N is the number of blocks mined by the peer at any given time. The number of blocks that would be taken to the cloud can be represented by M_w (1 ≤ M_w ≤ N), where w corresponds to s_i ∈ S, as shown in Fig. 3.
Once M_w blocks are sent to the cloud, the blocks in the fog peer are renumbered such that b_{Mw+1} then becomes b_1, and so on.
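The offloading and renumbering step described above can be sketched as follows. This is a minimal illustration; the list-based block representation and the function name are assumptions, not part of the paper.

```python
def offload_blocks(blocks, m_w):
    """Split a peer's chain: the first m_w blocks (b_1 .. b_{M_w}) go to the
    cloud, and the remainder are renumbered so b_{M_w + 1} becomes the new b_1."""
    assert 1 <= m_w <= len(blocks), "M_w must satisfy 1 <= M_w <= N"
    cloud_blocks = blocks[:m_w]        # blocks transferred to cloud storage
    local_blocks = blocks[m_w:]        # local_blocks[0] is the new b_1
    return cloud_blocks, local_blocks
```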
Fig. 3 A representation of the blocks present in a fog node where Mw blocks would be transferred
3.2.1 Query probability
The query probability for the blocks in a fog node is based on the query frequency F(t) for the type of Blockchain-IoT application being implemented: a constant query frequency and a linearly decaying one can be modeled as shown in (2) and (3), respectively.

  F(t) = F_0   (2)

  F(t) = F_0 − α_2·t.   (3)

The value represented as t for every block is tightly coupled to when that block was generated. Thus, with the addition of every new block, t is increased by 1. This means that the first generated block in the set has a t value of 0, as shown in (5). In the eventual scheme of events, the first generated block would be the last block in the arrangement, given by b_N, as shown in Fig. 3.

  ⇒ t = 0, F(t) = F_0.   (5)
The query probability for the blocks in a fog node can be represented by P_{b1}, P_{b2}, P_{b3}, . . . , P_{bMw}, . . . , P_{bN}. The query probability for the various blocks can be found by (6). It must be noted that the block b_N would have both the query frequency and query probability of F_0, since it was the first block created.

  P_{bj} = ∫_0^{N−j} F(t) dt,  1 ≤ j ≤ N − 1;   P_{bN} = F_0.   (6)
Based on (6), we can calculate the sum of all the query probabilities for all the blocks of a fog node, as shown in (7). This sum can be used to normalize the values of the query probabilities, represented by P′_{b1}, P′_{b2}, . . . , P′_{bN}, as shown in (8).

  P_sum = Σ_{j=1}^{N−1} P_{bj} + F_0.   (7)

  P′_{bj} = (1 / P_sum) ∫_0^{N−j} F(t) dt,  1 ≤ j ≤ N − 1;   P′_{bN} = F_0 / P_sum.   (8)
Thus, after all the query probabilities of the blocks have been found, the overall query probability for the fog node, based on the number of blocks M_w to be sent to the cloud, can be found. This is achieved by finding the sum of all the normalized query probabilities up to the M_w-th block. For fog node s_i, the overall query probability is denoted by P_{si}, as shown in (9).

  P_{si} = Σ_{j=1}^{Mw} P′_{bj},  1 ≤ i ≤ m.   (9)
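Equations (6)–(9) can be sketched in Python for the constant query-frequency case F(t) = F_0 of (2). The function names are illustrative, not from the paper.

```python
def query_probabilities(n_blocks, f0=0.95):
    """Normalized query probabilities P'_bj for one fog node, assuming the
    constant query-frequency model F(t) = F0 of Eq. (2)."""
    # Eq. (6): P_bj = integral of F(t) over [0, N - j] = F0 * (N - j), and P_bN = F0
    p = [f0 * (n_blocks - j) for j in range(1, n_blocks)] + [f0]
    total = sum(p)                       # Eq. (7): normalization constant
    return [pj / total for pj in p]      # Eq. (8): normalized probabilities

def node_query_probability(p_norm, m_w):
    """Eq. (9): overall query probability if the first M_w blocks are offloaded."""
    return sum(p_norm[:m_w])
```

Because the first-listed blocks carry the largest normalized probabilities, offloading more blocks strictly increases this objective, which is why it is minimized in the formulation that follows.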
3.2.2 Storage cost
The storage cost deals with the cost of storing the blocks in the cloud. The storage cost
for cloud storage is assumed to be the same for all the fog nodes for the sake of simplicity. The size of one block for each fog node is represented by C. Different Blockchains have different sizes for the blocks that are generated. Thus, the Blockchain being used must be considered, and the block size of an individual block must be known. For example, the Bitcoin Blockchain is known to have blocks with a size of 1 MB [40], the size of blocks on the Hyperledger Fabric Blockchain can be adjusted as needed [41], and the size of a block on the Ethereum Blockchain varies based on the gas limit [11].
The storage cost is considered a linear function as shown in (10), which is the total size
of all blocks moving from the fog node to the cloud. This linear function is governed by
a factor k representing the ratio of the cost of cloud storage compared to local storage.
Thus, when k has a small value, it means that cloud storage is cheaper than local storage
options for the fog node, and vice versa. This value is based on the cloud service provider used by a fog node and the type of local storage available on the fog node, such as Hard Disk Drives (HDDs), which are relatively cheap but slow, or Solid-State Drives (SSDs), which are more expensive but faster.
  cost = kC × Σ_{w=1}^{m} M_w,   (10)
where m = |S|.
The individual local space occupancy for each fog node can be denoted by Qsi ,
expressed in (12). This value is based on the number of Mw blocks that are sent to
the cloud. The overall local space occupancy, Q , of all fog nodes can be expressed as a
weighted sum based on their assigned weights βsi as shown in (13).
  Q_{si} = e^{(N−Mw)/N} / (e − 1).   (12)
  Q = Σ_{i=1}^{m} Q_{si} × β_{si}.   (13)
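A minimal sketch of the storage cost (10) and the space occupancy (12)–(13). The exponential occupancy form Q_si = e^((N−Mw)/N)/(e − 1) is our reading of (12), and the function names are illustrative.

```python
import math

def storage_cost(m_ws, k, block_size):
    """Eq. (10): cost = k * C * (total number of blocks offloaded by all nodes).
    m_ws: list of M_w per fog node; k: cloud-vs-local cost ratio; block_size: C."""
    return k * block_size * sum(m_ws)

def local_occupancy(m_w, n_blocks):
    """Our reading of Eq. (12): local occupancy falls as more blocks are offloaded."""
    return math.exp((n_blocks - m_w) / n_blocks) / (math.e - 1)

def overall_occupancy(m_ws, n_blocks, betas):
    """Eq. (13): weighted sum of the per-node occupancies with weights beta_si."""
    return sum(local_occupancy(m, n_blocks) * b for m, b in zip(m_ws, betas))
```

Note the tension the optimizer must resolve: offloading more blocks lowers occupancy (12) but raises cloud cost (10) and query probability (9).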
3.3 Multi‑objective formulation
Based on the objective functions expressed earlier in the query probability, the storage
cost, and the local space occupancy, the block selection problem can be formulated as
a minimization multi-objective problem. This work’s primary goal is to minimize the
objective functions while taking as many blocks as possible to be stored in the cloud.
Thus, all objective functions would be minimized, as shown in (14)–(16).
  min query probs ⇒  min Σ_{j=1}^{Ms1} P′_{bj}, . . . , min Σ_{j=1}^{Msi} P′_{bj}, . . . , min Σ_{j=1}^{Msm} P′_{bj}.   (14)
From (14), it can be seen that there will be m fog nodes and thus m objective functions for the query probabilities, i.e., one for each fog node. The objective functions for the storage cost and the local space occupancy are also minimized, as shown in (15) and (16).
  min kC × Σ_{w=1}^{m} M_w.   (15)
  min Σ_{i=1}^{m} Q_{si} × β_{si}.   (16)
1 ≤ Mw ≤ N , Mw ∈ N. (17)
Thus, for every set of m fog nodes, there will always be m + 2 objective functions to satisfy at every point in time. Users and operators of Blockchain-IoT applications can always place constraints on these objective functions and on the individual variables used in them. The constraints can be represented by γ_1, γ_2, . . . , γ_{m+2}, as shown in (18), and are solely the decision of the operator or the user. It must also be noted that the number of blocks that can be taken to the cloud must always be an integer and not a decimal number; thus, M_w cannot be less than 1, as shown in (17).

  min Σ_{j=1}^{Ms1} P′_{bj} ≤ γ_1
  ⋮
  min Σ_{j=1}^{Msm} P′_{bj} ≤ γ_m
  min kC × Σ_{w=1}^{m} M_w ≤ γ_{m+1}
  min Σ_{i=1}^{m} Q_{si} × β_{si} ≤ γ_{m+2}.   (18)
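Putting the pieces together, the m + 2 objective values of (14)–(16) for one candidate solution can be assembled as below. The function and parameter names are assumptions for illustration, and the occupancy term uses our reading of (12).

```python
import math

def objective_vector(m_ws, p_norms, k, block_size, n_blocks, betas):
    """Assemble the m + 2 objective values of (14)-(16) for one candidate
    solution m_ws = [M_s1, ..., M_sm] (blocks each fog node offloads).
    p_norms: per-node lists of normalized query probabilities P'_bj."""
    objs = [sum(p[:m]) for p, m in zip(p_norms, m_ws)]        # (14): m query probs
    objs.append(k * block_size * sum(m_ws))                   # (15): storage cost
    objs.append(sum(math.exp((n_blocks - m) / n_blocks) / (math.e - 1) * b
                    for m, b in zip(m_ws, betas)))            # (16): occupancy
    return objs
```

A candidate is then feasible when every entry of this vector is at most its corresponding γ_i from (18).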
The objective functions and the multi-objective problem formulation that have been outlined in this section would be solved using the proposed advanced time-variant multi-objective particle swarm optimization approach outlined in Sect. 4.
4 Methods
4.1 Particle swarm optimization (PSO) algorithm
In general, the particle swarm optimization algorithm is a widespread evolutionary algorithm that has been used extensively to solve single-objective optimization problems [42]. The algorithm is a random search method based on a behavioral pattern that simulates the way a swarm of birds forages and flocks together. It has aspects that influence it based on the individual and social behavior of the birds (i.e., particles) in the swarm.
The PSO algorithm works in stages that include initialization; searching and updating the best positions and values; and converging on the best search results. All these processes are done over several iterations. The initialization phase of PSO is when a random set of particles or solutions to an optimization problem is generated, which is supposed to be representative of a swarm of birds. In the block selection problem, the initialization stage would give the random solutions for the number of blocks, represented by Hf, that should be sent to the cloud. The selection is based on the constraint of the number of blocks N each peer can hold, as shown in (19), and a given population size, Pop, which the user provides.
The objective functions are also evaluated at this stage using the generated particles, and
the results obtained from the objective functions are used to judge the fitness of the solu-
tions. The next stage of searching and updating the solutions or particles’ positions and
values occurs by altering the search direction of the solutions to move them to the best
solution in the search space. These alterations and adjustments are made based on the two extremums: the personal best solution (P_k^i) of each particle i and the global best solution of the whole swarm (P_k^g). This is done in such a way that each particle in the swarm adjusts its position and direction to move toward the best result or solution that has been found and recorded by the whole swarm, which ensures that at the end of K iterations, all solutions would be converging toward the best solution. The adjustments made to the velocity, v_k^i, and position, x_k^i, of each particle i during every kth iteration are shown in (20) and (21).

  v_{k+1}^i = ω × v_k^i + c_1·r_1·(P_k^i − x_k^i) + c_2·r_2·(P_k^g − x_k^i).   (20)

  x_{k+1}^i = x_k^i + v_{k+1}^i,  1 ≤ k ≤ K,  1 ≤ i ≤ Pop.   (21)
From (20) and (21), ω represents the inertia weight, which works with the velocity to control the calculation speed and quality, i.e., it makes the swarm converge faster or slower. r1
and r2 represent random values that help incorporate and model the aspect of environ-
mental interference that birds in a swarm may face (0 ≤ r1 , r2 ≤ 1) . c1 and c2 are known
as the acceleration coefficients or learning factors [43]. Each iteration produces a varia-
tion in Pki , which is the personal best solution of a particle until the maximum number of
iterations is reached.
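A minimal Python sketch of the update rules (20)–(21) for a single particle follows. The default parameter values here are placeholders for illustration, not the paper's settings.

```python
import random

def pso_step(x, v, p_best, g_best, omega=0.7, c1=2.0, c2=2.0, rng=None):
    """One velocity/position update per Eqs. (20)-(21) for a single particle.
    x, v: current position and velocity (one entry per dimension);
    p_best: this particle's best position; g_best: the swarm's best position."""
    rng = rng or random.Random(0)
    v_new, x_new = [], []
    for xi, vi, pi, gi in zip(x, v, p_best, g_best):
        r1, r2 = rng.random(), rng.random()     # random factors in [0, 1]
        vn = omega * vi + c1 * r1 * (pi - xi) + c2 * r2 * (gi - xi)   # Eq. (20)
        v_new.append(vn)
        x_new.append(xi + vn)                                         # Eq. (21)
    return x_new, v_new
```

When a particle already sits at both its personal and the global best, the update leaves it in place, which is consistent with (20) reducing to pure inertia there.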
The significant advantage of the PSO algorithm is its fast convergence [42], but its major setback is that it is notoriously known to get trapped in local optima, which sometimes results in excessive searching.
A term I(k) is introduced into the cosine function to adjust the period, and it changes on every iteration. To keep track of the stage at which the PSO algorithm is in, another variable a is introduced to help update I(k). Based on experiments performed in [46], the values chosen for the a_i values are as follows: a_1 = 3, a_2 = 3/4, and a_3 = 29/16. A user can specify the initial inertia weight ω_ini and the final inertia weight ω_fin, but it is proposed that these be set to ω_ini = 0.4 and ω_fin = 0.9.
Thus, the concept of Pareto optimality and dominance is introduced. This helps to tell whether one particle is better than another: a particle dominates another only if it is at least as good in every objective, not just one or two, and strictly better in at least one.
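That all-objectives notion of dominance can be written directly as a standard Pareto-dominance test for minimization (the function name is ours):

```python
def dominates(f_a, f_b):
    """True if objective vector f_a Pareto-dominates f_b (all objectives
    minimized): no objective is worse, and at least one is strictly better."""
    return (all(a <= b for a, b in zip(f_a, f_b))
            and any(a < b for a, b in zip(f_a, f_b)))
```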
In MOPSO, apart from the population updated during every iteration, an external archive keeps track of non-dominated solutions (i.e., solutions giving the number of blocks to be sent to the cloud for each peer). These archive elements are referred to as the Pareto set, and the eventual goal is to have a Pareto-optimal set. Elements are added to this archive until it is full, then the archive is sorted and only the very best solutions are kept. This is where the update would be made to MOPSO for this paper.
When it comes to the global best search result, there would be more than one opti-
mal solution because there are multiple objective functions described in Sect. 3. Thus,
a global leader is chosen and used to adjust all other solutions. The flow chart for the
MOPSO algorithm is shown in Fig. 4.
It must be noted that the actual values placed in the population H(0), the external
archive Hf(k), and the mutant population (popm) would all have to be integers, since
these solutions represent the number of blocks Msi for each fog node that would be sent
to the cloud. Thus, an extra step of ensuring that all solutions are converted to integers
must be guaranteed across the whole algorithm for this use case.
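A minimal sketch of such an integer-repair step might look like this; the rounding rule and the clipping bounds are our assumptions, as the paper only states that the solutions must be integers:

```python
def repair(solution, n_blocks=200):
    """Round each entry to the nearest integer and clip it to the feasible
    range [0, n_blocks], i.e., the number of blocks a fog node can offload."""
    return [min(max(round(x), 0), n_blocks) for x in solution]
```

Applying this after every velocity/position update and mutation keeps the whole swarm integer-valued.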
5 Experiments and results
The AT-MOPSO algorithm is applied to the objective functions outlined in Sect. 3 of
this paper, shown in (14)–(18). When the maximum number of iterations is reached, the
external archive Hf which contains the final Pareto set of non-dominated solutions, is
filtered using the constraints set for each objective function γ1, γ2, …, γm+2. If any of
the solutions has a corresponding objective function value less than γi, it is filtered out
and removed from the external archive, so that the remaining set Hf∗ contains only the
solutions that satisfy all constraints.
After the filtration step, we would still be left with several Pareto-optimal solutions, but
we need the best solution out of the set Hf∗ to answer our research question of how
many blocks for each fog node must be sent to the cloud. This was done by iteratively
going through each solution in Hf∗ and finding the minimum weighted sum of all objec-
tive functions using (23), where δl (1 ≤ l ≤ m + 2) represents the weight assigned to each
individual objective function.
$$WS = \sum_{j=1}^{m} \delta_j P_{b'_j} + \delta_{m+1}\,\mathrm{cost} + \delta_{m+2} Q. \quad (23)$$
To demonstrate our AT-MOPSO algorithm's potency, we ran it and compared the
results for the different objective functions against the original MOPSO, the time-variant
MOPSO (denoted I-MOPSO), and NSGA-III (denoted NSGA3). The experiments
Nartey et al. J Wireless Com Network (2022) 2022:5 Page 19 of 27
Table 2 List of parameters used in the experiment for the different algorithms
Parameter Value
F0 0.95
N 200
βs1 0.3
βs2 0.2
βs3 0.5
γ1 0.45
γ2 0.45
γ3 0.45
γ4 3.0
γ5 0.6
δ1 0.16
δ2 0.16
δ3 0.16
δ4 0.12
δ5 0.4
used for testing the algorithms were done on a computer with an Intel Xeon E5 pro-
cessor running at 2.9 GHz and 32 GB of RAM (the full specifications of the computer
are shown in Table 1).
To simplify the experiment, we assumed a fixed query frequency F0 of 0.95. The
set value helps to depict a typical use case involving IoT fog nodes connected to a
Blockchain with IoT devices being used in a traceability application such as [33] as
described in Sect. 3. Three fog node servers were selected; thus, the set S = {s1 , s2 , s3 }
and the number of blocks to be sent to the cloud for each fog node is Ms1 , Ms2 , and Ms3
respectively, making m = 3 . For simplicity, the total number of blocks N that can be
stored for each fog node was taken to be 200 at a 1 MB size for each block. The local
space occupancy weights for the individual fog nodes were set as βs1 , βs2 , and βs3 . The
constraints for the five objective functions (i.e., m + 2) were specified as γ1 , γ2 , γ3 , γ4 ,
and γ5 . Also, the weights δ1 , δ2 , δ3 , δ4 and δ5 of each objective function for the weighted
sum computation in (23) were also specified. A complete list of all parameters used for
the experiment is specified in Table 2.
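For reference, the parameter set in Table 2 can be collected in a single configuration dictionary; the key names below are our own labels, not identifiers from the paper's implementation:

```python
# Experimental parameters from Table 2 (three fog nodes, m + 2 = 5 objectives).
params = {
    "F0": 0.95,                              # fixed query frequency
    "N": 200,                                # blocks per fog node (1 MB each)
    "beta": [0.3, 0.2, 0.5],                 # local space occupancy weights per fog node
    "gamma": [0.45, 0.45, 0.45, 3.0, 0.6],   # constraints for the m+2 objectives
    "delta": [0.16, 0.16, 0.16, 0.12, 0.4],  # weighted-sum weights used in (23)
}
```

Note that the weighted-sum weights sum to 1, with the heaviest weight (0.4) on the local space occupancy objective.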
For the implementations, the total number of iterations K was set to 200. For each iter-
ation, averages of the solutions in the external archive were computed, and graphs plot-
ted for F1, F2, F3, F4, and F5, representing each objective function as shown in Fig. 7. It
must be noted that it takes some time for the algorithms to run for the given problem.
Fig. 7 Graphs showing the performance on the different objective functions (i.e., F1, F2, F3, F4 and F5)
when AT-MOPSO was used
This problem is not being run to the point of full convergence but rather to find the algo-
rithm that can minimize the objective functions as much as possible in the run time.
The graph in Fig. 7 shows the results when the AT-MOPSO was compared to the
original MOPSO algorithm as well as NSGA-C (the Non-Dominated Sorting Genetic
Algorithm with Clustering). There is little existing research in the direction of using
optimization techniques to solve this cloud storage block selection problem. The most
recent scheme that has been used to solve it is the NSGA-C proposed by Xu et al. [21],
which is based on the NSGA-III algorithm. The results clearly show that our proposed
scheme (AT-MOPSO) outperformed NSGA-C and the original MOPSO algorithm on
the first four objective functions (i.e., F1–F4). It can also be observed that for the
objective function F5, AT-MOPSO performs slightly worse than NSGA-C and the
original MOPSO. However, the advantage AT-MOPSO and other PSO algorithms have
over NSGA-III is that they converge at a faster rate. Thus, although AT-MOPSO is
slightly worse on F5 in comparison with NSGA-C, it still has an edge over NSGA-C for
the aforementioned reasons.
The benefits of the AT-MOPSO can be further elaborated by considering the energy
efficiency or energy-saving analysis of the two algorithms.
We performed further investigations into how our proposed scheme compares to
some of the common and popular Multi-Objective algorithms. In Fig. 8, we show our
results when our developed scheme was compared to SPEA-II (Strength Pareto Evo-
lutionary Algorithm) [49], PESA-II (Pareto Envelope-based Selection Algorithm with
region-based selection) [50] and NSGA-III. Our scheme also outperforms the SPEA-
II and PESA-II algorithms, with SPEA-II performing slightly better than PESA-II for
the same block selection problem using the same parameters.
It can also be seen in Fig. 8 that our developed scheme (AT-MOPSO) is on par with
NSGA-III in most of the objective functions (i.e., F1–F4), even marginally out-
performing NSGA-III on objective function F4. AT-MOPSO is also outperformed by
Fig. 8 Graphs showing how the AT-MOPSO algorithm performs in comparison to other well-known
multi-objective optimization schemes
NSGA-III in the results for objective function F5. For the same aforementioned reasons
as with NSGA-C (which is based on NSGA-III), AT-MOPSO should still be selected due
to its faster run time and convergence rate.
6 Discussion
We discuss the potency of our developed AT-MOPSO algorithm by taking a critical
look at an energy efficiency comparison with other multi-objective optimization algo-
rithms. All the specifications of the computing resources used for this comparison are
shown in Table 1. Shukla et al. [46] used the power consumption model for CMOS
(Complementary Metal Oxide Semiconductor) logic circuits to analyze dynamic
energy consumption on microprocessors. They considered that for microprocessors,
the capacitive power or the dynamic energy consumption is the most significant fac-
tor, and thus they expressed it mathematically as shown in (24):
$$E_{busy} = B \times \sum_{i=1}^{n} v_i^2 \times f_r \times t_i$$
$$E_{idle} = B \times \sum_{i=1}^{n} v_{i,lowest}^2 \times f_{r,lowest} \times t_{i,idle} \quad (24)$$
$$E_{total} = E_{busy} + E_{idle}$$
where B is the constant parameter related to the dynamic power (based on DVFS—
Dynamic Voltage and Frequency Scaling) for a given CPU, vi is the supply voltage at
which the processor is regulated, fr is the clock frequency of the microprocessor, and
ti is the time for which a microprocessor runs a task.
Table 3 Results for the runtimes in seconds for different algorithms compared
Optimization algorithm Runtime (s)
MOPSO 371.8
NSGA-C 872.4
AT-MOPSO 384.2
NSGA3 758.8
Fig. 9 Results of energy consumption of the different optimization algorithms (y-axis: Power Consumption
(W); x-axis: MOPSO, AT-MOPSO, NSGA-III, NSGA-C)
There were two considered cases: when the processor is in an idle state (Eidle) and
when the processor is running a task (Ebusy). The times for which the various algo-
rithms run the 200 iterations were recorded and are shown in Table 3. It can be seen
from the results that the AT-MOPSO runs in about half the time it takes the NSGA-III
algorithm to complete the 200 iterations with the given parameters. The time it takes
to complete the same number of iterations using NSGA-C (the most recent scheme
used to solve the block selection problem) is even more than that of NSGA-III. Xu
et al. [21] considered their scheme to be a better approach because it outperformed
NSGA-III on the F5 objective function, which represents local space occupancy
(arguably the most important objective).
We have succeeded in showing that our scheme outperforms NSGA-C on all other
objective functions and performs almost as well as NSGA-C on objective function F5,
while running in less than half the time it takes to run NSGA-C.
The recorded execution times were fed into (24) to calculate the energy consumed
during the runtime of each algorithm. The results for the energy consumed executing
each of the algorithms are shown in the graph in Fig. 9. The algorithms' energy
consumption is an essential factor to consider because this is a task the processor of the
fog node would be running on a persistent basis, and having an optimization algorithm
that can converge very fast and use less energy is an excellent feature to have.
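A minimal sketch of this computation, using the busy-state term of (24) with the runtimes from Table 3, is shown below. The constant B, the supply voltage, and the clock frequency are placeholder values for illustration, not measurements from our setup; since B, v and fr are fixed on a given machine, Ebusy scales linearly with runtime:

```python
def busy_energy(B, v, fr, t):
    """Dynamic energy while running a task: E_busy = B * v^2 * fr * t (Eq. 24)."""
    return B * v ** 2 * fr * t

# Placeholder CPU parameters (illustrative only, not measured values).
B = 1.0e-9   # DVFS-related constant for the CPU
v = 1.2      # supply voltage in volts
fr = 2.9e9   # clock frequency in Hz

# Runtimes in seconds for 200 iterations, from Table 3.
runtimes = {"MOPSO": 371.8, "AT-MOPSO": 384.2, "NSGA3": 758.8, "NSGA-C": 872.4}
energy = {name: busy_energy(B, v, fr, t) for name, t in runtimes.items()}

# With fixed B, v and fr, the energy ratio between two algorithms
# equals their runtime ratio.
ratio = energy["AT-MOPSO"] / energy["NSGA3"]
```

Because the energy ratio reduces to the runtime ratio under these assumptions, the roughly halved runtime of AT-MOPSO relative to NSGA-III translates directly into a comparable energy saving.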
Fig. 10 The improved BISS platform architecture with the AT-MOPSO incorporated and running on peers
(panels: Cloud A, predictive maintenance model provider, manufacturers, enterprise architecture)
It must also be noted that in the incorporation of Blockchain into industrial systems
and machines as shown in the BISS architecture, the machines are mainly going to be
legacy and older systems, so having such an algorithm take some of the pressure off
them would be greatly appreciated by operators of such systems and would provide a
true connection between the machines and the Blockchain.
In the version of BISS which incorporates our approach, the machines which house
the sensors and actuators are now peers on the Blockchain, and they would in turn be
running the AT-MOPSO algorithm. From our pilot testing of the algorithm, it was
observed that as the machines run the given algorithm, they are able to offload some
of the blocks that are produced from transactions in their ledger to the cloud, and this
reduced the transaction time by 52%. This is because the AT-MOPSO takes into con-
sideration (as a parameter) the amount of storage space that the peer on the Blockchain
has available; thus the optimization is done based on the provided value. The three main
objective functions that the block selection is based on take into account the cost of
storage of the blocks in the cloud as well as the local storage occupancy. All these afore-
mentioned factors make the developed and proposed algorithm one that can be put to
great and diverse use, as shown with this industrial application example.
8 Conclusion
In this paper, we looked at the integration of Blockchain with IoT applications. We pro-
posed a hybrid Blockchain-IoT integration scheme that used fog computing and cloud
storage to help improve the throughput of such applications. The scheme scheduled a
side-chain run on a fog node and only sent completed transactions in the side-chain to
the main Blockchain. The proposed scheme alleviates the storage pressure on fog nodes
by ensuring that some of the blocks produced by the transactions are stored in the cloud
for each connected fog node.
To select the number of blocks that should be sent to the cloud, we further proposed
an Advanced Time-Variant Multi-Objective Particle Swarm Optimization (AT-MOPSO)
algorithm to help solve the block selection problem. The algorithm was applied to objec-
tive functions formulated to model the aspects of the query probability of the blocks
on each fog node, the cloud storage cost to be incurred by a user, and the local space
occupancy/storage availability needed to be saved for each fog node. We compared our
proposed algorithm to the original MOPSO algorithm and the Time-Variant MOPSO
and NSGA-III. We observed that our scheme performed better in all objective func-
tions than all other MOPSO algorithms except for the local space occupancy. Our AT-
MOPSO algorithm also performed as well as the NSGA-III algorithm for the query
probability objective function for the first and third fog nodes, and also for the cloud
storage cost objective function. We also assessed our proposed AT-MOPSO algorithm
to determine its energy-saving efficiency compared to the NSGA-III algorithm (which
is the current standard or algorithm which has been used to solve this problem). Our
algorithm runs in about half the time of the NSGA-III and achieved about 52% energy
efficiency compared to the NSGA-III.
We further showed how our proposed algorithm can be incorporated into industrial
systems which are running legacy machinery. This was shown by adapting the BISS
platform architecture and showing where our AT-MOPSO would fit in it.
In our future work, we plan to explore the possibilities of reducing the algorithm's
runtime even further, thus allowing us to deal with fog nodes with larger storage
sizes of about 1TB or more. It will also be worth looking at the effects of our algorithm
on aspects of the IoT devices such as Quality of Service (QoS) and transmit power
limitations.
Abbreviations
IoT: Internet of Things; DAG: Directed acyclic graph; PSO: Particle swarm optimization; MOPSO: Multi-objective particle
swarm optimization; AT-MOPSO: Advanced time-variant multi-objective particle swarm optimization; NSGA: Non-domi‑
nated sorting genetic algorithm; NSGA-C: Non-dominated sorting genetic algorithm with clustering; MOEA: Multi-objec‑
tive evolutionary algorithms; PV: Photo-voltaic.
Acknowledgements
The authors are grateful to the TWAS-DFG Visiting Researcher Programme for providing and funding a collaborative
environment between Offenburg University of Applied Sciences and KNUST to undertake this research. The authors are
also grateful to the KNUST-MTN Innovation Fund for providing support to undertake the research and pay the APC.
Authors’ contributions
CN and ET conceived and designed the study. CN, ET, HN, JG and BY performed the experiments and wrote the paper.
ET, DW and BY revised the manuscript. ET and CN took charge of all the work of paper submission. HN, JG, AS and DW
gave several proposals for the experiments and interpretation of the results. JG, AS and BY reviewed and revised the
manuscript. All authors read and approved the final manuscript.
Funding
This study received no external funding.
Declarations
Competing interests
The authors declare that they have no competing interests.
Author details
1 Department of Engineering and IT, Carinthia University of Applied Sciences, Carinthia, Austria. 2 Department
of Computer Engineering, Kwame Nkrumah University of Science and Technology, Kumasi, Ghana. 3 Department
of Telecommunications Engineering, Kwame Nkrumah University of Science and Technology, Kumasi, Ghana. 4 The
Institute of Reliable Embedded Systems and Communication Electronics, Offenburg University of Applied Sciences,
Offenburg, Germany.
References
1. J. Xu et al., Healthchain: a blockchain-based privacy preserving scheme for large-scale health data. IEEE Internet
Things J. 6(5), 8770–8781 (2019)
2. P.P. Ray, D. Dash, K. Salah, N. Kumar, Blockchain for IoT-based healthcare: background, consensus, platforms, and use
cases. IEEE Syst. J. 15, 85–94 (2020)
3. S. Aich, S. Chakraborty, M. Sain, H. Lee, H.-C. Kim, A review on benefits of IoT integrated blockchain based supply
chain management implementations across different sectors with case study, in 2019 21st International Conference
on Advanced Communication Technology (ICACT), pp. 138–141 (2019). https://doi.org/10.23919/ICACT.2019.8701910
4. G. Caldarelli, J. Ellul, Trusted academic transcripts on the blockchain: a systematic literature review. Appl. Sci. (2021).
https://doi.org/10.3390/app11041842
5. J.H. Huh, S.K. Kim, Verification plan using neural algorithm blockchain smart contract for secure P2P real estate
transactions. Electronics (2020). https://doi.org/10.3390/electronics9061052
6. L. Cui et al., A blockchain-based containerized edge computing platform for the internet of vehicles. IEEE Internet
Things J. 8(4), 2395–2408 (2021). https://doi.org/10.1109/JIOT.2020.3027700
7. S. Nakamoto, Bitcoin: a peer-to-peer electronic cash system. Bitcoin. https://bitcoin.org/bitcoin.pdf
8. I. Eyal, A.E. Gencer, E.G. Sirer, R.V. Renesse, Bitcoin-NG: a scalable blockchain protocol, in 13th USENIX Symposium on
Networked Systems Design and Implementation (NSDI 16), Santa Clara, CA, pp. 45–59 (2016). https://www.usenix.org/
conference/nsdi16/technical-sessions/presentation/eyal
9. A. Carvalho, J.W. Merhout, Y. Kadiyala, J. Bentley II., When good blocks go bad: managing unwanted blockchain data.
Int. J. Inf. Manag. 57, 102263 (2021). https://doi.org/10.1016/j.ijinfomgt.2020.102263
10. Bitcoin blockchain size 2009–2021. Statista. https://www.statista.com/statistics/647523/worldwide-bitcoin-blockchain-size/. Accessed 24 Mar 2021
42. Y. Wen-Bin, D. Yong-Hong, W-MOPSO in adaptive circuits for blast wave measurements. IEEE Sens. J. 21(7),
9323–9332 (2021). https://doi.org/10.1109/JSEN.2021.3053099
43. Y.-H. Lin, L.-C. Huang, S.-Y. Chen, C.-M. Yu, The optimal route planning for inspection task of autonomous underwater
vehicle composed of MOPSO-based dynamic routing algorithm in currents. Appl. Ocean Res. 75, 178–192 (2018).
https://doi.org/10.1016/j.apor.2018.03.016
44. M. Fan, M. Fan, Y. Akhter, A time-varying adaptive inertia weight based modified PSO algorithm for UAV path
planning, in 2021 2nd International Conference on Robotics, Electrical and Signal Processing Techniques (ICREST), pp.
573–576 (2021). https://doi.org/10.1109/ICREST51555.2021.9331101
45. A. Chatterjee, P. Siarry, Nonlinear inertia weight variation for dynamic adaptation in particle swarm optimization.
Comput. Oper. Res. 33(3), 859–871 (2006). https://doi.org/10.1016/j.cor.2004.08.012
46. J. Zhang, J. Sheng, J. Lu, L. Shen, UCPSO: a uniform initialized particle swarm optimization algorithm with cosine
inertia weight. Comput. Intell. Neurosci. 2021, e8819333 (2021). https://doi.org/10.1155/2021/8819333
47. S. Liang et al., Determining optimal parameter ranges of warm supply air for stratum ventilation using Pareto-based
MOPSO and cluster analysis. J. Build. Eng. 37, 102145 (2021). https://doi.org/10.1016/j.jobe.2021.102145
48. K. Deb, H. Jain, An evolutionary many-objective optimization algorithm using reference-point-based nondominated
sorting approach, part I: solving problems with box constraints. IEEE Trans. Evol. Comput. 18(4), 577–601 (2014).
https://doi.org/10.1109/TEVC.2013.2281535
49. R. Gharari, N. Poursalehi, M. Abbasi, M. Aghaie, Implementation of strength Pareto evolutionary algorithm II in the
multiobjective burnable poison placement optimization of KWU pressurized water reactor. Nucl. Eng. Technol.
48(5), 1126–1139 (2016). https://doi.org/10.1016/j.net.2016.04.004
50. D.W. Corne, N.R. Jerram, J.D. Knowles, M.J. Oates, PESA-II: region-based selection in evolutionary multiobjective
optimization, in Proceedings of the 3rd Annual Conference on Genetic and Evolutionary Computation, San Francisco, CA,
USA, pp. 283–290 (2001)
51. Linux Community, HyperLedger fabric white paper, in Linux Found (2018). https://www.hyperledger.org/wp-content/uploads/2018/08/HL_Whitepaper_IntroductiontoHyperledger.pdf
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.