
Article
A Comparative Study of PSO-ANN, GA-ANN,
ICA-ANN, and ABC-ANN in Estimating the Heating
Load of Buildings’ Energy Efficiency for Smart
City Planning
Le Thi Le 1, *, Hoang Nguyen 2, * , Jie Dou 3 and Jian Zhou 4
1 Thanh Hoa University of Culture, Sports and Tourism, Thanh Hoa 440000, Vietnam
2 Institute of Research and Development, Duy Tan University, Da Nang 550000, Vietnam
3 Civil and Environmental Engineering, Nagaoka University of Technology, 1603-1, Kami-Tomioka, Nagaoka,
Niigata 940-2188, Japan
4 School of Resources and Safety Engineering, Central South University, Changsha 410083, China
* Correspondence: [email protected] (L.T.L.); [email protected] (H.N.)

Received: 3 June 2019; Accepted: 27 June 2019; Published: 28 June 2019

Abstract: Energy efficiency is one of the critical issues in smart cities. It is an essential basis
for optimizing smart city planning. This study proposed four new artificial intelligence (AI)
techniques for forecasting the heating load of buildings’ energy efficiency based on the potential
of artificial neural network (ANN) and meta-heuristics algorithms, including artificial bee colony
(ABC) optimization, particle swarm optimization (PSO), imperialist competitive algorithm (ICA), and
genetic algorithm (GA). They were abbreviated as ABC-ANN, PSO-ANN, ICA-ANN, and GA-ANN
models; 837 buildings were considered and analyzed based on the influential parameters, such as
glazing area distribution (GLAD), glazing area (GLA), orientation (O), overall height (OH), roof area
(RA), wall area (WA), surface area (SA), relative compactness (RC), for estimating heating load (HL).
Three statistical criteria, i.e., root-mean-squared error (RMSE), coefficient of determination (R2),
and mean absolute error (MAE), were used to assess the potential of the aforementioned models.
The results indicated that the GA-ANN model provided the highest performance in estimating the
heating load of buildings’ energy efficiency, with an RMSE of 1.625, R2 of 0.980, and MAE of 0.798.
The remaining models (i.e., PSO-ANN, ICA-ANN, ABC-ANN) yielded lower performance with
RMSE of 1.932, 1.982, 1.878; R2 of 0.972, 0.970, 0.973; MAE of 1.027, 0.980, 0.957, respectively.

Keywords: smart building; meta-heuristic algorithm; heating load; smart city; hybrid model

1. Introduction
One of the indispensable components for smart cities is energy and the applications of artificial
intelligence (AI) [1]. Nowadays, smart cities are becoming more popular and the first choice for those
who want a comfortable and productive life [2–5]. This includes intelligent, modern, energy efficient
utilities, as well as sustainable environmental protection [6–8]. Of those components, heating load (HL)
and cooling load (CL) systems are a part of energy efficiency. Many studies were conducted to predict
and optimize the use of buildings’ energy efficiency (EEB) as well as building energy consumption [9].
For instance, Catalina et al. [10] used a multiple regression method to estimate the heating energy
demand of the building. The south equivalent surface, the global heat loss coefficient of the building, and
the difference between the sol-air and the indoor temperatures, were used as the input variables to
estimate the demand of heating energy in their study. Their positive results were confirmed with a
determination coefficient (R2 ) of 0.987. Chou, Bui [11] also developed an ensemble model based on

Appl. Sci. 2019, 9, 2630; doi:10.3390/app9132630 www.mdpi.com/journal/applsci



support vector regression (SVR) and an artificial neural network (ANN) to predict HL and CL for
building design, called ANN-SVR, using the datasets of 17 buildings. A variety of the other models
were also considered and developed to investigate and compare with their proposed ANN-SVR model,
including SVR, ANN, chi-squared automatic interaction detector, classification and regression tree,
and general linear regression. Their results confirmed the feasibility of AI techniques in designing and
optimizing EEB systems, especially the ANN-SVR model, with a mean absolute percentage error
(MAPE) below 4% and a root-mean-squared error (RMSE) 39%–65.9% lower in comparison with
the previous works [12,13]. In another study, Castelli et al. [14] applied a genetic programming (GP)
model for evaluating the energy efficiency of EEB systems. Three forms of GP were investigated and
compared, such as geometric semantic GP (GSGP), GSGP with local search (HYBRID), and HYBRID with
linear scaling (HYBRID-LIN). Their results indicated that the HYBRID-LIN technique provided better
results than the other techniques (i.e., GP, HYBRID). Deep learning techniques in AI have also been
developed to estimate the energy efficiency of EEB systems (i.e., CL) by Fan et al. [15]. The potential of
deep learning was exploited and interpreted for a variety of AI models in predicting CL of EEB systems
during 24 h, including multiple linear regression (MLR), elastic net, random forest (RF), gradient
boosting trees (GBT), SVR, extreme gradient boosting (XGB), and deep learning (DNN). Their results
showed that their XGB model with deep learning technique yielded the highest accuracy with an
RMSE of 106.5 and MAE of 71.6. Efforts to optimize an ANN model using uncertainty and
sensitivity analyses were conducted to predict the energy demand of buildings by Ascione et
al. [16]. Its performance was proven in a short-term prediction of the energy demand of buildings.
As a result, their findings showed the powerful potential of the optimized ANN model in predicting
the energy demand of buildings, with an R2 of 0.995 and an average relative error between 2.0% and
11%. Ngo [17] also developed an ensemble machine learning model to predict the CL of EEB systems
with high accuracy (e.g., RMSE = 158.77, MAE = 112.07, MAPE = 6.17%, and R2 = 0.990). By the use of
a hybrid model (M5Rules-particle swarm optimization (PSO)), Nguyen et al. [18] predicted the CL
of EEB systems with a promising result. A similar study for predicting the HL of EEB systems was
also performed by Bui et al. [19], using a novel hybrid approach, i.e., M5Rules-genetic algorithm (GA).
By the use of the meta-heuristic algorithms (i.e., PSO, GA) to optimize the M5Rules model, Nguyen
et al. [18] and Bui et al. [19] provided two new hybrid intelligent techniques (i.e., M5Rules-PSO and
M5Rules-GA) to predict the CL and HL of EEB systems with high accuracy, i.e., RMSE of 0.0066, 0.0548,
and R2 of 0.999, 0.998, for the M5Rules-PSO and M5Rules-GA, respectively. Additionally, many other
studies used/applied/developed AI techniques for evaluating and predicting energy consumption as
well as its efficiency [20–24].
To the best of the authors' knowledge, meta-heuristic algorithms in combination with an
ANN model have been considered and developed in many areas with high reliability [25–35]; however,
they have still not been considered for estimating the HL of EEB systems. Therefore, this study
developed and proposed four novel hybrid models based on four meta-heuristics algorithms and ANN
model, for estimating the HL of EEB systems, namely PSO-ANN, GA-ANN, imperialist competitive
algorithm (ICA)-ANN, and artificial bee colony (ABC)-ANN models. Four meta-heuristics algorithms
were considered in this study, including artificial bee colony (ABC) optimization, particle swarm
optimization (PSO), imperialist competitive algorithm (ICA), and genetic algorithm (GA). They were
abbreviated as ABC-ANN, PSO-ANN, ICA-ANN, and GA-ANN models.

2. Data Collection and Its Characteristics


For data collection, twelve types of buildings were investigated and simulated by the Ecotect
computer software [13]. Accordingly, 768 experimental datasets were simulated and collected by
Tsanas, Xifara [13]. To ensure the diversity of the dataset, 69 other buildings (during the winter of
2018) were also considered and investigated in Vietnam with similar conditions and materials. Finally,
a total of 837 experimental datasets were considered and analyzed for estimating the HL of EEB
systems in this work. Floor/surface area (SA), roof area (RA), wall area (WA), and overall height (OH)
were considered as the main components of the buildings, as illustrated in Figure 1. Additionally,
glazing area distribution (GLAD), relative compactness (RC), glazing area (GLA), and orientation (O)
were also investigated for estimating the HL of EEB systems. Table 1 summarizes the heating load
of the energy efficiency database used herein. Also, Figure 2 illustrates the properties of the dataset
used for estimating the HL of EEB systems in this study.

Figure 1. Illustrating the components of building [11].

Table 1. Summary of the heating load of the energy efficiency database used.

Elements   GLAD     GLA      O        OH       RA
Min.       1.000    0.00     1.000    1.040    138.2
Mean       3.016    22.54    2.581    5.509    180.5
Max.       5.000    50.00    4.000    8.479    223.2

Elements   WA       SA       RC       HL
Min.       234.2    488.6    0.4194   5.353
Mean       350.7    659.4    0.7954   29.575
Max.       459.7    825.0    1.1960   65.034

Note: glazing area distribution (GLAD), glazing area (GLA), orientation (O), overall height (OH), roof area (RA),
wall area (WA), surface area (SA), relative compactness (RC), heating load (HL).

Figure 2. Properties of the dataset used for estimating heating load (HL) of buildings’ energy efficiency (EEB) systems.
3. Methods
3.1. Particle Swarm Optimization (PSO) Algorithm

PSO is a swarm algorithm inspired by the behavior of social animals, such as fish or birds. It was
introduced and developed by Eberhart, Kennedy [36] and classified as one of the metaheuristic
techniques. It was considered as an evolutionary computation technique in the statistical community,
with many advantages [29,37–39]. This method attempts to take advantage of the information-sharing
procedure within the swarm that affects the overall swarm behavior. Thus, PSO works with a
population of potential solutions rather than a single separate item. The best solution is found based
on the experiences of all individuals in the swarm during searching. The PSO algorithm implements
six steps for optimal searching as the following pseudo-code [40]:

Algorithm: The particle swarm optimization (PSO) pseudo-code for the optimization process

1   for each particle i
2     for each dimension d
3       Initialize position x_id randomly within the permissible range
4       Initialize velocity v_id randomly within the permissible range
5     end for
6   end for
7   Iteration k = 1
8   do
9     for each particle i
10      Calculate fitness value
11      if the fitness value is better than p_best_id in history
12        Set current fitness value as the p_best_id
13      end if
14    end for
15    Choose the particle having the best fitness value as the g_best_id
16    for each particle i
17      for each dimension d
18        Calculate velocity according to the following equation:
          v_j^(i+1) = w*v_j^(i) + c1*r1*(local_best_j - x_j^(i)) + c2*r2*(global_best_j - x_j^(i)),  v_min <= v_j <= v_max
19        Update particle position according to the following equation:
          x_j^(i+1) = x_j^(i) + v_j^(i+1),  j = 1, 2, ..., n
20      end for
21    end for
22    k = k + 1
23  while maximum iterations or minimum error criteria are not attained
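As a concrete illustration, the pseudo-code above can be written out in Python. This is a generic minimal PSO minimizing a toy sphere function; the swarm size, w, c1, c2, and the velocity clamp are illustrative choices, not the settings used by the authors:

```python
import random

def pso(fitness, dim, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO minimizer following the pseudo-code above."""
    lo, hi = bounds
    vmax = (hi - lo) * 0.2  # velocity clamp v_min/v_max
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[random.uniform(-vmax, vmax) for _ in range(dim)] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal bests (p_best)
    pbest_val = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # global best (g_best)
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                vel[i][d] = max(-vmax, min(vmax, vel[i][d]))
                pos[i][d] += vel[i][d]
            val = fitness(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# usage: minimize the 4-dimensional sphere function
best, best_val = pso(lambda x: sum(v * v for v in x), dim=4, bounds=(-5.0, 5.0))
```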

3.2. Genetic Algorithm (GA)


Genetic algorithm (GA) is an optimization algorithm based on Darwin’s theory of natural
selection to find the optimal values of a function [41,42]. GA represents one branch of evolutionary
computation [43]. It applies the principles: genetics, mutation, natural selection, and crossover. A set
of initial candidates is created, and their corresponding fitness values are calculated [44–46]. In GA,
many processes are random, like in evolution. However, this optimization technique allows setting
random levels and levels of control. In this way, GA is considered as a robust and comprehensive
search algorithm. The executable GA may be specified as follows (Figure 3):

• Population origination: randomly generate a population of n individuals.
• Calculate the adaptive values: estimate the fitness of each individual.
• Stop condition: check the state to finish the algorithm.
• Selection: select two parents from the old population according to their fitness (the fitter the
individual, the more likely it is to be selected).
• Crossover: with a given crossover probability, a crossover between two parents is made to create a
new individual.
• Mutation: with a given mutation probability, selected new individuals are randomly altered.
• Select the result: if the stopping condition is satisfied, the algorithm ends, and the best solution is
found in the current population. When the stopping conditions are not met, a new population will
be continually created by repeating three steps: selection, crossover, and mutation.


GA has two necessary stopping conditions:

1. Based on the chromosome structure, controlling the number of genes that are converging: if the
number of genes is united at a point or beyond that point, the algorithm ends.
2. Based on the special meaning of the chromosome, examine the change of the algorithm after each
generation. If the difference is less than a constant, then the algorithm ends.
Figure3.3.Flow
Figure 3. Flow chart of a genetic algorithm (GA).

3.3. Imperialist Competitive Algorithm (ICA)

Inspired by a computer simulation of human social evolution, the ICA was proposed by
Atashpaz-Gargari, Lucas [47] to solve optimization problems. It is one of the swarm intelligence
techniques that can effectively solve continuous functions [48–50]. Briefly, ICA is a global search
algorithm inspired by imperialistic competition and based on a social policy of imperialism.
Accordingly, the most potent empire will dominate many colonies and use their resources. If an
empire collapses, other realms will compete for its territory. The core of the ICA can be described by
the following steps:
1. Create random search spaces and initial empires;
2. Assimilation of colonies: the colonies moved in different directions to the realms;
3. Revolution: random changes occur in the characteristics of each country;
4. Exchange the position of the colony and the empire. A colony with a better place than the realm
will have the opportunity to rise and control the empire, replacing the existing empire;
5. Imperial competition: competition and conquest occur among the empires to possess each
other’s colonies;
6. Eliminate weaker empires. Natural selection rules are applied. Weak empires will collapse and
lose their entire colonies;
7. If the stop condition is satisfied, stop; otherwise return to step 2;
8. End.
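A simplified sketch of steps 1, 2, and 4 (empire creation, assimilation, and position exchange) is shown below; revolution, imperialist competition, and empire elimination are omitted for brevity, and all parameter values are illustrative assumptions:

```python
import random

def ica(cost, dim, bounds, n_countries=60, n_imperialists=6, n_iter=200, beta=2.0):
    """Simplified ICA sketch: assimilation and position exchange only."""
    lo, hi = bounds
    countries = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_countries)]
    countries.sort(key=cost)
    imperialists = [c[:] for c in countries[:n_imperialists]]   # strongest countries
    colonies = [c[:] for c in countries[n_imperialists:]]
    # assign colonies round-robin (power-proportional in the full algorithm)
    empires = [[] for _ in range(n_imperialists)]
    for k, col in enumerate(colonies):
        empires[k % n_imperialists].append(col)
    for _ in range(n_iter):
        for e in range(n_imperialists):
            imp = imperialists[e]
            for col in empires[e]:
                # step 2 (assimilation): move each colony toward its imperialist
                for d in range(dim):
                    col[d] += beta * random.random() * (imp[d] - col[d])
            # step 4 (position exchange): best colony may replace the imperialist
            best_col = min(empires[e], key=cost)
            if cost(best_col) < cost(imp):
                imperialists[e] = best_col[:]
    best = min(imperialists, key=cost)
    return best, cost(best)

# usage: minimize the 4-dimensional sphere function
best, best_val = ica(lambda x: sum(v * v for v in x), dim=4, bounds=(-5.0, 5.0))
```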
3.4. Artificial Bee Colony (ABC)
Optimization algorithms are one of the branches of AI which have been researched and developed
based on nature’s inspiration, and swarm intelligence is one of them. Inspired by the bees’ search

for food, Karaboga [51] introduced the ABC optimization algorithm as a robust tool for optimization
problems. Although it is a pure swarm intelligence technique, it is valid for both discrete and continuous
optimization problems, and its advantages are significant [52–54]. In the ABC algorithm, the bees are divided into three groups in the
population, including employed bees, onlookers, and scouts. Employed bees get food from the found
food sources and send information to the onlooker bees. The onlooker bees get information from the
employed bees and make choices for better food sources. When the source of the food is exhausted by
the employed bees, the onlooker bees will become scouting bees looking for random food sources.
The framework of ABC optimization is shown in Figure 4.
For initialization of the swarm, each food source xi is a D-dimensional vector, where D is the number
of variables and i = 1, 2, . . . , N. It can be created using the uniform distribution in Equation (1):

xi,j = xminj + rand[0, 1](xmax j − xmin j ) (1)

where rand[0, 1] is a uniformly distributed random number in the range [0,1]; xmin j and xmax j are the
bounds of xi in the jth dimension. After initialization of the swarm, ABC performed cycles of three phases,
including employed bees, onlooker bees, and scouts.
For the employed bees phase, the position of the ith food source is updated as follows:

vi,j = xi,j + ρi,j (xi,j − xt,j) (2)

where t ∈ {1, 2, . . . , N} and t ≠ i; j ∈ {1, 2, . . . , D}; ρi,j lies in the range [−1,1].
For the onlooker bees phase, the food source can be chosen depending on its associated probability
value, i.e., pi, which can be computed by the following equation:

pi = fiti / Σ(n=1..N) fitn (3)

where fiti is the fitness value of the ith solution, evaluated by the employed bees. Based on the probability, the
onlooker bees select a better position for the food source.
In the scouting phase, a food source will be abandoned if its position is not updated according to
Equation (2) within a predetermined number of cycles. Then, the onlooker will become a scout. A scout
performs a search for new food sources randomly in the search space, as described in Equation (1).
In ABC, the number of cycles after which an unimproved food source is abandoned is called the limit.
It is an important parameter used to assess the quality of the model.
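The three phases and Equations (1)–(3) can be sketched as follows. The fitness transform 1/(1 + c) is a common ABC convention for minimization and, like the parameter values here, is an assumption rather than a detail taken from this paper:

```python
import random

def abc(cost, dim, bounds, n_sources=20, n_iter=200, limit=30):
    """Minimal ABC sketch: employed, onlooker, and scout phases."""
    lo, hi = bounds
    # Equation (1): uniform random initialization of food sources
    sources = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_sources)]
    costs = [cost(s) for s in sources]
    trials = [0] * n_sources

    def fitness(c):  # common fitness transform for minimization
        return 1.0 / (1.0 + c) if c >= 0 else 1.0 + abs(c)

    def try_neighbor(i):
        # Equation (2): perturb one dimension toward/away from a random partner t
        t = random.choice([k for k in range(n_sources) if k != i])
        j = random.randrange(dim)
        v = sources[i][:]
        v[j] += random.uniform(-1.0, 1.0) * (sources[i][j] - sources[t][j])
        v[j] = max(lo, min(hi, v[j]))
        c = cost(v)
        if c < costs[i]:                    # greedy selection
            sources[i], costs[i], trials[i] = v, c, 0
        else:
            trials[i] += 1

    b = min(range(n_sources), key=lambda k: costs[k])
    best_sol, best_cost = sources[b][:], costs[b]
    for _ in range(n_iter):
        for i in range(n_sources):          # employed bee phase
            try_neighbor(i)
        fits = [fitness(c) for c in costs]
        total = sum(fits)
        for _ in range(n_sources):          # onlooker phase, Equation (3) roulette
            r, acc, pick = random.uniform(0, total), 0.0, n_sources - 1
            for k, f in enumerate(fits):
                acc += f
                if r <= acc:
                    pick = k
                    break
            try_neighbor(pick)
        i = min(range(n_sources), key=lambda k: costs[k])
        if costs[i] < best_cost:            # remember best before scouts reset sources
            best_sol, best_cost = sources[i][:], costs[i]
        for i in range(n_sources):          # scout phase: abandon exhausted sources
            if trials[i] > limit:
                sources[i] = [random.uniform(lo, hi) for _ in range(dim)]
                costs[i], trials[i] = cost(sources[i]), 0
    return best_sol, best_cost

best, best_val = abc(lambda x: sum(v * v for v in x), dim=4, bounds=(-5.0, 5.0))
```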

Figure 4. The framework of the artificial bee colony (ABC) optimization.

3.5. Artificial Neural Network (ANN)

Based on the human brain operation principle, ANN has been researched and developed as an
alternative tool for different social purposes. It is even smarter than the human in some cases, with
substantial computing power. In real life, ANN was studied and applied to solve many problems,
such as prediction of self-compacting concrete strength [55], anisotropic masonry failure criterion
[56], prediction of the mechanical properties of sandcrete materials [57], blasting issues [58–64], and
landslide assessment [65–67], to name a few [68–75]. ANNs operate based on data analysis from input
neurons, where the input data of the dataset is contained. Here, the information is analyzed and
transmitted through hidden layers containing hidden neurons, via the transfer function. In the hidden
layers, data is encrypted, analyzed, and calculated through weights. The biases are also estimated
to ensure a balanced level of data. Finally, the outcome is computed on the output layer. Figure 5
illustrates the framework of the ANN model for predicting the HL of EEB systems in this study, based
on the eight input variables and one output variable.
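A minimal forward pass through the 8-24-18-1 architecture selected later in this paper can be sketched as follows; the tanh transfer function and the random (untrained, not GA-optimized) weights are illustrative assumptions:

```python
import math
import random

def init_layer(n_in, n_out):
    """Random weights and biases for one fully connected layer."""
    return ([[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)],
            [random.uniform(-1, 1) for _ in range(n_out)])

def forward(layers, x):
    """Forward pass: tanh transfer in hidden layers, linear output neuron."""
    for k, (w, b) in enumerate(layers):
        x = [sum(wi * xi for wi, xi in zip(row, x)) + bi for row, bi in zip(w, b)]
        if k < len(layers) - 1:
            x = [math.tanh(v) for v in x]
    return x

# 8 inputs -> 24 hidden -> 18 hidden -> 1 output (HL), weights random here
layers = [init_layer(8, 24), init_layer(24, 18), init_layer(18, 1)]
hl_pred = forward(layers, [0.5] * 8)  # one scaled 8-feature input vector
```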

Figure 5. Framework of artificial neural network (ANN) model for estimating heating load (HL) of
buildings’ energy efficiency (EEB) systems.

4. Evaluation Performance Indices

To evaluate the quality of the PSO-ANN, GA-ANN, ICA-ANN, and ABC-ANN models, R2,
RMSE, and MAE were used as the indicators of the models’ performances. They were computed as in
Equations (4)–(6):

RMSE = sqrt( (1/n) * Σ(i=1..n) (y_i − ŷ_i)^2 ) (4)

R2 = 1 − [ Σ(i=1..n) (y_i − ŷ_i)^2 ] / [ Σ(i=1..n) (y_i − ȳ)^2 ] (5)

MAE = (1/n) * Σ(i=1..n) |y_i − ŷ_i| (6)

where n stands for the number of instances; ȳ, y_i, and ŷ_i are considered as the average, calculated, and
modeled amounts of the response variable.
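Equations (4)–(6) can be computed directly, as in the sketch below; the four measured/predicted HL values are illustrative, not taken from the study's dataset:

```python
import math

def rmse(y, y_hat):
    """Equation (4): root-mean-squared error."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, y_hat)) / len(y))

def r2(y, y_hat):
    """Equation (5): coefficient of determination."""
    y_bar = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, y_hat))
    ss_tot = sum((a - y_bar) ** 2 for a in y)
    return 1 - ss_res / ss_tot

def mae(y, y_hat):
    """Equation (6): mean absolute error."""
    return sum(abs(a - b) for a, b in zip(y, y_hat)) / len(y)

y = [10.0, 20.0, 30.0, 40.0]       # measured HL (illustrative)
y_hat = [12.0, 19.0, 29.0, 43.0]   # predicted HL (illustrative)
scores = (rmse(y, y_hat), r2(y, y_hat), mae(y, y_hat))
```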

5. Prediction of Heating Load (HL) by the Genetic Algorithm-Artificial Neural Network
(GA-ANN) Model
Before predicting the HL of EEB systems by the stated models, the dataset was split into two
clusters, i.e., training and testing. According to previous studies, the original dataset should be
divided randomly into two parts according to an 80/20 ratio [76,77]. Thus, for the training process,
80% of the whole dataset (672 experimental datasets) was selected randomly to develop the models.
The remaining 20% (165 experimental datasets) was used for the testing process, which is the method
for evaluating the quality/performance of the GA-ANN, PSO-ANN, ICA-ANN, and ABC-ANN models.

For the prediction of HL of EEB systems by the GA-ANN model, an initialization ANN model was
developed first; then, the GA was used to optimize the developed ANN model, where the weights and
biases were optimized. According to Nguyen et al. [68], one or two hidden layers of the ANN model
can implement very well all regression problems. Therefore, a “trial and error” (TAE) procedure was
conducted with one and two hidden layers of ANN models. To avoid overfitting of the initial ANN
model, the min-max scale method was applied with the scale lying in the range of [−1,1]. Ultimately,
the ANN 8-24-18-1 model was defined as the best ANN technique for predicting HL of EEB systems in
this study. This was the moment for the optimization of the weights and biases of the ANN 8-24-18-1
model by the GA. The number of populations (p), crossover probability (Pc), mutation probability (Pm),
and the number variable (n) are the parameters of the GA that needed to be set up before optimizing
herein. In this study, the TAE procedure of p with different values was conducted, i.e., p = 100, 200, 300,
400, 500; Pm was set equal to 0.1; Pc was set equal to 0.9; n = 4. To evaluate the performance of the
optimization process, RMSE was used as the fitness function according to Equation (4). The searching
operations were performed in 1000 iterations to ensure the optimal searching for the weights and
biases of the selected ANN model. The optimal values of weight and bias for the ANN 8-24-18-1
model after optimizing by the GA (i.e., the GA-ANN model) corresponded to the lowest RMSE.
The performance of the optimization process by the GA for the ANN 8-24-18-1 model is shown in
Figure 6. The final ANN model, after being optimized by the GA (i.e., GA-ANN model), is shown in
Figure 7.

Figure 6. Genetic algorithm-artificial neural network (GA-ANN) performance for estimating HL of
EEB systems.
Figure 7. Structure of the GA-ANN model for determining HL of EEB systems.

As stated above, 672 experimental datasets were investigated and analyzed to develop the models. The back-propagation algorithm was applied to train the GA-ANN model. Note that the min-max scale with the range [−1,1] was used for all the models to avoid underfitting/overfitting. The performance of the training process for predicting HL of EEB systems is interpreted in Figure 8. Subsequently, 165 experimental datasets were used as the new dataset to evaluate the GA-ANN performance. The results of HL prediction on the new data (i.e., 165 experimental datasets) were estimated by the developed GA-ANN model and are shown in Figure 9.
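The min-max normalization step mentioned above can be sketched as follows; the heating-load values here are hypothetical, and in practice the min/max learned on the training set would be reused to scale the testing set, with the inverse transform applied to the predictions.

```python
def minmax_scale(values, lo=-1.0, hi=1.0):
    """Linearly map values onto [lo, hi] using the min/max of the given sample."""
    vmin, vmax = min(values), max(values)
    return [lo + (hi - lo) * (v - vmin) / (vmax - vmin) for v in values]

hl = [6.01, 15.55, 28.28, 43.10]        # hypothetical heating-load values
scaled = minmax_scale(hl)
print([round(s, 3) for s in scaled])    # -> [-1.0, -0.486, 0.201, 1.0]
```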

Figure 8. HL predictions on the training dataset of the GA-ANN model.

Figure 9. HL predictions on the testing dataset of the GA-ANN model.

6. Prediction of HL by the Particle Swarm Optimization (PSO)-ANN Model
Like the GA-ANN model, the selected initialization ANN model was optimized by the PSO algorithm for predicting HL of EEB systems, called the PSO-ANN model. In this regard, the parameters of the PSO algorithm were set up before optimization of the ANN model (i.e., ANN 8-24-18-1 model), including the number of particle swarms (Sw), maximum particle velocity (Vmax), individual cognitive coefficient (φ1), group cognitive coefficient (φ2), inertia weight (w), and maximum number of iterations (mi). Then, the weights and biases of the initialization ANN model were optimized by the PSO algorithm, as applied for the GA-ANN model above. Similar to the GA-ANN model, a TAE procedure was implemented, with Sw of 100, 200, 300, 400, 500, respectively; Vmax = 1.8; φ1 = φ2 = 1.7; w = 1.8, and mi = 1000. The same techniques as those used for the GA-ANN model were also applied for the PSO-ANN model in developing the model (i.e., back-propagation algorithm, min-max scale [−1,1]). Finally, the best PSO-ANN model was determined with the lowest RMSE. Figure 10 shows the performance of the PSO-ANN model in the training process. Figure 11 illustrates the structure of the PSO-ANN model. Note that, although the number of input neurons, hidden layers, and neurons, as well as the output layer, is the same as in Figure 8, the weights and biases are different. Eventually, the HL predictions on the training dataset and testing dataset were conducted based on the developed PSO-ANN model, as shown in Figures 12 and 13, respectively.
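The velocity/position update that PSO applies to each candidate weight vector can be sketched as below. The sketch reuses the paper's φ1 = φ2 = 1.7 and Vmax = 1.8, but substitutes a conventional inertia weight w = 0.7 (an illustrative choice so this toy swarm converges), and a sphere function stands in for the network's RMSE.

```python
import random

random.seed(0)

def objective(x):
    # stand-in for the ANN's RMSE: sphere function, minimum 0 at the origin
    return sum(xi * xi for xi in x)

def pso(dim=5, swarm=30, iters=300, w=0.7, phi1=1.7, phi2=1.7, vmax=1.8):
    X = [[random.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(swarm)]
    V = [[0.0] * dim for _ in range(swarm)]
    pbest = [x[:] for x in X]                  # personal best positions
    gbest = min(pbest, key=objective)[:]       # swarm (global) best position
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d]
                           + phi1 * r1 * (pbest[i][d] - X[i][d])
                           + phi2 * r2 * (gbest[d] - X[i][d]))
                V[i][d] = max(-vmax, min(vmax, V[i][d]))   # clamp velocity to Vmax
                X[i][d] += V[i][d]
            if objective(X[i]) < objective(pbest[i]):
                pbest[i] = X[i][:]
                if objective(pbest[i]) < objective(gbest):
                    gbest = pbest[i][:]
    return gbest

best = pso()
print(round(objective(best), 6))
```

The two stochastic attraction terms (toward the particle's own best and toward the swarm best) are what distinguish PSO from the purely recombinative search of the GA.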

Figure 10. Particle swarm optimization (PSO-ANN) performance for estimating HL of EEB systems in the training process.

Figure 11. Structure of the PSO-ANN model for estimating HL of EEB systems.

Figure 12. HL predictions on the training dataset of the PSO-ANN model.
Figure 13. HL predictions on the testing dataset of the PSO-ANN model.

7. Prediction of HL by the Imperialist Competitive Algorithm (ICA)-ANN Model
In this section, the HL of EEB systems was predicted by the ICA-ANN model. As applied for the GA-ANN and PSO-ANN models, the ICA was used to optimize the weights and biases of the selected initialization ANN model (i.e., ANN 8-24-18-1 model). The parameters of the ICA also needed to be set up before optimization of the ANN model, including the number of initial countries (Ncountry), initial imperialists (Nimper), maximum number of iterations (Ni), lower-upper limit of the optimization region (L), assimilation coefficient (As), and revolution rate of each country (r). For implementing this task, a TAE procedure was also applied for Ncountry, with Ncountry set equal to 100, 200, 300, 400, 500, respectively; Nimper was set equal to 10, 20, 30, respectively; L was set in the range of [−10,10]; As equal to 3; r equal to 0.5, and Ni was set equal to 1000. Afterward, the empires perform a global search for the colonies (e.g., weights and biases). The fitness of the empires was assessed through RMSE. The best ICA-ANN model is associated with the lowest RMSE. Figure 14 shows the performance of the optimization process by the ICA for the ANN model. Ultimately, the final ICA-ANN model was found, as shown in Figure 15. Note that the structure of the developed ICA-ANN model is the same as the GA-ANN and PSO-ANN models; however, the weights and biases (e.g., black and grey lines) are different. Additionally, the same techniques as those used for the GA-ANN and PSO-ANN models were also applied for the ICA-ANN model in developing the model (i.e., back-propagation algorithm, min-max scale [−1,1]).
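A much-simplified sketch of the ICA mechanics follows: assimilation pulls each colony toward its imperialist, revolution randomly relocates part of a colony, and a colony that becomes stronger than its imperialist overthrows it. The colony-to-empire assignment is fixed here, the imperialistic-competition phase between empires is omitted, and the settings and sphere objective are illustrative stand-ins rather than the paper's configuration.

```python
import random

random.seed(1)

def cost(x):
    # stand-in for the network RMSE
    return sum(xi * xi for xi in x)

def ica(n_country=60, n_imper=6, iters=300, beta=2.0, p_rev=0.3,
        dim=4, lo=-10.0, hi=10.0):
    countries = sorted(([random.uniform(lo, hi) for _ in range(dim)]
                        for _ in range(n_country)), key=cost)
    imper = countries[:n_imper]              # strongest countries become imperialists
    colonies = countries[n_imper:]
    for _ in range(iters):
        for i, col in enumerate(colonies):
            k = i % n_imper                  # simplified fixed colony -> empire assignment
            for d in range(dim):             # assimilation: move toward the imperialist
                col[d] += beta * random.random() * (imper[k][d] - col[d])
            if random.random() < p_rev:      # revolution: random jump in one dimension
                col[random.randrange(dim)] = random.uniform(lo, hi)
            if cost(col) < cost(imper[k]):   # a stronger colony overthrows its imperialist
                imper[k], colonies[i] = col, imper[k]
    return min(imper, key=cost)

best = ica()
print(round(cost(best), 6))
```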

Figure 14. Imperialist competitive algorithm (ICA)-ANN performance for estimating HL of EEB systems in the training process.

Figure 15. Structure of the ICA-ANN model for estimating HL of EEB systems.

Based on the ICA-ANN model developed, the HL predictions were performed. Figure 16 shows the HL predictions on the training dataset during the development of the ICA-ANN model. Applying the developed ICA-ANN model, the new dataset, which includes 165 experimental datasets (the testing dataset), was used to check the quality of the model, like those tested for the GA-ANN and PSO-ANN models. The results of the HL predictions on the new dataset (testing dataset) are shown in Figure 17.

Figure 16. HL predictions on the training dataset of the ICA-ANN model.
Figure 17. HL predictions on the testing dataset of the ICA-ANN model.
8. Prediction of HL by the Artificial Bee Colony (ABC)-ANN Model

For the HL predictions by the ABC-ANN model, a process of the development of the hybrid model was conducted, similar to those models above (e.g., ICA-ANN, PSO-ANN, GA-ANN). Accordingly, the ABC algorithm was applied to optimize the parameters of the selected ANN model (i.e., ANN 8-24-18-1 model) for predicting HL of EEB systems. The initial setting for the ABC algorithm is necessary, as with those set for the previous models (e.g., ICA-ANN, PSO-ANN, GA-ANN), including the number of bees (Nbees), the number of food sources (Nfoodsource), the limit of a food source (Mfoodsource), the boundary of the parameters (b), and the maximum number of repetitions for optimization (nround). Similar to the GA, PSO, and ICA, a TAE procedure for the ABC algorithm was conducted, with Nbees = 100, 200, 300, 400, 500, respectively. The other parameters of the ABC algorithm were set as follows: Nfoodsource = 50; Mfoodsource = 100; b = [−10,10], and nround = 1000. Once the parameters of the ABC algorithm were established, the initialization ANN 8-24-18-1 model was optimized by the global search of the bee colony. RMSE was also used to evaluate the efficiency of the optimization of the ABC-ANN model, with the optimal ABC-ANN model corresponding to the lowest RMSE. Figure 18 presents the performance of the optimization process of the ABC-ANN model in estimating the HL of EEB systems. Finally, the optimal ABC-ANN model was defined with the optimal weights and biases, as shown in Figure 19.
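The food-source bookkeeping that drives ABC (greedy candidate moves, a per-source failure counter, and scout bees that abandon exhausted sources once the limit is reached) can be sketched as below. The onlooker-bee phase is omitted for brevity, and the objective and settings are illustrative, not the study's configuration.

```python
import random

random.seed(2)

def cost(x):
    # stand-in for the network RMSE
    return sum(xi * xi for xi in x)

def abc(n_food=20, limit=100, n_round=300, dim=4, lo=-10.0, hi=10.0):
    food = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_food)]
    trials = [0] * n_food                        # failure counter per food source
    best = min(food, key=cost)[:]
    for _ in range(n_round):
        for i in range(n_food):                  # employed-bee phase
            k = random.randrange(n_food)
            d = random.randrange(dim)
            cand = food[i][:]
            cand[d] += random.uniform(-1.0, 1.0) * (food[i][d] - food[k][d])
            if cost(cand) < cost(food[i]):       # greedy selection keeps the better source
                food[i], trials[i] = cand, 0
            else:
                trials[i] += 1
            if trials[i] > limit:                # scout bee: abandon an exhausted source
                food[i] = [random.uniform(lo, hi) for _ in range(dim)]
                trials[i] = 0
        cur = min(food, key=cost)
        if cost(cur) < cost(best):
            best = cur[:]
    return best

best = abc()
print(round(cost(best), 6))
```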

Figure 18. Artificial bee colony (ABC)-ANN performance for estimating HL of EEB systems in the training process.

Figure 19. Structure of the ABC-ANN model for estimating HL of EEB systems.

It should be noted that although Figures 7, 11, 15 and 19 appear the same, the models are in fact different, since the weights and biases of the models are different. In addition, the same techniques as those used for the ICA-ANN, PSO-ANN, and GA-ANN models were also applied for the development of the ABC-ANN model (i.e., back-propagation algorithm, min-max scale [−1,1]). Figure 20 shows the HL predictions of the ABC-ANN model on the training dataset. Then, 165 experimental datasets were predicted based on the developed ABC-ANN model, as shown in Figure 21.

Figure 20. HL predictions on the training dataset of the ABC-ANN model.
Figure 21. HL predictions on the testing dataset of the ABC-ANN model.

9. Comparison and Evaluation of the Developed Models
After the models were developed and the HL of EEB systems was predicted, their results were compared and evaluated through the performance metrics (e.g., RMSE, R2, and MAE), and the intensity of color and ranking methods. A comprehensive assessment of the developed models based on both the training and the testing datasets was conducted in this section. Table 2 presents the prediction results of HL by the hybrid intelligent techniques (i.e., GA-ANN, ABC-ANN, PSO-ANN, and ICA-ANN), and their performance in the training process.

Table 2. Prediction results of the hybrid models and their performance (for the training process).

Model    | RMSE  | R2    | MAE   | Rank for RMSE | Rank for R2 | Rank for MAE | Total Ranking
GA-ANN   | 1.701 | 0.972 | 0.784 | 4             | 2           | 4            | 10
PSO-ANN  | 1.822 | 0.972 | 0.872 | 3             | 2           | 1            | 6
ICA-ANN  | 1.847 | 0.971 | 0.860 | 1             | 1           | 2            | 4
ABC-ANN  | 1.833 | 0.972 | 0.813 | 2             | 2           | 3            | 7
From Table 2, the color intensity revealed that the GA-ANN model provided the most dominant performance in the training process. It obtained the lowest error, with an RMSE of 1.701, R2 of 0.972, MAE of 0.784, and a total ranking of 10 on the training dataset. The ABC and PSO meta-heuristic algorithms yielded lower performance in the optimization of the ANN model in the training process, with RMSE of 1.833 and 1.822; R2 of 0.972 and 0.972; MAE of 0.813 and 0.872; and total rankings of 7 and 6, respectively. The weakest model in this optimization process was the ICA-ANN model, with an RMSE of 1.847, R2 of 0.971, MAE of 0.860, and a total ranking of 4. To reach a complete conclusion, the models' performances were also assessed on the testing dataset, which was considered as new data never used in the training process. Table 3 shows the results and the performance of the models in the testing process.
Table 3. Prediction results of the hybrid models and their performance (for the testing process).

Model    | RMSE  | R2    | MAE   | Rank for RMSE | Rank for R2 | Rank for MAE | Total Ranking
GA-ANN   | 1.625 | 0.980 | 0.798 | 4             | 4           | 4            | 12
PSO-ANN  | 1.932 | 0.972 | 1.027 | 2             | 2           | 1            | 5
ICA-ANN  | 1.982 | 0.970 | 0.980 | 1             | 1           | 2            | 4
ABC-ANN  | 1.878 | 0.973 | 0.957 | 3             | 3           | 3            | 9
Based on the reports of Table 3, similar results to the training process were reflected. The color intensity of the red color indicated that the GA-ANN model was the best model in comparison with the other models. The corresponding performance values of the GA-ANN model were found with an RMSE of 1.625, R2 of 0.980, MAE of 0.798, and a total ranking of 12, whereas the ABC-ANN, PSO-ANN, and ICA-ANN models proved lower performances, as in the training process, with RMSE of 1.878, 1.932, 1.982; R2 of 0.973, 0.972, 0.970; MAE of 0.957, 1.027, 0.980; and total rankings of 9, 5, 4, respectively.
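The three criteria used to rank the models can be computed directly from measured and predicted values; the small example below uses hypothetical numbers, not the study's data.

```python
import math

def metrics(y_true, y_pred):
    """RMSE, R2, and MAE, the three criteria used to compare the models."""
    n = len(y_true)
    rmse = math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    mean = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    r2 = 1.0 - ss_res / ss_tot
    return rmse, r2, mae

# hypothetical measured vs. predicted heating loads
rmse, r2, mae = metrics([10.0, 15.0, 20.0, 25.0], [11.0, 14.0, 21.0, 24.5])
print(round(rmse, 3), round(r2, 3), round(mae, 3))   # -> 0.901 0.974 0.875
```

Lower RMSE/MAE and higher R2 are better; the total-ranking scheme in Tables 2 and 3 simply sums a per-metric rank, so a model that leads on all three metrics accumulates the highest total.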
10. Sensitivity Analysis
To reach an overall conclusion and optimization solutions in building design aimed at energy efficiency, the importance level of the input variables for predicting HL in the present work was determined. The initial ANN model (i.e., ANN 8-24-18-1) was investigated using the Olden method [78] to analyze the importance of the input variables. This method enables the analysis of the importance of input variables for hidden multiple-layer ANN models [79]. Ultimately, the importance level of the input variables for predicting HL of EEB systems was determined, as shown in Figure 22. Based on the sensitivity analysis results of this study, it can be seen that GAD, SA, GA, RA, OH, and WA were the most important variables in predicting the HL of EEB systems, especially SA and GA.
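The Olden calculation itself is a product of connection weights: the sketch below uses made-up weights for a 2-input, 3-hidden, 1-output case, where each input's importance is the sum of its input-hidden weights multiplied by the corresponding hidden-output weights.

```python
# Olden's connection-weights method, single hidden layer. All weights here are
# made-up numbers for illustration, not values from the fitted ANN 8-24-18-1.
w_in = [
    [0.8, -0.4, 0.1],     # input 1 -> hidden units
    [-0.2, 0.9, 0.5],     # input 2 -> hidden units
]
w_out = [0.6, -0.7, 0.3]  # hidden units -> output

importance = [sum(w_in[i][j] * w_out[j] for j in range(3)) for i in range(2)]
print([round(v, 2) for v in importance])   # -> [0.79, -0.6]
```

For a two-hidden-layer network such as ANN 8-24-18-1, the same idea chains the products through both hidden layers (the rows of the matrix product W1·W2·W3); the sign indicates the direction of an input's influence and the magnitude its strength.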

Figure 22. Importance level of the input variables for predicting the HL of EEB systems.

11. Conclusions

Energy efficiency is one of the essential requirements for smart cities, and artificial intelligence has been considered a powerful support tool for this objective. This study developed and proposed four new hybrid models based on AI techniques for estimating the HL of EEB systems with high reliability, i.e., the GA-ANN, PSO-ANN, ICA-ANN, and ABC-ANN models. A comprehensive comparison and assessment of the developed models were performed in this work. In conclusion, the meta-heuristic algorithms performed very well in the optimization of the ANN model. Of the meta-heuristic algorithms used in this study, the GA provided the highest performance in optimizing the ANN model to predict the HL of EEB systems, i.e., the GA-ANN model. The remaining meta-heuristic algorithms (i.e., PSO, ICA, ABC) provided less satisfactory performance, corresponding to the PSO-ANN, ICA-ANN, and ABC-ANN models.
Based on the results of this study, the HL of EEB can be accurately predicted and controlled to
ensure the energy efficiency of buildings in smart cities. Software or applications on computers and
smartphones can be developed in the future based on the results of this study for the use of energy
saving and efficiency of buildings in smart cities. Besides, it can also be integrated into smart houses
to adjust and control the HL of the houses automatically. Furthermore, optimization techniques of
building design, as well as smart city planning, can also be conducted based on the models developed
in this study. Notably, GAD, SA, GA, RA, OH, and WA are the main parameters that should be carefully considered and calculated in designing buildings and smart cities. Based on the results of this study, as well as software or applications on smartphones and computers, engineers can optimize the building parameters to manage the HL in smart cities effectively.

Author Contributions: Data collection and experimental works: L.T.L., H.N., J.D.; Writing, discussion, analysis:
L.T.L., H.N., J.Z.
Funding: This research received no external funding.
Acknowledgments: The authors would like to thank Thanh Hoa University of Culture, Sports and Tourism,
Thanh Hoa City, Vietnam, for supporting this study.
Conflicts of Interest: The authors declare no conflict of interest.


© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).
