
Journal Pre-proof

A modified teaching-learning-based optimization algorithm for solving optimization problem

Yunpeng Ma, Xinxin Zhang, Jiancai Song, Lei Chen

PII: S0950-7051(20)30728-0
DOI: https://doi.org/10.1016/j.knosys.2020.106599
Reference: KNOSYS 106599

To appear in: Knowledge-Based Systems

Received date: 10 February 2020
Revised date: 31 October 2020
Accepted date: 4 November 2020

Please cite this article as: Y. Ma, X. Zhang, J. Song et al., A modified teaching-learning-based optimization algorithm for solving optimization problem, Knowledge-Based Systems (2020), doi: https://doi.org/10.1016/j.knosys.2020.106599.

This is a PDF file of an article that has undergone enhancements after acceptance, such as the
addition of a cover page and metadata, and formatting for readability, but it is not yet the definitive
version of record. This version will undergo additional copyediting, typesetting and review before it
is published in its final form, but we are providing this version to give early visibility of the article.
Please note that, during the production process, errors may be discovered which could affect the
content, and all legal disclaimers that apply to the journal pertain.

© 2020 Elsevier B.V. All rights reserved.


Credit Author Statement

Ma Yunpeng: Conceptualization, Methodology, Software, Data curation, Writing - Original draft preparation; Zhang Xinxin: Writing - Reviewing and Editing; Song Jiancai: Supervision, Validation; Chen Lei: Methodology, Software.


A modified teaching-learning-based optimization algorithm for solving optimization problem

Yunpeng Ma, Xinxin Zhang*, Jiancai Song, Lei Chen

School of Information Engineering, Tianjin University of Commerce, Beichen, Tianjin 300134, China
* Corresponding author. Tel: +86 15227290723; e-mail address: [email protected]
Abstract: In order to reduce the NOx emissions concentration of a circulating fluidized bed boiler, a modified teaching-learning-based optimization algorithm (MTLBO) is proposed, which introduces a new population grouping mechanism into the conventional teaching-learning-based optimization algorithm. The MTLBO retains two phases: a teaching phase and a learning phase. In the teaching phase, all students are divided into two groups according to the mean mark of the class, and the two groups use different solution-updating strategies. In the learning phase, all students are divided into two groups again, where the first group contains the top half of the students and the second group contains the remaining students; the two groups again use different solution-updating strategies. The performance of the proposed MTLBO algorithm is evaluated on 14 unconstrained numerical functions. Compared with TLBO and several other state-of-the-art optimization algorithms, the results indicate that MTLBO achieves better solution quality and faster convergence. In addition, an extreme learning machine tuned by MTLBO is applied to establish the NOx emission model. Based on the established model, MTLBO is then used to optimize the operating conditions of a 330 MW circulating fluidized bed boiler to reduce the NOx emissions concentration. Experimental results reveal that MTLBO is an effective tool for reducing the NOx emissions concentration.
Keywords: Teaching-learning-based optimization; Modified teaching-learning-based optimization; Extreme learning machine; NOx emission model; Circulating fluidized bed boiler

1 Introduction

Across the world, thermal power generation is still the most important way of generating electric energy. To provide abundant electric energy for civil and industrial use, power plants consume large amounts of coal and emit polluting gases into the air. For the sake of the common habitat of mankind, energy saving and emission reduction must be given high priority, so it is necessary to optimize the boiler combustion process to improve thermal efficiency and reduce polluting gas emissions. Currently, many scholars and engineers have devoted themselves to the boiler combustion optimization problem [1-5]. The most significant optimization objective is to reduce NOx emissions, and before NOx emissions can be reduced, a relatively accurate NOx emission model must be established. However, the combustion process of a boiler possesses several complex properties, such as large lag, sluggishness and non-linearity. In a real combustion system, many factors affect the NOx emissions concentration, including the load, the coal feeders, the primary air velocity, the primary air temperature, the secondary air velocity, the secondary air temperature, etc. Therefore, it is very difficult to build a mechanism-based NOx emissions model. To solve the above-mentioned problem, we adopt a tuned extreme learning machine [6] to build the NOx emissions model and use a modified teaching-learning-based optimization algorithm (TLBO) [7] to optimize the combustion operation process of the boiler based on the established NOx emissions model.
The extreme learning machine (ELM), proposed by Huang et al. in 2004, is a single hidden-layer feed-forward artificial neural network. Its input weights and hidden-layer biases are generated randomly, while its output weights are determined analytically and are unique. Therefore, the ELM runs faster than traditional artificial neural networks and can also avoid falling into local minima. Up to now, the ELM has been successfully applied in multiple fields, such as speech recognition [8-9], image processing [10-12] and system modeling and prediction [13-15]. Nevertheless, the random input weights and hidden-layer thresholds are not the best model parameters and may not guarantee that the training of the extreme learning machine reaches a global minimum. To solve this problem, many scholars have proposed modified ELM techniques. In [16], an adaptive differential evolution algorithm is combined with the extreme learning machine to optimize its model parameters. In [17], the authors proposed a hybrid extreme learning machine that uses differential evolution to optimize the input weights of the ELM. In [18], an optimal extreme learning machine was put forward in which the structural parameters of the ELM were optimized by a heuristic optimization algorithm. In [19], the authors used a modified PSO to optimize the input weights and hidden-layer biases of the extreme learning machine. In [20-21], Li et al. proposed a hybrid method, an extreme learning machine tuned by an artificial bee colony, which obtains the optimal input weights and thresholds and improves the generalization performance of the extreme learning machine. In this paper, we use a modified teaching-learning-based optimization algorithm to optimize the structural parameters of the ELM, improving the regression performance and generalization ability of the conventional ELM.
Many real-life optimization problems possess complicated properties, such as multimodality, high dimensionality and non-differentiability, which make them difficult to solve. Many experts and scholars have pointed out that exact optimization techniques, such as steepest descent, dynamic programming and linear programming, fail to provide optimal solutions for these types of optimization problems [22-23]. For example, some traditional optimization methods require gradient information and therefore cannot solve non-differentiable problems. Hence, many efficient nature-inspired meta-heuristic optimization techniques have been proposed to solve such complex optimization problems, including particle swarm optimization [24], artificial bee colony [25], the krill herd algorithm [26], the social-spider optimization algorithm [27], the butterfly optimization algorithm [28] and teaching-learning-based optimization.
The teaching-learning-based optimization algorithm (TLBO) is a population-based optimization method proposed to obtain global solutions for continuous non-linear functions and engineering optimization problems. It has several attractive properties, such as low computational effort, high consistency and few parameters to set. TLBO has been applied to a wide range of real-world optimization problems, such as electrical engineering [29-32], manufacturing processes [33-34] and economic load dispatch [35]. Nevertheless, many researchers have proposed variants to further improve the performance of the TLBO algorithm. To improve the solution quality and quicken the convergence of TLBO, Li et al. [36] proposed an ameliorated teaching-learning-based optimization algorithm. In [37], an elitism mechanism is introduced into TLBO to enhance its performance. To enhance the exploration and exploitation capacities of TLBO, Rao et al. introduced several improved mechanisms into the teaching-learning-based optimization algorithm [38]. In [39], a quasi-opposition based learning concept is integrated with the original TLBO to accelerate the convergence of the conventional TLBO. Yu et al. introduced the mutation and crossover operators of differential evolution and a chaotic fluctuation mechanism into TLBO, which improves the exploration ability and increases population diversity [40]. Huang et al. combined TLBO with the cuckoo search algorithm to improve the local search ability of TLBO [41]. Tuo et al. fused the harmony search algorithm with the teaching-learning-based optimization algorithm so that TLBO can effectively solve complex high-dimensional optimization problems [42]. The various TLBO variants mentioned above outperform the conventional TLBO algorithm in convergence speed and accuracy, and they have been successfully applied to a wide range of real-world optimization problems.
Although TLBO and its variants have displayed excellent performance on a wide range of real-world optimization problems, the solution quality and convergence speed of some of these algorithms can be further enhanced. The TLBO algorithm was proposed based on the influence of a teacher on the output of learners in a class [43]. However, Rao et al. did not take into account the actual teaching-learning phenomenon in a class or students' subjective learning behavior. In a real-world class, we can reasonably assume that a superior student has high learning enthusiasm and excellent self-study ability, so he can actively obtain knowledge from the teacher and from more superior students. An underachiever, in contrast, has poor self-learning ability, so he usually passively receives knowledge from the teacher or from a more superior student. Therefore, inspired by the actual teaching-learning phenomenon, we propose a modified teaching-learning-based optimization algorithm, namely MTLBO, to improve the solution quality and convergence speed of the original TLBO. Compared with other TLBO variants, the proposed MTLBO algorithm uses different updating mechanisms for the population individuals in the teaching phase and the learning phase. Specifically, we first apply different grouping mechanisms in the teaching phase and the learning phase. In the teaching phase, if a student's mark is higher than the mean mark of the class, he is considered a superior student; otherwise, he is an underachiever. In the learning phase, the top 50% of students are regarded as superior students, and the remaining students are underachievers. Based on these grouping mechanisms, the diversity of the population individuals is enhanced obviously, which effectively avoids trapping in local minima. Moreover, the MTLBO algorithm achieves a good balance between exploration and exploitation through several inertia weights. In [27], the authors defined exploration and exploitation as follows: "Exploration is the process of visiting entirely new regions of a search space, whilst exploitation is the process of visiting those regions of a search space within the neighborhood of previously visited points". The detailed description of MTLBO is presented in Section 3. To evaluate the effectiveness of MTLBO, different experiments have been conducted, including 14 benchmark numerical functions and some real-world engineering optimization problems. Compared with other optimization techniques, the proposed MTLBO algorithm shows better solution quality and faster convergence speed.
The main contributions of this paper are summarized as follows:
1. Based on the main theoretical framework of TLBO, this paper proposes a novel modified teaching-learning-based optimization algorithm. Inspired by the actual teaching-learning phenomenon, the proposed algorithm introduces reasonable grouping mechanisms into the teaching phase and the learning phase.
2. 14 benchmark functions and some mechanical design problems are used to evaluate the performance of MTLBO. Compared with other state-of-the-art algorithms, MTLBO provides competitive solutions to these test problems and shows faster convergence speed.
3. This paper adopts MTLBO to optimize the input weights and hidden-layer biases of the extreme learning machine, improving its model precision, and then uses the tuned extreme learning machine to build the NOx emissions model.
The rest of this paper is organized as follows. Section 2 reviews the basic TLBO algorithm and the extreme learning machine. Section 3 describes the implementation procedure of the MTLBO algorithm in detail. In Section 4, the MTLBO algorithm is applied to 14 benchmark functions and some mechanical design problems. Section 5 presents the modeling procedure of NOx emissions. Finally, Section 6 presents the conclusion and prospects for future work.

2 Review of related works

2.1 Teaching learning based optimization algorithm
The teaching-learning-based optimization algorithm (TLBO) is a population-based meta-heuristic intelligent algorithm inspired by the influence of a teacher on the output of learners in a class. The TLBO algorithm has two vital parts, namely the 'Teacher phase' and the 'Learner phase'. The teaching-learning-based optimization algorithm is described briefly as follows.

2.1.1 Teacher phase

In the teacher phase, learners obtain knowledge from their teacher. The teacher is regarded as the most knowledgeable person in the class, who makes great efforts to bring the learners up to his or her level. Suppose that at any iteration $i$, $M_i$ is the mean value of the marks and $T_i$ is the teacher. The teacher will put effort into moving the mean value $M_i$ toward his own level. In this phase, the learners update their knowledge according to Eq. (1):

$X_{new,i} = X_{old,i} + r_i (T_i - T_F M_i)$   (1)

where $X_{old,i}$ is the $i$th learner's mark before updating, $X_{new,i}$ is the mark after learning from the teacher, $T_F$ is a teaching factor that controls how much the mean value is changed, and $r_i$ is a uniform random number in [0, 1].

2.1.2 Learner phase

In this phase, learners increase their knowledge through mutual learning. A learner can improve his or her knowledge by interacting randomly with other learners, for example through group discussions, presentations and formal communications. Moreover, a learner gains knowledge from more knowledgeable and experienced persons. The modification process of the learners is described as follows. At any iteration $i$, randomly select two learners $X_i$ and $X_j$, where $i \neq j$:

$X_{new,i} = \begin{cases} X_{old,i} + r_i (X_i - X_j) & \text{if } f(X_i) < f(X_j) \\ X_{old,i} + r_i (X_j - X_i) & \text{if } f(X_i) \geq f(X_j) \end{cases}$   (2)

$X_{new}$ is accepted if it gives a better function value.
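As an illustration of the two phases, the following is a minimal NumPy sketch of one TLBO generation for a minimization problem; the function and variable names are illustrative, not from the paper, and the teaching factor $T_F$ is drawn as 1 or 2 per the usual TLBO convention:

import numpy as np

def tlbo_step(pop, fit, objective):
    """One TLBO generation for minimization: teacher phase then learner phase."""
    n, d = pop.shape
    # Teacher phase (Eq. (1)): move toward the teacher, away from TF times the mean.
    teacher, mean = pop[np.argmin(fit)], pop.mean(axis=0)
    TF = np.random.randint(1, 3)                       # teaching factor, 1 or 2
    cand = pop + np.random.rand(n, d) * (teacher - TF * mean)
    cand_fit = np.apply_along_axis(objective, 1, cand)
    keep = cand_fit < fit                              # greedy acceptance
    pop[keep], fit[keep] = cand[keep], cand_fit[keep]
    # Learner phase (Eq. (2)): interact with a random partner j != i.
    cand = pop.copy()
    for i in range(n):
        j = np.random.choice([k for k in range(n) if k != i])
        r = np.random.rand(d)
        if fit[i] < fit[j]:
            cand[i] = pop[i] + r * (pop[i] - pop[j])   # move away from the worse learner
        else:
            cand[i] = pop[i] + r * (pop[j] - pop[i])   # move toward the better learner
    cand_fit = np.apply_along_axis(objective, 1, cand)
    keep = cand_fit < fit
    pop[keep], fit[keep] = cand[keep], cand_fit[keep]
    return pop, fit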
2.2 Extreme learning machine

The extreme learning machine (ELM) is a single hidden-layer feed-forward neural network proposed by Huang et al. Its basic mechanism is described briefly as follows.

Suppose there are $N$ training samples $(x_i, t_i)$, where $x_i = [x_{1i}, x_{2i}, \ldots, x_{ni}]^T$ is the $i$th training sample, $n$ is the number of input neurons, and $t_i = [t_{1i}, t_{2i}, \ldots, t_{li}]^T$ is the target vector. The input weights are $\omega_{M \times n}$, the hidden-layer biases are $b_{M \times 1}$, and the output weights are $\beta_{l \times M}$, where $M$ denotes the number of hidden nodes. The matrices $\omega_{M \times n}$ and $b_{M \times 1}$ are generated randomly without tuning. The output of the ELM is calculated by the following equation:

$t_{ki} = \sum_{j=1}^{M} \beta_{kj} \, g_j(\omega, b, X), \quad k = 1, 2, \ldots, l$   (3)

where $g(x)$ is the activation function of the hidden layer. Equation (3) can be rewritten as:

$T = H\beta$   (4)

where $H$ is the hidden-layer output matrix, defined as:

$H(\omega, b, X) = \begin{bmatrix} g(\omega_1 x_1 + b_1) & \cdots & g(\omega_M x_1 + b_M) \\ \vdots & \ddots & \vdots \\ g(\omega_1 x_N + b_1) & \cdots & g(\omega_M x_N + b_M) \end{bmatrix}_{N \times M}$   (5)

with $\beta = [\beta_1, \beta_2, \ldots, \beta_M]^T_{l \times M}$ and $T = [t_1, t_2, \ldots, t_N]^T_{l \times N}$. The output weight matrix $\beta$ can be determined analytically as the minimum-norm least-squares solution:

$\hat{\beta} = \arg\min_{\beta} \| H\beta - T \| = H^{\dagger} T$   (6)

where $H^{\dagger}$ is the Moore-Penrose generalized inverse of $H$.

The extreme learning machine algorithm can be summarized as follows:
1) Randomly assign the input weight matrix $\omega_{M \times n}$ and the bias matrix $b_{M \times 1}$.
2) Calculate the hidden-layer output matrix $H$ by Eq. (5).
3) Calculate the output weight matrix $\beta_{l \times M}$ by Eq. (6).
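The three steps map directly onto a few lines of linear algebra. The following sketch (NumPy, illustrative names; a sigmoid activation is assumed) trains a basic ELM for regression:

import numpy as np

def train_elm(X, T, M, seed=0):
    """Train a basic ELM: random input weights and biases, analytic output weights (Eqs. (4)-(6))."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]                               # number of input neurons
    W = rng.uniform(-1.0, 1.0, size=(M, n))      # random input weights, M x n
    b = rng.uniform(-1.0, 1.0, size=M)           # random hidden-layer biases
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))     # hidden-layer output matrix, N x M (Eq. (5))
    beta = np.linalg.pinv(H) @ T                 # beta = H^+ T, Moore-Penrose solution (Eq. (6))
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Forward pass of the trained ELM."""
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))
    return H @ beta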

3 Implementation procedures of the MTLBO

In a realistic class, a student is considered a superior student or an underachiever depending on his comprehensive performance. A superior student has good self-learning and active learning abilities, so he increases his knowledge not only through self-study but also by learning from teachers or more superior students. An underachiever mainly obtains knowledge from his teacher or from a more superior student. Moreover, a good teacher makes efforts to bring the learners up to his or her own level of knowledge. Based on this actual 'teaching-learning' situation, a modified teaching-learning-based optimization algorithm, namely MTLBO, is proposed, which shows better solution quality and faster convergence than the conventional TLBO. The MTLBO also has two parts, a teaching phase and a learning phase, and is described in detail as follows.

3.1 Teaching phase

The conventional TLBO is inspired by the influence of a teacher on the learners. Assume that two different teachers teach the same subject in different classes. If teacher T1 outperforms teacher T2, teacher T1 produces a better mean result for his learners; that is, a teacher raises the mean of the class according to his capability. Additionally, in the teacher phase of TLBO, the mean mark plays an important role in enhancing the individuals' marks, and in a real-world class the mean mark is also a significant index for measuring the learning ability of students. Therefore, in the teaching phase of the proposed technique, all students are divided into two groups based on the mean mark: one group consists of superior students, the other of underachievers. How are they grouped? If a student's comprehensive mark is higher than the mean mark of the class, the student is considered a superior student; otherwise, the student is regarded as an underachiever. As mentioned earlier, a superior student has a strong active learning ability, while an underachiever mainly obtains knowledge from his teacher. For a minimization problem, if the fitness value of the $i$th student is less than that of the mean mark, the $i$th student is regarded as a superior student; otherwise, he is an underachiever. Therefore, different students have their own ways of obtaining knowledge in the teaching phase. In MTLBO, a superior student gains knowledge from the best individual and from self-study; his updating mechanism is given in Eq. (7). An underachiever obtains knowledge from his teacher and tries to reach the class average; his updating mechanism is given in Eq. (8). The expressions for the inertia weights are presented in Eqs. (9) to (11).
$X_{new,i} = X_{old,i} + W \times (X_{best} - X_{old,i}) \times rand$, if $f(X_{old,i}) < f(X_{mean})$   (7)

$X_{new,i} = \left( X_{old,i} + (rand - 0.5) \times 2 \times (X_{mean} - X_{old,i}) \right) \times \omega_1 + diff \times \omega_2$, if $f(X_{old,i}) \geq f(X_{mean})$   (8)

$W = w_{start} - (w_{start} - w_{end}) \times \dfrac{iter}{MaxIter}$   (9)

$\omega_1 = \sin\left( \dfrac{\pi}{2} \times \dfrac{iter}{MaxIter} \right)$   (10)

$\omega_2 = \cos\left( \dfrac{\pi}{2} \times \dfrac{iter}{MaxIter} \right)$   (11)

As seen in Eqs. (7) and (8), $X_{mean}$ is the mean mark and $W$ is the inertia weight that balances the exploration and exploitation abilities. As Eq. (9) shows, the inertia weight descends linearly from $w_{start}$ to $w_{end}$; this adjustment lets the MTLBO algorithm explore the search space in the initial steps and exploit the optimal solution in the later steps. Additionally, we introduce $\omega_1$ and $\omega_2$ as inertia weights that accelerate the convergence speed, as presented in Eqs. (10) and (11). Here $iter$ is the current iteration and $MaxIter$ is the maximum number of iterations.
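A small numerical check makes the schedules of Eqs. (9)-(11) concrete. The sketch below (Python; the endpoints $w_{start} = 0.9$ and $w_{end} = 0.4$ are assumed for illustration, since the paper does not state them here) prints the three weights at the start, middle and end of a 1000-iteration run:

import numpy as np

# Inertia-weight schedules of Eqs. (9)-(11); the w_start/w_end values are assumed.
max_iter, w_start, w_end = 1000, 0.9, 0.4
for it in (0, 500, 1000):
    W  = w_start - (w_start - w_end) * it / max_iter   # Eq. (9): linear decrease
    w1 = np.sin(np.pi / 2 * it / max_iter)             # Eq. (10): grows from 0 to 1
    w2 = np.cos(np.pi / 2 * it / max_iter)             # Eq. (11): shrinks from 1 to 0
    print(f"iter={it:4d}  W={W:.2f}  w1={w1:.2f}  w2={w2:.2f}")

So early iterations weight the teacher-driven term of Eq. (8) (via $\omega_2$) heavily, while later iterations shift weight to the self-information term (via $\omega_1$), matching the discussion below.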
It is noted that the two inertia weights $\omega_1$ and $\omega_2$ are introduced into MTLBO for the first time to accelerate the convergence speed. Over the iterations of MTLBO, the curves of $\omega_1$ and $\omega_2$ are shown in Fig. 1 by the red and blue lines, respectively; the horizontal axis is the iteration number and the vertical axis is the function value. As Eq. (8) shows, an underachiever, whose fitness value $f(X_{old,i})$ is higher than the mean fitness value $f(X_{mean})$, obtains knowledge mainly from his teacher in the initial steps, so the teacher plays an important role in improving the student's knowledge early on. That means the mark of an underachiever is quickly improved toward the mean mark in the initial steps through the inertia weight $\omega_2$. As the algorithm runs, all individuals gradually approach the optimal solution if the algorithm has good global convergence ability. Therefore, to avoid trapping in local optima, the diversity of the population is increased in the later steps; that is, the self-information of an underachiever plays a significant role in improving the student's knowledge in the later steps. Based on this analysis, the two inertia weights indeed enhance the solution quality and convergence speed.

[Figure 1: the curves of sin(x) (red) and cos(x) (blue) over 1000 iterations; the horizontal axis is the iteration number and the vertical axis is the function value, from 0 to 0.8.]

Fig.1 The simulation curves of sin(x) and cos(x) function

3.2 Learning phase

After the teaching phase, the fitness values of all learners are sorted in ascending order. The students are then divided into two groups, where the first group contains the top half of the students and the second group contains the remaining students. The first group members are regarded as superior students, so they can not only obtain knowledge from a more superior student but also study independently. The second group members get knowledge mainly from their teacher. Therefore, the first group students update their results according to Eq. (12), while the second group learners update theirs according to Eq. (13).
$X_{new,i} = \begin{cases} X_{old,i} + (X_{neighbour} - X_{old,i}) \times \cos\left( \dfrac{\pi}{2} \times \dfrac{iter}{MaxIter} \right) & \text{if } f(X_{old,i}) > f(X_{neighbour}) \\ X_{old,i} + (rand - 0.5) \times 2 \times (X_{upper\,limit} - X_{lower\,limit}) & \text{otherwise} \end{cases}$   (12)

$X_{new,i} = X_{old,i} + (X_{best} - X_{old,i}) \times \cos\left( \dfrac{\pi}{2} \times \dfrac{iter}{MaxIter} \right)$   (13)
As shown in Eq. (12), at each iteration, for a learner $X_i$ a learner $X_{neighbour}$ is randomly selected, where $neighbour \neq i$. If $X_{neighbour}$ has a smaller fitness value than $X_i$, the student $X_i$ obtains knowledge from $X_{neighbour}$; otherwise, he learns by himself. Based on this mechanism, the diversity of the population is increased and the convergence speed is quickened simultaneously. For the second group members, there is a big gap between the lower half of the learners and the teacher, so a large correction is needed to improve the learner's mark; this too accelerates convergence obviously.

3.3 MTLBO procedure


In order to understand the implementation procedure of the MTLBO algorithm more clearly,
the pseudo-code of the MTLBO algorithm is given in detail as follows.
MTLBO Algorithm
1: Objective function f(x), xi (i = 1, 2, ..., n)
2: Initialize algorithm parameters.
3: Generate the initial population of individuals.
4: Evaluate the fitness of the population.
5: While the stopping criterion is not met do
6:   Teaching phase
7:   Select the best individual Xbest in the current population.
8:   Calculate the mean value Xmean.
9:   Divide the students into two groups.
10:  For each student in the population do
11:    If the student is superior then
12:      Produce a new solution Xnew,i by using Eq.(7).
13:    Else
14:      Produce a new solution Xnew,i by using Eq.(8).
15:    End If
16:    Evaluate new solutions.
17:    Update better solutions.
18:  End For
19:  Learning phase
20:  Learners are divided into two groups based on fitness values.
21:  For the first group members:
22:  For each student in the group do
23:    Randomly select a learner Xneighbour.
24:    Produce a new solution Xnew,i by using Eq.(12).
25:  End For
26:  For the second group members:
27:  For each student in the group do
28:    Produce a new solution Xnew,i by using Eq.(13).
29:  End For
30:  Evaluate new solutions.
31:  Update better solutions.
32:  gen = gen + 1.
33: End While
34: Output the best solution found.
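To make the pseudo-code concrete, here is a minimal, self-contained Python (NumPy) sketch of MTLBO. It is illustrative rather than the authors' reference code, and it makes three stated assumptions: the weight endpoints of Eq. (9) are taken as 0.9 and 0.4, candidates are clipped to the search bounds, and the 'diff' term of Eq. (8) is read as the mean-directed difference Xmean - Xold,i:

import numpy as np

def mtlbo(objective, lower, upper, dim, pop_size=20, max_iter=1000, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lower, upper, size=(pop_size, dim))
    fit = np.apply_along_axis(objective, 1, pop)
    w_start, w_end = 0.9, 0.4                            # assumed endpoints for Eq. (9)

    def greedy(pop, fit, cand):
        """Clip candidates to the bounds and keep only improvements."""
        cand = np.clip(cand, lower, upper)
        cand_fit = np.apply_along_axis(objective, 1, cand)
        keep = cand_fit < fit
        pop[keep], fit[keep] = cand[keep], cand_fit[keep]
        return pop, fit

    for it in range(max_iter):
        W = w_start - (w_start - w_end) * it / max_iter  # Eq. (9)
        w1 = np.sin(np.pi / 2 * it / max_iter)           # Eq. (10)
        w2 = np.cos(np.pi / 2 * it / max_iter)           # Eq. (11)
        # Teaching phase: group by the mean fitness (Eqs. (7)-(8)).
        best, mean = pop[np.argmin(fit)], pop.mean(axis=0)
        cand = pop.copy()
        for i in range(pop_size):
            if fit[i] < fit.mean():                      # superior student, Eq. (7)
                cand[i] = pop[i] + W * (best - pop[i]) * rng.random(dim)
            else:                                        # underachiever, Eq. (8)
                diff = mean - pop[i]                     # assumed reading of 'diff'
                cand[i] = (pop[i] + (rng.random() - 0.5) * 2 * diff) * w1 + diff * w2
        pop, fit = greedy(pop, fit, cand)
        # Learning phase: sort, then split into halves (Eqs. (12)-(13)).
        order = np.argsort(fit)
        pop, fit = pop[order], fit[order]
        cand = pop.copy()
        for i in range(pop_size):
            if i < pop_size // 2:                        # superior half, Eq. (12)
                j = rng.choice([k for k in range(pop_size) if k != i])
                if fit[j] < fit[i]:
                    cand[i] = pop[i] + (pop[j] - pop[i]) * w2
                else:                                    # self-study: random move in the box
                    cand[i] = pop[i] + (rng.random(dim) - 0.5) * 2 * (upper - lower)
            else:                                        # lower half, Eq. (13)
                cand[i] = pop[i] + (pop[0] - pop[i]) * w2   # pop[0] is the current best
        pop, fit = greedy(pop, fit, cand)
    i_best = np.argmin(fit)
    return pop[i_best], fit[i_best]

# Example: minimize the 10-dimensional Sphere function (F1).
x_best, f_best = mtlbo(lambda x: float(np.sum(x * x)), -100.0, 100.0, dim=10)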

4 Function optimization problems

4.1 Benchmark functions and parameter settings

To demonstrate the validity of MTLBO, 14 benchmark numerical function problems are used to evaluate its efficiency. These test functions are described in Table 1, which lists the search range, the theoretical global optimum and the acceptable solution. Among the 14 functions, F1 to F5 are unimodal, F6 to F10 are multimodal, and the remaining four are rotated functions. Each unimodal test function has only one global optimum, which makes these functions useful for verifying the convergence speed and exploitation ability of an algorithm. Each multimodal test function has multiple local optima in addition to the global optimum; these functions are suitable for measuring the local-optima avoidance and explorative ability of algorithms. To compare the performance of MTLBO, 10 methods are adopted, and their results are taken from the previous work of Chen et al. [45]. MTLBO is run with the same maximum number of function evaluations as the stopping criterion.

All tests are implemented on an Intel(R) Core(TM)2 Duo T5670 processor at 1.80 GHz with 2 GB RAM. All algorithms are coded and run in Matlab 2009 under Windows XP Professional. To reduce statistical errors, each algorithm is run 30 times independently. Note that the duplicate-elimination process applied in the original TLBO to increase population diversity is not used in our algorithm, so the number of function evaluations of the MTLBO algorithm is 2 × population size × number of generations; the computational cost is therefore lower when the two algorithms use the same maximum number of generations. In this paper, the population size of the MTLBO algorithm is set to 20.
Table 1 The 14 benchmark functions used in the experiments

Function                   Range               fmin   Acceptance
F1 (Sphere)                [-100,100]          0      1e-6
F2 (Quadric)               [-100,100]          0      1e-6
F3 (Sum Square)            [-100,100]          0      1e-6
F4 (Zakharov)              [-10,10]            0      1e-6
F5 (Rosenbrock)            [-2.048,2.048]      0      0.1
F6 (Ackley)                [-32.768,32.768]    0      1e-6
F7 (Rastrigin)             [-5.12,5.12]        0      2.5
F8 (Weierstrass)           [-0.5,0.5]          0      1e-6
F9 (Griewank)              [-600,600]          0      0.1
F10 (Schwefel's)           [-500,500]          0      500
F11 (Rotated Ackley)       [-32.768,32.768]    0      1e-6
F12 (Rotated Rastrigin)    [-5.12,5.12]        0      5
F13 (Rotated Weierstrass)  [-0.5,0.5]          0      1e-6
F14 (Rotated Griewank)     [-600,600]          0      0.01
4.2 Results and discussions

4.2.1 Comparisons on the solution accuracy
The performance of MTLBO in terms of the mean and standard deviation (Std) of the solutions obtained over the 30 runs on the 10- and 30-dimensional functions is presented in Table 2 and Table 3, respectively. The results of the other algorithms are taken from the previous work [45]. For each function, the best results in terms of the lowest mean and lowest standard deviation are shown in boldface; lower values indicate better solution accuracy. In addition, Fig. 2 graphically shows the convergence characteristics of MTLBO on the 14 functions with 10 and 30 dimensions.

From Table 2, it is easy to see that MTLBO achieves a lower mean and standard deviation than all other methods on functions F1, F3, F4, F6, F9, F11, F12 and F14. VTTLBO has the smallest mean and standard deviation on function F2, and jDE outperforms all other algorithms on functions F5 and F10. In [45], Chen et al. proposed a variant of TLBO, namely VTTLBO, and showed that it outperformed several other algorithms on some functions. Compared with VTTLBO and the other TLBO variants, MTLBO has the smallest mean and Std on functions F1, F3, F4, F5, F6, F7, F9, F10, F11, F12 and F14; on functions F8 and F13, the original TLBO and the other variants also converge to the global optimum. Table 3 indicates that the mean and Std of MTLBO are the lowest among all algorithms on functions F1, F3, F4, F5, F6, F7, F11 and F12. VTTLBO performs very well on functions F2, F8, F9, F13 and F14, and SaDE has the smallest mean and standard deviation on function F10. According to this analysis, MTLBO outperforms the compared algorithms in solution accuracy on most functions. Fig. 2 also presents the convergence process of MTLBO on the 14 test functions with 10 and 30 dimensions; the figure reveals that MTLBO has a fast convergence speed and good solution accuracy. Note that for functions F1, F3, F7, F8, F9, F11 and F14, which converge quickly with high accuracy, the x-axis of the corresponding plots is limited to 1000 or 2000 iterations for clarity.
Table 2 The mean solutions and standard deviations of the 30 trials obtained by the various methods for the 10-dimensional functions [45]

F Perf. DE jDE SaDE PSOwFIPS CLPSO ABC TLBO ETLBO sawTLBO VTTLBO MTLBO

F1 Mean 7.13e-073 1.31e-076 1.35e-071 3.98e-016 1.09e-018 8.02e-017 3.29e-184 2.84e-166 3.01e-064 3.56e-296 0.00e+000

Std 7.18e-073 1.58e-076 2.02e-071 6.09e-016 1.5e-018 3.22e-017 3.08e-185 4.27e-167 4.86e-064 0.00e+000 0.00e+000

F2 Mean 4.18e-012 1.14e-021 1.89e-019 6.19e-006 5.37e-001 4.04e+001 2.56e-082 3.22e-079 4.59e-050 3.50e-130 2.21e-061

Std 9.35e-012 1.52e-021 3.54e-019 2.15e-006 1.38e-001 2.35e+001 5.58e-082 5.07e-079 1.12e-049 5.05e-130 1.21e-060

F3 Mean 2.47e-074 6.97e-078 1.28e-074 2.18e-017 2.59e-020 7.18e-017 9.94e-187 6.50e-169 4.86e-067 4.63e-298 0.00e+000

Std 4.86e-074 1.39e-077 2.52e-074 1.39e-017 1.76e-020 3.94e-017 1.25e-187 5.49e-170 1.54e-066 0.00e+000 0.00e+000

F4 Mean 1.08e-005 1.31e-031 6.65e-031 3.23e-009 2.66e-003 1.31e+001 1.51e-089 2.94e-087 6.87e-053 1.02e-139 0.00e+000
Std 2.41e-005 1.30e-031 1.48e-030 2.23e-009 2.37e-003 8.38e+000 1.62e-089 3.10e-087 1.61e-052 3.21e-139 0.00e+000

F5 Mean 6.24e+000 5.14e-007 2.62e+000 4.51e+000 2.45e+000 2.84e-001 4.96e-001 1.46e-001 1.90e+000 1.13e+000 5.6 e-003

Std 9.02e-001 9.47e-007 1.50e+000 7.17e-02 1.00e+000 3.23e-001 4.21e-001 1.38e-001 5.28e-001 5.06e-001 9.4 e-003
F6 Mean 3.48e-015 3.36e-015 3.28e-015 8.04e-009 4.28e-010 8.53e-015 3.43e-015 3.37e-015 2.84e-015 1.78e-015 8.8818e-016

Std 2.37e-016 4.27e-016 2.51e-016 4.33e-009 2.89e-010 3.18e-015 2.17e-015 1.05e-015 1.50e-015 1.87e-015 0.00e+000

F7 Mean 1.46e+000 0.00e+000 0.00e+000 1.89e+000 2.76e-009 0.00e+000 3.06e+000 3.02e+000 8.57e+000 1.09e+000 0.00e+000

Std 5.02e-001 0.00e+000 0.00e+000 1.03e+000 3.94e-009 0.00e+000 1.52e+000 1.86e+000 3.23e+000 1.52e+000 0.00e+000

F8 Mean 0.00e+000 0.00e+000 0.00e+000 4.24e-004 6.33e-012 0.00e+000 0.00e+000 0.00e+000 0.00e+000 0.00e+000 0.00e+000

Std 0.00e+000 0.00e+000 0.00e+000 6.60e-004 6.08e-012 0.00e+000 0.00e+000 0.00e+000 0.00e+000 0.00e+000 0.00e+000

F9 Mean 3.50e-002 0.00e+000 1.48e-003 7.59e-002 4.15e-003 3.94e-003 6.48e-003 2.42e-002 1.33e-002 3.82e-008 0.00e+000

Std 2.22e-002 0.00e+000 3.31e-003 5.19e-002 5.68e-003 5.67e-003 9.71e-003 3.69e-002 2.22e-002 1.21e-007 0.00e+000

F10 Mean 2.18e+002 1.27e-004 1.27e-004 3.69e-002 1.27e-004 1.27e-004 6.68e+002 7.03e+002 8.25e+002 7.92e+002 6.05 e+001

Std 1.96e+002 0.00e+000 0.00e+000 3.41e-002 1.22e-009 4.98e-013 1.51e+002 1.81e+002 1.22e+002 1.90e+002 2.52 e+002

F11 Mean 3.31e-015 2.84e-015 3.21e-015 9.98e-009 1.68e-007 5.83e-005 3.52e-015 3.49e-015 3.20e-015 2.13e-015 0.00e+000

Std 4.25e-016 1.59e-015 2.38e-016 2.82e-009 2.04e-007 9.40e-005 6.05e-016 3.27e-016 1.12e-015 1.83e-015 0.00e+000

F12 Mean 4.15e+000 3.59e+000 5.78e+000 9.31e+000 6.11e+000 1.33e+001 4.38e+000 3.00e+000 1.34e+001 2.55e+000 0.00e+000

Std 2.69e+000 1.84e+000 1.85e+000 1.96e+000 3.00e+000 7.00e+000 8.91e-001 1.22e+000 4.28e+000 2.14e+000 0.00e+000

F13 Mean 1.18e-001 9.27e-002 0.00e+000 2.55e-002 1.44e-001 1.69e+000 0.00e+000 0.00e+000 0.00e+000 0.00e+000 0.00e+000

Std 2.64e-001 2.07e-001 0.00e+000 2.95e-002 1.21e-001 6.25e-001 0.00e+000 0.00e+000 0.00e+000 0.00e+000 0.00e+000

F14 Mean 2.41e-002 1.40e-002 6.90e-003 1.02e-001 2.73e-002 9.56e-003 2.81e-003 6.77e-003 1.91e-002 5.40e-010 0.00e+000

Std 2.08e-002 9.09e-003 6.83e-003 3.85e-002 2.35e-002 1.12e-002 6.29e-003 5.15e-003 2.92e-002 1.70e-009 0.00e+000

Table 3 The mean solutions and standard deviations of the 30 trials obtained by the various methods for the 30-dimensional functions [45]

F Perf. DE jDE SaDE PSOwFIPS CLPSO ABC TLBO ETLBO sawTLBO VTTLBO MTLBO

F1 Mean 4.90e-014 1.95e-022 3.84e-023 1.43e+000 1.94e-001 2.45e-006 4.04e-111 2.66e-095 2.61e-065 4.85e-158 0.00e+000

Std 1.08e-013 2.76e-022 2.15e-023 2.78e-001 7.79e-002 1.21e-006 3.20e-111 1.84e-095 3.09e-065 1.06e-157 0.00e+000

F2 Mean 4.14e+000 2.06e+001 1.06e+001 3.82e+003 1.16e+004 1.52e+004 1.08e-022 3.42e-022 1.68e-026 1.24e-032 2.83 e+000

Std 3.22e+000 6.71e+000 6.53e+000 1.01e+003 2.72e+003 2.77e+003 1.43e-022 4.72e-022 1.78e-026 1.79e-032 1.14 e+001

F3 Mean 7.99e-017 3.92e-023 3.00e-024 2.17e-001 2.25e-002 6.88e-007 5.38e-111 8.21e-096 6.13e-066 2.50e-158 0.00e+000

Std 8.83e-017 3.86e-023 2.47e-024 7.69e-002 6.43e-003 3.24e-007 3.43e-111 1.11e-095 1.12e-065 2.48e-158 0.00e+000

F4 Mean 8.79e-001 1.35e+000 1.50e-001 1.11e+002 2.30e+002 5.48e+002 5.11e-011 1.92e-011 1.23e-008 8.06e-018 0.00e+000

Std 6.15e-001 1.68e+000 1.31e-001 1.89e+001 5.45e+001 6.58e+001 8.59e-011 1.93e-011 2.54e-008 1.11e-017 0.00e+000

F5 Mean 2.54e+001 2.18e+001 2.52e+001 2.73e+001 6.84e+001 2.16e+001 2.38e+001 2.38e+001 2.36e+001 2.28e+001 3.87 e-001

Std 5.86e-001 2.59e-001 1.36e+000 2.99e-001 2.84e+001 4.59e+000 7.01e-001 8.57e-001 8.68e-001 4.23e-001 4.56 e-001

F6 Mean 7.49e-009 2.82e-012 1.30e-012 5.21e-001 7.13e-001 9.08e-003 3.55e-015 3.55e-015 3.55e-015 3.55e-015 8.8818e-016

Std 5.58e-009 1.77e-012 8.26e-013 1.78e-001 5.74e-001 4.76e-003 0.00e+000 0.00e+000 0.00e+000 0.00e+000 0.00e+000

F7 Mean 7.75e+001 2.24e-009 5.53e-001 1.23e+002 2.91e+001 4.22e+000 1.17e+001 1.22e+001 2.36e+001 1.25e+001 0.00e+000

Std 3.01e+001 4.15e-009 7.55e-001 1.57e+001 4.77e+000 5.91e-001 3.71e+000 9.07e+000 6.04e+000 7.27e+000 0.00e+000

F8 Mean 1.20e-002 3.16e-001 6.56e-011 2.46e+000 5.18e-001 3.66e-002 0.00e+000 0.00e+000 0.00e+000 0.00e+000 0.00e+000

Std 2.67e-002 5.07e-001 8.87e-011 7.11e-001 6.83e-002 8.53e-003 0.00e+000 0.00e+000 0.00e+000 0.00e+000 0.00e+000

F9 Mean 3.94e-003 0.00e+000 0.00e+000 8.92e-001 2.68e-001 3.32e-003 0.00e+000 0.00e+000 0.00e+000 0.00e+000 0.00e+000

Std 5.40e-003 0.00e+000 0.00e+000 5.53e-002 4.38e-002 6.99e-003 0.00e+000 0.00e+000 0.00e+000 0.00e+000 0.00e+000

F10 Mean 3.33e+003 2.37e+001 3.82e-004 4.99e+003 1.72e+003 5.93e+002 4.29e+003 4.55e+003 5.26e+003 4.70e+003 1.43e+002

Std 1.56e+003 5.30e+001 1.99e-010 5.50e+002 3.21e+002 1.45e+002 9.40e+002 7.81e+002 6.41e+002 4.22e+002 6.38e+002

F11 Mean 3.31e-008 7.24e-012 1.36e-012 5.96e-001 2.68e+000 2.24e+000 3.55e-015 3.55e-015 3.55e-015 3.55e-015 0.00e+000

Std 3.18e-008 3.21e-012 9.25e-013 1.55e-001 1.40e-001 4.15e-001 0.00e+000 0.00e+000 0.00e+000 0.00e+000 0.00e+000

F12 Mean 1.28e+002 6.18e+001 9.63e+001 1.52e+002 1.10e+002 7.67e+001 2.17e+001 1.36e+001 4.09e+001 1.41e+001 0.00e+000

Std 6.97e+001 1.29e+001 1.50e+001 1.53e+001 1.14e+001 1.04e+001 1.64e+001 7.85e+000 1.15e+001 5.55e+000 0.00e+000
F13 Mean 4.17e-001 2.20e+000 9.96e-003 5.03e+000 1.38e+001 1.18e+001 0.00e+000 0.00e+000 0.00e+000 0.00e+000 0.00e+000

Std 6.25e-001 4.19e+000 2.18e-002 6.56e-001 1.21e+000 9.05e-001 0.00e+000 0.00e+000 0.00e+000 0.00e+000 0.00e+000

F14 Mean 1.97e-003 2.46e-003 7.98e-013 9.34e-001 3.64e-001 1.62e-003 1.15e-003 0.00e+000 0.00e+000 0.00e+000 0.00e+000

Std 4.41e-003 5.51e-003 1.26e-012 6.89e-002 1.05e-001 1.89e-003 2.57e-003 0.00e+000 0.00e+000 0.00e+000 0.00e+000

[Figure 2: fourteen log-scale convergence plots, one per benchmark function F1-F14, each showing fitness value versus iteration for the 10-dimension and 30-dimension cases.]

Fig.2 Convergence curves of MTLBO on 14 functions with 10 and 30 dimensions


4.2.2 Comparisons on the convergence speed
In this subsection, the convergence speed and reliability of the algorithms are measured on the 14 test functions. A salient evaluation criterion for an algorithm is the speed with which it reaches the global optimum. Chen et al. pointed out that the mean number of function evaluations (mFEs) can be used to measure the speed of an algorithm [45]. The mean number of function evaluations of MTLBO on the 14 test functions with 10 and 30 dimensions is listed in Table 4 and Table 5, respectively. The convergence characteristics of the other algorithms in terms of mFEs are again taken from the previous work [45], and the best results are shown in boldface. 'mFEs' in Tables 4 and 5 denotes the mean number of function evaluations needed to converge to the acceptable solutions listed in Table 1, and 'ratio' denotes the success rate with which the algorithm reaches the acceptable solutions over the 30 runs. 'NaN' indicates that the method did not converge.

Table 4 shows that the mFEs of MTLBO is the smallest among all algorithms on most functions, except F2, F5 and F6; the mFEs of jDE is the smallest on function F5, and VTTLBO performs well on functions F2 and F6. Table 5 indicates that MTLBO outperforms all other algorithms on most 30-dimensional test functions whenever it converges. In addition, Tables 4 and 5 show that MTLBO reaches the acceptable solutions with a high success rate on most test functions. Based on this analysis, MTLBO has a very fast convergence speed and good reliability.
Table 4 The mean FEs and the reliability ratio, i.e. the percentage of trial runs reaching acceptable solutions, for the 10-dimensional functions [45]

F Index . DE jDE SaDE PSOwFIPS CLPSO ABC TLBO ETLBO sawTLBO VTTLBO MTLBO

F1 mFEs 6733 6218 6395 24291 27464 11300 2728 3038 2381 1201 171

ratio(%) 100 100 100 100 100 100 100 100 100 100 100

F2 mFEs 11892 17719 19236 NaN NaN NaN 5659 5968 4775 3343 14192

ratio(%) 100 100 100 0 0 0 100 100 100 100 100

F3 mFEs 5947 5628 5635 20803 24573 9627 2400 2584 2066 975 43

ratio(%) 100 100 100 100 100 100 100 100 100 100 100

F4 mFEs 9372 13083 13163 37939 NaN NaN 5814 6062 4863 3563 67

ratio(%) 80 100 100 100 0 0 100 100 100 100 100

F5 mFEs NaN 30868 NaN NaN NaN 46110 NaN 44657 NaN NaN 48085

ratio(%) 0 100 0 0 0 60.3 0 60.8 0 0 100

F6 mFEs 10121 9619 9586 38944 37753 21143 4126 4563 3472 2089 4287

ratio(%) 100 100 100 100 100 100 100 100 100 100 100

F7 mFEs 31720 6297 9132 39383 28860 8529 27555 41698 NaN 15596 856

ratio(%) 100 100 100 80.3 100 100 40.7 20.4 0 80.6 100

F8 mFEs 15015 16636 12775 NaN 41247 25594 6093 6604 4876 3817 15

ratio(%) 100 100 100 0 100 100 100 100 100 100 100

F9 mFEs 11865 5231 8896 32582 23562 7392 5084 5604 9030 1940 15

ratio(%) 100 100 100 80.5 100 100 100 100 100 100 100

F10 mFEs 11045 2970 4031 14641 16295 4644 44232 9000 NaN NaN 19

ratio(%) 80.7 100 100 100 100 100 20.8 20.6 0 0 96.7

F11 mFEs 10396 10097 9963 39525 45103 41642 4166 4509 3485 2146 25

ratio(%) 100 100 100 100 100 40.7 100 100 100 100 100

F12 mFEs 25715 30618 37112 NaN 44141 NaN 15690 19242 NaN 17338 34

ratio(%) 80.5 80.2 40.6 0 40.9 0 80.2 100 0 90.4 100

F13 mFEs 16937 33109 19909 NaN NaN NaN 7071 7219 5368 4482 1093
ratio(%) 80 80 100 0 0 0 100 100 100 100 100

F14 mFEs 10522 20995 23761 NaN 41831 22955 5473 22714 18404 3939 22

ratio(%) 40.2 40.3 60.6 0 40.5 60.8 80.2 80.3 60.5 100 100

Table 5 The mean FEs and the reliability ratio, i.e. the percentage of trial runs reaching acceptable solutions, for the 30-dimensional functions [45]

F Index . DE jDE SaDE PSOwFIPS CLPSO ABC TLBO ETLBO sawTLBO VTTLBO MTLBO

F1 mFEs 27150 20132 19179 NaN NaN NaN 4724 5521 4065 2896 1070

ratio(%) 100 100 100 0 0 0 100 100 100 100 100

F2 mFEs NaN NaN NaN NaN NaN NaN 19289 19131 16458 13656 NaN

ratio(%) 0 0 0 0 0 0 100 100 100 100 70

F3 mFEs 24454 18788 17115 NaN NaN 48083 4397 5119 3833 2571 31

ratio(%) 100 100 100 0 0 60 100 100 100 100 100

F4 mFEs NaN NaN NaN NaN NaN NaN 36703 36607 38919 26751 128

ratio(%) 0 0 0 0 0 0 100 100 100 100 100

F5 mFEs NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN

ratio(%) 0 0 0 0 0 0 0 0 0 0 0

F6 mFEs 39175 29075 27726 NaN NaN NaN 6813 8024 5730 4580 20

ratio(%) 100 100 100 0 0 0 100 100 100 100 100

F7 mFEs NaN 29888 45886 NaN NaN NaN NaN 9069 NaN 3841 951

ratio(%) 0 100 100 0 0 0 0 20 0 20.5 100

F8 mFEs NaN NaN 39975 NaN NaN NaN 9809 11236 7773 7395 59

ratio(%) 0 0 100 0 0 0 100 100 100 100 100



F9 mFEs 15701 11780 11037 NaN NaN 29454 2808 3192 2495 1663 18

ratio(%) 100 100 100 0 0 100 100 100 100 100 100

F10 mFEs NaN 18013 21675 NaN NaN 48980 NaN NaN NaN NaN NaN

ratio(%) 0 100 100 0 0 40 0 0 0 0 0

F11 mFEs 41747 30421 28286 NaN NaN NaN 6853 7966 5744 4687 805

ratio(%) 100 100 100 0 0 0 100 100 100 100 100

F12 mFEs NaN NaN NaN NaN NaN NaN NaN NaN NaN 44773 27

ratio(%) 0 0 0 0 0 0 0 0 0 20 100

F13 mFEs NaN NaN NaN NaN NaN NaN 10243 11514 8092 7674 3838

ratio(%) 0 0 0 0 0 0 100 100 100 100 100

F14 mFEs 19680 17233 14234 NaN NaN 40107 8751 3826 2962 1904 15

ratio(%) 100 80.4 100 0 0 100 100 100 100 100 100

5 Real world applications

In this section, an ELM tuned by MTLBO is used to build the nitrogen oxides (NOx) emission model of a 330 MW circulating fluidized bed boiler (CFBB). To demonstrate the validity of MTLBO, three other algorithms, ABC, PSO and TLBO, are also employed to optimize the input weights and biases of the ELM. First, the NOx emission data are collected from the 330 MW circulating fluidized bed boiler. Second, the model parameters of the ELM are adjusted by the optimization algorithms. Finally, the experimental results are analyzed in detail.

5.1 Model NOx emission from a CFBB

Before building the NOx emission model, data for 300 combustion operation cases were collected from the 330 MW CFBB; they are listed in Table 6. As Table 6 shows, these data were obtained under various operating conditions, including 50% load, 70% load and 100% load. The 300 operation cases are divided into two parts: 240 training data and 60 testing data. The training set is used to build the NOx emission model and the testing set is used to test it.

The NOx emission of the CFBB is mainly determined by 19 operational conditions, such as the coal feeders (A, B, C, D), the bed temperature, the primary air velocity, the primary air temperature, the secondary air velocity, the secondary air temperature, the oxygen content in the flue gas and the exhaust gas temperature.

This paper adopts the ELM tuned by MTLBO to build the NOx emission model, which represents the functional mapping between the NOx emission and the 19 combustion operational conditions. The 19 operational parameters are the inputs of the ELM and the NOx emission concentration is the output of the ELM model. Note that, to unify dimensions, the input and output data are normalized to [0, 1].
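The normalization is ordinary column-wise min-max scaling; a small sketch (NumPy, illustrative names, not the authors' code) shows one way to scale the data and invert the scaling for predictions:

import numpy as np

def minmax_fit(data):
    """Column-wise min-max statistics of a 2-D array (rows = cases)."""
    lo, hi = data.min(axis=0), data.max(axis=0)
    return lo, hi

def minmax_apply(data, lo, hi):
    """Scale each column to [0, 1] using the training-set statistics."""
    return (data - lo) / (hi - lo)

def minmax_invert(scaled, lo, hi):
    """Map model outputs back to physical units (e.g. mg/m3 for NOx)."""
    return scaled * (hi - lo) + lo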

Table 6 The field data of the CFBB
(columns: Cases; coal feeder A, B, C, D (t/h); bed temp. (℃); primary air velocity Left, Right (KNm3/h); primary air temp. Left, Right (℃); oxygen (%))

1 61.43 55.49 55.37 61.09 867.59 414.70 311.71 281.33 278.60 4.73

2 61.43 55.49 55.37 61.19 867.59 377.17 288.14 281.33 278.60 4.73

3 61.43 55.39 55.67 61.35 865.97 388.38 268.46 281.33 278.60 4.73

4 61.08 55.08 55.29 61.17 864.49 343.53 333.23 281.33 278.60 4.73

… … … … … … … … … … …

101 30.93 31.11 30.99 30.96 873.35 207.58 164.09 244.37 242.76 7.56

102 31.58 31.91 31.46 31.56 874.28 186.29 215.82 244.37 242.76 7.56

103 31.86 31.92 31.80 31.81 875.71 163.41 124.27 244.37 242.76 7.56

… … … … … … … … … … …

297 53.06 45.43 19.98 56.36 871.59 231.61 203.46 264.74 262.09 5.84

298 53.53 45.84 19.97 57.28 871.26 199.11 239.85 264.74 262.09 5.84

299 52.91 45.55 19.89 56.85 870.19 190.41 258.16 264.74 262.61 5.84

300 53.29 45.07 20.07 56.04 869.11 214.67 237.79 265.07 262.61 6.19

Table 6 The field data of the CFBB (continued)
(columns: Cases; secondary air velocity Left (in), Left (out), Right (in), Right (out) (KNm3/h); secondary air temp. Left, Right (℃); AMP A, B (A); exhaust gas temp. (℃); NOx emission (mg/m3))

1 101.93 104.98 108.89 87.50 292.29 276.81 122.94 136.29 160.84 176.38

2 101.48 106.24 108.73 88.81 292.29 276.81 125.76 134.38 160.84 176.38

3 101.08 93.54 108.99 85.02 292.29 276.81 126.03 137.32 160.84 181.49

101
4 101.13

55.42
99.2

51.09
107.84

51.52
re-
87.11

56.63
292.29

253.83
276.81

245.47
125.76

97.84
134.04

93.45
160.84

147.00
183.70

146.78

102 53.14 51.08 51.70 56.99 253.83 245.47 90.24 94.63 146.49 146.17

103 54.79 46.31 51.65 57.77 253.83 245.47 91.23 94.17 146.49 146.09

… … … … … … … … … … …
297 72.33 87.17 81.02 59.87 273.77 263.30 86.51 101.27 149.70 165.39

298 71.47 88.42 81.39 55.76 273.77 263.30 86.70 101.69 149.70 168.45

299 72.84 86.98 79.09 61.86 273.77 263.30 86.24 101.73 149.70 166.31

300 75.84 80.86 79.50 58.93 274.13 263.62 86.47 101.92 149.74 166.23

5.2 Tuning parameters of ELM

In the ELM, the input weights and hidden-layer biases, i.e. the model parameters, directly affect the regression accuracy and the generalization ability. Therefore, generating appropriate model parameters is very important for obtaining a well-adjusted ELM. In this paper, the MTLBO algorithm is applied to tune the model parameters of the ELM. During the selection of the model parameters, each set of input weights and biases is considered a feasible solution, represented as $X_i = [\omega_{11}, \omega_{12}, \ldots, \omega_{1n}, \ldots, \omega_{mn}, b_1, \ldots, b_m]$. The dimension of a solution is $m \times (n + 1)$, where $m$ is the number of hidden-layer neurons, $n$ is the number of input-layer neurons, $b_m$ is the bias of the $m$th hidden neuron, and $\omega$ denotes the input weights. For the ELM, the number of hidden neurons is set to 20 and the hidden activation function is the sigmoid function.

In addition, a crucial step is to define the fitness function for estimating the solution quality. Here, the objective function is set as $\min \sum_{i=1}^{N} (g(x_i) - t_i)^2$, where $g(x_i)$ is the prediction output, $t_i$ is the target output, and $N$ is the number of training samples. The objective is to minimize this fitness value: whenever a solution has a smaller fitness value than the others, it is saved during the optimization process, so that the optimal model parameters are finally selected.
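Putting the solution encoding and the objective together, the fitness of one candidate can be sketched as follows (NumPy, illustrative names; a sigmoid activation is used and the output weights are assumed to be computed analytically as in Section 2.2):

import numpy as np

def elm_fitness(solution, X_train, T_train, m, n):
    """Sum-of-squared-errors fitness of one MTLBO candidate for the tuned ELM.
    Solution layout follows the text: [w_11 ... w_1n, ..., w_mn, b_1 ... b_m]."""
    W = solution[: m * n].reshape(m, n)              # input weights, m x n
    b = solution[m * n :]                            # hidden-layer biases, length m
    H = 1.0 / (1.0 + np.exp(-(X_train @ W.T + b)))   # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T_train               # analytic output weights (Eq. (6))
    pred = H @ beta                                  # training predictions
    return float(np.sum((pred - T_train) ** 2))      # the paper's training objective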

To evaluate the validity of MTLBO, three other algorithms, ABC, PSO and the original TLBO, are also employed to optimize the model parameters of the ELM. The common parameters of the algorithms are set as follows: the population size of ABC and PSO is set to 40, while for MTLBO and TLBO the population size is set to 20; the maximum number of iterations is set to 200. The algorithm-specific parameters are also set: for ABC, the 'limit' is set to 30; for PSO, the two coefficients are $C_1 = 2$ and $C_2 = 2$, and the inertia weight is $W \in [0.4, 0.9]$.

5.3 Results and discussion

In this subsection, the NOx emission model of the 330 MW circulating fluidized bed boiler predicted by the tuned ELM is presented. An accurate model has two important performance aspects: regression accuracy and generalization ability, of which the generalization ability plays an especially significant role. The root mean square error (RMSE) and the mean absolute percentage error (MAPE) are employed to evaluate the performance of the algorithms; the smaller these two values are, the better the algorithm performs. The experimental results are listed in Table 7.

Table 7 A comparison of the performance of different models

Model       Training RMSE  Training MAPE  Testing RMSE  Testing MAPE
MTLBO-ELM   0.0499         2.2680e-05     0.0653        7.1561e-06
TLBO-ELM    0.0441         1.7716e-05     0.0626        2.3190e-04
ABC-ELM     0.0410         1.5337e-05     0.0659        1.8915e-04
PSO-ELM     0.0352         1.1312e-05     0.0634        2.7372e-05
ELM         0.0832         6.2981e-05     0.1035        6.1623e-04
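For reference, the two error measures can be computed as follows (NumPy). These are the standard definitions; the paper does not spell out its exact MAPE scaling, so the second function should be treated as an assumption:

import numpy as np

def rmse(pred, target):
    """Root mean square error."""
    return float(np.sqrt(np.mean((pred - target) ** 2)))

def mape(pred, target):
    """Mean absolute percentage error (as a fraction; multiply by 100 for percent)."""
    return float(np.mean(np.abs((pred - target) / target)))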
As seen in Fig. 3 and Fig. 4, the green star curve is the output of MTLBO-ELM and the blue dotted line is the output of the ELM. The two figures show that the NOx emission output of MTLBO-ELM clearly approaches the target output, while the output of the ELM fluctuates more than that of MTLBO-ELM. Moreover, Table 7 shows that the training accuracy (RMSE) of MTLBO-ELM is 0.0499 versus 0.0832 for the ELM, and the testing accuracy is 0.0653 versus 0.1035. Therefore, the ELM model tuned by MTLBO has better regression accuracy and generalization ability than the plain ELM.

The testing error curves of the compared algorithms are presented in Fig. 5, where MTLBO shows satisfactory performance. Moreover, the testing MAPE of MTLBO is the smallest among all algorithms in Table 7. Hence, the results indicate that MTLBO has good generalization ability.


[Figure 3: NOx emission (mg/Nm3, 60-200) versus the 240 training cases, comparing the target value with the MTLBO-ELM and ELM outputs.]

Fig.3 Comparisons between ELM and MTLBO-ELM for training model of NOx emission

[Figure 4: NOx emission (mg/Nm3, 60-200) versus the 60 testing cases, comparing the target value with the MTLBO-ELM and ELM outputs.]

Fig.4 Comparisons between ELM and MTLBO-ELM for predicting model of NOx emission
[Figure 5: testing error (-50 to 40) versus the 60 testing cases for MTLBO-ELM, TLBO-ELM, ABC-ELM, PSO-ELM and ELM.]

Fig.5 Predict errors of tuned ELM model by 5 methods for testing set

5.4 Optimize NOx emissions

In this subsection, MTLBO is used to optimize the combustion operation process of the CFBB based on the NOx emissions model above, with the aim of reducing the NOx emissions concentration. First, the excess air coefficient and the coal supply play the decisive roles in the NOx emissions concentration, and the excess air coefficient is related to the oxygen content in the flue gas; therefore, to reduce the NOx emissions concentration, the coal supply and the oxygen content in the flue gas should be reduced appropriately. Second, the primary air mainly serves to fluidize the pulverized coal. Third, the secondary air supports combustion. Moreover, the bed temperature affects the NOx emissions concentration and is generally maintained at 850-900 ℃, but it is not an adjustable parameter, so it is not optimized. According to this analysis, 13 parameters need to be optimized, including the coal feeders (A, B, C, D), the primary air velocity, the secondary air velocity, the oxygen content in the flue gas, etc.

The objective function is set as $\min f(x) = \min(NO_x)$, and the candidate solutions are expressed as $X = [x_1, x_2, x_3, x_4, x_5, x_6, x_7, x_8, x_9, x_{10}, x_{11}, x_{12}, x_{13}]$, following Table 6: $x_1$ to $x_4$ denote coal feeders A to D, $x_5, x_6$ are the primary air velocities, $x_7$ to $x_{10}$ are the secondary air velocities, $x_{11}, x_{12}$ are the AMP values, and $x_{13}$ is the oxygen content in the flue gas.

According to the real combustion conditions of the CFBB, the ranges of the 13 adjustable parameters are set as follows:

$\begin{cases} 20 \leq x_1, x_2, x_3, x_4 \leq 80 \\ 50 \leq x_5, x_6 \leq 500 \\ 30 \leq x_7, x_8, x_9, x_{10} \leq 150 \\ 80 \leq x_{11}, x_{12} \leq 180 \\ 3 \leq x_{13} \leq 9 \end{cases}$   (14)
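In code, this step merely wires the trained model into a bounded minimization over the 13 parameters. The sketch below is purely illustrative: nox_model stands for the MTLBO-tuned ELM predictor of Section 5.2 and mtlbo for the optimizer sketched in Section 3.3, both hypothetical names:

import numpy as np

# Bounds of Eq. (14), in the order x1..x13.
lower = np.array([20.0] * 4 + [50.0] * 2 + [30.0] * 4 + [80.0] * 2 + [3.0])
upper = np.array([80.0] * 4 + [500.0] * 2 + [150.0] * 4 + [180.0] * 2 + [9.0])

def objective(x):
    """Predicted NOx emission (mg/m3) of an operating condition kept inside Eq. (14)."""
    x = np.clip(x, lower, upper)   # enforce the operating ranges
    return nox_model(x)            # hypothetical: trained MTLBO-ELM predictor

# best_x, best_nox = mtlbo(objective, lower, upper, dim=13)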
After setting the ranges of the adjustable parameters, a typical combustion operation case is randomly chosen from Table 6 as the initial working condition. The optimized parameters are recorded in Table 7 and Table 8 below. As these tables show, the coal feeder rates (A, B, C, D), the primary air velocity and the secondary air velocity all decrease obviously after optimization, and the optimized parameters are reasonable; as a result, the NOx emission is reduced from 183.7050 mg/m3 to 129.7558 mg/m3. In conclusion, MTLBO is an effective optimization tool for reducing the NOx emissions concentration.
Table 7 comparison for optimized parameters

Primary air
Coal feeder(t/h) Oxygen

pro
Cases velocity(tKNm3/h)
%
A B C D Left Right
Before
61.0820 55.0850 55.29 61.178 4.7340 343.5310 333.2320
optimization
After
59.8803 53.1718 52.8430 54.9916 4.2876 292.5077 220.3141
optimization

Table 8 Comparison of optimized parameters

                      Secondary air velocity (KNm3/h)                     AMP                     NOx emissions
Cases                 Left(in)   Left(out)   Right(in)   Right(out)      First      Second       (mg/m3)
Before optimization   101.131    99.2200     107.8430    87.1120         125.7630   134.0400     183.7050
After optimization    84.2207    70.3956     95.8278     69.1239         121.9968   145.4113     129.7558

6 Conclusions

Inspired by the actual teaching-learning phenomenon, a novel optimization method, MTLBO, is proposed based on the main theoretical framework of the conventional TLBO. In the proposed MTLBO algorithm, two novel population grouping mechanisms are introduced into the teaching phase and the learning phase, respectively. In addition, three inertia weights are introduced into the individual updating mechanism for the first time. The performance of MTLBO is evaluated on 14 benchmark functions. The experimental results reveal that MTLBO achieves faster convergence speed and better solution quality than other state-of-the-art algorithms on most optimization problems.
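As an illustration of the grouping idea (a schematic sketch only; the concrete per-group update rules are the ones defined earlier in the paper and are not reproduced here), the mean-based split can be expressed as:

```python
import numpy as np

def split_by_mean(population, fitness):
    """Divide the class into two groups around its mean fitness; each
    group is then updated by its own strategy in the MTLBO phases."""
    mask = fitness <= np.mean(fitness)   # minimization: at or below mean
    return population[mask], population[~mask]
```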
In addition, the proposed MTLBO is used to optimize the input weights and hidden-layer biases of the ELM, improving the model precision of the extreme learning machine. This paper then uses the tuned extreme learning machine to build the NOx emissions model. Simulation results reveal that the ELM tuned by MTLBO shows good regression accuracy and generalization ability. Furthermore, the MTLBO is used to optimize the combustion operation process of the CFBB to reduce the NOx emissions concentration. Therefore, the MTLBO is an effective optimization algorithm.
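For readers who want to reproduce the tuning step, a minimal sketch of one common encoding is given below; the sigmoid activation, the RMSE fitness and all names are assumptions consistent with standard ELM practice rather than code from this paper:

```python
import numpy as np

def decode(x, n_in, n_hidden):
    """Map a flat MTLBO solution onto ELM input weights W and biases b."""
    W = x[:n_in * n_hidden].reshape(n_in, n_hidden)
    b = x[n_in * n_hidden:].reshape(1, n_hidden)
    return W, b

def elm_fitness(x, X_train, y_train, n_hidden):
    """Fitness of one candidate: training RMSE of the ELM whose hidden
    layer is fixed by x; the output weights come from the usual
    least-squares (pseudo-inverse) step."""
    W, b = decode(x, X_train.shape[1], n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X_train @ W + b)))   # sigmoid hidden layer
    beta = np.linalg.pinv(H) @ y_train             # analytic output weights
    err = H @ beta - y_train
    return float(np.sqrt(np.mean(err ** 2)))
```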
Future work will focus on the following tasks:
1) Based on the main framework of MTLBO, designing a multi-objective MTLBO algorithm to solve multi-objective optimization problems, including both mathematical and application problems.
2) According to the experimental analysis, the MTLBO algorithm achieves fast convergence despite its more complex updating mechanisms. Therefore, we will theoretically prove the convergence and stability properties of the MTLBO.
3) The proposed MTLBO will be further enhanced for solving constrained numerical optimization problems and constrained mechanical design optimization problems.


Acknowledgement
Funding: This work was supported by the National Natural Science Foundation of China (Grant No. 61573306), the Natural Science Foundation of Tianjin (Grant No. 20JCQNJC00430), the Tianjin Technical Expert Project of China (Grant No. 19JCTPJC51000), and the Hebei Innovation Capability Improvement Project (Grant No. 20554501D).

References

[1]. Song J, Romero C E, Zheng Y, et al. Improved artificial bee colony-based optimization
of boiler combustion considering NOx emissions, heat rate and fly ash recycling for
on-line applications[J]. Fuel, 2016, 172:20-28.
[2]. Wang Z, et al. Online adaptive least squares support vector machine and its application in utility boiler combustion optimization systems[J]. Journal of Process Control, 2011, 21(7):1040-1048.
[3]. Li S, Chen Z, Li X, et al. Effect of outer secondary-air vane angle on the flow and combustion characteristics and NOx formation of the swirl burner in a 300-MW low-volatile coal-fired boiler with deep air staging[J]. Journal of the Energy Institute, 2016, 90(2):239-256.

[4]. Krzywanski J, Czakiert T, Blaszczuk A, et al. A generalized model of SO2 emissions from large- and small-scale CFB boilers by artificial neural network approach: Part 1. The mathematical model of SO2 emissions in air-firing, oxygen-enriched and oxycombustion CFB conditions[J]. Fuel Processing Technology, 2015, 137:66-74.
[5]. Liu X, Bansal R C. Integrating multi-objective optimization with computational fluid
dynamics to optimize boiler combustion process of a coal fired power plant[J]. Applied
Energy, 2014, 130(5):658-669.
[6]. Huang G B, Zhu Q Y, Siew C K. Extreme learning machine: a new learning scheme of feedforward neural networks[C]// Proceedings of the IEEE International Joint Conference on Neural Networks, 2004: 985-990.

[7]. Rao R V, Savsani V J, Vakharia D P. Teaching–Learning-Based Optimization: An
optimization method for continuous non-linear large scale problems[J]. Information Sciences,
2012, 183(1):1-15.
[8]. Muthusamy H, Polat K, Yaacob S. Improved Emotion Recognition Using Gaussian Mixture
Model and Extreme Learning Machine in Speech and Glottal Signals[J]. Mathematical
Problems in Engineering, 2015(6):1-13.
[9]. Lan Y, Hu Z, Soh Y C, et al. An extreme learning machine approach for speaker
recognition[J]. Neural Computing & Applications, 2013, 22(3-4):417-425.

[10]. Deng W, Chen L. Color image watermarking using regularized extreme learning machine[J].
Neural Network World, 2010, 20(3):317-330.
[11]. Wang S, Deng C, Lin W, et al. NMF-Based Image Quality Assessment Using Extreme
Learning Machine[J]. IEEE Transactions on Cybernetics, 2016, 47(1):232-243.
[12]. Shikha B, Gitanjali P, Kumar D P. An Extreme Learning Machine-Relevance Feedback Framework for Enhancing the Accuracy of a Hybrid Image Retrieval System[J]. International Journal of Interactive Multimedia and Artificial Intelligence, 2020, 6:15-27.

[13]. Beque A, Lessmann S. Extreme learning machines for credit scoring: An empirical evaluation[J]. Expert Systems with Applications, 2017, 86:42-53.
[14]. Mohapatra P, Chakravarty S, Dash P K. An improved cuckoo search based extreme learning
machine for medical data classification[J]. Swarm & Evolutionary Computation, 2015,
24:25-49.
[15]. Jha S, Dey A, Kumar R, et al. A Novel Approach on Visual Question Answering by Parameter Prediction using Faster Region Based Convolutional Neural Network[J]. IJIMAI, 2019, 5(5):30-37.
[16]. Cao J, Lin Z, Huang G B. Self-Adaptive Evolutionary Extreme Learning Machine[J]. Neural
Processing Letters, 2012, 36(3):285-305.
[17]. Zhu Q Y, Qin A K, Suganthan P N, et al. Evolutionary extreme learning machine[J]. Pattern
Recognition, 2005, 38(10):1759-1763.
[18]. Matias T, Souza F, Araújo R, et al. Learning of a single-hidden layer feedforward neural network using an optimized extreme learning machine[J]. Neurocomputing, 2014, 129:428-436.
[19]. Han F, Yao H F, Ling Q H. An improved evolutionary extreme learning machine based on
particle swarm optimization[J]. Neurocomputing, 2013, 116:87-93.
[20]. Li G, Niu P, Ma Y, et al. Tuning extreme learning machine by an improved artificial bee colony to model and optimize the boiler efficiency[J]. Knowledge-Based Systems, 2014, 67(3):278-289.

[21]. Li G, Niu P, Liu C, et al. Enhanced combination modeling method for combustion efficiency
in coal-fired boilers[J]. Applied Soft Computing, 2012, 12(10):3132–3140.
[22]. Doğan B, Ölmez T. A new metaheuristic for numerical function optimization: Vortex Search
algorithm[J]. Information Sciences, 2015, 293:125-145.
[23]. Ghasemi M, Ghavidel S, Rahmani S, et al. A novel hybrid algorithm of imperialist
competitive algorithm and teaching learning algorithm for optimal power flow problem with
non-smooth cost functions[J]. Engineering Applications of Artificial Intelligence, 2014, 29:
54-69.
[24]. Kennedy J. Particle swarm optimization[M]//Encyclopedia of Machine Learning. Springer
US, 2010: 760-766.

[25]. Karaboga D, Basturk B. A powerful and efficient algorithm for numerical function
optimization: artificial bee colony (ABC) algorithm[J]. Journal of global optimization, 2007,
39(3): 459-471.
[26]. Gandomi A H, Alavi A H. Krill herd: a new bio-inspired optimization algorithm[J].
Communications in Nonlinear Science and Numerical Simulation, 2012, 17(12): 4831-4845.
[27]. Cuevas E, Cienfuegos M, Zaldívar D, et al. A swarm optimization algorithm inspired in the
behavior of the social-spider[J]. Expert Systems with Applications, 2013, 40(16): 6374-6384.
[28]. Arora S, Singh S. An Effective Hybrid Butterfly Optimization Algorithm with Artificial Bee Colony for Numerical Optimization[J]. International Journal of Interactive Multimedia and Artificial Intelligence, 2017, 4(4):14-21.
[29]. Niknam T, Fard A K, Baziar A. Multi-objective stochastic distribution feeder reconfiguration
problem considering hydrogen and thermal energy production by fuel cell power plants[J].
Energy, 2012, 42(1): 563-573.
[30]. Niknam T, Azizipanah-Abarghooee R, Narimani M R. An efficient scenario-based stochastic
programming framework for multi-objective optimal micro-grid operation[J]. Applied Energy,
2012, 99: 455-470.


[31]. Niknam T, Azizipanah-Abarghooee R, Narimani M R. A new multi objective optimization
approach based on TLBO for location of automatic voltage regulators in distribution
systems[J]. Engineering Applications of Artificial Intelligence, 2012, 25(8): 1577-1588.
[32]. Niknam T, Golestaneh F, Sadeghi M S. θ-Multiobjective Teaching–Learning-Based Optimization for Dynamic Economic Emission Dispatch[J]. IEEE Systems Journal, 2012, 6(2): 341-352.
[33]. Rao R V, Kalyankar V D. Parameter optimization of modern machining processes using
teaching–learning-based optimization algorithm[J]. Engineering Applications of Artificial
Intelligence, 2013, 26(1): 524-531.
[34]. Rao R V, Kalyankar V D. Multi-objective multi-parameter optimization of the industrial
LBW process using a new optimization algorithm[J]. Proceedings of the Institution of
Mechanical Engineers, Part B: Journal of Engineering Manufacture, 2012:
0954405411435865.
[35]. Krishnanand K R, Panigrahi B K, Rout P K, et al. Application of multi-objective
teaching-learning-based algorithm to an economic load dispatch problem with
incommensurable objectives[M]//Swarm, Evolutionary, and Memetic Computing. Springer
Berlin Heidelberg, 2011: 697-705.
[36]. Li G, Niu P, Zhang W, et al. Model NOx emissions by least squares support vector machine
with tuning based on ameliorated teaching–learning-based optimization[J]. Chemometrics
and Intelligent Laboratory Systems, 2013, 126: 11-20.
[37]. Rao R, Patel V. Comparative performance of an elitist teaching-learning-based optimization
algorithm for solving unconstrained optimization problems[J]. International Journal of
Industrial Engineering Computations, 2013, 4(1): 29-50.
[38]. Rao R V, Patel V. An improved teaching-learning-based optimization algorithm for solving
unconstrained optimization problems[J]. Scientia Iranica, 2013, 20(3): 710-720.
[39]. Roy P K, Bhui S. Multi-objective quasi-oppositional teaching learning based optimization for
economic emission load dispatch problem[J]. International Journal of Electrical Power &
Energy Systems, 2013, 53: 937-948.

[40]. Yu K, Wang X, Wang Z. An improved teaching-learning-based optimization algorithm for
numerical and engineering optimization problems[J]. Journal of Intelligent Manufacturing,
2016, 27(4):831-843.
[41]. Huang J, Gao L, Li X. An effective teaching-learning-based cuckoo search algorithm for
parameter optimization problems in structure designing and machining processes[J]. Applied
Soft Computing, 2015, 36(C):349-356.
[42]. Tuo S, Yong L, Deng F, et al. HSTLBO: A hybrid algorithm based on Harmony Search and
Teaching-Learning-Based Optimization for complex high-dimensional optimization
problems[J]. PLoS One, 2017, 12(4).


[43]. Rao R V, Savsani V J, Vakharia D P. Teaching–learning-based optimization: A novel method for constrained mechanical design optimization problems[J]. Computer-Aided Design, 2011, 43(3):303-315.
[44]. Črepinšek M, Liu S H, Mernik M. Exploration and exploitation in evolutionary algorithms: A
survey[J]. ACM Computing Surveys (CSUR), 2013, 45(3): 35-67.
[45]. Chen D, Lu R, Zou F, et al. Teaching-learning-based optimization with variable-population
scheme and its application for ANN and global optimization[J]. Neurocomputing, 2016,
173:1096-1111.

Conflicts of interest: none


The authors declare that they have no known competing financial interests or personal
relationships that could have appeared to influence the work reported in this paper.

