Teaching Learning Based Optimization
Prakash Kotecha
Debasis Maharana & Remya Kommadath
Department of Chemical Engineering
Indian Institute of Technology Guwahati
Outline
Generic framework of metaheuristic algorithms
Sanitized Teaching Learning Based Optimization (s-TLBO)
Detailed working of s-TLBO with an example
Various types of convergence curves
Statistical analysis of multiple runs
Preliminary comparison of algorithms
Issues in TLBO
Variants of TLBO
Terminologies
Optimization Metaheuristic techniques
Metaheuristic techniques and optimization problem
Generalized Scheme for metaheuristic techniques
[Flowchart: update solutions; t = t + 1; if t ≤ T, repeat; otherwise stop]
Performance of metaheuristic techniques
Rastrigin function: min f(x) = Σ(i=1..D) [xi^2 - 10 cos(2π xi) + 10]; -5.12 ≤ xi ≤ 5.12
Number of decision variables, D = 2
[Figure: population on the Rastrigin landscape at Iterations 1, 2 and 3]
Information Sciences, Volume 183, Issue 1, 2012, Pages 1-15, Elsevier: https://doi.org/10.1016/j.ins.2011.08.006
Teaching Learning Based Optimization (TLBO)
Teaching-learning-based optimization: A novel method for constrained mechanical design
optimization problems, Computer-Aided Design, Volume 43, Issue 3, 2011
Teaching-learning-based optimization algorithm for unconstrained and constrained real-
parameter optimization problems, Engineering Optimization, Volume 44, 2012
Teaching-learning-based optimization: An optimization method for continuous non-linear large
scale problems, Information Sciences, Volume 183, Issue 1, 2012
Codes of TLBO: https://drive.google.com/file/d/0B96X2BLz4rx-VUQ3OERMZGFhUjg/view?usp=sharing
[Bar chart: No. of publications per year, 2011-2020]
Around 1700 publications (Scopus, Dec 2019)
Teaching Learning Based Optimization (TLBO)
Stochastic population-based technique proposed by Rao et al. in 2011
Inspiration: Knowledge transfer in a classroom environment
Required parameters: Population size and number of iterations
The algorithm consists of two phases
Teacher Phase
• New solution is generated using the best solution and mean of the population
• Greedy selection: Accept new solution if better than the current solution
Learner Phase
• New solution is generated using a partner solution
• Greedy selection
Each solution undergoes teacher phase followed by learner phase
Iteration 1 Iteration 2 Iteration 3 Iteration T
….
T1L1 T2L2 T3L3 T1L1 T2L2 T3L3 T1L1 T2L2 T3L3 T1L1 T2L2 T3L3
Working of sanitized TLBO: Sphere function
Consider
min f(x) = Σ(i=1..4) xi^2 ; 0 ≤ xi ≤ 10, i = 1, 2, 3, 4
Decision variables: x1, x2, x3 and x4, i.e. f(x) = x1^2 + x2^2 + x3^2 + x4^2
Step 3: Generate random solutions within the domain of the decision variables
        [4 0 0 8]        [ 80]
        [3 1 9 7]        [140]
    P = [0 3 1 5]    f = [ 35]
        [2 1 4 9]        [102]
        [6 2 8 3]        [113]
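The fitness column can be reproduced by evaluating the sphere function on each row of the population; a minimal NumPy sketch (names are illustrative):

```python
import numpy as np

# Initial population from the example: 5 solutions, 4 decision variables each
P = np.array([[4, 0, 0, 8],
              [3, 1, 9, 7],
              [0, 3, 1, 5],
              [2, 1, 4, 9],
              [6, 2, 8, 3]], dtype=float)

def sphere(X):
    """Row-wise sphere function: sum of squared decision variables."""
    return np.sum(X**2, axis=1)

f = sphere(P)
print(f)  # [ 80. 140.  35. 102. 113.]
```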
Teacher Phase: Generation of new solution
New solution is generated with the help of teacher and mean of the population
Teacher Phase
        [4 0 0 8]        [ 80]
        [3 1 9 7]        [140]
    P = [0 3 1 5]    f = [ 35]
        [2 1 4 9]        [102]
        [6 2 8 3]        [113]
Step 4: Select the teacher, Xbest = [0 3 1 5]
Step 5: Determine the mean of the class,
    Xmean = [15 7 22 32] / 5 = [3.0 1.4 4.4 6.4]
Step 6: Teacher phase of first student, ([4 0 0 8])
Xnew = X + r (Xbest - Tf Xmean)
Let r = [0.8 0.2 0.7 0.4], and Tf = 2
[Figure: three bounding cases]
Xnew within bounds [lb, ub]: no bounding required
Xnew violates the upper bound: shift Xnew to the upper bound
Xnew violates the lower bound: shift Xnew to the lower bound
Teacher Phase
0 ≤ xi ≤ 10
X1new = [-0.80 0.04 -5.46 4.88]
        [4 0 0 8]        [ 80]
        [3 1 9 7]        [140]
    P = [0 3 1 5]    f = [ 35]
        [2 1 4 9]        [102]
        [6 2 8 3]        [113]
Step 7: x1 and x3 violate the lower bound
X1new = max(X1new, lb)
      = max([-0.80 0.04 -5.46 4.88], [0 0 0 0])
      = [0 0.04 0 4.88]
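The max/min bounding above amounts to an element-wise clip to [lb, ub]; a sketch using the first student's unbounded teacher-phase result:

```python
import numpy as np

lb = np.zeros(4)        # lower bound: 0 for every variable
ub = 10 * np.ones(4)    # upper bound: 10 for every variable

# Unbounded teacher-phase result for the first student
X_new = np.array([-0.80, 0.04, -5.46, 4.88])

# Violating components are shifted to the nearest bound
X_bounded = np.clip(X_new, lb, ub)
print(X_bounded)  # [0.   0.04 0.   4.88]
```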
Teacher Phase: Selection of solution
Evaluate fitness (fnew) of the new solution (Xnew ) generated in teacher phase
Perform greedy selection to update the population
If fnew < f: X = Xnew and f = fnew
Teacher Phase
Step 8: Evaluate the fitness of the bounded solution
X1new = [0 0.04 0 4.88]
f(X1new) = 0 + 0.04^2 + 0 + 4.88^2 = 23.82
        [4 0 0 8]        [ 80]
        [3 1 9 7]        [140]
    P = [0 3 1 5]    f = [ 35]
        [2 1 4 9]        [102]
        [6 2 8 3]        [113]
Since fnew < f1 (23.82 < 80), accept: X1 = [0 0.04 0 4.88], f1 = 23.82
Learner Phase: Generation of new solution
Xnew = X + r (X - Xp) if f < fp
Xnew = X - r (X - Xp) if f ≥ fp
Xnew : New solution
Xp : Partner solution
f : Fitness of current solution
fp : Fitness of partner solution
r : Random number between 0 and 1
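The two cases can be folded into one helper; a sketch of the learner-phase rule (the function name is mine), checked against the values used for the first student and its partner X4 in the worked example:

```python
import numpy as np

def learner_step(X, f, Xp, fp, r):
    """Learner-phase update for minimization:
    move away from a worse partner, toward a better one."""
    if f < fp:
        return X + r * (X - Xp)
    return X - r * (X - Xp)

# First student (after its teacher phase) and partner X4 from the example
X1, f1 = np.array([0.0, 0.04, 0.0, 4.88]), 23.82
X4, f4 = np.array([2.0, 1.0, 4.0, 9.0]), 102.0
r = np.array([0.9, 0.1, 0.2, 0.5])

X_new = learner_step(X1, f1, X4, f4, r)  # ≈ [-1.8, -0.056, -0.8, 2.82]
```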
Learner Phase
        [0 0.04 0 4.88]        [23.82]
        [3 1    9 7   ]        [140  ]
    P = [0 3    1 5   ]    f = [ 35  ]
        [2 1    4 9   ]        [102  ]
        [6 2    8 3   ]        [113  ]
Step 10: Select the partner solution for X1
Let the partner be X4
r = [0.9 0.1 0.2 0.5]
If fnew < f: X = Xnew and f = fnew
Learner Phase
0 ≤ xi ≤ 10
X1new = [-1.8 -0.056 -0.8 2.82]
        [0 0.04 0 4.88]        [23.82]
        [3 1    9 7   ]        [140  ]
    P = [0 3    1 5   ]    f = [ 35  ]
        [2 1    4 9   ]        [102  ]
        [6 2    8 3   ]        [113  ]
Step 12: x1, x2 and x3 violate the lower bound
X1new = max(X1new, lb)
      = max([-1.8 -0.056 -0.8 2.82], [0 0 0 0])
      = [0 0 0 2.82]
Learner Phase
Step 13: Evaluate the fitness of the bounded solution
X1new = [0 0 0 2.82]
f(X1new) = 0 + 0 + 0 + 2.82^2 = 7.95
        [0 0.04 0 4.88]        [23.82]
        [3 1    9 7   ]        [140  ]
    P = [0 3    1 5   ]    f = [ 35  ]
        [2 1    4 9   ]        [102  ]
        [6 2    8 3   ]        [113  ]
Since fnew < f1 (7.95 < 23.82), accept: X1 = [0 0 0 2.82], f1 = 7.95

Step 3: Teacher phase of second student, ([3 1 9 7])
Xnew = X + r (Xbest - Tf Xmean)
X2new = [3 1 9 7] + [0.9 0.3 0.8 0.4] x ([0 0 0 2.82] - 1 x [2.2 1.4 4.4 5.36])
      = [1.02 0.58 5.48 5.98]
Teacher Phase: Second solution
Step 4: Evaluate the fitness of the bounded solution
X2new = [1.02 0.58 5.48 5.98]
f(X2new) = 1.02^2 + 0.58^2 + 5.48^2 + 5.98^2 = 67.17
        [0    0    0    2.82]        [ 7.95]
        [3    1    9    7   ]        [140  ]
    P = [0    3    1    5   ]    f = [ 35  ]
        [2    1    4    9   ]        [102  ]
        [6    2    8    3   ]        [113  ]
Since fnew < f2 (67.17 < 140), accept: X2 = [1.02 0.58 5.48 5.98], f2 = 67.17

Learner phase of second student (partner X5 = [6 2 8 3], and f2 < f5):
X2new = [1.02 0.58 5.48 5.98] + [0.09 0.7 0.1 0.6] x ([1.02 0.58 5.48 5.98] - [6 2 8 3])
Learner Phase: Second solution
Step 8: Evaluate the fitness of the bounded solution
X2new = [0.57 0 5.23 7.77]
f(X2new) = 0.57^2 + 0 + 5.23^2 + 7.77^2 = 88.05
        [0    0    0    2.82]        [ 7.95]
        [1.02 0.58 5.48 5.98]        [67.17]
    P = [0    3    1    5   ]    f = [ 35  ]
        [2    1    4    9   ]        [102  ]
        [6    2    8    3   ]        [113  ]
Since fnew > f2 (88.05 > 67.17), the new solution is rejected
Xnew = X + r (X - Xp) if f < fp    (1)
Xnew = X - r (X - Xp) if f ≥ fp    (2)
Step 5: Learner phase of X3
f(X3) = 22.47 > f(X1) = 7.95, so rule (2) applies with partner X1
X3new = [0 1.92 0.85 4.25] - [0.8 0.4 0.3 0.3] x ([0 1.92 0.85 4.25] - [0 0 0 2.82])
      = [0 1.15 0.6 3.82]
f(X3new) = 16.27
        [0    0    0    2.82]        [ 7.95]
        [1.02 0.58 5.48 5.98]        [67.17]
    P = [0    1.15 0.6  3.82]    f = [16.27]
        [2    1    4    9   ]        [102  ]
        [6    2    8    3   ]        [113  ]
Fourth solution
        [0    0    0    2.82]        [ 7.95]
        [1.02 0.58 5.48 5.98]        [67.17]
    P = [0    1.15 0.6  3.82]    f = [16.27]
        [2    1    4    9   ]        [102  ]
        [6    2    8    3   ]        [113  ]
Teacher phase: r = [0.9 0.95 0.5 0.8], Tf = 2
Fifth Solution
        [0    0    0    2.82]        [ 7.95]
        [1.02 0.58 5.48 5.98]        [67.17]
    P = [0    1.15 0.6  3.82]    f = [16.27]
        [0    0    0    1.82]        [ 3.31]
        [6    2    8    3   ]        [113  ]
Teacher phase: r = [0.6 0.85 0.8 0.89], Tf = 1
Determine the population and fitness (round to two decimal places)
        [0    0    0    2.82]        [ 7.95]
        [1.02 0.58 5.48 5.98]        [67.17]
    P = [0    1.15 0.6  3.82]    f = [16.27]
        [0    0    0    1.82]        [ 3.31]
        [1.55 1.32 5.23 2.21]        [36.39]
Learner phase: r = [0.7 0.2 0.1 0.3], Partner = 3
Satisfaction of termination condition
min f(x) = Σ(i=1..4) xi^2 ; 0 ≤ xi ≤ 10, i = 1, 2, 3, 4
        [0    0    0    0   ]        [0     ]
        [0.01 0    0    0.05]        [0.0026]
    P = [0    0.01 0    0.06]    f = [0.0037]
        [0    0    0    0   ]        [0     ]
        [0.01 0.02 0.11 0.04]        [0.0142]
Performance of s-TLBO
Pseudocode
1. Input: Fitness function, lb, ub, NP, T
2. Initialize a random population (P)
3. Evaluate fitness of P                                (FE = NP)
4. for t = 1 to T
       for i = 1 to NP
           Choose Xbest
           Determine Xmean
           Xnew = Xi + r (Xbest - Tf Xmean)
           Bound Xnew and evaluate its fitness fnew     (FE = 1)
           Accept Xnew if it is better than Xi
           Choose any solution randomly, Xp
           Determine Xnew as
               if fi < fp
                   Xnew = Xi + r (Xi - Xp)
               else
                   Xnew = Xi - r (Xi - Xp)
               end
           Bound Xnew and evaluate its fitness fnew     (FE = 1)
           Accept Xnew if it is better than Xi
       end                                              (one iteration: FE = 2NP)
   end                                                  (T iterations: FE = 2NP T)
Total function evaluations: FE = NP + 2NP T
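The pseudocode above can be sketched as a short NumPy implementation. This is an illustrative sketch, not a reference implementation: the random-number generator, the partner-selection call, and drawing the teaching factor Tf from {1, 2} at each update are my assumptions. It is run here on the sphere example:

```python
import numpy as np

def sphere(x):
    return np.sum(x**2)

def stlbo(fitness, lb, ub, NP, T, seed=0):
    """Sanitized TLBO: each solution undergoes the teacher phase and then
    the learner phase, with greedy selection after each phase."""
    rng = np.random.default_rng(seed)
    D = len(lb)
    P = lb + rng.random((NP, D)) * (ub - lb)    # step 2: random population
    f = np.array([fitness(x) for x in P])       # step 3: FE = NP
    for _ in range(T):
        for i in range(NP):
            # Teacher phase
            Xbest = P[np.argmin(f)]
            Xmean = P.mean(axis=0)
            Tf = rng.integers(1, 3)             # teaching factor: 1 or 2
            Xnew = P[i] + rng.random(D) * (Xbest - Tf * Xmean)
            Xnew = np.clip(Xnew, lb, ub)        # bound the solution
            fnew = fitness(Xnew)                # FE = 1
            if fnew < f[i]:                     # greedy selection
                P[i], f[i] = Xnew, fnew
            # Learner phase
            p = rng.choice([j for j in range(NP) if j != i])  # partner
            r = rng.random(D)
            if f[i] < f[p]:
                Xnew = P[i] + r * (P[i] - P[p])   # move away from worse partner
            else:
                Xnew = P[i] - r * (P[i] - P[p])   # move toward better partner
            Xnew = np.clip(Xnew, lb, ub)
            fnew = fitness(Xnew)                # FE = 1
            if fnew < f[i]:
                P[i], f[i] = Xnew, fnew
    return P[np.argmin(f)], f.min()

xbest, fbest = stlbo(sphere, np.zeros(4), 10 * np.ones(4), NP=5, T=50)
print(fbest)  # close to the optimum f* = 0
```

With NP = 5 and T = 50, this uses NP + 2 NP T = 505 function evaluations, matching the FE count annotated in the pseudocode.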
Generalized Scheme for metaheuristic techniques
[Flowchart: update solutions; t = t + 1; if t ≤ T, repeat; otherwise stop]
Convergence curve: Iteration vs. Best fitness
Iteration 0:    P0 = [14  9]    f = [277]
                     [19  6]        [397]
                     [ 8 12]        [208]
Iteration 1:    P1 = [10  4]    f = [116]
                     [13 10]        [269]
                     [ 6 11]        [157]
Iteration 2:    P2 = [10  4]    f = [116]
                     [ 8  2]        [ 68]
                     [ 5  8]        [ 89]
Iteration 10:   P10 = [0 0]     f = [0]
                      [0 0]         [0]
                      [0 1]         [1]
Cases of convergence
T = 16
[Figure: convergence curves for different cases]
Convergence curve: # FE vs. Best fitness value
# FE | Fitness value | Best fitness value
  1  |       8       |         8
  2  |      12       |         8
  3  |       9       |         8
  4  |       5       |         5
  5  |      11       |         5
  6  |       4       |         4
  7  |       6       |         4
  8  |       1       |         1
  9  |       8       |         1
 10  |       3       |         1
 11  |       2       |         1
 12  |       6       |         1
 13  |       4       |         1
 14  |       1       |         1
 15  |       2       |         1
 16  |       8       |         1
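The best-fitness column is simply a running minimum over the fitness values; a sketch (NumPy):

```python
import numpy as np

# Fitness value at each function evaluation (from the table)
fitness = np.array([8, 12, 9, 5, 11, 4, 6, 1, 8, 3, 2, 6, 4, 1, 2, 8])

# Best fitness so far = running minimum over function evaluations
best = np.minimum.accumulate(fitness)
print(best)  # [8 8 8 5 5 4 4 1 1 1 1 1 1 1 1 1]
```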
Multiple runs and statistical table
Run : Best fitness after 10 iterations
 1 : 0     2 : 0     3 : 0     4 : 1     5 : 0
 6 : 0     7 : 1     8 : 0     9 : 1    10 : 0
11 : 4    12 : 0    13 : 1    14 : 0    15 : 0

Best | Worst | Mean  | Median | Standard Deviation
  0  |   4   | 0.533 |   0    |       1.06
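The statistics row can be reproduced from the 15 run results; a sketch (note that 1.06 matches the sample standard deviation, i.e. ddof=1):

```python
import numpy as np

# Best fitness after 10 iterations in each of the 15 runs (from the table)
runs = np.array([0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 4, 0, 1, 0, 0])

best, worst = runs.min(), runs.max()     # 0 and 4
mean = round(runs.mean(), 3)             # 0.533
median = np.median(runs)                 # 0.0
std = round(runs.std(ddof=1), 2)         # 1.06 (sample standard deviation)
print(best, worst, mean, median, std)
```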
Mean convergence curve
# FE | Run 1 (F value, Best value) | Run 2 (F, Best) | Run 3 (F, Best) | Run 4 (F, Best) | Mean of best values
1 8 8 12 12 20 20 16 16 14
2 12 8 8 8 12 12 14 14 10.5
3 9 8 9 8 9 9 14 14 9.75
4 5 5 6 6 5 5 5 5 5.25
5 11 5 11 6 11 5 11 5 5.25
6 4 4 5 5 5 5 5 5 4.75
7 6 4 6 5 6 5 6 5 4.75
8 1 1 7 5 7 5 7 5 4
9 8 1 8 5 8 5 8 5 4
10 3 1 5 5 6 5 7 5 4
11 2 1 3 3 5 2 5 5 2.75
12 6 1 6 3 6 2 6 5 2.75
13 4 1 4 3 4 2 4 4 2.5
14 1 1 6 3 1 1 4 4 2.25
15 2 1 4 3 2 1 1 1 1.5
16 8 1 3 3 8 1 8 1 1.5
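The mean convergence curve averages the best-so-far value across runs at each function evaluation; a sketch on the first five rows of the table:

```python
import numpy as np

# Fitness (F value) at each FE for the four runs, first five rows of the table
F = np.array([[ 8, 12, 20, 16],
              [12,  8, 12, 14],
              [ 9,  9,  9, 14],
              [ 5,  6,  5,  5],
              [11, 11, 11, 11]])

# Best-so-far per run (running minimum down each column),
# then the mean curve averages the best values across runs
best = np.minimum.accumulate(F, axis=0)
mean_curve = best.mean(axis=1)
print(mean_curve)  # mean best value at each FE: 14, 10.5, 9.75, 5.25, 5.25
```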
Comparison of algorithms
Algorithm 1
Function    Best  Worst  Mean    Median  Standard Deviation
Function 1   26    30     27.2    28      1.46
Function 2   18    21     18.12   18      0.6
Function 3   60   137    120.68  131     27.08
Function 4   46    51     47.24   47      1.36
Function 5  235   250    239.24  238      4.21

Algorithm 2
Function    Best  Worst  Mean    Median  Standard Deviation
Function 1   26    58     43.4    57     15.78
Function 2   18    21     20.6    21      0.99
Function 3  136   141    139     139      1.43
Function 4   46    50     48.4    48      1.00
Function 5  254   263    259     259      2.54
Issues in the implementation of TLBO
for i = 1:NP
Perform the Teacher Phase of ith solution
Update the population
Perform the Learner Phase of ith solution
Update the population
end
Iteration 1 Iteration 2 Iteration 3 Iteration T
….
T1L1 T2L2 T3L3 T1L1 T2L2 T3L3 T1L1 T2L2 T3L3 T1L1 T2L2 T3L3
for i = 1:NP
Perform the Teacher Phase of ith solution
Update the population
end
for i = 1:NP
Perform the Learner Phase of ith solution
Update the population
end
Iteration 1 Iteration 2 Iteration 3 Iteration T
….
T1T2T3 L1L2L3 T1T2T3 L1L2L3 T1T2T3 L1L2L3 T1T2T3 L1L2L3
Computer-Aided Design, Volume 43, Issue 3, 2011, Pages 303-315, Elsevier: https://doi.org/10.1016/j.cad.2010.12.015
Issues in Implementation of TLBO
1. Input: Fitness function, lb, ub, Np, T
2. Initialize a random population (P)
3. Evaluate fitness of P
4. for t = 1 to T
for i = 1 to NP
Choose Xbest
Determine Xmean
Xnew = Xi + r ( Xbest – Tf Xmean )
Bound Xnew and evaluate its fitness fnew
Accept Xnew if it is better than Xi
Choose any solution randomly , Xp
Determine Xnew as
if fi < fp
Xnew = Xi + r ( Xi – Xp )
else
Xnew = Xi - r ( Xi - Xp )
end
Bound Xnew and evaluate its fitness fnew
Accept Xnew if it is better than Xi
end
end
Computer-Aided Design, Volume 43, Issue 3, 2011, Pages 303-315, Elsevier: https://doi.org/10.1016/j.cad.2010.12.015
Duplicates
Two solutions with an identical set of decision variables
S1 and S2 are identical solutions if the values of the decision variables are identical. Comparison of
S1 and S2 should NOT be done after sorting the variables.
Tag  Solution   f           Tag        Solution   f
S1   [2 5 4]   -7           Sorted S1  [2 4 5]   -7        f = x1 - x2 - x3
S2   [4 2 5]   -3           Sorted S2  [2 4 5]   -7

S1 and S3 are not identical solutions if the decision variables are not identical but their
objective function values are identical. S1 and S3 are realizations.

Tag  Solution   f
S1   [2 5 4]   -7                                          f = x1 - x2 - x3
S3   [0 5 2]   -7
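Both pitfalls can be checked directly in code; a short sketch (the helper name is mine):

```python
import numpy as np

def is_duplicate(s1, s2):
    """Solutions are duplicates only when they match position by position."""
    return np.array_equal(s1, s2)

S1 = np.array([2, 5, 4])   # f = 2 - 5 - 4 = -7
S2 = np.array([4, 2, 5])   # f = 4 - 2 - 5 = -3
S3 = np.array([0, 5, 2])   # f = 0 - 5 - 2 = -7

print(is_duplicate(S1, S2))                      # False: different solutions
print(np.array_equal(np.sort(S1), np.sort(S2)))  # True: sorting wrongly equates them
print(is_duplicate(S1, S3))                      # False: equal fitness, still distinct
```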
Difference between TLBO and s-TLBO
Duplicate removal
Included in TLBO. Duplicates identified by sorting the solutions.
No duplicate removal in sanitized TLBO
Partners
Multiple solutions can have the same partner in TLBO
Every member has a unique partner in s-TLBO
Elitist TLBO (ETLBO): Variant of TLBO
Elitism: replacement of worst solutions with the elite solutions.
Elite size specifies the number of worst solutions which have to be replaced.
Duplicate removal is performed after replacing worst solutions with elite solutions.
An elitist teaching-learning-based optimization algorithm for solving complex constrained optimization problems, International Journal of Industrial Engineering Computations, 3(4), 535-560, 2012
Improved TLBO: Variant of TLBO
Divides the population into groups.
Incorporates tutorial learning in teacher phase.
Incorporates self-learning in learner phase.
Number of teachers in the population is equal to number of groups.
Solution corresponding to best fitness value is chief teacher.
Other teachers are selected based on the fitness value of chief teacher and their fitness.
An adaptive teaching factor is introduced.
Elitism and duplicate removal are incorporated.
An improved teaching-learning-based optimization algorithm for solving unconstrained optimization problems, Scientia Iranica, 20 (3), 710-720, 2013.
TLBO Codes
MATLAB code by the inventors: https://sites.google.com/site/tlborao/tlbo-code
(includes duplicate removal; duplicates identified after sorting)
Further reading
Teaching-learning-based optimization: An optimization method for continuous non-
linear large scale problems, Information Sciences, Volume 183, Issue 1, 2012
A note on teaching-learning-based optimization algorithm, Information Sciences, Volume
212, Pages 79-93, 2012
Comments on “A note on teaching-learning-based optimization algorithm”, Information
Sciences, Volume 229, Pages 159-169, 2013
Teaching-Learning-Based Optimization (TLBO) Algorithm and its engineering
applications. Springer International Publishing, Switzerland, 2016
A survey of teaching-learning-based optimization, Neurocomputing, Volume 335, Pages
366-383, 2019
Multi-objective optimization using teaching-learning-based optimization algorithm,
Engineering Applications of Artificial Intelligence, Volume 26, Issue 4, Pages 1291-1300, 2013
Closure
Generic framework of metaheuristic algorithms
Sanitized Teaching Learning Based Optimization (s-TLBO)
Detailed working of s-TLBO with an example
Various types of convergence curves
Statistical analysis of multiple runs
Preliminary comparison of algorithms
Issues in TLBO
Variants of TLBO
Implementation of s-TLBO in MATLAB
Thank You !!!