
Teaching Learning Based Optimization

Prakash Kotecha
Debasis Maharana & Remya Kommadath
Department of Chemical Engineering
Indian Institute of Technology Guwahati
Outline
Generic framework of metaheuristic algorithms
Sanitized Teaching Learning Based Optimization (s-TLBO)
Detailed working of s-TLBO with an example
Various types of convergence curves
Statistical analysis of multiple runs
Preliminary comparison of algorithms
Issues in TLBO
Variants of TLBO

Terminologies

    Optimization term          Metaheuristic equivalents
    Decision variables         marks, subjects, position, gene
    Solution                   population member, learner, chromosome, child, parent, particle, bee, moth, flame, stream
    Set of solutions           population, class, moths, flames, water body, swarm
    Objective function value   nectar amount, energy, fitness*
    Iteration                  generation, cycles
Metaheuristic techniques and optimization problem

Metaheuristic techniques:
1) Generate a single solution / a set of solutions randomly
2) Based on the fitness of the current solution / set of solutions, suggest other solutions with the help of "intelligent" operators

Optimization problem:
    Decision variables:  X = [5 2 3]
    Evaluate fitness function:  f = x_1^2 - 8 x_1 x_2 + x_3
    Fitness:  f = 5^2 - 8(5)(2) + 3 = -52
Generalized Scheme for metaheuristic techniques

Problem: Fitness function, bounds of decision variables
Technique: Population size, maximum iterations (T)

    1. Define parameters
    2. Generate random population P
    3. Evaluate P to determine fitness
    4. Initialize t = 1
    5. While t <= T:
           P'(t)  = Selection( P(t) )
           P''(t) = Variation( P'(t) )
           Evaluate P''(t) to determine fitness
           P(t+1) = Survivor( P(t), P''(t) )
           t = t + 1
    6. Stop
Performance of metaheuristic techniques

Rastrigin function:  \min f(x) = \sum_{i=1}^{D} [ x_i^2 - 10 \cos(2 \pi x_i) + 10 ],  -5.12 <= x_i <= 5.12
Number of decision variables, D = 2

[Figure: population snapshots at iterations 1, 2, 3, 5, 10 and 20]

Information Sciences, Volume 183, Issue 1, 2012, Pages 1-15, Elsevier: https://doi.org/10.1016/j.ins.2011.08.006
Teaching Learning Based Optimization (TLBO)

▪ Teaching-learning-based optimization: A novel method for constrained mechanical design optimization problems, Computer-Aided Design, Volume 43, Issue 3, 2011
▪ Teaching-learning-based optimization algorithm for unconstrained and constrained real-parameter optimization problems, Engineering Optimization, Volume 44, 2012
▪ Teaching-learning-based optimization: An optimization method for continuous non-linear large scale problems, Information Sciences, Volume 183, Issue 1, 2012
▪ Codes of TLBO: https://drive.google.com/file/d/0B96X2BLz4rx-VUQ3OERMZGFhUjg/view?usp=sharing

[Figure: number of TLBO publications per year, 2011-2020]
Around 1700 publications (Scopus, Dec 2019)
Teaching Learning Based Optimization (TLBO)

▪ Stochastic population-based technique proposed by Rao et al. in 2011
▪ Inspiration: knowledge transfer in a classroom environment
▪ Required parameters: population size and number of iterations
▪ The algorithm consists of two phases
   • Teacher Phase: a new solution is generated using the best solution and the mean of the population, followed by greedy selection (accept the new solution if it is better than the current solution)
   • Learner Phase: a new solution is generated using a partner solution, followed by greedy selection
▪ Each solution undergoes the teacher phase followed by the learner phase

    Iteration 1      Iteration 2      Iteration 3            Iteration T
    T1L1 T2L2 T3L3   T1L1 T2L2 T3L3   T1L1 T2L2 T3L3   ....  T1L1 T2L2 T3L3
Working of sanitized TLBO: Sphere function

▪ Consider  \min f(x) = \sum_{i=1}^{4} x_i^2,  0 <= x_i <= 10, i = 1, 2, 3, 4
▪ Decision variables: x1, x2, x3 and x4, so f(x) = x1^2 + x2^2 + x3^2 + x4^2
▪ Step 1: Fix the population size, NP = 5
▪ Step 2: Fix the maximum number of iterations, T = 10
▪ Step 3: Generate random solutions within the domain of the decision variables

    P = [ 4  0  0  8 ]    f = [  80 ]
        [ 3  1  9  7 ]        [ 140 ]
        [ 0  3  1  5 ]        [  35 ]
        [ 2  1  4  9 ]        [ 102 ]
        [ 6  2  8  3 ]        [ 113 ]

(An initialization sketch follows.)
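As a rough illustration of Steps 1-3, a minimal MATLAB sketch of the setup (names are illustrative; the walk-through above uses a fixed example population rather than random draws):

    % Initialization sketch for the sphere example.
    NP = 5;  T = 10;  D = 4;                 % population size, iterations, variables
    lb = zeros(1, D);  ub = 10*ones(1, D);   % bounds 0 <= xi <= 10
    fitness = @(x) sum(x.^2, 2);             % sphere function, applied row-wise

    P = lb + rand(NP, D) .* (ub - lb);       % random population within the bounds
    f = fitness(P);                          % fitness of each solution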
Teacher Phase: Generation of new solution

▪ New solution is generated with the help of the teacher and the mean of the population
▪ Teacher: solution corresponding to the best fitness value
▪ Each variable in a solution (X) is modified as

    Xnew = X + r ( Xbest - Tf Xmean )

    X       Current solution
    Xnew    New solution
    Xbest   Teacher
    Xmean   Mean of the population
    Tf      Teaching factor, either 1 or 2
    r       Random number between 0 and 1

▪ Tf is the same for all variables of a solution
▪ r is selected afresh for each variable

(A sketch of this update appears below.)
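A minimal MATLAB sketch of this update for the i-th solution, assuming P, f, D and fitness from the initialization sketch above:

    % Teacher phase: generate a new solution for member i.
    [~, b] = min(f);                  % teacher = solution with the best fitness
    Xmean  = mean(P, 1);              % mean of the population (column-wise)
    Tf     = randi([1 2]);            % teaching factor, 1 or 2, same for all variables
    r      = rand(1, D);              % a fresh random number for each variable
    Xnew   = P(i,:) + r .* (P(b,:) - Tf*Xmean);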
Teacher Phase

▪ Step 4: Select teacher, Xbest = [0 3 1 5]
▪ Step 5: Determine the mean of the class,
    Xmean = [15 7 22 32] / 5 = [3.0 1.4 4.4 6.4]

    P = [ 4  0  0  8 ]    f = [  80 ]
        [ 3  1  9  7 ]        [ 140 ]
        [ 0  3  1  5 ]        [  35 ]
        [ 2  1  4  9 ]        [ 102 ]
        [ 6  2  8  3 ]        [ 113 ]

▪ Step 6: Teacher phase of the first student, X1 = [4 0 0 8]
    Xnew = X + r ( Xbest - Tf Xmean )
    Let r = [0.8 0.2 0.7 0.4] and Tf = 2

    X1new = [4 0 0 8] + [0.8 0.2 0.7 0.4] x ( [0 3 1 5] - 2 x [3.0 1.4 4.4 6.4] )
          = [-0.80 0.04 -5.46 4.88]
Teacher Phase: Bounding of solution

▪ xnew is within bounds (lb <= xnew <= ub): no bounding required
▪ xnew violates the upper bound (xnew > ub): shift xnew to the upper bound
▪ xnew violates the lower bound (xnew < lb): shift xnew to the lower bound

(A one-line bounding sketch follows.)
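In MATLAB, both bounding cases can be handled in one line (a sketch, assuming row vectors Xnew, lb and ub):

    Xnew = min(max(Xnew, lb), ub);   % clip each variable to [lb, ub]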
Teacher Phase

    0 <= xi <= 10
    X1new = [-0.80 0.04 -5.46 4.88]

▪ Step 7: x1 and x3 violate the lower bound

    X1new = max( X1new, lb )
          = max( [-0.80 0.04 -5.46 4.88], [0 0 0 0] )
          = [0 0.04 0 4.88]      since max(-0.80, 0) = 0 and max(-5.46, 0) = 0

    P = [ 4  0  0  8 ]    f = [  80 ]
        [ 3  1  9  7 ]        [ 140 ]
        [ 0  3  1  5 ]        [  35 ]
        [ 2  1  4  9 ]        [ 102 ]
        [ 6  2  8  3 ]        [ 113 ]
Teacher Phase: Selection of solution

▪ Evaluate the fitness (fnew) of the new solution (Xnew) generated in the teacher phase
▪ Perform greedy selection to update the population

    X <- Xnew and f <- fnew   if fnew <= f
    X and f remain the same   if fnew > f
Teacher Phase

▪ Step 8: Evaluate the fitness of the bounded solution
    X1new = [0 0.04 0 4.88],   f(x) = \sum_{i=1}^{4} x_i^2
    f(X1new) = 0 + 0.04^2 + 0 + 4.88^2 = 23.82

    P = [ 4  0  0  8 ]    f = [  80 ]
        [ 3  1  9  7 ]        [ 140 ]
        [ 0  3  1  5 ]        [  35 ]
        [ 2  1  4  9 ]        [ 102 ]
        [ 6  2  8  3 ]        [ 113 ]

▪ Step 9: Perform greedy selection to update the population
    X1 = [4 0 0 8], f1 = 80
    X1new = [0 0.04 0 4.88], f1new = 23.82
    Since f1new < f1:  X1 <- X1new = [0 0.04 0 4.88],  f1 <- f1new = 23.82

    P = [ 0     0.04  0     4.88 ]    f = [ 23.82 ]
        [ 3     1     9     7    ]        [ 140   ]
        [ 0     3     1     5    ]        [  35   ]
        [ 2     1     4     9    ]        [ 102   ]
        [ 6     2     8     3    ]        [ 113   ]
Learner Phase: Generation of new solution

▪ New solution is generated with the help of a partner solution
▪ Partner solution: a randomly selected solution from the population
▪ Each variable of the solution is modified as

    Xnew = X + r ( X - Xp )   if f < fp    ... (1)
    Xnew = X - r ( X - Xp )   if f > fp    ... (2)

    X      Current solution
    Xnew   New solution
    Xp     Partner solution
    f      Fitness of current solution
    fp     Fitness of partner solution
    r      Random number between 0 and 1

(A sketch of this update appears below.)
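A minimal MATLAB sketch of the learner-phase update for the i-th solution, assuming P, f, NP and D as before. The partner is drawn at random here; note that s-TLBO gives every member a distinct partner, as discussed later in the deck:

    % Learner phase: generate a new solution for member i.
    p = i;
    while p == i, p = randi(NP); end      % random partner, different from i
    r = rand(1, D);
    if f(i) < f(p)                        % current solution is better: move away from the partner
        Xnew = P(i,:) + r .* (P(i,:) - P(p,:));
    else                                  % partner is better: move toward it
        Xnew = P(i,:) - r .* (P(i,:) - P(p,:));
    end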
Learner Phase

▪ Step 10: Select the partner solution for X1; let the partner be X4, and r = [0.9 0.1 0.2 0.5]

    P = [ 0     0.04  0     4.88 ]    f = [ 23.82 ]
        [ 3     1     9     7    ]        [ 140   ]
        [ 0     3     1     5    ]        [  35   ]
        [ 2     1     4     9    ]        [ 102   ]
        [ 6     2     8     3    ]        [ 113   ]

    X1 = [0 0.04 0 4.88] and X4 = [2 1 4 9]

▪ Step 11: Learner phase of X1
    f(X1) = 23.82 < f(X4) = 102, so equation (1) is selected

    X1new = [0 0.04 0 4.88] + [0.9 0.1 0.2 0.5] x ( [0 0.04 0 4.88] - [2 1 4 9] )
          = [-1.80 -0.06 -0.80 2.82]
Learner Phase: Bounding and selection of solution

▪ Bound the newly generated variables, if required
    x <- lb  if x < lb
    x <- ub  if x > ub
▪ Evaluate the fitness (fnew) of the new solution generated using the learner phase equation
▪ Perform greedy selection to update the population member

    X <- Xnew and f <- fnew   if fnew <= f
    X and f remain the same   if fnew > f
Learner Phase

    0 <= xi <= 10
    X1new = [-1.80 -0.06 -0.80 2.82]

▪ Step 12: x1, x2 and x3 violate the lower bound

    X1new = max( X1new, lb )
          = max( [-1.80 -0.06 -0.80 2.82], [0 0 0 0] )
          = [0 0 0 2.82]

    P = [ 0     0.04  0     4.88 ]    f = [ 23.82 ]
        [ 3     1     9     7    ]        [ 140   ]
        [ 0     3     1     5    ]        [  35   ]
        [ 2     1     4     9    ]        [ 102   ]
        [ 6     2     8     3    ]        [ 113   ]
Learner Phase

▪ Step 13: Evaluate the fitness of the bounded solution
    X1new = [0 0 0 2.82],   f(x) = \sum_{i=1}^{4} x_i^2
    f(X1new) = 0 + 0 + 0 + 2.82^2 = 7.95

    P = [ 0     0.04  0     4.88 ]    f = [ 23.82 ]
        [ 3     1     9     7    ]        [ 140   ]
        [ 0     3     1     5    ]        [  35   ]
        [ 2     1     4     9    ]        [ 102   ]
        [ 6     2     8     3    ]        [ 113   ]

▪ Step 14: Perform greedy selection to update the population
    X1 = [0 0.04 0 4.88], f1 = 23.82
    X1new = [0 0 0 2.82], f1new = 7.95
    Since f1new < f1:  X1 <- X1new = [0 0 0 2.82],  f1 <- f1new = 7.95

    P = [ 0  0  0  2.82 ]    f = [   7.95 ]
        [ 3  1  9  7    ]        [ 140    ]
        [ 0  3  1  5    ]        [  35    ]
        [ 2  1  4  9    ]        [ 102    ]
        [ 6  2  8  3    ]        [ 113    ]
Teacher Phase: Second solution

▪ Step 1: Select teacher, Xbest = [0 0 0 2.82]
▪ Step 2: Determine the mean of the population,
    Xmean = [2.2 1.4 4.4 5.36]

    P = [ 0  0  0  2.82 ]    f = [   7.95 ]
        [ 3  1  9  7    ]        [ 140    ]
        [ 0  3  1  5    ]        [  35    ]
        [ 2  1  4  9    ]        [ 102    ]
        [ 6  2  8  3    ]        [ 113    ]

▪ Step 3: Teacher phase of the second student, X2 = [3 1 9 7]
    Xnew = X + r ( Xbest - Tf Xmean )
    Let r = [0.9 0.3 0.8 0.4] and Tf = 1

    X2new = [3 1 9 7] + [0.9 0.3 0.8 0.4] x ( [0 0 0 2.82] - 1 x [2.2 1.4 4.4 5.36] )
          = [1.02 0.58 5.48 5.98]
Teacher Phase: Second solution

▪ Step 4: Evaluate the fitness of the bounded solution
    X2new = [1.02 0.58 5.48 5.98],   f(x) = \sum_{i=1}^{4} x_i^2
    f(X2new) = 1.02^2 + 0.58^2 + 5.48^2 + 5.98^2 = 67.17

    P = [ 0  0  0  2.82 ]    f = [   7.95 ]
        [ 3  1  9  7    ]        [ 140    ]
        [ 0  3  1  5    ]        [  35    ]
        [ 2  1  4  9    ]        [ 102    ]
        [ 6  2  8  3    ]        [ 113    ]

▪ Step 5: Perform greedy selection to update the population
    X2 = [3 1 9 7], f2 = 140
    X2new = [1.02 0.58 5.48 5.98], f2new = 67.17
    Since f2new < f2:  X2 <- X2new = [1.02 0.58 5.48 5.98],  f2 <- f2new = 67.17

    P = [ 0     0     0     2.82 ]    f = [  7.95 ]
        [ 1.02  0.58  5.48  5.98 ]        [ 67.17 ]
        [ 0     3     1     5    ]        [ 35    ]
        [ 2     1     4     9    ]        [ 102   ]
        [ 6     2     8     3    ]        [ 113   ]
Learner Phase: Second solution

▪ Step 6: Select the partner solution for X2; let the partner be X5, and r = [0.09 0.7 0.1 0.6]

    P = [ 0     0     0     2.82 ]    f = [  7.95 ]
        [ 1.02  0.58  5.48  5.98 ]        [ 67.17 ]
        [ 0     3     1     5    ]        [ 35    ]
        [ 2     1     4     9    ]        [ 102   ]
        [ 6     2     8     3    ]        [ 113   ]

    X2 = [1.02 0.58 5.48 5.98] and X5 = [6 2 8 3]

▪ Step 7: Learner phase of X2
    f(X2) = 67.17 < f(X5) = 113, so equation (1) is selected

    X2new = [1.02 0.58 5.48 5.98] + [0.09 0.7 0.1 0.6] x ( [1.02 0.58 5.48 5.98] - [6 2 8 3] )
          = [0.57 -0.41 5.23 7.77];  after bounding, X2new = [0.57 0 5.23 7.77]
Learner Phase: Second solution

▪ Step 8: Evaluate the fitness of the bounded solution
    X2new = [0.57 0 5.23 7.77]
    f(X2new) = 0.57^2 + 0 + 5.23^2 + 7.77^2 = 88.05

    P = [ 0     0     0     2.82 ]    f = [  7.95 ]
        [ 1.02  0.58  5.48  5.98 ]        [ 67.17 ]
        [ 0     3     1     5    ]        [ 35    ]
        [ 2     1     4     9    ]        [ 102   ]
        [ 6     2     8     3    ]        [ 113   ]

▪ Step 9: Perform greedy selection to update the population
    X2 = [1.02 0.58 5.48 5.98], f2 = 67.17
    X2new = [0.57 0 5.23 7.77], f2new = 88.05
    Since f2new > f2, the learner phase does not yield a better solution; X2 and f2 are retained

    P = [ 0     0     0     2.82 ]    f = [  7.95 ]
        [ 1.02  0.58  5.48  5.98 ]        [ 67.17 ]
        [ 0     3     1     5    ]        [ 35    ]
        [ 2     1     4     9    ]        [ 102   ]
        [ 6     2     8     3    ]        [ 113   ]
Teacher Phase: Third solution

▪ Step 1: Select teacher, Xbest = [0 0 0 2.82]
▪ Step 2: Determine the mean of the class,
    Xmean = [1.80 1.32 3.70 5.16]

    P = [ 0     0     0     2.82 ]    f = [  7.95 ]
        [ 1.02  0.58  5.48  5.98 ]        [ 67.17 ]
        [ 0     3     1     5    ]        [ 35    ]
        [ 2     1     4     9    ]        [ 102   ]
        [ 6     2     8     3    ]        [ 113   ]

▪ Step 3: Teacher phase of the third solution, X3 = [0 3 1 5]
    Let r = [0.8 0.41 0.02 0.1] and Tf = 2

    X3new = [-2.88 1.92 0.85 4.25];  after bounding, X3new = [0 1.92 0.85 4.25]
    f(X3new) = 22.47
    Since f3new < f3 (22.47 < 35):  X3 <- X3new = [0 1.92 0.85 4.25],  f3 <- f3new = 22.47

    P = [ 0     0     0     2.82 ]    f = [  7.95 ]
        [ 1.02  0.58  5.48  5.98 ]        [ 67.17 ]
        [ 0     1.92  0.85  4.25 ]        [ 22.47 ]
        [ 2     1     4     9    ]        [ 102   ]
        [ 6     2     8     3    ]        [ 113   ]
Learner Phase: Third solution

▪ Step 4: Select the partner solution for X3; let the partner be X1, and r = [0.8 0.4 0.3 0.3]

    P = [ 0     0     0     2.82 ]    f = [  7.95 ]
        [ 1.02  0.58  5.48  5.98 ]        [ 67.17 ]
        [ 0     1.92  0.85  4.25 ]        [ 22.47 ]
        [ 2     1     4     9    ]        [ 102   ]
        [ 6     2     8     3    ]        [ 113   ]

    X3 = [0 1.92 0.85 4.25] and X1 = [0 0 0 2.82]

▪ Step 5: Learner phase of X3
    f(X3) = 22.47 > f(X1) = 7.95, so equation (2) is selected

    X3new = [0 1.92 0.85 4.25] - [0.8 0.4 0.3 0.3] x ( [0 1.92 0.85 4.25] - [0 0 0 2.82] )
          = [0 1.15 0.6 3.82]
    f(X3new) = 16.27 < 22.47, so X3 and f3 are updated

    P = [ 0     0     0     2.82 ]    f = [  7.95 ]
        [ 1.02  0.58  5.48  5.98 ]        [ 67.17 ]
        [ 0     1.15  0.6   3.82 ]        [ 16.27 ]
        [ 2     1     4     9    ]        [ 102   ]
        [ 6     2     8     3    ]        [ 113   ]
Fourth solution

    P = [ 0     0     0     2.82 ]    f = [  7.95 ]
        [ 1.02  0.58  5.48  5.98 ]        [ 67.17 ]
        [ 0     1.15  0.6   3.82 ]        [ 16.27 ]
        [ 2     1     4     9    ]        [ 102   ]
        [ 6     2     8     3    ]        [ 113   ]

    Teacher phase:  r = [0.9 0.95 0.5 0.8],  Tf = 2
    Learner phase:  r = [0.9 0.7 0.1 0.6],  partner = 2

Determine the population and fitness (round to two decimal places):

    P = [ 0     0     0     2.82 ]    f = [  7.95 ]
        [ 1.02  0.58  5.48  5.98 ]        [ 67.17 ]
        [ 0     1.15  0.6   3.82 ]        [ 16.27 ]
        [ 0     0     0     1.82 ]        [  3.31 ]
        [ 6     2     8     3    ]        [ 113   ]
Fifth solution

    P = [ 0     0     0     2.82 ]    f = [  7.95 ]
        [ 1.02  0.58  5.48  5.98 ]        [ 67.17 ]
        [ 0     1.15  0.6   3.82 ]        [ 16.27 ]
        [ 0     0     0     1.82 ]        [  3.31 ]
        [ 6     2     8     3    ]        [ 113   ]

    Teacher phase:  r = [0.6 0.85 0.8 0.89],  Tf = 1
    Learner phase:  r = [0.7 0.2 0.1 0.3],  partner = 3

Determine the population and fitness (round to two decimal places):

    P = [ 0     0     0     2.82 ]    f = [  7.95 ]
        [ 1.02  0.58  5.48  5.98 ]        [ 67.17 ]
        [ 0     1.15  0.6   3.82 ]        [ 16.27 ]
        [ 0     0     0     1.82 ]        [  3.31 ]
        [ 1.55  1.32  5.23  2.21 ]        [ 36.39 ]
Satisfaction of termination condition

    \min f(x) = \sum_{i=1}^{4} x_i^2,  0 <= x_i <= 10, i = 1, 2, 3, 4

▪ After completion of 10 iterations:

    P = [ 0     0     0     0    ]    f = [ 0      ]
        [ 0.01  0     0     0.05 ]        [ 0.0026 ]
        [ 0     0.01  0     0.06 ]        [ 0.0037 ]
        [ 0     0     0     0    ]        [ 0      ]
        [ 0.01  0.02  0.11  0.04 ]        [ 0.0142 ]

▪ The minimum value of the function is 0
Performance of s-TLBO

[Figure]
Pseudocode

    1. Input: fitness function, lb, ub, NP, T
    2. Initialize a random population (P)
    3. Evaluate fitness of P                                (FE = NP)
    4. for t = 1 to T                                       (T iterations: FE = 2 NP T)
           for i = 1 to NP                                  (one iteration: FE = 2 NP)
               Choose Xbest
               Determine Xmean
               Xnew = Xi + r ( Xbest - Tf Xmean )
               Bound Xnew and evaluate its fitness fnew     (FE = 1)
               Accept Xnew if it is better than Xi
               Choose any solution randomly, Xp
               Determine Xnew as
                   if fi < fp
                       Xnew = Xi + r ( Xi - Xp )
                   else
                       Xnew = Xi - r ( Xi - Xp )
                   end
               Bound Xnew and evaluate its fitness fnew     (FE = 1)
               Accept Xnew if it is better than Xi
           end
       end

    Total function evaluations: FE = NP + 2 NP T

(A complete MATLAB sketch follows.)
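Putting the pieces together, a minimal, self-contained MATLAB sketch of s-TLBO for the sphere example. This is illustrative code, not one of the released implementations linked later; the partner is drawn at random here, whereas s-TLBO assigns every member a distinct partner:

    function [Xbest, fbest] = stlbo_sphere()
        NP = 5;  T = 10;  D = 4;                 % population size, iterations, variables
        lb = zeros(1, D);  ub = 10*ones(1, D);   % bounds 0 <= xi <= 10
        fitness = @(x) sum(x.^2, 2);             % sphere function, row-wise

        P = lb + rand(NP, D) .* (ub - lb);       % random initial population
        f = fitness(P);                          % FE = NP

        for t = 1:T
            for i = 1:NP
                % ----- Teacher phase -----
                [~, b] = min(f);                     % teacher = best solution
                Xmean  = mean(P, 1);                 % mean of the population
                Tf     = randi([1 2]);               % teaching factor, 1 or 2
                Xnew   = P(i,:) + rand(1, D) .* (P(b,:) - Tf*Xmean);
                Xnew   = min(max(Xnew, lb), ub);     % bound the solution
                fnew   = fitness(Xnew);              % FE = 1
                if fnew < f(i)                       % greedy selection
                    P(i,:) = Xnew;  f(i) = fnew;
                end
                % ----- Learner phase -----
                p = i;
                while p == i, p = randi(NP); end     % partner different from i
                if f(i) < f(p)                       % current better: move away
                    Xnew = P(i,:) + rand(1, D) .* (P(i,:) - P(p,:));
                else                                 % partner better: move toward it
                    Xnew = P(i,:) - rand(1, D) .* (P(i,:) - P(p,:));
                end
                Xnew = min(max(Xnew, lb), ub);       % bound the solution
                fnew = fitness(Xnew);                % FE = 1
                if fnew < f(i)                       % greedy selection
                    P(i,:) = Xnew;  f(i) = fnew;
                end
            end
        end
        [fbest, b] = min(f);  Xbest = P(b,:);
    end

Running it with NP = 5 and T = 10 performs NP + 2 NP T = 105 function evaluations, matching the count in the pseudocode above.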
Generalized Scheme for metaheuristic techniques

Problem: Fitness function, bounds of decision variables
Technique: Population size, maximum iterations (T)

    1. Define parameters
    2. Generate random population P
    3. Evaluate P to determine fitness
    4. Initialize t = 1
    5. While t <= T:
           P'(t)  = Selection( P(t) )
           P''(t) = Variation( P'(t) )
           Evaluate P''(t) to determine fitness
           P(t+1) = Survivor( P(t), P''(t) )
           t = t + 1
    6. Stop
Convergence curve: Iteration vs. best fitness

    Iteration 0:   P0  = [ 14  9  ]    f = [ 277 ]
                         [ 19  6  ]        [ 397 ]
                         [  8  12 ]        [ 208 ]

    Iteration 1:   P1  = [ 10  4  ]    f = [ 116 ]
                         [ 13  10 ]        [ 269 ]
                         [  6  11 ]        [ 157 ]

    Iteration 2:   P2  = [ 10  4  ]    f = [ 116 ]
                         [  8  2  ]        [  68 ]
                         [  5  8  ]        [  89 ]

    Iteration 10:  P10 = [ 0  0 ]      f = [ 0 ]
                         [ 0  0 ]          [ 0 ]
                         [ 0  1 ]          [ 1 ]
Cases of convergence

[Figure: convergence curves, T = 16]
Convergence curve: # FE vs. best fitness value

    # FE   Fitness value   Best fitness value
      1         8                 8
      2        12                 8
      3         9                 8
      4         5                 5
      5        11                 5
      6         4                 4
      7         6                 4
      8         1                 1
      9         8                 1
     10         3                 1
     11         2                 1
     12         6                 1
     13         4                 1
     14         1                 1
     15         2                 1
     16         8                 1
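The "best fitness value" column is simply the running minimum of the fitness values; in MATLAB this is one call (a sketch, assuming fvals is the column of fitness values in evaluation order):

    best = cummin(fvals);   % running minimum after each function evaluation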
Multiple runs and statistical table

    Run   Best fitness after 10 iterations
     1    0
     2    0
     3    0
     4    1
     5    0
     6    0
     7    1
     8    0
     9    1
    10    0
    11    4
    12    0
    13    1
    14    0
    15    0

    Best   Worst   Mean    Median   Standard deviation
    0      4       0.533   0        1.06
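These summary statistics can be reproduced directly (a sketch, with runs holding the 15 best fitness values above):

    runs  = [0 0 0 1 0 0 1 0 1 0 4 0 1 0 0];
    stats = [min(runs), max(runs), mean(runs), median(runs), std(runs)]
    % -> 0  4  0.533  0  1.06   (std uses the sample formula, n-1 in the denominator)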
Mean convergence curve

    # FE   Run 1 (F / Best)   Run 2 (F / Best)   Run 3 (F / Best)   Run 4 (F / Best)   Mean best
      1        8 / 8             12 / 12            20 / 20            16 / 16           14
      2       12 / 8              8 / 8             12 / 12            14 / 14           10.5
      3        9 / 8              9 / 8              9 / 9             14 / 14            9.75
      4        5 / 5              6 / 6              5 / 5              5 / 5             5.25
      5       11 / 5             11 / 6             11 / 5             11 / 5             5.25
      6        4 / 4              5 / 5              5 / 5              5 / 5             4.75
      7        6 / 4              6 / 5              6 / 5              6 / 5             4.75
      8        1 / 1              7 / 5              7 / 5              7 / 5             4
      9        8 / 1              8 / 5              8 / 5              8 / 5             4
     10        3 / 1              5 / 5              6 / 5              7 / 5             4
     11        2 / 1              3 / 3              2 / 2              5 / 5             2.75
     12        6 / 1              6 / 3              6 / 2              6 / 5             2.75
     13        4 / 1              4 / 3              4 / 2              4 / 4             2.5
     14        1 / 1              6 / 3              1 / 1              4 / 4             2.25
     15        2 / 1              4 / 3              2 / 1              1 / 1             1.5
     16        8 / 1              3 / 3              8 / 1              8 / 1             1.5
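The mean convergence curve is the average, across runs, of each run's running best; a sketch, assuming F is a 16-by-4 matrix of fitness values (rows = function evaluations, columns = runs):

    meanCurve = mean(cummin(F, 1), 2);   % running best per run, then average across runs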
Comparison of algorithms

    Algorithm 1
    Function     Best   Worst   Mean     Median   Std. Dev.
    Function 1    26     30      27.2     28       1.46
    Function 2    18     21      18.12    18       0.60
    Function 3    60    137     120.68   131      27.08
    Function 4    46     51      47.24    47       1.36
    Function 5   235    250     239.24   238       4.21

    Algorithm 2
    Function     Best   Worst   Mean     Median   Std. Dev.
    Function 1    26     58      43.4     57      15.78
    Function 2    18     21      20.6     21       0.99
    Function 3   136    141     139      139       1.43
    Function 4    46     50      48.4     48       1.00
    Function 5   254    263     259      259       2.54

    Number of functions on which each algorithm is better:
                 Algorithm 1   Algorithm 2   Identical
    Best              2             0            3
    Worst             3             1            1
    Mean              5             0            0
    Median            5             0            0
    Std. Dev.         2             3            0

    Function     Best solution
    Function 1    26
    Function 2    18
    Function 3    60
    Function 4    46
    Function 5   235
Issues in the implementation of TLBO

Ordering 1: each solution completes both phases before the next solution is processed

    for i = 1:NP
        Perform the teacher phase of the i-th solution
        Update the population
        Perform the learner phase of the i-th solution
        Update the population
    end

    Iteration 1      Iteration 2      Iteration 3            Iteration T
    T1L1 T2L2 T3L3   T1L1 T2L2 T3L3   T1L1 T2L2 T3L3   ....  T1L1 T2L2 T3L3

Ordering 2: the entire class completes the teacher phase before the learner phase begins

    for i = 1:NP
        Perform the teacher phase of the i-th solution
        Update the population
    end
    for i = 1:NP
        Perform the learner phase of the i-th solution
        Update the population
    end

    Iteration 1      Iteration 2      Iteration 3            Iteration T
    T1T2T3 L1L2L3    T1T2T3 L1L2L3    T1T2T3 L1L2L3    ....  T1T2T3 L1L2L3

Computer-Aided Design, Volume 43, Issue 3, 2011, Pages 303-315, Elsevier: https://doi.org/10.1016/j.cad.2010.12.015
Issues in Implementation of TLBO

    1. Input: fitness function, lb, ub, NP, T
    2. Initialize a random population (P)
    3. Evaluate fitness of P
    4. for t = 1 to T
           for i = 1 to NP
               Choose Xbest
               Determine Xmean
               Xnew = Xi + r ( Xbest - Tf Xmean )
               Bound Xnew and evaluate its fitness fnew
               Accept Xnew if it is better than Xi
               Choose any solution randomly, Xp
               Determine Xnew as
                   if fi < fp
                       Xnew = Xi + r ( Xi - Xp )
                   else
                       Xnew = Xi - r ( Xi - Xp )
                   end
               Bound Xnew and evaluate its fitness fnew
               Accept Xnew if it is better than Xi
           end
       end

Computer-Aided Design, Volume 43, Issue 3, 2011, Pages 303-315, Elsevier: https://doi.org/10.1016/j.cad.2010.12.015
Duplicates

▪ Two solutions with an identical set of decision variables
▪ S1 and S2 are identical solutions only if the values of their decision variables are identical. Comparison of S1 and S2 should NOT be done after sorting the variables

    f = x1 - x2 - x3

    Tag   Solution    f       Tag         Solution    f
    S1    [2 5 4]    -7       Sorted S1   [2 4 5]    -7
    S2    [4 2 5]    -3       Sorted S2   [2 4 5]    -7

▪ S1 and S3 are not identical solutions: their decision variables differ even though their objective function values are identical. S1 and S3 are different realizations of the same objective value

    Tag   Solution    f
    S1    [2 5 4]    -7
    S3    [0 5 2]    -7

▪ Occurrence of duplicates can be very rare, especially in higher-dimensional problems

(A duplicate-check sketch follows.)
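A minimal MATLAB sketch of a row-wise duplicate check that compares the variables directly, without sorting them (illustrative variable names):

    % Rows of P are solutions; keep only the first occurrence of each
    % identical row (no sorting of the variables within a row).
    [~, keep] = unique(P, 'rows', 'stable');
    dupIdx = setdiff(1:size(P,1), keep);   % indices of duplicate solutions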
Difference between TLBO and s-TLBO

▪ Duplicate removal
   • Included in TLBO; duplicates are identified by sorting the solutions
   • No duplicate removal in sanitized TLBO
▪ Number of times the fitness function is evaluated
   • Stochastic in TLBO, as it depends on the duplicates
   • Deterministic in sanitized TLBO (2 NP T + NP)
▪ Partners
   • Multiple solutions can have the same partner in TLBO
   • Every member has a unique partner in s-TLBO
Elitist TLBO (ETLBO): Variant of TLBO

▪ Elitism: replacement of the worst solutions with the elite solutions
▪ Incorporated in every iteration at the end of the learner phase
▪ The procedure to generate new solutions is the same as in TLBO
▪ Algorithm parameters: population size, number of iterations and elite size
▪ Elite size specifies the number of worst solutions to be replaced
▪ Duplicate removal is performed after replacing the worst solutions with the elite solutions

An elitist teaching-learning-based optimization algorithm for solving complex constrained optimization problems, International Journal of Industrial Engineering Computations, 3(4), 535-560, 2012
Improved TLBO: Variant of TLBO

▪ Divides the population into groups
▪ Incorporates tutorial learning in the teacher phase
▪ Incorporates self-learning in the learner phase
▪ The number of teachers in the population equals the number of groups
▪ The solution with the best fitness value is the chief teacher
▪ Other teachers are selected based on their fitness relative to that of the chief teacher
▪ An adaptive teaching factor is introduced
▪ Elitism and duplicate removal are incorporated

An improved teaching-learning-based optimization algorithm for solving unconstrained optimization problems, Scientia Iranica, 20(3), 710-720, 2013
TLBO Codes

▪ MATLAB code by the inventors: https://sites.google.com/site/tlborao/tlbo-code
  (includes duplicate removal; duplicates identified after sorting)
▪ MATLAB code of sanitized TLBO: https://in.mathworks.com/matlabcentral/fileexchange/65628-teaching-learning-based-optimization
▪ MATLAB code: https://yarpiz.com/83/ypea111-teaching-learning-based-optimization
  (entire class undergoes the teacher phase first)
▪ Java code: https://github.com/maciejj04/TLBO
Further reading

▪ Teaching-learning-based optimization: An optimization method for continuous non-linear large scale problems, Information Sciences, Volume 183, Issue 1, 2012
▪ A note on teaching-learning-based optimization algorithm, Information Sciences, Volume 212, Pages 79-93, 2012
▪ Comments on "A note on teaching-learning-based optimization algorithm", Information Sciences, Volume 229, Pages 159-169, 2013
▪ Teaching-Learning-Based Optimization (TLBO) Algorithm and its engineering applications, Springer International Publishing, Switzerland, 2016
▪ A survey of teaching-learning-based optimization, Neurocomputing, Volume 335, Pages 366-383, 2019
▪ Multi-objective optimization using teaching-learning-based optimization algorithm, Engineering Applications of Artificial Intelligence, Volume 26, Issue 4, Pages 1291-1300, 2013
Closure
Generic framework of metaheuristic algorithms
Sanitized Teaching Learning Based Optimization (s-TLBO)
Detailed working of s-TLBO with an example
Various types of convergence curves
Statistical analysis of multiple runs
Preliminary comparison of algorithms
Issues in TLBO
Variants of TLBO
Implementation of s-TLBO in MATLAB

Thank You !!!
