The document discusses various optimization methods in engineering, focusing on direct and gradient search techniques. It covers methods such as the Bracketing Method, Fibonacci Search Method, Golden Section Method, Newton-Raphson Method, Bisection Method, Secant Method, and Box Evolutionary Method. Each method is explained with examples and iterations to demonstrate their application in minimizing functions.


Optimization Methods in Engineering

Unit-02
Direct & Gradient Search Methods

Dr. Deepak Sharma


Assistant Professor
Department of Mechanical Engineering
National Institute of Technology Hamirpur, H.P., India
BRACKETING METHOD
REGION ELIMINATION METHOD
INTERNAL HALVING METHOD
FIBONACCI SEARCH METHOD

𝐹𝑛 = 𝐹𝑛−1 + 𝐹𝑛−2
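The recurrence above drives the whole search: the interior points are placed at a distance L_k = (F_(n−k+1)/F_(n+1)) · (b − a) from each end of the original interval, and the worse region is eliminated each iteration. A minimal Python sketch (my own naming; for simplicity both interior points are re-evaluated every iteration, rather than reusing one evaluation as the optimal bookkeeping would):

```python
def fibonacci_search_min(f, a, b, n=20):
    """Fibonacci search for the minimum of a unimodal f on [a, b]:
    after n steps the interval has shrunk by roughly a factor F_(n+1)."""
    F = [1, 1]
    for _ in range(n):
        F.append(F[-1] + F[-2])          # F_n = F_(n-1) + F_(n-2)
    L0 = b - a
    for k in range(2, n):                # terminate when k reaches n
        Lk = (F[n - k + 1] / F[n + 1]) * L0
        x1, x2 = a + Lk, b - Lk          # symmetric interior points
        if f(x1) < f(x2):
            b = x2                       # eliminate (x2, b]
        else:
            a = x1                       # eliminate [a, x1)
    return (a + b) / 2

# Same test function as the bisection example: f(x) = x^2 + 54/x on (2, 5)
x_star = fibonacci_search_min(lambda x: x**2 + 54/x, 2, 5)
```

With n = 20 the final interval width is about 2(b − a)/F₂₁, so x_star lands very close to the true minimizer x = 3.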
GOLDEN SECTION METHOD
NEWTON RAPHSON METHOD

A linear approximation to the first derivative of the function is made
at a point. This approximation is equated to zero to find the next
guess value. If the current point at iteration t is x^t, the next
point x^(t+1) is governed by the equation

x^(t+1) = x^t − f'(x^t) / f''(x^t)
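The update rule can be sketched in Python (illustrative naming, applied to the same worked function f(x) = x² + 54/x used in the bisection example, whose derivatives are f'(x) = 2x − 54/x² and f''(x) = 2 + 108/x³):

```python
def newton_raphson_min(df, d2f, x, eps=1e-3, max_iter=50):
    """Newton-Raphson for a stationary point of f: repeat
    x <- x - f'(x)/f''(x) until |f'(x)| < eps."""
    for _ in range(max_iter):
        if abs(df(x)) < eps:             # termination test on f'
            break
        x = x - df(x) / d2f(x)           # Newton update
    return x

# f(x) = x^2 + 54/x, starting from an initial guess x = 1
x_star = newton_raphson_min(lambda x: 2*x - 54/x**2,
                            lambda x: 2 + 108/x**3, x=1.0)
```

For this function the iterates climb monotonically from 1 toward the minimizer x = 3, where f'(3) = 0.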


BISECTION METHOD
Q-1. Minimize F(x) = x² + 54/x in the range (2, 5), ε = 10⁻³

Solution:

Iteration 1:
Step 1: f'(x) = 2x − 54/x²          (i)
f'(2) = −9.5 < 0, f'(5) = 7.840 > 0, so the minimum is bracketed.
Step 2: z = (2 + 5)/2 = 3.5
f'(3.5) = 2.592 > 0
New range is (2, 3.5); move to Step 2.


Iteration 2:
Step 2: z = (2 + 3.5)/2 = 2.750
f'(2.750) = −1.640 < 0
Step 3: |f'(2.750)| = 1.640 ≮ 10⁻³
Set x1 = 2.750. New range is (2.750, 3.5).

Iteration 3:
Step 2: z = (2.750 + 3.5)/2 = 3.125
f'(3.125) = 0.720 > 0
Step 3: |f'(3.125)| = 0.720 ≮ 10⁻³
Set x2 = 3.125. New range is (2.750, 3.125).
Iteration 4:
Step 2: z = (2.750 + 3.125)/2 = 2.938
f'(2.938) = −0.383 < 0
Step 3: |f'(2.938)| = 0.383 ≮ 10⁻³
Set x1 = 2.938. New range is (2.938, 3.125).

Iteration 5:
Step 2: z = (2.938 + 3.125)/2 = 3.032
f'(3.032) = 0.187 > 0
Step 3: |f'(3.032)| = 0.187 ≮ 10⁻³
Set x2 = 3.032. New range is (2.938, 3.032).
Iteration 6:
Step 2: z = (2.938 + 3.032)/2 = 2.985
f'(2.985) = −0.090 < 0
Step 3: |f'(2.985)| = 0.090 ≮ 10⁻³
Set x1 = 2.985. New range is (2.985, 3.032).

Iteration 7:
Step 2: z = (2.985 + 3.032)/2 = 3.009
f'(3.009) = 0.051 > 0
Step 3: |f'(3.009)| = 0.051 ≮ 10⁻³
Set x2 = 3.009. New range is (2.985, 3.009).
Iteration 8:
Step 2: z = (2.985 + 3.009)/2 = 2.997
f'(2.997) = −0.018 < 0
Step 3: |f'(2.997)| = 0.018 ≮ 10⁻³
Set x1 = 2.997. New range is (2.997, 3.009).

Iteration 9:
Step 2: z = (2.997 + 3.009)/2 = 3.003
f'(3.003) = 0.018 > 0
Step 3: |f'(3.003)| = 0.018 ≮ 10⁻³
Set x2 = 3.003. New range is (2.997, 3.003).
Iteration 10:
Step 2: z = (2.997 + 3.003)/2 = 3
f'(3) = 0
Step 3: |f'(3)| = 0 < 10⁻³
The termination condition is achieved, so the iterations stop.
z = 3 minimizes the function.
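The bisection iterations above can be reproduced in a few lines of Python (a sketch with my own naming): each step evaluates f' at the midpoint and keeps the half-interval over which f' still changes sign.

```python
def bisection_min(df, a, b, eps=1e-3):
    """Minimize a unimodal function whose derivative df changes sign on
    [a, b]: bisect on the sign of df until |df(z)| < eps."""
    assert df(a) < 0 < df(b), "derivative must bracket the minimum"
    while True:
        z = (a + b) / 2.0
        g = df(z)
        if abs(g) < eps:                 # Step 3 termination test
            return z
        if g < 0:                        # minimum lies to the right of z
            a = z
        else:                            # minimum lies to the left of z
            b = z

# Worked example: f(x) = x^2 + 54/x, so f'(x) = 2x - 54/x^2, on (2, 5)
x_star = bisection_min(lambda x: 2*x - 54/x**2, 2, 5, 1e-3)
```

With exact floating-point midpoints the iterates do not land on 3 exactly as the rounded hand calculation does, but they converge to it; the returned x_star satisfies |f'(x_star)| < 10⁻³.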
SECANT METHOD
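The slides' secant derivation is not reproduced in this extract. As a hedged sketch of the standard idea (illustrative Python, my own naming): the secant method replaces the second derivative in the Newton-Raphson update with a finite difference of f' between two points, so no second derivative is needed.

```python
def secant_min(df, x1, x2, eps=1e-3, max_iter=100):
    """Secant method for a stationary point of f: Newton's update with
    f'' approximated by (df(x2) - df(x1)) / (x2 - x1)."""
    for _ in range(max_iter):
        g1, g2 = df(x1), df(x2)
        z = x2 - g2 * (x2 - x1) / (g2 - g1)   # secant step on f'
        if abs(df(z)) < eps:                  # same |f'| < eps test as before
            return z
        x1, x2 = x2, z                        # slide the point pair forward
    return x2

# Same worked function: f(x) = x^2 + 54/x, so f'(x) = 2x - 54/x^2
x_star = secant_min(lambda x: 2*x - 54/x**2, 2.0, 5.0)
```

Compared with Newton-Raphson, the trade-off is one extra starting point in exchange for not having to supply f''.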
BOX EVOLUTIONARY METHOD

Box's Evolutionary Optimization Method

The algorithm requires (2^N + 1) points, of which 2^N are the corner
points of an N-dimensional hypercube centered on the remaining point.

All (2^N + 1) function values are compared and the best point is identified.

In the next iteration, another hypercube is formed around this best point.

If no improvement is found, the size of the hypercube is reduced.


Algorithm
Step 1:
Choose an initial point x⁽⁰⁾, size reduction parameters Δi for all
design variables i = 1, 2, …, N, and a termination parameter ε.
Set x̄ = x⁽⁰⁾.
Step 2:
If ‖Δ‖ < ε, terminate, where ‖Δ‖ = √(Δ1² + Δ2² + … + ΔN²).
Else create 2^N points by adding and subtracting Δi/2 from each
variable at the point x̄.
Step 3: Find the point having the minimum function value and designate
it x̄.
Step 4:
If x̄ = x⁽⁰⁾, reduce the size parameters to Δi/2 and go to Step 2.
Else set x⁽⁰⁾ = x̄ and go to Step 2.
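Steps 1-4 can be sketched in Python (a minimal illustration with my own naming; ties in Step 3 are resolved in favour of the current centre, so a non-improving hypercube shrinks):

```python
import itertools

def box_evolutionary_min(f, x0, delta, eps=1e-3):
    """Box's evolutionary method: compare the centre with the 2^N
    corners of a hypercube with sides delta; recentre on the best
    point, halving delta whenever the centre is already the best."""
    x, delta = list(x0), list(delta)
    while sum(d * d for d in delta) ** 0.5 > eps:        # Step 2: ||delta|| >= eps?
        corners = [[xi + s * di / 2 for xi, s, di in zip(x, signs, delta)]
                   for signs in itertools.product((-1, 1), repeat=len(x))]
        best = min([x] + corners, key=f)                 # Step 3: best of 2^N + 1 points
        if best == x:
            delta = [d / 2 for d in delta]               # Step 4: no improvement, shrink
        else:
            x = best                                     # Step 4: recentre and repeat
    return x

# Worked example below: Himmelblau's function from x0 = (1, 1)
f = lambda p: (p[0]**2 + p[1] - 11)**2 + (p[0] + p[1]**2 - 7)**2
x_star = box_evolutionary_min(f, x0=(1, 1), delta=(2, 2))
```

Run on the example that follows, the iterates visit (2, 2), (3, 1), (3.5, 1.5) and settle at (3, 2), exactly as in the hand calculation.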
Q-1. Minimize f(x1, x2) = (x1² + x2 − 11)² + (x1 + x2² − 7)²
in the interval 0 ≤ x1, x2 ≤ 5. Take the initial point x⁽⁰⁾ = (1, 1),
size reduction parameter Δ = (2, 2), ε = 10⁻³.

Solution: Iteration 1

Step 1: Initial point x̄ = x⁽⁰⁾ = (1, 1), Δ = (2, 2), ε = 10⁻³

Step 2: ‖Δ‖ = √(Δ1² + Δ2²) = √(2² + 2²) = √8 = 2.828 > 10⁻³
Step 3: Hypercube corners about (1, 1): P1 (0, 0), P2 (2, 0), P3 (2, 2), P4 (0, 2)

f(0, 0) = (0² + 0 − 11)² + (0 + 0² − 7)² = 170
f(2, 0) = (2² + 0 − 11)² + (2 + 0² − 7)² = 74
f(2, 2) = (2² + 2 − 11)² + (2 + 2² − 7)² = 26
f(0, 2) = (0² + 2 − 11)² + (0 + 2² − 7)² = 90
f(1, 1) = (1² + 1 − 11)² + (1 + 1² − 7)² = 106

x̄ = (2, 2)
Step 4: x̄ ≠ x⁽⁰⁾, so set x⁽⁰⁾ = (2, 2) and move to Step 2.

Iteration 2

Step 2: ‖Δ‖ = √(2² + 2²) = √8 = 2.828 > 10⁻³

Step 3: Hypercube corners about (2, 2): P1 (1, 1), P2 (3, 1), P3 (3, 3), P4 (1, 3)

f(1, 1) = (1² + 1 − 11)² + (1 + 1² − 7)² = 106
f(3, 1) = (3² + 1 − 11)² + (3 + 1² − 7)² = 10
f(3, 3) = (3² + 3 − 11)² + (3 + 3² − 7)² = 26
f(1, 3) = (1² + 3 − 11)² + (1 + 3² − 7)² = 58
f(2, 2) = (2² + 2 − 11)² + (2 + 2² − 7)² = 26

Step 4: x̄ ≠ x⁽⁰⁾, so set the current best point x⁽⁰⁾ = (3, 1).

Iteration 3

Step 2: ‖Δ‖ = √(2² + 2²) = √8 = 2.828 > 10⁻³

Step 3: Hypercube corners about (3, 1): P1 (2, 0), P2 (4, 0), P3 (4, 2), P4 (2, 2)

f(3, 1) = (3² + 1 − 11)² + (3 + 1² − 7)² = 10
f(2, 0) = (2² + 0 − 11)² + (2 + 0² − 7)² = 74
f(4, 0) = (4² + 0 − 11)² + (4 + 0² − 7)² = 34
f(4, 2) = (4² + 2 − 11)² + (4 + 2² − 7)² = 50
f(2, 2) = (2² + 2 − 11)² + (2 + 2² − 7)² = 26

Step 4: x̄ = x⁽⁰⁾, i.e. the new point is the same as the previous one,
so we reduce the size parameter to Δ/2 = (1, 1) and move to Step 2.

Iteration 4

Step 2: ‖Δ‖ = √(1² + 1²) = √2 = 1.414 > 10⁻³

Step 3: Hypercube corners about (3, 1): P1 (2.5, 0.5), P2 (3.5, 0.5), P3 (3.5, 1.5), P4 (2.5, 1.5)

f(3, 1) = 10
f(2.5, 0.5) = 36.125
f(3.5, 0.5) = 13.625
f(3.5, 1.5) = 9.125
f(2.5, 1.5) = 15.625

The minimum function value is obtained at (3.5, 1.5).
Step 4: x̄ ≠ x⁽⁰⁾, so set x⁽⁰⁾ = (3.5, 1.5) and proceed to the next iteration.

Iteration 5

Step 2: ‖Δ‖ = √(1² + 1²) = √2 = 1.414 > 10⁻³
Step 3: Hypercube corners about (3.5, 1.5): P1 (3, 1), P2 (4, 1), P3 (4, 2), P4 (3, 2)

f(3.5, 1.5) = 9.125
f(3, 1) = 10
f(4, 1) = 40
f(4, 2) = 50
f(3, 2) = 0

The minimum function value is obtained at (3, 2).
Step 4: x̄ ≠ x⁽⁰⁾, so set x⁽⁰⁾ = (3, 2) and proceed to the next iteration.

Iteration 6

Step 2: ‖Δ‖ = √(1² + 1²) = √2 = 1.414 > 10⁻³
Step 3: Hypercube corners about (3, 2): P1 (2.5, 1.5), P2 (3.5, 1.5), P3 (3.5, 2.5), P4 (2.5, 2.5)

f(3, 2) = 0
f(2.5, 1.5) = 15.625
f(3.5, 1.5) = 9.125
f(3.5, 2.5) = 21.625
f(2.5, 2.5) = 8.125

The minimum function value is again obtained at (3, 2).
Step 4: Since x̄ = x⁽⁰⁾, we reduce the size parameter to
Δ/2 = (0.5, 0.5) and continue the iterations.
