4 Pattern Directions, 21-08-2024
Unconstrained Nonlinear Optimization
• Let us consider various methods of solving the unconstrained minimization problem. The worked example below minimizes the quadratic f(x1, x2) = x1 − x2 + 2x1^2 + 2x1x2 + x2^2 (the objective can be read off from the expansion in Step 4).
• The first line search, along the coordinate direction S1 = (1, 0)^T, has already given X2 = (−1/4, 0).
• Step 2: Choose the search direction S2 as S2 = (0, 1)^T (the second coordinate direction).
• Step 3: Since f2 = f(X2) = −(1/8) = −0.125, with ε = 0.01:
f+ = f(X2 + εS2) = f(−0.25, 0.01) = −0.1399 < f2, and
f− = f(X2 − εS2) = f(−0.25, −0.01) = −0.1099 > f2.
• Hence +S2 is the correct direction for decreasing the value of f from X2.
• Step 4: We minimize f(X2 + λ2S2) to find λ2*.
• Replacing x1 by −0.25 and x2 by λ2:
f(X2 + λ2S2) = f(−0.25, λ2) = −0.25 − λ2 + 2(−0.25)^2 + 2(−0.25)λ2 + λ2^2 = λ2^2 − 1.5λ2 − 0.125
• df/dλ2 = 2λ2 − 1.5 = 0 at λ2* = 0.75
• Step 5: Set X3 = X2 + λ2*S2 = (−0.25, 0) + 0.75(0, 1) = (−0.25, 0.75), giving f(X3) = −0.6875 (see the sketch below).
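Steps 2 through 5 can be checked numerically. The following is a minimal sketch, assuming the objective recovered from the expansion in Step 4; the exact line minimum uses the closed form of the resulting one-dimensional quadratic.

def f(x1, x2):
    # Objective recovered from Step 4's expansion (an assumption, noted above)
    return x1 - x2 + 2*x1**2 + 2*x1*x2 + x2**2

eps = 0.01
X2 = (-0.25, 0.0)                    # current point, f2 = -0.125
S2 = (0.0, 1.0)                      # Step 2: second coordinate direction

f2      = f(*X2)
f_plus  = f(X2[0] + eps*S2[0], X2[1] + eps*S2[1])   # -0.1399
f_minus = f(X2[0] - eps*S2[0], X2[1] - eps*S2[1])   # -0.1099
assert f_plus < f2 < f_minus         # Step 3: +S2 is the descent direction

# Step 4: f(-0.25, lam) = lam**2 - 1.5*lam - 0.125, minimized where
# df/dlam = 2*lam - 1.5 = 0
lam_star = 0.75
X3 = (X2[0] + lam_star*S2[0], X2[1] + lam_star*S2[1])
print(X3, f(*X3))                    # Step 5: (-0.25, 0.75), f = -0.6875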
(S"o
(s)'s)
(SX)
(1)
(l
(X)f
(h
Xs 2 X4 -X?
- 2( 2, 2) - : s ,|S)
- (2-5 2.5 )
&24
proaed
(2-$,2.S)
phnns
hove
Erplsay
(2·s,2) J5
fCxs)
4(25tD, 2-5)4-874
4ixs) J(2s-4,25) l4.06b
X 245
y-diteahir
4(2r2:s) 4. &18
values
(2-,2.2s)
Solve the following problem using the Hooke and Jeeves method
• Example:
• Minimize f(x) = 3x1^2 + x2^2 − 12x1 − 8x2 (the objective, not legible on the slide, is the quadratic consistent with every function value tabulated below).
• Given: x1 = 1, x2 = 1, Δx1 = Δx2 = 0.5, ε = 0.1, and a = 2.
• Initialize:
• Initial point: x(0) = (1, 1), f(x(0)) = -16.
• Acceleration factor a = 2.
• Perturbation vector P0 = (0.5, 0.5).
• Perturbation tolerance vector T = (0.1, 0.1).
• P ← P0.
• Note that these are not very good choices for P0 and T.
• They are chosen in this case just to show that the
algorithm terminates after a small number of steps.
• The elements in T would normally be much smaller.
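In code, the initialization looks like the following minimal sketch (the objective f is the assumption flagged above; everything else is taken directly from the slide):

def f(x):
    # Quadratic consistent with all tabulated values; minimum at (2, 4), f = -28
    return 3*x[0]**2 + x[1]**2 - 12*x[0] - 8*x[1]

x0 = (1.0, 1.0)      # initial point, f(x0) = -16
a  = 2.0             # acceleration factor
P  = (0.5, 0.5)      # perturbation vector P0
T  = (0.1, 0.1)      # perturbation tolerance vector
print(f(x0))         # -16.0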
• Start/Restart: fbest = f(x(0)) = -16.
• Try x(1) = (1.5, 1), f(x(1)) = -18.25, keep
perturbation, update fbest = -18.25.
• Try x(1) = (1.5, 1.5), f(x(1)) = -21, keep perturbation
and update fbest = -21.
• The steps in the exploratory search are shown in
this first Start/Restart, but are omitted from here
forward.
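Since the exploratory steps are omitted hereafter, here is a minimal sketch of one exploratory search, assuming the common variant that tries +P[i] first, then −P[i], in each coordinate and keeps any improving perturbation:

def f(x):             # same objective as in the sketch above
    return 3*x[0]**2 + x[1]**2 - 12*x[0] - 8*x[1]

def exploratory(x, f, P):
    # Perturb each coordinate in turn; keep a step if it improves f
    x = list(x)
    fbest = f(x)
    for i in range(len(x)):
        for step in (+P[i], -P[i]):
            x[i] += step
            if f(x) < fbest:
                fbest = f(x)       # improvement: keep the perturbation
                break
            x[i] -= step           # no improvement: undo, try the other sign
    return tuple(x), fbest

print(exploratory((1.0, 1.0), f, (0.5, 0.5)))   # ((1.5, 1.5), -21.0)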
• Pattern Move from x(0) = (1, 1) to x(1) = (1.5, 1.5):
• Tentative x(2) = 2x(1) − x(0) = 2(1.5, 1.5) − (1, 1) = (2, 2), f(2, 2) = −24.
• Final x(2) after exploratory search around tentative
x(2) is (2.0, 2.5), f(x(2)) = -25.75 is better than
f(x(1)) = -21 so the move is accepted.
• Update points: x(0) ← x(1) = (1.5, 1.5) and x(1) ← x(2) = (2.0, 2.5).
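The pattern move itself is just an extrapolation through the two most recent base points; a one-line check of the tentative step above:

# Tentative pattern point: x0 + a*(x1 - x0) with a = 2, i.e. 2*x1 - x0
x0, x1 = (1.0, 1.0), (1.5, 1.5)
a = 2.0
tentative = tuple(o + a*(b - o) for o, b in zip(x0, x1))
print(tentative)     # (2.0, 2.0)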
• Pattern Move from x(0) = (1.5, 1.5) through x(1) = (2.0, 2.5):
• Tentative x(2) = 2(2.0, 2.5) − (1.5, 1.5) = (2.5, 3.5), f(2.5, 3.5) = −27.
• Final x(2) after exploratory search around tentative x(2) is (2.0, 4.0).
• f(x(2)) = −28 is better than f(x(1)) = −25.75, so the move is accepted.
• Update points: x(0) ← x(1) = (2.0, 2.5) and x(1) ← x(2) = (2.0, 4.0).
• Pattern Move from x(0) = (2.0, 2.5) through x(1) = (2.0, 4.0):
• Tentative x(2) = 2(2.0, 4.0) − (2.0, 2.5) = (2.0, 5.5), f(2.0, 5.5) = −25.75.
• Final x(2) after exploratory search around tentative x(2) is (2.0, 5.0).
• f(x(2)) = −27 is worse than f(x(1)) = −28, so the move is rejected.
• Update points: x(0) ← x(1) = (2.0, 4.0).
• Start/Restart:
• Exploratory search around x(0) = (2.0, 4.0) fails at all levels of
perturbation size.
• Exit with solution x(0) = (2.0, 4.0) and f(x(0)) = -28.
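The entire run can be reproduced with a compact driver. This is a sketch under stated assumptions: the recovered objective, the exploratory routine shown earlier, and a restart schedule that halves the perturbations until they fall below the tolerances (the slide does not spell out the shrink rule).

def f(x):
    return 3*x[0]**2 + x[1]**2 - 12*x[0] - 8*x[1]

def exploratory(x, P):
    x, fbest = list(x), f(x)
    for i in range(len(x)):
        for step in (+P[i], -P[i]):
            x[i] += step
            if f(x) < fbest:
                fbest = f(x)
                break
            x[i] -= step
    return tuple(x), fbest

def hooke_jeeves(x0, P, T):
    base, fbase = exploratory(x0, P)                 # Start/Restart
    while True:
        if fbase < f(x0):                            # exploration improved on x0
            # pattern move: x0 + a*(base - x0) with a = 2, i.e. 2*base - x0
            tentative = tuple(2*b - o for b, o in zip(base, x0))
            trial, ftrial = exploratory(tentative, P)
            if ftrial < fbase:                       # accept the move
                x0, base, fbase = base, trial, ftrial
                continue
            x0 = base                                # reject: keep the best point
        if all(p <= t for p, t in zip(P, T)):        # perturbations exhausted
            return x0, f(x0)
        P = tuple(p / 2 for p in P)                  # shrink and restart
        base, fbase = exploratory(x0, P)

print(hooke_jeeves((1.0, 1.0), (0.5, 0.5), (0.1, 0.1)))   # ((2.0, 4.0), -28.0)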
• The data points are as follows:
• a: f(1, 1) = −16
• b: f(1.5, 1.5) = −21
• c: f(2, 2) = −24
• d: f(2, 2.5) = −25.75
• e: f(2.5, 3.5) = −27
• f: f(2, 4) = −28 [eventual solution]
• g: f(2, 5.5) = −25.75
• h: f(2, 5) = −27
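These values are easy to confirm against the recovered objective (the assumption flagged earlier):

pts = {'a': (1.0, 1.0), 'b': (1.5, 1.5), 'c': (2.0, 2.0), 'd': (2.0, 2.5),
       'e': (2.5, 3.5), 'f': (2.0, 4.0), 'g': (2.0, 5.5), 'h': (2.0, 5.0)}
for label, (x1, x2) in pts.items():
    # f(x1, x2) = 3*x1**2 + x2**2 - 12*x1 - 8*x2 (recovered objective)
    print(label, 3*x1**2 + x2**2 - 12*x1 - 8*x2)
# a -16.0, b -21.0, c -24.0, d -25.75, e -27.0, f -28.0, g -25.75, h -27.0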
Indirect Search (Descent) Method :
Gradient of a Function
• The gradient of a function f(x1, x2, …, xn) is the n-component vector ∇f = (∂f/∂x1, ∂f/∂x2, …, ∂f/∂xn)^T.
• Since −∇f is the direction of steepest descent, descent methods search along the negative gradient.
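As a quick illustration (an assumption: the slides do not restate the function here, so this reuses the quadratic from the first worked example):

# f(x1, x2) = x1 - x2 + 2*x1**2 + 2*x1*x2 + x2**2 (quadratic from the first example)
def grad_f(x1, x2):
    # grad f = (df/dx1, df/dx2)
    return (1 + 4*x1 + 2*x2, -1 + 2*x1 + 2*x2)

print(grad_f(0.0, 0.0))    # (1.0, -1.0): the steepest descent direction at the origin is (-1, 1)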
• Thus the optimum point is reached in 2 iterations.
• Even if we did not know this point to be the optimum, we would not be able to move from it in the next iteration.
• This can be verified as follows:
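A verification sketch, under the assumption that the example is the same quadratic as above, whose minimum is X* = (−1, 1.5) with f(X*) = −1.25: the gradient vanishes there, so any gradient-based search direction S = −∇f is zero and no further move is possible.

def grad_f(x1, x2):
    # gradient of f(x1, x2) = x1 - x2 + 2*x1**2 + 2*x1*x2 + x2**2
    return (1 + 4*x1 + 2*x2, -1 + 2*x1 + 2*x2)

x_star = (-1.0, 1.5)
print(grad_f(*x_star))     # (0.0, 0.0) -> search direction -grad f = 0; the method cannot move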