Smooth, Unconstrained Nonlinear Optimization Without Gradients

The Hooke Jeeves algorithm is an unconstrained nonlinear optimization method that does not require gradients. It uses an exploratory move followed by a pattern move to search for the minimum of a function. The algorithm proceeds in iterations, using exploratory moves to evaluate points along each coordinate direction and pattern moves to extrapolate to new points, reducing the step sizes over the iterations until the termination criterion is met. The method is useful when gradient-based optimization techniques fail or are not applicable because the function is not differentiable.


Smooth, Unconstrained Nonlinear Optimization Without Gradients

Hooke Jeeves
6/16/05

Hooke Jeeves or Pattern Search
Characteristics
• Zero order
• No derivatives
• No line searches
• Works on discontinuous domains
• No proof of convergence
• A tool for when other tools fail
References:
• Evolution and Optimum Seeking by Hans-Paul Schwefel
• Mark Johnson code handout

Hooke Jeeves

Along with downhill simplex, this is the simplest algorithm in iSIGHT.

It has essentially no formal diagnostics for outputs.

The algorithm is an unconstrained optimization algorithm, although it can
also be used in constrained situations.

Expected number of step-size reductions n before the search terminates:

    InitialStepSize × StepSizeReductionFactor^n < TerminationStepSize

StepSizeReductionFactor lies between 0 and 1; the default is 0.5.
The larger the value, the slower the convergence.
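To see what this implies for run length, solve that inequality for n. A minimal sketch in Python (the names initial_step, rho, and eps are illustrative, not iSIGHT parameter names):

    import math

    def expected_reductions(initial_step: float, rho: float, eps: float) -> int:
        """Smallest n with initial_step * rho**n < eps, for rho in (0, 1)."""
        return math.floor(math.log(eps / initial_step, rho)) + 1

    # With the default factor of 0.5, a unit step drops below a
    # termination step size of 0.001 after 10 reductions.
    print(expected_reductions(1.0, 0.5, 0.001))  # -> 10

Raising rho from 0.5 to about 0.7 roughly doubles the number of reductions, which is the slower convergence noted above.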

[Two figure slides: no text content recovered]
Hooke Jeeves Algorithm

Termination step size = ε
Step-size reduction factor = rho

Step 0: Initialization
Choose a starting point x^(0,0), an accuracy bound ε > 0, and initial step
lengths s_i^(0) (the current value of each variable × rho; if the current
value is 0.0, use rho as the step length). Set k = 0 and i = 1; n is the
number of variables and e_i is the i-th unit vector.

Step 1: Exploratory move
Construct x′ = x^(k,i−1) + s_i^(k)·e_i (discrete step in the positive direction).
If F(x′) ≤ F(x^(k,i−1)), go to Step 2 (successful first trial);
otherwise replace x′ ← x′ − 2·s_i^(k)·e_i (discrete step in the negative direction).
If F(x′) ≤ F(x^(k,i−1)), go to Step 2 (success);
otherwise replace x′ ← x′ + s_i^(k)·e_i (back to the original situation).
Step 2: Retention and switch to the next coordinate
Set x^(k,i) = x′.
If i < n, increase i ← i + 1 and go to Step 1.

Step 3: Test for total failure in all directions
If F(x^(k,n)) ≥ F(x^(k,0)), set x^(k+1,0) = x^(k,0) and go to Step 9.

Step 4: Pattern move
Set x^(k+1,0) = 2·x^(k,n) − x^(k−1,n) (extrapolation)
and s_i^(k+1) = s_i^(k)·sign(x_i^(k,n) − x_i^(k−1,n)) for all i = 1, …, n;
increase k ← k + 1 and set i = 1.
(Observe: there is no success control of the pattern move so far.)

Step 5: Exploration after extrapolation
Construct x′ = x^(k,i−1) + s_i^(k)·e_i.
If F(x′) ≤ F(x^(k,i−1)), go to Step 6;
otherwise replace x′ ← x′ − 2·s_i^(k)·e_i.
If F(x′) ≤ F(x^(k,i−1)), go to Step 6;
otherwise replace x′ ← x′ + s_i^(k)·e_i.
Step 6: Inner loop over coordinates
Set x^(k,i) = x′.
If i < n, increase i ← i + 1 and go to Step 5.

Step 7: Test for failure of the pattern move
If F(x^(k,n)) ≥ F(x^(k−1,n)), go back to the position before the pattern move:
set x^(k+1,0) = x^(k−1,n) and s_i^(k+1) = s_i^(k) for all i = 1, …, n,
then go to Step 10.

Step 8: After a successful pattern move, retention and first termination test
If |x_i^(k,n) − x_i^(k−1,n)| < ½·|s_i^(k)| for all i = 1, …, n,
set x^(k+1,0) = x^(k,n) and go to Step 9;
otherwise go to Step 4 for another pattern move.

Step 9: Step size reduction and termination test
If |s_i^(k)| ≤ ε for all i = 1, …, n, end the search with x^(k+1,0);
otherwise set s_i^(k+1) = rho·s_i^(k) for all i = 1, …, n
(note the change: rho replaces the fixed halving of the original method)
and continue with Step 10.

Step 10: Iteration loop
Increase k ← k + 1, set i = 1, and go to Step 1.
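Taken together, Steps 0–10 form two nested loops: an exploratory sweep along the coordinate axes (Steps 1–2 and 5–6) and an outer loop that keeps extrapolating while pattern moves succeed, shrinking the steps when a full sweep fails. The sketch below, assuming NumPy, shows that structure in Python; it simplifies Steps 4 and 8 (no per-coordinate sign bookkeeping and no half-step test), so it is an illustration of the method, not the iSIGHT implementation.

    import numpy as np

    def explore(f, x, fx, s):
        """Exploratory sweep (Steps 1-2): for each coordinate, try a step in
        the positive direction, then the negative; keep whichever succeeds."""
        x = x.copy()
        for i in range(len(x)):
            x[i] += s[i]                      # positive trial step
            f_trial = f(x)
            if f_trial <= fx:
                fx = f_trial                  # success: keep the move
                continue
            x[i] -= 2.0 * s[i]                # negative trial step
            f_trial = f(x)
            if f_trial <= fx:
                fx = f_trial                  # success: keep the move
                continue
            x[i] += s[i]                      # both failed: restore coordinate
        return x, fx

    def hooke_jeeves(f, x0, step, rho=0.5, eps=1e-6):
        x_best = np.asarray(x0, dtype=float)
        f_best = f(x_best)
        s = np.asarray(step, dtype=float).copy()
        while np.max(np.abs(s)) > eps:                    # Step 9: termination test
            x_new, f_new = explore(f, x_best, f_best, s)  # Steps 1-2
            if f_new >= f_best:                           # Step 3: total failure
                s *= rho                                  # Step 9: reduce steps
                continue
            while f_new < f_best:                         # pattern moves succeed
                x_pattern = 2.0 * x_new - x_best          # Step 4: extrapolation
                x_best, f_best = x_new, f_new             # retain the improvement
                x_new, f_new = explore(f, x_pattern, f(x_pattern), s)  # Steps 5-6
            # Step 7: pattern move failed; resume exploring from the best point
        return x_best, f_best

    # Usage: minimize the Rosenbrock function from (-1.2, 1).
    rosen = lambda x: (1.0 - x[0])**2 + 100.0 * (x[1] - x[0]**2)**2
    x_opt, f_opt = hooke_jeeves(rosen, [-1.2, 1.0], step=[0.5, 0.5], eps=1e-8)
    print(x_opt, f_opt)                       # close to [1, 1], 0

The explore helper mirrors the ± trial logic of Steps 1 and 5 directly; the driver collapses the k and i bookkeeping into the two loops.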
[Two figure slides: no text content recovered]
Lab
• Rerun the Spring_Start.desc file using Hooke_Jeeves with the default step size.
• Does it reach the same optimum?
• How many function calls did it take? (A counting wrapper such as the sketch
  below this list can track this for the Python version.)
• Is this more or less efficient than Steepest Descent?
• On the next slide, label the X1 Step Size, X2 Step Size, and algorithm step
  number next to each row for the first 7 run counters.
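The function-call count is easy to instrument outside iSIGHT. A small counting wrapper for the hooke_jeeves sketch above (the calls attribute name is arbitrary):

    def counted(f):
        # Wrap f so every evaluation increments wrapper.calls.
        def wrapper(x):
            wrapper.calls += 1
            return f(x)
        wrapper.calls = 0
        return wrapper

    # Example with the earlier sketch:
    # g = counted(rosen)
    # hooke_jeeves(g, [-1.2, 1.0], step=[0.5, 0.5])
    # print(g.calls)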

Spring – Hooke Initial Steps

[Figure: run counters for the first Hooke-Jeeves steps on the spring problem (not recovered)]