
MULTI VARIABLE OPTIMIZATION

Min f(x1, x2, x3, ..., xn)

UNIDIRECTIONAL SEARCH

[Figure: search from a point x along the direction s towards the minimum point]

- CONSIDER A DIRECTION s AND A CURRENT POINT x

  x(α) = x + α·s

- REDUCE TO
  Min f(α)
- SOLVE AS A SINGLE VARIABLE PROBLEM
Unidirectional search (example)

Min f(x1, x2) = (x1 - 10)^2 + (x2 - 10)^2

s = (2, 5) (search direction)
x = (2, 1) (initial guess)
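The reduction can be sketched in a few lines of Python (the code and the SciPy line search are illustrations added to these notes, not part of the slides; any single-variable method, e.g. golden section search, could replace minimize_scalar):

import numpy as np
from scipy.optimize import minimize_scalar

def f(x):
    # f(x1, x2) = (x1 - 10)^2 + (x2 - 10)^2
    return (x[0] - 10) ** 2 + (x[1] - 10) ** 2

x0 = np.array([2.0, 1.0])  # initial guess
s = np.array([2.0, 5.0])   # search direction

# Reduce to a single-variable problem in alpha: g(alpha) = f(x0 + alpha*s)
g = lambda alpha: f(x0 + alpha * s)
alpha_star = minimize_scalar(g).x
x_star = x0 + alpha_star * s   # best point along the direction s

Along s the minimum is at alpha ≈ 2.10, i.e. x ≈ (6.21, 11.52); a full method would now pick a new direction from this point.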
DIRECT SEARCH METHODS

- SEARCH THROUGH MANY DIRECTIONS

- FOR N VARIABLES: 2^N DIRECTIONS
- Obtained by altering each of the N variable values (+ or -) and taking all
  combinations
EVOLUTIONARY OPTIMIZATION METHOD

- COMPARE ALL 2^N + 1 POINTS & CHOOSE THE BEST
- CONTINUE AS LONG AS THERE IS AN IMPROVEMENT
- ELSE DECREASE THE INCREMENT

STEP 1: x0 = INITIAL POINT
        ∆i = STEP REDUCTION PARAMETER FOR EACH VARIABLE
        ε = TERMINATION PARAMETER
STEP 2: IF ||∆|| < ε, STOP
        ELSE CREATE 2^N POINTS xi = xi ± ∆i/2
STEP 3: x = BEST (MINIMUM) OF THE 2^N + 1 POINTS
STEP 4: IF x = x0, SET ∆i = ∆i/2 AND GOTO 2
        ELSE SET x0 = x AND GOTO 2
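A compact sketch of these steps (an illustration added to these notes, assuming the 2^N points are the corners x_i ± ∆_i/2 of a hypercube around the current point):

import itertools
import numpy as np

def evolutionary_opt(f, x0, delta, eps=1e-3):
    x0 = np.asarray(x0, dtype=float)
    delta = np.asarray(delta, dtype=float)
    while np.linalg.norm(delta) >= eps:      # Step 2: if ||delta|| < eps, stop
        # Create the 2^N points x_i +/- delta_i/2 around the current point
        corners = [x0 + np.array(signs) * delta / 2
                   for signs in itertools.product((-1.0, 1.0), repeat=len(x0))]
        best = min(corners + [x0], key=f)    # Step 3: best of the 2^N + 1 points
        if np.allclose(best, x0):            # Step 4: no improvement ->
            delta = delta / 2                # halve the increments
        else:
            x0 = best                        # else recentre on the best point
    return x0

# Same quadratic as in the earlier example; true minimum at (10, 10)
f = lambda x: (x[0] - 10) ** 2 + (x[1] - 10) ** 2
print(evolutionary_opt(f, [2.0, 1.0], [0.5, 0.5]))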
Hooke-Jeeves pattern search
• Pattern search ---
  – Create a set of search directions iteratively
  – Directions should be linearly independent
• A combination of exploratory and pattern moves
  – Exploratory – find the best point in the vicinity of the
    current point
  – Pattern – jump in the direction of change; if the new point is
    better, continue, else reduce the size of the exploratory move
    and continue
Exploratory move
• Current solution is xc; set i = 1 and x = xc
• S1: f = f(x), f+ = f(x with xi + ∆i), f- = f(x with xi - ∆i)
• S2: fmin = min(f, f+, f-); set x to the point corresponding to fmin
• S3: If i = N, go to S4; else i = i + 1, go to S1
• S4: If x ≠ xc, success; else failure
Pattern Move
• S1: Choose x(0), ∆i for i = 1, 2, …, N, and ε; set k = 0
• S2: Perform an exploratory move with x(k) as the base point
  – If success, x(k+1) = x, go to S4; else go to S3
• S3: If ||∆|| < ε, terminate
  – Else set ∆i = ∆i/α for all i; go to S2
Pattern Move (contd)
• S4: k = k + 1; xp(k+1) = x(k) + (x(k) - x(k-1))
• S5: Perform another exploratory move with xp(k+1) as the base
  point; result = x(k+1)
• S6: If f(x(k+1)) < f(x(k)), go to S4
  – Else go to S3
Example:
• Consider the Himmelblau function:

  f(x1, x2) = (x1^2 + x2 - 11)^2 + (x1 + x2^2 - 7)^2


• Solution
Step 1: Selection of initial conditions
1. Initial point: x(0) = (0, 0)^T
2. Increment vector: ∆ = (0.5, 0.5)^T
3. Reduction factor: α = 2
4. Termination parameter: ε = 10^-3
5. Iteration counter: k = 0

• Step 2
Perform an iteration of the exploratory move with x = x(0) as the base point

Thus we set x = x(0) = (0, 0)^T and i = 1

The exploratory move will be performed with the following steps
Steps for the exploratory move
Step 1: Explore the vicinity of the variable x1
Calculate the function values at three points:

(x(0) + ∆1·e1)^T = (0.5, 0)^T:  f+ = f((0.5, 0)^T) = 157.81
x(0) = (0, 0)^T:                f  = f((0, 0)^T) = 170
(x(0) - ∆1·e1)^T = (-0.5, 0)^T: f- = f((-0.5, 0)^T) = 171.81

Step 2: Take the minimum of the above function values and the
corresponding point: fmin = 157.81 at x = (0.5, 0)^T
Step 3: As i ≠ N, not all variables have been explored;
increment the counter to i = 2 and explore the second variable
First iteration completed
Step 1: The base point is now x = (0.5, 0)^T; explore
the variable x2 and calculate the function values:

f+ = f((0.5, 0.5)^T) = 144.12
f  = f((0.5, 0)^T) = 157.81
f- = f((0.5, -0.5)^T) = 165.62

Step 2: fmin = 144.12 at the point x = (0.5, 0.5)^T

Step 3: As i = N = 2, move to step 4 of the exploratory move

Step 4 (of the exploratory move):
Since x ≠ xc the move is a success and we set
x = (0.5, 0.5)^T
• As the move is a success, set x(1) = x = (0.5, 0.5)^T and move
to step 4 of the pattern move

STEP 4: We set k = 1 and perform the pattern move

xp(2) = x(1) + (x(1) - x(0)) = 2(0.5, 0.5)^T - (0, 0)^T = (1, 1)^T

Step 5: Perform another exploratory move as before,
with xp(2) as the base point.
The new point is x = (1.5, 1.5)^T
Set the new point x(2) = x = (1.5, 1.5)^T
Step 6: f(x(2)) = 63.12 is smaller than f(x(1)) = 144.12
Proceed to the next step to perform another pattern move
STEP 4: Set k = 2 and create a new point
xp(3) = 2x(2) - x(1) = (2.5, 2.5)^T
Note: as x(2) is better than x(1), a jump along the direction
(x(2) - x(1)) is made; this takes the search closer to the
true minimum

STEP 5: Perform another exploratory move to find any
better point around the new point.
Performing the move on both variables gives the new point
x(3) = (3.0, 2.0)^T
This point is the true minimum point

In this example the Hooke-Jeeves algorithm reaches the minimum
in two iterations; this may not always be the case

Even though the minimum point has been reached, the algorithm has
no way of knowing that the optimum has been found;
it simply proceeds until the norm of the
increment vector becomes small
STEP 6: The function value at the new point is
f(x(3)) = 0 < f(x(2)) = 63.12
Thus move on to step 4
STEP 4: The iteration counter becomes k = 3 and the new point is
xp(4) = 2x(3) - x(2) = (4.5, 2.5)^T
STEP 5: With the new point as base, the exploratory move is a
success with x = (4.0, 2.0)^T, and thus we set
x(4) = (4.0, 2.0)^T
STEP 6: The function value is 50, which is larger than
the earlier value of 0. Thus we move to step 3
Step 3: Since ∆ is not smaller than ε, we reduce the increment vector:
∆ = (0.5, 0.5)^T / 2 = (0.25, 0.25)^T
and proceed to Step 2 to perform the iterations
Step 2: Perform an exploratory move with
x(3) = (3.0, 2.0)^T as the current point
The exploratory move on both variables is a failure,
and we obtain x(3) = (3.0, 2.0)^T again;
thus we proceed to Step 3
Step 3: Since ∆ is still not small enough, reduce the increment
vector and move to Step 2.
The new increment vector is ∆ = (0.125, 0.125)^T
The algorithm now alternates between step 2 and step 3
until ||∆|| is smaller than the termination parameter.
The final solution is x* = (3.0, 2.0)^T with function
value 0
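The whole procedure can be sketched in Python (an illustration added to these notes; the control flow is restructured slightly, with the pattern moves in an inner loop, but the settings match the worked example above):

import numpy as np

def himmelblau(x):
    return (x[0] ** 2 + x[1] - 11) ** 2 + (x[0] + x[1] ** 2 - 7) ** 2

def exploratory(f, x, delta):
    # Perturb each variable in turn, keeping the best of f, f+, f-
    x = x.copy()
    for i in range(len(x)):
        trials = [x.copy(), x.copy(), x.copy()]
        trials[1][i] += delta[i]
        trials[2][i] -= delta[i]
        x = min(trials, key=f)
    return x

def hooke_jeeves(f, x0, delta, alpha=2.0, eps=1e-3):
    x = np.asarray(x0, dtype=float)
    delta = np.asarray(delta, dtype=float)
    while np.linalg.norm(delta) >= eps:      # S3: terminate when ||delta|| < eps
        x_new = exploratory(f, x, delta)     # S2: exploratory move
        if np.allclose(x_new, x):
            delta = delta / alpha            # failure: shrink the increments
            continue
        while True:                          # S4-S6: repeated pattern moves
            x_prev, x = x, x_new
            xp = x + (x - x_prev)            # jump along the improving direction
            x_new = exploratory(f, xp, delta)
            if f(x_new) >= f(x):             # pattern move failed:
                break                        # fall back to exploratory moves
    return x

# Settings from the worked example
print(hooke_jeeves(himmelblau, [0.0, 0.0], [0.5, 0.5]))  # -> [3. 2.]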
POWELL’S CONJUGATE DIRECTION METHOD

For a quadratic function IN 2 VARIABLES:

- TAKE 2 POINTS x1 & x2 AND
- A DIRECTION ‘d’

IF y1 IS A SOLUTION OF MIN f(x1 + λd) &
y2 IS A SOLUTION OF MIN f(x2 + λd)

THEN (y2 - y1) IS CONJUGATE TO d

OPTIMUM LIES ALONG (y2 - y1)

[Figure: contour plot showing x1 and x2 minimized along d to reach
y1 and y2; the optimum lies on the line through y1 and y2]
OR

FROM
x1 GET y1 ALONG (1,0)
y1 GET x2 ALONG (0,1)
x2 GET y2 ALONG (1,0)

TAKE (y2 - y1 )
FOR N VARIABLES

STEP 1: TAKE x0 & N LINEARLY INDEPENDENT DIRECTIONS
        s1, s2, s3, ..., sN (INITIALLY si = ei)

STEP 2: - MINIMIZE ALONG THE N UNI-DIRECTIONAL SEARCH
          DIRECTIONS, USING THE PREVIOUS BEST POINT EVERY TIME
        - PERFORM ANOTHER SEARCH ALONG s1

STEP 3: FORM THE CONJUGATE DIRECTION d

STEP 4: IF ||d|| IS SMALL, TERMINATE
        ELSE SET sj = s(j-1) FOR ALL j, s1 = d/||d||
        GOTO 2
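A sketch of these steps in Python (added for illustration; the SciPy line search is an assumption, and d is formed from the two points minimized along s1, per the two-variable description above):

import numpy as np
from scipy.optimize import minimize_scalar

def line_min(f, x, s):
    # Unidirectional search: minimize f(x + alpha*s) over alpha
    alpha = minimize_scalar(lambda a: f(x + a * s)).x
    return x + alpha * s

def powell(f, x0, eps=1e-6, max_iter=100):
    x = np.asarray(x0, dtype=float)
    S = list(np.eye(len(x)))                 # STEP 1: s_i = e_i
    for _ in range(max_iter):
        x = line_min(f, x, S[0])             # minimize along s1 ...
        y1 = x.copy()
        for s in S[1:]:                      # ... then s2 .. sN, from the
            x = line_min(f, x, s)            # previous best every time
        x = line_min(f, x, S[0])             # STEP 2: one more search along s1
        d = x - y1                           # STEP 3: (y2 - y1), conjugate to s1
        if np.linalg.norm(d) < eps:          # STEP 4: terminate if d is small
            break
        S = [d / np.linalg.norm(d)] + S[:-1] # s1 = d/||d||, shift the others
    return x

# Same quadratic as in the unidirectional-search example
print(powell(lambda x: (x[0] - 10) ** 2 + (x[1] - 10) ** 2, [2.0, 1.0]))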
GRADIENT BASED METHODS
DESCENT DIRECTION

d IS A DESCENT DIRECTION IF ∇f(x)·d ≤ 0

IF d = -∇f(x), THEN ∇f(x)·d IS MAXIMALLY NEGATIVE

-∇f(x) IS THE STEEPEST DESCENT DIRECTION

WHERE ∇ = [∂/∂x1, ∂/∂x2, ∂/∂x3, ..., ∂/∂xn]
CAUCHY’S METHOD (STEEPEST DESCENT)

STEP 1: CHOOSE M (MAX. NO. OF ITERATIONS),
        ε1, ε2, x0; SET k = 0

STEP 2: CALCULATE ∇f(x^k)

STEP 3: IF ||∇f(x^k)|| ≤ ε1, TERMINATE
        IF k ≥ M, TERMINATE

STEP 4: UNIDIRECTIONAL SEARCH USING ε2:
        MIN. f(x^(k+1)) = f(x^k - α·∇f(x^k))

STEP 5: IF ||x^(k+1) - x^k|| / ||x^k|| ≤ ε1, TERMINATE
        ELSE k = k + 1, GOTO STEP 2
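A sketch of Cauchy's method (added for illustration; the central-difference gradient and the SciPy line search are stand-ins for whatever routines are used in class):

import numpy as np
from scipy.optimize import minimize_scalar

def grad(f, x, h=1e-6):
    # Central-difference approximation of the gradient vector
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def cauchy(f, x0, eps1=1e-6, eps2=1e-8, M=1000):   # STEP 1
    x = np.asarray(x0, dtype=float)
    for k in range(M):                             # STEP 3: stop if k >= M
        g = grad(f, x)                             # STEP 2
        if np.linalg.norm(g) <= eps1:              # STEP 3: small gradient
            break
        # STEP 4: unidirectional search along -grad f, using eps2
        alpha = minimize_scalar(lambda a: f(x - a * g), tol=eps2).x
        x_new = x - alpha * g
        # STEP 5: stop on a small relative change (guard against ||x|| = 0)
        if np.linalg.norm(x_new - x) / (np.linalg.norm(x) + 1e-12) <= eps1:
            return x_new
        x = x_new                                  # else k = k + 1, repeat
    return x

print(cauchy(lambda x: (x[0] - 10) ** 2 + (x[1] - 10) ** 2, [2.0, 1.0]))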

- THE METHOD WORKS WELL WHEN x^k IS FAR FROM x* (THE OPTIMUM)

- IF THE POINT IS CLOSE TO x*, THE CHANGE IN THE GRADIENT VECTOR IS
  VERY SMALL, SO PROGRESS SLOWS
- OTHER METHODS USE VARIATIONS:
  - SECOND DERIVATIVES (NEWTON’S METHOD)
  - A COMBINATION OF BOTH (MARQUARDT’S METHOD)
  - CONJUGATE GRADIENT METHOD
SOLVING SIMULTANEOUS EQUATIONS

X + Y = 5
X - Y = 2

MIN. [(X + Y - 5)^2 + (X - Y - 2)^2]
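For example (illustration added here), handing this least-squares objective to any unconstrained minimizer recovers the solution X = 3.5, Y = 1.5:

import numpy as np
from scipy.optimize import minimize

# Sum of squared residuals of X + Y = 5 and X - Y = 2
f = lambda x: (x[0] + x[1] - 5) ** 2 + (x[0] - x[1] - 2) ** 2
print(minimize(f, x0=np.zeros(2)).x)   # -> approximately [3.5, 1.5]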

PRACTICE
SOLVE PROBLEMS 3-1 TO 3-13 USING TECHNIQUES DONE
IN CLASS.
Penalty function approach
• Transformation method
  - convert to a sequence of unconstrained problems
• Give a penalty to (violated) constraints
• Add it to the objective function
• Solve
• Use the result as the starting point for the next iteration
• Alter the penalties and repeat
• Minimise (x1^2 + x2 - 11)^2 + (x1 + x2^2 - 7)^2
  Subject to (x1 - 5)^2 + x2^2 - 26 ≥ 0

  Penalty = 0.1·<(x1 - 5)^2 + x2^2 - 26>^2

[Figure: contours in the (x1, x2) plane showing the feasible region,
the infeasible region and the minimum point]
Process
1. Choose ε1, ε2, R, Ω
2. Form the modified objective function
   P(x^k, R^k) = f(x^k) + Ω(R^k, g(x^k), h(x^k))
3. Start with x^k; find x^(k+1) so as to minimize P
   (use ε1 as the convergence tolerance)
4. If |P(x^(k+1), R^k) - P(x^k, R^(k-1))| < ε2, terminate
5. Else R^(k+1) = cR^k, k = k + 1; go to step 2

• At any stage minimize
  P(x, R) = f(x) + Ω(R, g(x), h(x))
  R = set of penalty parameters
  Ω = penalty function
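A sketch of this sequential process (an illustration added to these notes, assuming the bracket operator penalty from the example above, an update factor c = 10, and SciPy's minimize as the unconstrained solver):

import numpy as np
from scipy.optimize import minimize

def bracket(a):
    # <A> = A if A < 0, else 0: only violated constraints are penalized
    return np.minimum(a, 0.0)

def penalty_method(f, gs, x0, R=0.1, c=10.0, eps2=1e-6, max_outer=20):
    x = np.asarray(x0, dtype=float)
    P_prev = None
    for _ in range(max_outer):
        # Step 2: P(x, R) = f(x) + Omega(R, g(x))
        P = lambda z: f(z) + R * sum(bracket(g(z)) ** 2 for g in gs)
        x = minimize(P, x).x                 # Step 3: minimize P from x
        if P_prev is not None and abs(P(x) - P_prev) < eps2:
            break                            # Step 4: small change in P
        P_prev = P(x)
        R = c * R                            # Step 5: increase the penalty
    return x

# Constrained Himmelblau example from the slides
f = lambda x: (x[0] ** 2 + x[1] - 11) ** 2 + (x[0] + x[1] ** 2 - 7) ** 2
g = lambda x: (x[0] - 5) ** 2 + x[1] ** 2 - 26
print(penalty_method(f, [g], [0.0, 0.0]))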
Types of penalty function
• Parabolic penalty
  Ω = R·{h(x)}^2
  - for equality constraints
  - penalizes only infeasible points
• Interior penalty functions
- penalize feasible points
• Exterior penalty functions
- penalize infeasible points
• Mixed penalty functions
- combination of both
• Infinite barrier penalty
  Ω = R·Σj |gj(x)| (summed over the violated constraints)
  - for inequality constraints
  - R is very large
  - exterior
• Log penalty
  Ω = -R·ln[g(x)]
  - for inequality constraints
  - for feasible points
  - interior
  - initially large R
  - larger penalty close to the border
• Inverse penalty
  Ω = R·[1/g(x)]
  - interior
  - larger penalty close to the border
  - initially large R
• Bracket operator penalty
  Ω = R·<g(x)>^2
  - <A> = A if A < 0, else <A> = 0
  - exterior
  - initially small R
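The penalty terms above can be written directly as functions (illustration added here; h is an equality constraint value, g an inequality constraint value with g >= 0 feasible):

import numpy as np

def parabolic(R, h):        # equality constraints; exterior (h != 0 penalized)
    return R * h ** 2

def log_penalty(R, g):      # interior; grows as g -> 0+ (near the border)
    return -R * np.log(g)

def inverse_penalty(R, g):  # interior; grows as g -> 0+ (near the border)
    return R / g

def bracket_penalty(R, g):  # exterior; <A> = A if A < 0, else 0
    return R * np.minimum(g, 0.0) ** 2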
Direct search
• Variable elimination
  - for equality constraints
  - express one variable as a function of the others, and
  - eliminate that variable from the problem
  - repeat to remove all equality constraints
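For instance (an illustrative case, not from the slides): with the equality constraint x1 + x2 - 5 = 0, express x2 = 5 - x1 and minimize f(x1, 5 - x1) over x1 alone, with the constraint eliminated.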
Complex search
- generate a set of points at random
- if a point is infeasible, reflect it beyond the
  centroid of the remaining points
- take the worst point and push it towards the centroid
  of the feasible points
