Algorithms for Unconstrained Optimization
Muhammad Shafiq
Department of Industrial Engineering
University of Engineering and Technology
The idea of direct search methods is to locate the optimum by
iteratively narrowing a predefined interval of uncertainty to a desired
level of accuracy.
It should be noted that the optimum (if it exists) must be a solution of the equation
$$f'(x^*) = 0$$
This section focuses mainly on direct search methods applied to strictly unimodal single-variable functions.
For such a function,
$$x_2 > x_1 > x^* \text{ implies that } f(x_1) < f(x_2)$$
where $x^*$ is the minimum point.
Unrestricted Search with Fixed Step Size
1. Set $i = 1$, start with an initial guess point $x = x_1$ and a step size $\Delta$.
2. Determine $x_2 = x_1 + \Delta$.
   a. If $f_2 < f_1$: update $i \leftarrow i + 1$ and keep checking the condition $f_{i+1} < f_i$ (with $x_{i+1} = x_i + \Delta$) until it is violated. At that point, the search process is terminated and $x_i$ can be taken as the optimum point.
   b. If $f_2 > f_1$: reverse the search direction and determine $\bar{x}_2 = \bar{x}_1 - \Delta$, in which $\bar{x}_1 = x_1$.
      - If $\bar{f}_2 < \bar{f}_1$: update $i \leftarrow i + 1$ and keep checking the condition $\bar{f}_{i+1} < \bar{f}_i$ (with $\bar{x}_{i+1} = \bar{x}_i - \Delta$) until it is violated. At that point, the search process is terminated and $\bar{x}_i$ can be taken as the optimum point.
      - If $\bar{f}_2 = \bar{f}_1$: the minimum point lies in between $\bar{x}_1$ and $\bar{x}_2$, and either $\bar{x}_1$ or $\bar{x}_2$ can be taken as the optimum point.
      - If $\bar{f}_2 > \bar{f}_1$: the minimum point lies in between $\bar{x}_2$ and $x_2$, and either $\bar{x}_2$ or $x_2$ can be taken as the optimum point.
   c. If $f_2 = f_1$: the minimum point lies in between $x_1$ and $x_2$, and either $x_1$ or $x_2$ can be taken as the optimum point. A code sketch of the whole procedure follows this list.
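As a concrete illustration, here is a minimal Python sketch of this procedure for a minimization problem; the objective and the step size in the usage line are illustrative choices, not part of the notes.

```python
def fixed_step_search(f, x1, delta, max_iter=1000):
    """Unrestricted search with fixed step size (minimization)."""
    f1 = f(x1)
    f2 = f(x1 + delta)
    if f2 > f1:
        delta = -delta                 # case (b): reverse the direction
        f2 = f(x1 + delta)
        if f2 > f1:                    # f increases both ways, so the minimum
            return x1                  # is bracketed in (x1 - |d|, x1 + |d|)
    x, fx = x1 + delta, f2
    for _ in range(max_iter):
        x_next = x + delta
        f_next = f(x_next)
        if f_next >= fx:               # f_{i+1} < f_i is violated: terminate
            return x
        x, fx = x_next, f_next
    return x

# Illustrative usage: minimum of (x - 2)^2 from x = 0 with step 0.1
print(fixed_step_search(lambda x: (x - 2.0) ** 2, 0.0, 0.1))  # ~2.0
```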
Dichotomous & Golden Section Methods
In the dichotomous method, the two experiments are placed symmetrically about the midpoint of the current interval of uncertainty $(x_L, x_R)$:
$$x_1 = \frac{x_L + x_R - \delta}{2}, \qquad x_2 = \frac{x_L + x_R + \delta}{2}$$
In the golden section method, the two experiments are placed at
$$x_1 = x_R - \alpha (x_R - x_L) \quad \text{and} \quad x_2 = x_L + \alpha (x_R - x_L), \qquad (0 < \alpha < 1)$$
where $\alpha$ is chosen so that one experiment can be reused in the next iteration, which requires
$$\alpha^2 + \alpha - 1 = 0 \quad \Rightarrow \quad \alpha = \frac{\sqrt{5} - 1}{2} \approx 0.618$$
Consider the case $I_{i+1} = (x_1, x_R)$, i.e., $x_2$ is included in $I_{i+1}$. In iteration $i + 1$, if we want to reuse $x_2$ of iteration $i$ by setting
$$x_1(\text{iteration } i+1) = x_2(\text{iteration } i)$$
then
$$x_R - \alpha (x_R - x_1) = x_L + \alpha (x_R - x_L)$$
and substituting $x_1 = x_R - \alpha (x_R - x_L)$ gives
$$\alpha^2 + \alpha - 1 = 0 \quad \Rightarrow \quad \alpha = \frac{\sqrt{5} - 1}{2} \approx 0.618$$
The golden section method converges more rapidly than the dichotomous method because after the first iteration it requires only one new function evaluation per iteration (the other experiment is reused), while reducing the interval of uncertainty by the constant factor $\alpha \approx 0.618$; the dichotomous method needs two new evaluations per iteration to cut the interval roughly in half.
Example: Maximize
$$f(x) = \begin{cases} 3x, & 0 \le x \le 2 \\ \dfrac{1}{3}(-x + 20), & 2 \le x \le 3 \end{cases}$$
using $\delta = 0.1$ for the dichotomous method.
Dichotomous method
Iteration 1: $I_0 = (x_L, x_R) = (0, 3)$
$x_1 = 0.5(3 + 0 - 0.1) = 1.45$, $f(x_1) = 4.35$
$x_2 = 0.5(3 + 0 + 0.1) = 1.55$, $f(x_2) = 4.65$
$f(x_2) > f(x_1) \Rightarrow x_L = 1.45$, so $I_1 = (1.45, 3)$
Iteration 2: $I_1 = (x_L, x_R) = (1.45, 3)$
$x_1 = 0.5(3 + 1.45 - 0.1) = 2.175$, $f(x_1) = 5.942$
$x_2 = 0.5(3 + 1.45 + 0.1) = 2.275$, $f(x_2) = 5.908$
$f(x_1) > f(x_2) \Rightarrow x_R = 2.275$, so $I_2 = (1.45, 2.275)$
…
The iterations converge to the optimal solution $x^* = 2$.
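For concreteness, the iterations above can be reproduced with a short Python sketch; the fixed iteration count is an illustrative choice (the interval width approaches $\delta$ and cannot shrink below it).

```python
def f(x):
    # piecewise objective of the example (maximum at x = 2, f = 6)
    return 3 * x if x <= 2 else (20 - x) / 3

def dichotomous(f, xL, xR, delta=0.1, n_iter=10):
    # Each iteration roughly halves the interval (plus delta/2), so the
    # width tends to delta; stop after a fixed number of iterations.
    for i in range(1, n_iter + 1):
        x1 = 0.5 * (xL + xR - delta)
        x2 = 0.5 * (xL + xR + delta)
        if f(x2) > f(x1):
            xL = x1                      # maximum lies in (x1, xR)
        else:
            xR = x2                      # maximum lies in (xL, x2)
        print(f"Iteration {i}: I = ({xL:.4f}, {xR:.4f})")
    return 0.5 * (xL + xR)

print(dichotomous(f, 0.0, 3.0))          # approaches x* = 2
```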
Golden section method
Iteration 1: $I_0 = (x_L, x_R) = (0, 3)$
$x_1 = 3 - 0.618(3 - 0) = 1.146$, $f(x_1) = 3.438$
$x_2 = 0 + 0.618(3 - 0) = 1.854$, $f(x_2) = 5.562$
$f(x_2) > f(x_1) \Rightarrow x_L = 1.146$, so $I_1 = (1.146, 3)$
Iteration 2: $I_1 = (x_L, x_R) = (1.146, 3)$
$x_1 = x_2(\text{iteration } 1) = 1.854$, $f(x_1) = 5.562$
$x_2 = 1.146 + 0.618(3 - 1.146) = 2.292$, $f(x_2) = 5.903$
$f(x_2) > f(x_1) \Rightarrow x_L = 1.854$, so $I_2 = (1.854, 3)$
…
The iterations converge to the optimal solution $x^* = 2$.
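A corresponding Python sketch of the golden section method is given below; note how each iteration reuses one of the two previous points, so only one new evaluation of $f$ is made per iteration.

```python
ALPHA = (5 ** 0.5 - 1) / 2               # golden ratio, approx. 0.618

def f(x):
    return 3 * x if x <= 2 else (20 - x) / 3

def golden_section(f, xL, xR, n_iter=10):
    x1, x2 = xR - ALPHA * (xR - xL), xL + ALPHA * (xR - xL)
    f1, f2 = f(x1), f(x2)
    for i in range(1, n_iter + 1):
        if f2 > f1:                      # maximum in (x1, xR): reuse x2 as x1
            xL, x1, f1 = x1, x2, f2
            x2 = xL + ALPHA * (xR - xL)
            f2 = f(x2)
        else:                            # maximum in (xL, x2): reuse x1 as x2
            xR, x2, f2 = x2, x1, f1
            x1 = xR - ALPHA * (xR - xL)
            f1 = f(x1)
        print(f"Iteration {i}: I = ({xL:.4f}, {xR:.4f})")
    return 0.5 * (xL + xR)

print(golden_section(f, 0.0, 3.0))       # approaches x* = 2
```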
Assignment: Continue the iterations of both methods until the interval of uncertainty is small enough to identify the optimum point.
Newton-Raphson Method
The method is based on the second-order Taylor expansion of $f$ about the current point $x_i$:
$$f(x) \approx f(x_i) + f'(x_i)(x - x_i) + \frac{1}{2} f''(x_i)(x - x_i)^2$$
so that
$$f'(x) \approx f'(x_i) + f''(x_i)(x - x_i)$$
So, at the optimum,
$$f'(x_i) + f''(x_i)(x^* - x_i) = 0$$
or
$$x^* = x_i - \frac{f'(x_i)}{f''(x_i)}$$
If $x_i$ is an approximation of $x^*$, the above can be rearranged to obtain an improved approximation as follows:
$$x_{i+1} = x_i - \frac{f'(x_i)}{f''(x_i)}$$
Example: Consider
$$f(x) = (3x - 2)^2 (2x - 3)^2$$
The necessary condition $f'(x) = 0$ gives (after dividing out a constant factor of 2)
$$72 x^3 - 234 x^2 + 241 x - 78 = 0$$
so the Newton-Raphson iteration becomes
$$x_{k+1} = x_k - \frac{72 x_k^3 - 234 x_k^2 + 241 x_k - 78}{216 x_k^2 - 468 x_k + 241}$$
Starting with $x_0 = 10$, the successive iterations are presented below:

k    x_k          f'(x_k)/f''(x_k)    x_{k+1}
0    10.000000    2.967892            7.032108
1    7.032108     1.976429            5.055679
2    5.055679     1.314367            3.741312
3    3.741312     0.871358            2.869995
4    2.869995     0.573547            2.296405
5    2.296405     0.371252            1.925154
6    1.925154     0.230702            1.694452
7    1.694452     0.128999            1.565453
8    1.565453     0.054156            1.511296
9    1.511296     0.010864            1.500432
10   1.500432     0.000431            1.500001
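The table is easy to reproduce in Python; in the sketch below, `g` denotes the cubic $f'(x)/2$ from above, so `g / g_prime` equals $f'/f''$.

```python
def g(x):
    # f'(x) / 2 for f(x) = (3x - 2)^2 (2x - 3)^2
    return 72 * x**3 - 234 * x**2 + 241 * x - 78

def g_prime(x):
    return 216 * x**2 - 468 * x + 241

x = 10.0
for k in range(11):
    step = g(x) / g_prime(x)             # equals f'(x) / f''(x)
    print(f"{k:2d}  {x:10.6f}  {step:10.6f}  {x - step:10.6f}")
    x -= step                            # Newton-Raphson update
```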
Quasi-Newton Method
If the derivatives of $f$ are not available analytically, the Newton-Raphson recursion
$$x_{k+1} = x_k - \frac{f'(x_k)}{f''(x_k)}$$
can still be applied by approximating the derivatives with central differences:
$$f'(x_i) \approx \frac{f(x_i + \Delta x) - f(x_i - \Delta x)}{2 \Delta x}$$
$$f''(x_i) \approx \frac{f(x_i + \Delta x) - 2 f(x_i) + f(x_i - \Delta x)}{\Delta x^2}$$
and hence, the iterative procedure will be performed based on:
$$x_{i+1} = x_i - \frac{\Delta x \left[ f(x_i + \Delta x) - f(x_i - \Delta x) \right]}{2 \left[ f(x_i + \Delta x) - 2 f(x_i) + f(x_i - \Delta x) \right]}$$
Stopping criterion:
$$\left| f'(x_{i+1}) \right| = \left| \frac{f(x_{i+1} + \Delta x) - f(x_{i+1} - \Delta x)}{2 \Delta x} \right| \le \epsilon$$
Example: Find the minimum of the function
$$f(x) = 0.65 - \frac{0.75}{1 + x^2} - 0.65\, x \tan^{-1}\!\left(\frac{1}{x}\right)$$
using the quasi-Newton method with $x_1 = 0.1$ and $\Delta x = 0.01$ (select $\epsilon = 0.01$).
Iteration 1:
$f(x_1) = -0.188197$, $f(x_1 + \Delta x) = -0.195512$, $f(x_1 - \Delta x) = -0.180615$
$x_2 = 0.377882$
Convergence check: $|f'(x_2)| = 0.137300 > \epsilon$
Iteration 2:
$f(x_2) = -0.303368$, $f(x_2 + \Delta x) = -0.304662$, $f(x_2 - \Delta x) = -0.301916$
$x_3 = 0.465390$
Convergence check: $|f'(x_3)| = 0.017700 > \epsilon$
Iteration 3:
$f(x_3) = -0.309885$, $f(x_3 + \Delta x) = -0.310004$, $f(x_3 - \Delta x) = -0.309650$
$x_4 = 0.480600$
Convergence check: $|f'(x_4)| = 0.000350 < \epsilon$, so the process terminates with $x^* \approx 0.480600$.
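Finally, a minimal Python sketch of this quasi-Newton iteration under the settings above ($x_1 = 0.1$, $\Delta x = 0.01$, $\epsilon = 0.01$); the last printed digits may differ slightly from the notes because of rounding.

```python
import math

def f(x):
    return 0.65 - 0.75 / (1 + x**2) - 0.65 * x * math.atan(1 / x)

def quasi_newton(f, x, dx=0.01, eps=0.01, max_iter=50):
    for i in range(1, max_iter + 1):
        fp, f0, fm = f(x + dx), f(x), f(x - dx)
        # central-difference Newton step
        x -= dx * (fp - fm) / (2 * (fp - 2 * f0 + fm))
        deriv = (f(x + dx) - f(x - dx)) / (2 * dx)   # convergence check
        print(f"Iteration {i}: x = {x:.6f}, f'(x) = {deriv:.6f}")
        if abs(deriv) <= eps:
            return x
    return x

print(quasi_newton(f, 0.1))              # terminates near x* = 0.4806
```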