
Optimization

Nonlinear programming
One dimensional
minimization methods
MODULE - 2
Introduction
The basic philosophy of most of the numerical methods of
optimization is to produce a sequence of improved approximations
to the optimum according to the following scheme:

1. Start with an initial trial point X1.
2. Find a suitable direction Si (i = 1 to start with) which points in the
general direction of the optimum.
3. Find an appropriate step length λi* for movement along the direction Si.
4. Obtain the new approximation Xi+1 as
Xi+1 = Xi + λi* Si
5. Test whether Xi+1 is optimum. If Xi+1 is optimum, stop the procedure.
Otherwise set a new i = i + 1 and repeat step (2) onward.
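A minimal sketch of this general scheme (not from the slides) is given below; choose_direction and line_search are hypothetical placeholders for steps 2 and 3, and the optimality test of step 5 is simplified to a step-norm check.

```python
import numpy as np

def iterative_minimize(f, x1, choose_direction, line_search, tol=1e-6, max_iter=100):
    """Generic scheme: X_{i+1} = X_i + lambda_i* S_i (sketch only)."""
    x = np.asarray(x1, dtype=float)
    for i in range(max_iter):
        s = choose_direction(f, x)           # step 2: search direction S_i
        lam = line_search(f, x, s)           # step 3: one-dimensional minimization for lambda_i*
        x_new = x + lam * s                  # step 4: new approximation X_{i+1}
        if np.linalg.norm(x_new - x) < tol:  # step 5: simplified optimality test
            return x_new
        x = x_new
    return x
```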
Iterative Process of Optimization
Introduction
• The iterative procedure indicated is valid for unconstrained as
well as constrained optimization problems.

• If f(X) is the objective function to be minimized, the problem
of determining λi* reduces to finding the value λi = λi* that
minimizes f(Xi+1) = f(Xi + λi Si) = f(λi) for fixed values of Xi
and Si.

• Since f becomes a function of the one variable λi only, the methods
of finding λi* described in this module are called one-dimensional
minimization methods.
One dimensional minimization
methods
• Analytical methods (differential calculus methods)
• Numerical methods
– Elimination methods / Region Elimination Methods
• Unrestricted search
• Exhaustive search
• Dichotomous search
• Fibonacci method
• Golden section method
– Interpolation methods
• Requiring no derivatives (quadratic)
• Requiring derivatives
– Cubic
– Direct root
» Newton
» Quasi-Newton
» Secant
One dimensional minimization
methods
Differential calculus methods:

• Analytical method

• Applicable to continuous, twice differentiable functions

• Calculation of the numerical value of the objective function is


virtually the last step of the process

• The optimal value of the objective function is calculated after


determining the optimal values of the decision variables
One dimensional minimization
methods
Numerical methods:

• The values of the objective function are first found at various combinations
of the decision variables

• Conclusions are then drawn regarding the optimal solution

• Elimination methods can be used for the minimization of even


discontinuous functions

• The quadratic and cubic interpolation methods involve polynomial


approximations to the given function

• The direct root methods are root finding methods that can be considered to
be equivalent to quadratic interpolation
Unimodal functions
Unimodal function
• A unimodal function is one that has only one peak (maximum)
or valley (minimum) in a given interval

• Thus a function of one variable is said to be unimodal if, given


that two values of the variable are on the same side of the
optimum, the one nearer the optimum gives the better
functional value (i.e., the smaller value in the case of a
minimization problem). This can be stated mathematically as
follows:
A function f (x) is unimodal if
– x1 < x2 < x* implies that f (x2) < f (x1) and
– x2 > x1 > x* implies that f (x1) < f (x2) where x* is the minimum point
Unimodal function
• Examples of unimodal functions:

• Thus, a unimodal function can be a nondifferentiable or


even a discontinuous function

• If a function is known to be unimodal in a given range, the


interval in which the minimum lies can be narrowed down
provided that the function values are known at two different
values in the range.
Unimodal function
• For example, consider the normalized interval [0,1] and two function
evaluations within the interval as shown:

• There are three possible outcomes:

– f1 < f2

– f1 > f2

– f1 = f2
Unimodal function

• If the outcome is f1 < f2, the minimizing x can not lie to the
right of x2

• Thus, that part of the interval [x2,1] can be discarded and a


new small interval of uncertainty, [0, x2] results as shown in
the figure
Unimodal function

• If the outcome is f (x1) > f (x2) , the interval [0, x1] can be
discarded to obtain a new smaller interval of uncertainty, [x1,
1].
Unimodal function

• If f1 = f2 , intervals [0, x1] and [x2,1] can both be discarded to


obtain the new interval of uncertainty as [x1,x2]
Unimodal function

• Furthermore, if one of the experiments (function evaluations in the


elimination method) remains within the new interval, as will be the
situation in Figs (a) and (b), only one other experiment need be placed
within the new interval in order that the process be repeated.

• In Fig (c), two more experiments are to be placed in the new interval in
order to find a reduced interval of uncertainty.
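A small sketch of this three-outcome elimination rule (assuming a unimodal f and illustrative names):

```python
def eliminate(f, a, b, x1, x2):
    """Given a < x1 < x2 < b and a unimodal f, return the reduced interval of uncertainty."""
    f1, f2 = f(x1), f(x2)
    if f1 < f2:
        return (a, x2)   # minimum cannot lie to the right of x2
    if f1 > f2:
        return (x1, b)   # minimum cannot lie to the left of x1
    return (x1, x2)      # f1 = f2: minimum lies between x1 and x2

# On the normalized interval [0, 1]:
print(eliminate(lambda x: (x - 0.3)**2, 0.0, 1.0, 0.4, 0.6))  # -> (0.0, 0.6)
```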
Unimodal function

• The assumption of unimodality is made in all the elimination


techniques

• If a function is known to be multimodal (i.e., having several


valleys or peaks), the range of the function can be subdivided
into several parts and the function treated as a unimodal
function in each part.
Elimination methods
• In most practical problems, the optimum solution is known to lie within
restricted ranges of the design variables.
• In some cases, this range is not known, and hence the search has to be
made with no restrictions on the values of the variables.

UNRESTRICTED SEARCH

• Search with fixed step size

• Search with accelerated step size


Unrestricted Search
Search with fixed step size
• The most elementary approach for such a problem is to use a
fixed step size and move from an initial guess point in a
favorable direction (positive or negative).

• The step size used must be small in relation to the final


accuracy desired.

• Simple to implement

• Not efficient in many cases


Unrestricted Search
Search with fixed step size
1. Start with an initial guess point, say, x1
2. Find f1 = f (x1)
3. Assuming a step size s, find x2 = x1 + s
4. Find f2 = f (x2)
5. If f2 < f1, and if the problem is one of minimization, the
assumption of unimodality indicates that the desired
minimum can not lie at x < x1. Hence the search can be
continued further along points x3, x4,….using the
unimodality assumption while testing each pair of
experiments. This procedure is continued until a point,
xi=x1+(i-1)s, shows an increase in the function value.
Unrestricted Search
Search with fixed step size (cont’d)

6. The search is terminated at xi, and either xi or xi-1 can be


taken as the optimum point
7. If, instead, f1 < f2 initially, the search should be carried in the
reverse direction at points x-2, x-3,…., where x-j = x1 - (j-1)s
8. If f2=f1 , the desired minimum lies in between x1 and x2, and
the minimum point can be taken as either x1 or x2.
9. If it happens that both f2 and f-2 are greater than f1, it implies
that the desired minimum will lie in the double interval
x-2 < x < x2
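A minimal sketch of steps 1 to 6 above (forward direction only, with illustrative names and an arbitrary iteration cap); steps 7 to 9 would add the reverse-direction and equal-value cases.

```python
def fixed_step_search(f, x1, s, max_steps=100000):
    """Unrestricted search with a fixed step size s (minimization, forward direction only)."""
    x_prev, f_prev = x1, f(x1)
    for i in range(1, max_steps + 1):
        x = x1 + i * s                 # x_{i+1} = x_1 + i*s
        fx = f(x)
        if fx > f_prev:                # first increase in f: terminate the search
            return x_prev              # take the previous point as the (approximate) optimum
        x_prev, f_prev = x, fx
    return x_prev

# Example from the slides: f(x) = x(x - 4), starting at x = 1 with step size 0.1
print(fixed_step_search(lambda x: x * (x - 4.0), 1.0, 0.1))  # ~ 2.0
```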
Unrestricted Search
Search with accelerated step size
• Although the search with a fixed step size appears to be very
simple, its major limitation comes because of the
unrestricted nature of the region in which the minimum can
lie.

• For example, if the minimum point for a particular function


happens to be xopt=50,000 and in the absence of knowledge
about the location of the minimum, if x1 and s are chosen as
0.0 and 0.1, respectively, we have to evaluate the function
5,000,001 times to find the minimum point. This involves a
large amount of computational work.
Unrestricted Search
Search with accelerated step size (cont’d)
• An obvious improvement can be achieved by increasing the
step size gradually until the minimum point is bracketed.
• A simple method consists of doubling the step size as long
as the move results in an improvement of the objective
function.
• One possibility is to reduce the step length after bracketing
the optimum in ( xi-1, xi). By starting either from xi-1 or xi, the
basic procedure can be applied with a reduced step size.
This procedure can be repeated until the bracketed interval
becomes sufficiently small.
Example (fixed step size)
Minimize f(x) = x(x - 4), x ∈ [0, 4]. Given that f(x) is unimodal, start with x = 1
and a step size of 0.1.
f(1) = -3, f(0.9) = -2.79, f(1.1) = -3.19
Since f(1.1) < f(1), the search proceeds in the positive x direction.
Example (accelerated step size )
Find the minimum of f = x (x-1.5) by starting from 0.0 with an initial step size of
0.05.
Solution:
The function value at x1 is f1=0.0. If we try to start moving in the negative x
direction, we find that x-2=-0.05 and f-2=0.0775. Since f-2>f1, the assumption of
unimodality indicates that the minimum can not lie toward the left of x-2. Thus,
we start moving in the positive x direction and obtain the following results:

i   Value of s   xi = x1 + s   fi = f(xi)   Is fi > fi-1?
1   -            0.0           0.0          -
2   0.05         0.05          -0.0725      No
3   0.10         0.10          -0.140       No
4   0.20         0.20          -0.260       No
5   0.40         0.40          -0.440       No
6   0.80         0.80          -0.560       No
7   1.60         1.60          +0.160       Yes
Example
Solution:
From these results, the optimum point can be seen to be xopt ≈ x6 = 0.8.

In this case, the points x6 and x7 do not really bracket the minimum point but
provide information about it.

If a better approximation to the minimum is desired, the procedure can be


restarted from x5 with a smaller step size.
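A hedged sketch of the accelerated-step bracketing described above (doubling the step while the move improves the function); the example call reproduces the bracketing pair (0.8, 1.6) obtained in the table. Names are illustrative.

```python
def accelerated_step_search(f, x1, s0):
    """Double the step size as long as the move improves f; return the last two points."""
    s = s0
    x_prev, f_prev = x1, f(x1)
    x, fx = x1 + s, f(x1 + s)
    while fx < f_prev:          # keep moving while the function value decreases
        x_prev, f_prev = x, fx
        s *= 2.0                # accelerate: double the step size
        x, fx = x1 + s, f(x1 + s)
    return x_prev, x            # these points bracket (or at least locate) the minimum

# Example from the slides: f(x) = x(x - 1.5), x1 = 0.0, initial step 0.05
print(accelerated_step_search(lambda x: x * (x - 1.5), 0.0, 0.05))  # ~ (0.8, 1.6)
```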
Dichotomous search
• The exhaustive search method is a simultaneous search method in
which all the experiments are conducted before any judgement is
made regarding the location of the optimum point.

• The dichotomous search method , as well as the Fibonacci and the


golden section methods discussed in subsequent sections, are
sequential search methods in which the result of any experiment
influences the location of the subsequent experiment.

• In the dichotomous search, two experiments are placed as close as


possible at the center of the interval of uncertainty.

• Based on the relative values of the objective function at the two


points, almost half of the interval of uncertainty is eliminated.
Dichotomous search
• Let the positions of the two
experiments be given by:

x1 = L0/2 - δ/2
x2 = L0/2 + δ/2

where δ is a small positive number
chosen such that the two
experiments give significantly
different results.
Dichotomous Search
• Then the new interval of uncertainty is given by (L0/2 + δ/2).

• The building block of dichotomous search consists of conducting a pair


of experiments at the center of the current interval of uncertainty.

• The next pair of experiments is, therefore, conducted at the center of


the remaining interval of uncertainty.

• This results in the reduction of the interval of uncertainty by nearly a


factor of two.
Dichotomous Search
• The intervals of uncertainty at the ends of different pairs of
experiments are given in the following table.
Number of experiments            2              4                        6
Final interval of uncertainty    (L0 + δ)/2     (1/2)(L0 + δ)/2 + δ/2    (1/4)(L0 + δ)/2 + (δ/2)(1 + 1/2)

• In general, the final interval of uncertainty after conducting n
experiments (n even) is given by:

Ln = L0 / 2^(n/2) + δ (1 - 1/2^(n/2))
Dichotomous Search
Example: Find the minimum of f = x(x-1.5) in the interval (0.0,1.0) to within
10% of the exact value.

Solution: The ratio of final to initial intervals of uncertainty is given by:

Ln/L0 = 1/2^(n/2) + (δ/L0)(1 - 1/2^(n/2))

where δ is a small quantity, say 0.001, and n is the number of experiments.

If the middle point of the final interval is taken as the optimum point, the
requirement can be stated as:

(1/2)(Ln/L0) ≤ 1/10

i.e.

1/2^(n/2) + (δ/L0)(1 - 1/2^(n/2)) ≤ 1/5
Dichotomous Search
Solution: Since δ = 0.001 and L0 = 1.0, we have

1/2^(n/2) + (1/1000)(1 - 1/2^(n/2)) ≤ 1/5

i.e.

(999/1000)(1/2^(n/2)) ≤ 1/5 - 1/1000 = 995/5000,  or  2^(n/2) ≥ 999/199 ≈ 5.0

Since n has to be even, this inequality gives the minimum admissible value
of n as 6. The search is made as follows: The first two experiments are
made at:

x1 = L0/2 - δ/2 = 0.5 - 0.0005 = 0.4995
x2 = L0/2 + δ/2 = 0.5 + 0.0005 = 0.5005
Dichotomous Search
with the function values given by:

f1  f ( x1 )  0.4995 (1.0005 )  0.49975


f 2  f ( x2 )  0.5005 (0.9995 )  0.50025
Since f2 < f1, the new interval of uncertainty will be (0.4995,1.0). The
second pair of experiments is conducted at :

1.0  0.4995
x3  (0.4995  )  0.0005  0.74925
2
1.0  0.4995
x4  (0.4995  )  0.0005  0.75025
2
which gives the function values as:

f 3  f ( x3 )  0.74925 (0.75075 )  0.5624994375


f 4  f ( x4 )  0.75025 (0.74975 )  0.5624994375 -0.5624999375
Dichotomous Search
Since f3 > f4 , we delete (0.4995,x3) and obtain the new interval of
uncertainty as:
(x3,1.0)=(0.74925,1.0)
The final set of experiments will be conducted at:
1.0  0.74925
x3  (0.74925 
x5 )  0.0005  0.874125
2
1.0  0.74925
x
x64  ( 0.74925  )  0.0005  0.875125
2
which gives the function values as:
f 5  f ( x5 )  0.874125 (0.625875 )  0.5470929844
f 6  f ( x6 )  0.875125 (0.624875 )  0.5468437342
Dichotomous Search
Since f5 < f6, the new interval of uncertainty is given by (x3, x6) =
(0.74925, 0.875125). The middle point of this interval can be taken as
the optimum, and hence:
xopt ≈ 0.8121875
fopt ≈ -0.5586327148
Dichotomous search
Example: Minimize f(x) = 4x^3 + x^2 - 7x + 14
• δ = 0.001
• Interval [0, 1]
• n = 8
(The iterations are worked out on the slides; the minimum value of the
function obtained is 10.9599524.)
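As a quick sanity check of the value quoted above (not part of the dichotomous procedure itself), the stationary point of this cubic inside [0, 1] can be obtained from f'(x) = 12x² + 2x - 7 = 0:

```python
import math

# f(x) = 4x^3 + x^2 - 7x + 14, so f'(x) = 12x^2 + 2x - 7
x_star = (-2 + math.sqrt(2**2 + 4 * 12 * 7)) / (2 * 12)   # positive root of the quadratic
f_star = 4 * x_star**3 + x_star**2 - 7 * x_star + 14
print(x_star, f_star)   # ~ 0.685 and ~ 10.9599, consistent with the value above
```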


Fibonacci method
The Fibonacci method can be used to find
the minimum of a function of one
variable even if the function is not
continuous. The limitations of the
method are:

• The initial interval of uncertainty, in


which the optimum lies, has to be
known.

• The function being optimized has to


be unimodal in the initial interval of
uncertainty.
Fibonacci method
This method makes use of the sequence of Fibonacci
numbers, {Fn}, for placing the experiments. These numbers
are defined as:

F0  F1  1
Fn  Fn 1  Fn  2 , n  2,3,4, 

which yield the sequence 1,1,2,3,5,8,13,21,34,55,89,...
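A tiny sketch that generates these numbers, matching the sequence above:

```python
def fibonacci(n):
    """Return [F0, F1, ..., Fn] with F0 = F1 = 1 and Fn = Fn-1 + Fn-2."""
    fib = [1, 1]
    for _ in range(2, n + 1):
        fib.append(fib[-1] + fib[-2])
    return fib

print(fibonacci(10))  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
```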


Fibonacci method
Procedure:
Let L0 be the initial interval of
uncertainty defined by a ≤ x ≤ b
and n be the total number of
experiments to be conducted.
Define

L2* = (Fn-2 / Fn) L0

and place the first two
experiments at points x1 and x2,
which are located at a distance of
L2* from each end of L0.
Example: Find the minimum of the function given on the slide in the
interval [0, 5] by the Fibonacci method using n = 3.
Iteration 2
• Interval [2, 5], with the points placed as 2 --- 2.6 ------ 4.4 ----- 5
• L = 3
• L3* = 3/5 = 0.6
• x3 = a + L3* = 2 + 0.6 = 2.6
• x4 = b - L3* = 5 - 0.6 = 4.4
• f(x3) = f(2.6) = 27.52
• f(x4) = f(4.4) = 31.63. Since f(x3) < f(x4), discard (4.4, 5]; the new
interval of uncertainty is [2, 4.4], i.e., a = 2, b = 4.4
Iteration 3
• a = 2, b = 4.4, k = 4
• Since k > n, stop the iterations.
• Minimum function value found = 27.52 (at x = 2.6)
Fibonacci method
Procedure:
This gives

x1 = a + L2* = a + (Fn-2 / Fn) L0
x2 = b - L2* = b - (Fn-2 / Fn) L0 = a + (Fn-1 / Fn) L0

Discard part of the interval by using the unimodality
assumption. Then there remains a smaller interval of
uncertainty L2 given by:

L2 = L0 - L2* = L0 (1 - Fn-2/Fn) = (Fn-1 / Fn) L0
Fibonacci method
Procedure:
The only experiment left in L2 will be at a distance of

L2* = (Fn-2 / Fn) L0 = (Fn-2 / Fn-1) L2

from one end and

L2 - L2* = (Fn-3 / Fn) L0 = (Fn-3 / Fn-1) L2

from the other end. Now place the third experiment in the interval
L2 so that the current two experiments are located at a distance of

L3* = (Fn-3 / Fn) L0 = (Fn-3 / Fn-1) L2

from each end of L2.
Fibonacci method
Procedure:
• This process of discarding a certain interval and placing a new
experiment in the remaining interval can be continued, so that the
location of the jth experiment and the interval of uncertainty at the end
of j experiments are, respectively, given by:

Fn  j
L 
*
j L j 1
Fn ( j  2 )
Fn ( j 1)
Lj  L0
Fn
Fibonacci method
Procedure:

• The ratio of the interval of uncertainty remaining after conducting


j of the n predetermined experiments to the initial interval of
uncertainty becomes:

Lj/L0 = Fn-(j-1) / Fn

and for j = n, we obtain

Ln/L0 = F1/Fn = 1/Fn
Fibonacci method
• The ratio Ln/L0 will permit us to determine n, the required number
of experiments, to achieve any desired accuracy in locating the
optimum point. The table gives the reduction ratio in the interval of
uncertainty obtainable for different numbers of experiments.
Fibonacci method
Position of the final experiment:
• In this method, the last experiment has to be placed with some
care. Equation
Fn  j
L 
*
j L j 1
Fn ( j 2)

gives
L*n F 1
 0  for all n
Ln 1 F2 2

• Thus, after conducting n-1 experiments and discarding the


appropriate interval in each step, the remaining interval will
contain one experiment precisely at its middle point.
Fibonacci method
Position of the final experiment:
• However, the final experiment, namely, the nth
experiment, is also to be placed at the center of the
present interval of uncertainty.
• That is, the position of the nth experiment will be the
same as that of ( n-1)th experiment, and this is true for
whatever value we choose for n.
• Since no new information can be gained by placing the
nth experiment exactly at the same location as that of
the (n-1)th experiment, we place the nth experiment
very close to the remaining valid experiment, as in the
case of the dichotomous search method.
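A compact sketch of the whole Fibonacci procedure under the stated assumptions (unimodal f on [a, b], n ≥ 3 chosen in advance); the δ used for the last experiment follows the idea just described, and all names are illustrative. The example call mirrors the worked example that follows, although the exact final interval depends on the δ chosen.

```python
import math

def fibonacci_search(f, a, b, n, delta=1e-4):
    """Fibonacci search with n experiments on [a, b] (minimization of a unimodal f, n >= 3)."""
    fib = [1, 1]
    while len(fib) < n + 1:
        fib.append(fib[-1] + fib[-2])              # F0, F1, ..., Fn
    x1 = a + (fib[n - 2] / fib[n]) * (b - a)       # first two experiments, L2* from each end
    x2 = a + (fib[n - 1] / fib[n]) * (b - a)
    f1, f2 = f(x1), f(x2)
    for k in range(3, n + 1):                      # experiments 3, 4, ..., n
        if f1 > f2:                                # discard [a, x1]
            a, x1, f1 = x1, x2, f2
            x2 = a + (fib[n - k + 1] / fib[n - k + 2]) * (b - a)
            if k == n:                             # last experiment would coincide with x1,
                x2 = x1 + delta                    # so place it just beside it (delta trick)
            f2 = f(x2)
        else:                                      # discard [x2, b]
            b, x2, f2 = x2, x1, f1
            x1 = a + (fib[n - k] / fib[n - k + 2]) * (b - a)
            if k == n:
                x1 = x2 - delta
            f1 = f(x1)
    if f1 > f2:                                    # final elimination using the n-th experiment
        a = x1
    else:
        b = x2
    return (a + b) / 2.0                           # middle of the final interval of uncertainty

# Example in the spirit of the slides: n = 6 experiments on [0, 3]
f6 = lambda x: 0.65 - 0.75 / (1 + x**2) - 0.65 * x * math.atan(1 / x)
print(fibonacci_search(f6, 0.0, 3.0, 6))
```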
Fibonacci method
Example:
Minimize
f(x) = 0.65 - 0.75/(1 + x²) - 0.65 x tan⁻¹(1/x) in the interval [0, 3]
by the Fibonacci method using n=6.
Solution: Here n=6 and L0=3.0, which yield:
Fn  2 5
L2 *  L0  (3.0)  1.153846
Fn 13
Thus, the positions of the first two experiments are given by
x1=1.153846 and x2=3.0-1.153846=1.846154 with f1=f(x1)=-
0.207270 and f2=f(x2)=-0.115843. Since f1 is less than f2, we
can delete the interval [x2,3] by using the unimodality
assumption.
Fibonacci method
Solution:
Fibonacci method
Solution:
The third experiment is placed at x3=0+ (x2-x1)=1.846154-
1.153846=0.692308, with the corresponding function value of f3=-
0.291364. Since f1 is greater than f3, we can delete the interval [x1,x2]
Fibonacci method
Solution:
The next experiment is located at x4=0+ (x1-x3)=1.153846-
0.692308=0.461538, with f4=-0.309811. Noting that f4 is less than f3, we
can delete the interval [x3,x1]
Fibonacci method
Solution:
The location of the next experiment can be obtained as x5 = 0 + (x3 -
x4) = 0.692308 - 0.461538 = 0.230770, with the corresponding objective
function value of f5 = -0.263678. Since f5 is greater than f4, we can delete the
interval [0, x5]
Fibonacci method
Solution:
The final experiment is positioned at x6=x5+ (x3-x4)=0.230770+(0.692308-
0.461538)=0.461540 with f6=-0.309810. (Note that, theoretically, the
value of x6 should be same as that of x4; however,it is slightly different
from x4 due to the round off error). Since f6 > f4 , we delete the interval
[x6, x3] and obtain the final interval of uncertainty as L6 = [x5,
x6]=[0.230770,0.461540].
Fibonacci method
Solution:
The ratio of the final to the initial interval of uncertainty is

L6/L0 = (0.461540 - 0.230770)/3.0 = 0.076923

This value can be compared with

Ln/L0 = F1/Fn = 1/Fn

which states that if n experiments (n=6) are planned, a resolution


no finer than 1/Fn= 1/F6=1/13=0.076923 can be expected from
the method.
Golden Section Method
• The golden section method is same as the Fibonacci
method except that in the Fibonacci method, the total
number of experiments to be conducted has to be
specified before beginning the calculation, whereas this
is not required in the golden section method.
Golden Section Method
• In the Fibonacci method, the location of the first two
experiments is determined by the total number of
experiments, n.

• In the golden section method, we start with the


assumption that we are going to conduct a large number
of experiments.

• Of course, the total number of experiments can be


decided during the computation.
aw = 0.382
bw = 0.618
Repeat this process until the difference is very small, that is, until Lw is
reduced to a prescribed ε; here Lw = 0.618 - 0.382 = 0.236.
x = 5w
New a = 5(0.382) = 1.91
New b = 5(0.618) = 3.09
Final interval: [1.91, 3.09]
Example 2 (worked on the slides using the golden section method)
Golden Section Method
Using the relation:

FN = FN-1 + FN-2

we obtain, after dividing both sides by FN-1,

FN/FN-1 = 1 + FN-2/FN-1

By defining a ratio γ as

γ = lim (N→∞) FN/FN-1
Golden Section Method
The equation

FN/FN-1 = 1 + FN-2/FN-1

can be expressed as:

γ = 1 + 1/γ

that is:

γ² - γ - 1 = 0
Golden Section Method
This gives the root γ = 1.618, and hence the equation

Lk ≅ lim (N→∞) (FN-1/FN)^(k-1) L0

yields:

Lk = (1/γ)^(k-1) L0 = (0.618)^(k-1) L0

In the equation

L3 ≅ lim (N→∞) (FN-1/FN)² L0
the ratios FN-2/FN-1 and FN-1/FN have been taken to be
same for large values of N. The validity of this
assumption can be seen from the table:
Value of N       2     3      4    5      6       7      8       9       10      ∞
Ratio FN-1/FN    0.5   0.667  0.6  0.625  0.6154  0.619  0.6176  0.6182  0.6180  0.618
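A short sketch of the golden section search under these assumptions (unimodal f, tolerance-based stopping instead of a preselected n); the ratio 0.618 is the limiting value derived above, and the names are illustrative.

```python
import math

def golden_section_search(f, a, b, tol=1e-4):
    """Golden section search on [a, b] for a unimodal f (minimization)."""
    r = 0.618                          # limiting ratio F(N-1)/F(N), i.e. 1/gamma
    x1 = b - r * (b - a)               # = a + 0.382 (b - a)
    x2 = a + r * (b - a)               # = a + 0.618 (b - a)
    f1, f2 = f(x1), f(x2)
    while (b - a) > tol:               # n need not be fixed in advance
        if f1 > f2:                    # discard [a, x1]
            a, x1, f1 = x1, x2, f2
            x2 = a + r * (b - a)
            f2 = f(x2)
        else:                          # discard [x2, b]
            b, x2, f2 = x2, x1, f1
            x1 = b - r * (b - a)
            f1 = f(x1)
    return (a + b) / 2.0

# Example: the same function as in the Fibonacci example, on [0, 3]
f_gs = lambda x: 0.65 - 0.75 / (1 + x**2) - 0.65 * x * math.atan(1 / x)
print(golden_section_search(f_gs, 0.0, 3.0))   # ~ 0.48
```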
Golden Section Method
The ratio  has a historical background. Ancient Greek architects
believed that a building having the sides d and b satisfying the
relation
d b d
 
d b
will be having the most pleasing properties. It is also found in
Euclid’s geometry that the division of a line segment into two
unequal parts so that the ratio of the whole to the larger part is equal
to the ratio of the larger to the smaller, being known as the golden
section, or golden mean-thus the term golden section method.
Comparison of elimination methods
• The efficiency of an elimination method can be measured in terms of the ratio of the
final and the initial intervals of uncertainty, Ln/L0
• The values of this ratio achieved in various methods for a specified number of
experiments (n=5 and n=10) are compared in the Table below:

• It can be seen that the Fibonacci method is the most efficient method, followed by the
golden section method, in reducing the interval of uncertainty.
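A small sketch that computes the ratio Ln/L0 from the formulas given earlier for three of the methods (dichotomous with δ neglected, Fibonacci, and golden section), for the n = 5 and n = 10 cases mentioned above; note that the dichotomous formula strictly applies only to even n.

```python
def fib(n):
    """F_n with F_0 = F_1 = 1."""
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return b

for n in (5, 10):
    dichotomous = 1 / 2 ** (n / 2)     # Ln/L0 with delta neglected
    fibonacci = 1 / fib(n)             # Ln/L0 = 1/Fn
    golden = 0.618 ** (n - 1)          # Ln/L0 = (0.618)^(n-1)
    print(n, round(dichotomous, 4), round(fibonacci, 4), round(golden, 4))
# The Fibonacci method gives the smallest ratio, followed by the golden section method.
```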
Comparison of elimination methods
• A similar observation can be made by considering the number of
experiments (or function evaluations) needed to achieve a specified
accuracy in various methods.

• The results are compared in the Table below for maximum permissible
errors of 0.1 and 0.01.

• It can be seen that to achieve any specified accuracy, the Fibonacci method
requires the least number of experiments, followed by the golden section
method.
Practical Considerations
Sometimes, the Direct Root Methods such as the
Newton, Quasi-Newton and the Secant method or the
interpolation methods such as the quadratic and the
cubic interpolation methods may be:
• very slow to converge,
• may diverge
• may predict the minimum of the function f(λ) outside the
initial interval of uncertainty, especially when the
interpolating polynomial is not representative of the
variation of the function being minimized.
In such cases, we can use the Fibonacci or the golden
section method to find the minimum.
Practical Considerations
In some problems, it might prove to be more
efficient to combine several techniques. For
example:
• The unrestricted search with an accelerated step
size can be used to bracket the minimum and
then the Fibonacci or the golden section method
can be used to find the optimum point.
• In some cases, the Fibonacci or the golden
section method can be used in conjunction with
an interpolation method.
Comparison of methods
• The Fibonacci method is the most efficient elimination technique in finding
the minimum of a function if the initial interval of uncertainty is known.

• In the absence of the initial interval of uncertainty, the quadratic


interpolation method or the quasi-Newton method is expected to be more
efficient when the derivatives of the function are not available.

• When the first derivatives of the function being minimized are available,
the cubic interpolation method or the secant method are expected to be
very efficient.

• On the other hand, if both the first and the second derivatives of the
function are available, the Newton method will be the most efficient one in
finding the optimal step length, λ*.
