L8 Single Variable Optimization Algorithms
Minimize f(x), where f(x) is the objective function and x is a real variable.
The purpose of an optimization algorithm is to find a solution x for which the function f(x) is minimum.
A) Direct Methods:
i) Bracketing Methods:
b) Bounding Phase Method
Bounding phase method is used to bracket the minimum of a function. This method
guarantees to bracket the minimum of a unimodal function.
The algorithm begins with an initial guess and thereby finds a search direction based on
two more function evaluations in the vicinity of the initial guess. Thereafter, an
exponential search strategy is adopted to bracket the optimum.
Once the minimum point is bracketed, a more sophisticated algorithm needs to be used
to improve the accuracy of the solution.
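As an illustration, a minimal Python sketch of this idea is given below; the function name, the default increment, and the handling of the starting point are assumptions made for the example rather than details prescribed in these notes.

def bounding_phase(f, x0, delta=0.1):
    """Bracket the minimum of a unimodal function f, starting from the guess x0.
    Returns an interval (a, b) that contains the minimum."""
    # Decide the search direction from three evaluations around x0.
    f_minus, f_zero, f_plus = f(x0 - delta), f(x0), f(x0 + delta)
    if f_minus >= f_zero <= f_plus:
        return (x0 - delta, x0 + delta)  # x0 is already flanked by larger values
    if f_plus > f_zero:
        delta = -delta                   # the function decreases to the left
    # Exponential expansion: each new step is twice the previous one.
    k = 0
    x_prev, x_curr = x0, x0 + delta
    x_next = x_curr + 2 ** (k + 1) * delta
    while f(x_next) < f(x_curr):
        k += 1
        x_prev, x_curr = x_curr, x_next
        x_next = x_curr + 2 ** (k + 1) * delta
    # f increased at x_next, so the minimum lies between x_prev and x_next.
    return (min(x_prev, x_next), max(x_prev, x_next))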
The fundamental rule for region-elimination methods is as follows:
Let us consider two points x1 and x2 which lie in the interval (a, b) and satisfy x1 < x2.
• If f(x1) > f(x2) then the minimum does not lie in (a, x1).
• If f(x1) < f(x2) then the minimum does not lie in (x2, b).
• If f(x1) = f(x2) then the minimum does not lie in (a, x1) and (x2, b).
Consider a unimodal function drawn in Figure. If the function value at x1 is larger than
that at x2, the minimum point x∗ cannot lie to the left of x1. Thus, we can
eliminate the region (a, x1) from further consideration.
Therefore, we reduce our interval of interest from (a, b) to (x1, b). Similarly, the second
possibility (f(x1) < f(x2)) can be explained.
If the third situation occurs, that is, when f(x1) = f(x2) (a rare situation, especially
when numerical computations are performed), we can conclude that the regions (a, x1) and
(x2, b) can both be eliminated, with the assumption that there exists only one local minimum
in the search space (a, b).
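This rule translates almost directly into code; the short Python sketch below (with illustrative names) returns the reduced interval that still contains the minimum.

def eliminate_region(a, b, x1, x2, f1, f2):
    """Fundamental region-elimination rule for x1 < x2 inside (a, b),
    where f1 = f(x1) and f2 = f(x2)."""
    if f1 > f2:
        return (x1, b)   # the minimum cannot lie in (a, x1)
    if f1 < f2:
        return (a, x2)   # the minimum cannot lie in (x2, b)
    return (x1, x2)      # f1 == f2: the minimum lies between x1 and x2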
In the golden section search method, the search space (a, b) is first linearly mapped to a unit interval search
space (0, 1). Thereafter, two points, each located a fraction τ of the current interval away from one
of its ends, are chosen, so that at every iteration the retained interval is τ times that in the
previous iteration; that is, a fraction (1 − τ) of the interval is eliminated (Figure).
This is achieved by equating 1 − τ with (τ × τ). Solving τ^2 + τ − 1 = 0 gives the golden number
τ = (√5 − 1)/2 ≈ 0.618. Figure can be used to verify that in each iteration one of the two points x1 and x2
is always a point considered in the previous iteration.
Algorithm:
Step 1 Choose a lower bound a and an upper bound b. Also choose a small number ϵ.
Normalize the variable x by using the equation w = (x − a)/(b − a). Thus, aw = 0, bw = 1, and
Lw = 1. Set k = 1.
Step 2 Set w1 = aw + (0.618)Lw and w2 = bw − (0.618)Lw. Compute f(w1) or f(w2), depending on
whichever of the two was not evaluated earlier. Use the fundamental region-elimination rule to
eliminate a region. Set the new aw and bw, and Lw = bw − aw.
Step 3 Is |Lw| < ϵ? If no, set k = k + 1 and go to Step 2;
Else Terminate.
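Putting these steps together, a compact Python sketch of the golden section search is shown below. It works directly on the original variable x rather than the normalized w, and the default tolerance is an illustrative assumption.

TAU = 0.618  # golden number, the solution of 1 - tau = tau * tau

def golden_section_search(f, a, b, eps=1e-3):
    """Golden section search for the minimum of a unimodal f on (a, b)."""
    # Interior points, each a fraction TAU of the interval away from one end (so x2 < x1).
    x1 = a + TAU * (b - a)
    x2 = b - TAU * (b - a)
    f1, f2 = f(x1), f(x2)
    while (b - a) > eps:
        if f1 < f2:
            # Minimum cannot lie in (a, x2): keep (x2, b); the old x1 becomes the new x2.
            a, x2, f2 = x2, x1, f1
            x1 = a + TAU * (b - a)
            f1 = f(x1)              # the only new function evaluation this iteration
        else:
            # Minimum cannot lie in (x1, b): keep (a, x1); the old x2 becomes the new x1.
            b, x1, f1 = x1, x2, f2
            x2 = b - TAU * (b - a)
            f2 = f(x2)              # the only new function evaluation this iteration
    return 0.5 * (a + b)            # midpoint of the final bracket

As the comments indicate, one of the two interior points is always carried over from the previous iteration, so each iteration costs a single new function evaluation.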
In this algorithm, the interval reduces to (0.618)^(n−1) times the original interval after n function
evaluations. Thus, the number of function evaluations n required to achieve a desired accuracy ϵ is
calculated by solving the following equation:
(0.618)^(n−1) (b − a) = ϵ.
The effective region elimination per function evaluation is exactly 38.2 per cent, which is
higher than that in the interval halving method and the Fibonacci search.
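As a quick check of the equation above, n can be obtained directly by taking logarithms of both sides; the small helper below is an illustrative computation rather than part of the original notes.

import math

def evaluations_needed(a, b, eps):
    """Smallest n for which (0.618)**(n - 1) * (b - a) <= eps."""
    return math.ceil(1 + math.log(eps / (b - a)) / math.log(0.618))

print(evaluations_needed(0, 5, 1e-3))  # 19 evaluations for the interval (0, 5) and eps = 1e-3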
Example problem:
Consider the following function, perform two iterations, and report the solution,
considering an initial interval for x of [0, 5]:
f(x) = x^2 + 54/x
Step 1
We choose a = 0 and b = 5.
Thus, aw = 0, bw = 1, and Lw = 1.
Since the golden section method works with a transformed variable w, it is convenient to
work with the transformed function
f(w) = (5w)^2 + 54/(5w),
obtained by substituting x = a + w(b − a) = 5w.
Step 2
We set w1 = 0 + (0.618)(1) = 0.618 and w2 = 1 − (0.618)(1) = 0.382.
The corresponding function values are f(w1) = 27.02 and f(w2) = 31.92.
Since f(w1) < f(w2), the minimum cannot lie at any point smaller than w = 0.382.
Thus, we eliminate the region (a,w2) or (0, 0.382).
At this stage, Lw = 1 − 0.382 = 0.618. The region being eliminated after this iteration is
shown in Figure.
Step 3
Since |Lw| is not smaller than ϵ, we set k = 2 and move to Step 2. This completes one
iteration of the golden section search method.
Figure: Iteration 1
Step 2
For the second iteration, we set
w1 = 0.382 + (0.618)(0.618) = 0.764 and w2 = 1 − (0.618)(0.618) = 0.618.
We observe that the point w2 was computed in the previous iteration (it was w1 there). Thus, we only need to
compute the function value at w1: f(w1) = 28.73.
Using the fundamental region-elimination rule and observing the relation f(w1) > f(w2), we
eliminate the interval (0.764, 1).
Thus, the new bounds are aw = 0.382 and bw = 0.764, and the new interval is Lw =
0.764 − 0.382 = 0.382, which is incidentally equal to (0.618)^2!
Figure shows the final region after two iterations of this algorithm.
Step 3
Since the obtained interval is not smaller than ϵ, we continue to proceed to Step 2 after
incrementing the iteration counter k to 3.
Step 2
Here, we observe that w1 = 0.618 and w2 = 0.528, of which the point w1 was evaluated
before.
We also observe that f(w1) < f(w2) and we eliminate the interval (0.382, 0.528).
The new interval is (0.528, 0.764) and the new range is Lw = 0.764 − 0.528 = 0.236, which is
exactly equal to (0.618)^3!
Step 3
Thus, at the end of the third iteration, Lw = 0.236. This way, Steps 2 and 3 may be continued
until the desired accuracy is achieved.
We observe that at each iteration, only one new function evaluation is necessary. After three
iterations, we have performed only four function evaluations.
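For comparison, the golden section sketch given earlier can be applied to the same problem. With a small tolerance it should settle near x = 3, the exact minimizer of x^2 + 54/x; note that the sketch evaluates only interior points, so the function is never evaluated at x = 0, where it is undefined.

f = lambda x: x ** 2 + 54 / x
print(golden_section_search(f, 0, 5, eps=1e-4))  # expected to be close to 3.0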
B) Gradient-based Methods
i) Newton's Method
x(k+1) = x(k) − f′(x(k)) / f′′(x(k))
Algorithm
Step 1
Choose initial guess x(1) and a small number ϵ. Set k = 1. Compute f′(x(1)).
Step 2
Compute f′′(x(k)).
Step 3
Calculate x(k+1) = x(k) − f′(x(k)) / f′′(x(k)). Compute f′(x(k+1)).
Step 4
If |f′(x(k+1))| < ϵ, Terminate;
Else set k = k + 1 and go to Step 2.
Convergence of the algorithm depends on the initial point and the nature of the objective
function. For simple mathematical functions the derivative may be easy to compute analytically,
but in practice the derivatives often have to be computed numerically.
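A short Python sketch of this procedure, using central differences for both derivatives, is given below; the step size dx, the iteration limit, and the names are illustrative assumptions.

def newton_1d(f, x, eps=1e-3, dx=1e-2, max_iter=50):
    """Newton's method in one variable with central-difference derivatives."""
    for _ in range(max_iter):
        # Three function values per iteration, shared by both derivative estimates.
        f_minus, f_zero, f_plus = f(x - dx), f(x), f(x + dx)
        d1 = (f_plus - f_minus) / (2 * dx)              # numerical f'(x)
        if abs(d1) < eps:                               # terminate when the derivative is small
            break
        d2 = (f_plus - 2 * f_zero + f_minus) / dx ** 2  # numerical f''(x)
        x = x - d1 / d2                                  # Newton update
    return x

Applied to the example that follows, with the starting point x(1) = 1, this sketch should reproduce the sequence of points reported below.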
Example problem
Consider the following function, perform two iterations, and report the solution,
considering the starting point x(1) = 1:
f(x) = x^2 + 54/x
Step 1
We choose an initial guess x(1) = 1, a termination factor ϵ = 10−3, and an iteration counter k =
1.
We compute the derivative numerically (or symbolically, using the ‘diff’ function in MATLAB).
The computed derivative is −52.005, whereas the exact derivative at x(1) is found to be −52.
We accept the computed derivative value and proceed to Step 2.
Step 2
The exact second derivative of the function at x(1) = 1 is found to be 110.
The second derivative is computed numerically as f′′(x(1)) = 110.011, which is close to the exact value.
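These values are consistent with central differences using a step of Δx = 0.01; the step size is an assumption inferred from the reported numbers, not something stated explicitly in the notes.

f = lambda x: x ** 2 + 54 / x
dx = 0.01
d1 = (f(1 + dx) - f(1 - dx)) / (2 * dx)            # approx -52.005 (exact value: -52)
d2 = (f(1 + dx) - 2 * f(1) + f(1 - dx)) / dx ** 2  # approx 110.011 (exact value: 110)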
Step 3
We compute the next guess,
x(2) = x(1) − f′(x(1)) / f′′(x(1)) = 1 − (−52.005)/(110.011) = 1.473.
Step 4
Since the absolute value of f′(x(2)) is not smaller than ϵ, we set k = 2 and proceed to Step 2.
Step 2
We begin the second iteration by computing the second derivative numerically at x(2): f′′
(x(2)) = 35.796.
Step 3
The new point is calculated as x(3) ≈ 2.086 and the derivative f′(x(3)) is computed at this point.
Step 4
Since the absolute value of f′(x(3)) is not smaller than ϵ, we set k = 3 and move to Step 2.
Step 2
For the third iteration, the second derivative is computed numerically at x(3).
Step 3
The new point is calculated as x(4) = 2.679 and the derivative is f′(x(4)) = −2.167.
Nine function evaluations were required to obtain this point.
Step 4
Since the absolute value of this derivative is not smaller than ϵ, the search proceeds to Step 2.
Note
After three more iterations, we find that x(7) = 3.0001 and the derivative is f′(x(7)) =
−4 × 10^(−8), which is small enough to terminate the algorithm. Since at every iteration the first-
and second-order derivatives are calculated at a new point, a total of three function values are
evaluated at every iteration.
Summary:
The problem of finding the minimum point can be divided into two phases. At first, the
minimum point of the function needs to be bracketed between a lower and an upper
bound. Secondly, the minimum needs to be found as accurately as possible by keeping
the search effort enclosed in the bounds obtained in the first phase.
The exhaustive search method requires, in general, more function evaluations to bracket
the minimum but the user has a control over the final bracketing range. On the other
hand, the bounding phase method can bracket the minimum very fast (usually
exponentially fast) but the final bracketing range may be poor.
Once the minimum is bracketed, region-elimination methods, point estimation methods,
or gradient-based methods may be used to find a close estimate of the minimum.
Region-elimination methods exclude some portion of the search space at every iteration
by comparing function values at two points. Among the region-elimination methods the
golden section search is the most economical.
Gradient-based methods use derivative information, which may be computed
numerically.
For well-behaved objective functions, the convergence to the optimum point is faster
with gradient-based methods than with region-elimination methods.
However, for any arbitrary unimodal objective function, the golden section search
method is more reliable than other methods described in this chapter.
Exercise Problem
Use three iterations and find the value of x which maximizes the given function:
f(x) = 10 + x^3 − 2x − 5 exp(x)
a) Golden section search method in the interval (−5, 5).
b) Newton's method with a starting point x(1) = 0.