
21ED602 Optimization Techniques in Engineering

Class Notes (Internal Circulation)

8. Single Variable Optimization Algorithms

8.1 General Structure of the problem


Solve minimization problems of the following type:

Minimize f(x), where f(x) is the objective function and x is a real variable.

The purpose of an optimization algorithm is to find a solution x for which the function f(x) is minimized.

8.2 Types of algorithms


Two distinct types of algorithms are presented in this chapter: Direct search methods use
only objective function values to locate the minimum point, and Gradient-based methods
use the first and/or the second-order derivatives of the objective function to locate the
minimum point.

A) Direct Methods:

i) Bracketing Methods:

a) Exhaustive Search Method

 In the exhaustive search method, the optimum of a function is bracketed by calculating the function values at several equally spaced points (Figure).
 Usually, the search begins from a lower bound on the variable, and three consecutive function values are compared at a time, based on the assumption of unimodality of the function.
 Based on the outcome of the comparison, the search is either terminated or continued by replacing one of the three points by a new point. The search continues until the minimum is bracketed.
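A minimal Python sketch of this bracketing idea is given below. The function name exhaustive_search, the number of intervals n, and the test interval are illustrative assumptions, not part of the original algorithm statement.

```python
def exhaustive_search(f, a, b, n=20):
    """Bracket the minimum of a unimodal f on [a, b] using n equally spaced intervals.

    Returns (x1, x3) bracketing the minimum, or None if no interior
    minimum is detected (it may then lie on a boundary of [a, b]).
    """
    dx = (b - a) / n                      # spacing between consecutive points
    x1, x2, x3 = a, a + dx, a + 2 * dx    # three consecutive points
    while x3 <= b:
        if f(x1) >= f(x2) <= f(x3):       # middle point is lowest: minimum bracketed
            return (x1, x3)
        x1, x2, x3 = x2, x3, x3 + dx      # otherwise slide the window to the right
    return None

# Illustrative use on f(x) = x^2 + 54/x (the example function used later in these notes)
f = lambda x: x**2 + 54.0 / x
print(exhaustive_search(f, 0.5, 5.0))     # brackets the true minimum x* = 3
```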

b) Bounding Phase Method

 The bounding phase method is used to bracket the minimum of a function. This method is guaranteed to bracket the minimum of a unimodal function.
 The algorithm begins with an initial guess and finds a search direction based on two more function evaluations in the vicinity of the initial guess. Thereafter, an exponential search strategy is adopted to bracket the optimum.
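The exponential stepping can be sketched as follows. This is a simplified illustration assuming the common update x(k+1) = x(k) + 2^k Δ; the names bounding_phase, delta, and max_iter are illustrative choices, not prescribed by these notes.

```python
def bounding_phase(f, x0, delta=0.5, max_iter=50):
    """Bracket the minimum of a unimodal f starting from an initial guess x0.

    Sketch: choose a search direction from two extra evaluations around x0,
    then take exponentially growing steps (delta, 2*delta, 4*delta, ...)
    until the function value starts to rise.  Returns (lower, upper).
    """
    # Choose the sign of the step from f(x0 - delta), f(x0), f(x0 + delta).
    if f(x0 - delta) >= f(x0) >= f(x0 + delta):
        step = abs(delta)                 # function decreases to the right
    elif f(x0 - delta) <= f(x0) <= f(x0 + delta):
        step = -abs(delta)                # function decreases to the left
    else:
        return (x0 - abs(delta), x0 + abs(delta))   # x0 already brackets the minimum

    # Exponential expansion: x(k+1) = x(k) + 2^k * step.
    x_older, x_old, x_new = x0, x0 + step, x0 + 3 * step
    k = 1
    while f(x_new) < f(x_old) and k < max_iter:
        x_older, x_old = x_old, x_new
        k += 1
        x_new = x_old + 2**k * step
    # Once the value rises, the minimum lies between the outer two of the last three points.
    return (min(x_older, x_new), max(x_older, x_new))

# Illustrative use on f(x) = x^2 + 54/x starting from x0 = 0.6
f = lambda x: x**2 + 54.0 / x
print(bounding_phase(f, 0.6))             # returns an interval containing x* = 3
```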

ii) Region-Elimination Methods

 Once the minimum point is bracketed, a more sophisticated algorithm needs to be used
to improve the accuracy of the solution.
 The fundamental rule for region-elimination methods is as follows:

Let us consider two points x1 and x2 which lie in the interval (a, b) and satisfy x1 < x2.

For unimodal functions for minimization, we can conclude the following:

• If f(x1) > f(x2) then the minimum does not lie in (a, x1).

• If f(x1) < f(x2) then the minimum does not lie in (x2, b).

• If f(x1) = f(x2) then the minimum does not lie in (a, x1) and (x2, b).

The fundamental region-elimination rule is explained with the following Figure:

 Consider a unimodal function drawn in Figure. If the function value at x1 is larger than
that at x2, the minimum point x∗ cannot lie on the left-side of x1. Thus, we can
eliminate the region (a, x1) from further consideration.
 Therefore, we reduce our interval of interest from (a, b) to (x1, b). Similarly, the second
possibility (f(x1) < f(x2)) can be explained.
 If the third situation occurs, that is, when f(x1) = f(x2) (this is a rare situation, especially when numerical computations are performed), we can conclude that the regions (a, x1) and (x2, b) can be eliminated, with the assumption that there exists only one local minimum in the search space (a, b).
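This rule translates directly into a small helper; the sketch below is illustrative and the name eliminate_region is not from the notes.

```python
def eliminate_region(f, a, b, x1, x2):
    """Fundamental region-elimination rule for a unimodal f on (a, b).

    x1 and x2 are two interior points with a < x1 < x2 < b.  The returned
    sub-interval must still contain the minimum.
    """
    f1, f2 = f(x1), f(x2)
    if f1 > f2:
        return (x1, b)       # minimum cannot lie in (a, x1)
    if f1 < f2:
        return (a, x2)       # minimum cannot lie in (x2, b)
    return (x1, x2)          # f1 == f2: minimum cannot lie in (a, x1) or (x2, b)
```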

a) Golden Section Search Method

 In this algorithm, the search space (a, b) is first linearly mapped to a unit interval search space (0, 1). Thereafter, two points at τ from either end of the search space are chosen so that at every iteration the eliminated region is (1 − τ) times that in the previous iteration (Figure).

 This can be achieved by equating 1 − τ with τ × τ, i.e., solving τ^2 + τ − 1 = 0, which yields the golden number τ = (√5 − 1)/2 ≈ 0.618. The Figure can be used to verify that in each iteration one of the two points x1 and x2 is always a point considered in the previous iteration.

Algorithm:

Step 1 Choose a lower bound a and an upper bound b. Also choose a small number ϵ.
Normalize the variable x by using the equation w = (x−a)/(b−a). Thus, aw = 0, bw = 1, and
Lw = 1. Set k = 1.

Step 2 Set w1 = aw + (0.618)Lw and w2 = bw − (0.618)Lw. Compute f(w1) or f(w2), depending on whichever of the two was not evaluated earlier. Use the fundamental region-elimination rule to eliminate a region. Set the new aw and bw, and update Lw = bw − aw.

Step 3 Is |Lw| < ϵ? If no, set k = k + 1 and go to Step 2;

Else Terminate.

In this algorithm, the interval reduces to (0.618)^(n−1) after n function evaluations. Thus, the number of function evaluations n required to achieve a desired accuracy ϵ is calculated by solving the following equation:

(0.618)^(n−1) (b − a) = ϵ.

The effective region elimination per function evaluation is exactly 38.2 per cent, which is higher than that in the interval halving method and the Fibonacci search.
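A minimal Python sketch of the algorithm above is given below. It works with the normalized variable w, reuses the surviving interior point from the previous iteration, and so needs only one new function evaluation per iteration after the first. The function name, the default eps, and the iteration cap are illustrative assumptions.

```python
def golden_section_search(f, a, b, eps=1e-3, max_iter=100):
    """Golden section search for the minimum of a unimodal f on (a, b).

    Follows the steps above: normalize to w = (x - a)/(b - a), place two
    points a fraction 0.618 from each end, and apply the fundamental
    region-elimination rule until the normalized interval is shorter than eps.
    """
    tau = 0.618                               # golden number
    fw = lambda w: f(a + w * (b - a))         # objective in the normalized variable
    aw, bw = 0.0, 1.0
    w1 = aw + tau * (bw - aw)                 # right interior point
    w2 = bw - tau * (bw - aw)                 # left interior point
    f1, f2 = fw(w1), fw(w2)
    for _ in range(max_iter):
        if abs(bw - aw) < eps:
            break
        if f1 < f2:                           # eliminate (aw, w2); old w1 becomes the new w2
            aw, w2, f2 = w2, w1, f1
            w1 = aw + tau * (bw - aw)
            f1 = fw(w1)
        else:                                 # eliminate (w1, bw); old w2 becomes the new w1
            bw, w1, f1 = w1, w2, f2
            w2 = bw - tau * (bw - aw)
            f2 = fw(w2)
    return a + 0.5 * (aw + bw) * (b - a)      # midpoint of the final interval, mapped back to x
```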

Example problem:

Consider the following function, perform two iterations, and report the solution, considering an initial interval for x of [0, 5]:
f(x) = x^2 + 54/x

Step 1
We choose a = 0 and b = 5.

Normalize the variable x by using the equation w = (x−a)/(b−a)

The transformation equation becomes w = x/5 .

Thus, aw = 0, bw = 1, and Lw = 1.

Since the golden section method works with a transformed variable w, it is convenient to
work with the transformed function:

f(w) = 25w^2 + 54/(5w).

We set an iteration counter k = 1.

Step 2
We set w1 = 0 + (0.618)(1) = 0.618 and w2 = 1 − (0.618)(1) = 0.382.

The corresponding function values are f(w1) = 27.02 and f(w2) = 31.92.

Since f(w1) < f(w2), the minimum cannot lie at any point smaller than w = 0.382.

Thus, we eliminate the region (a,w2) or (0, 0.382).

Thus, aw = 0.382 and bw = 1.

At this stage, Lw = 1 − 0.382 = 0.618. The region being eliminated after this iteration is
shown in Figure.

Step 3
Since |Lw| is not smaller than ϵ, we set k = 2 and move to Step 2. This completes one
iteration of the golden section search method.

Figure: Iteration 1

Step 2
For the second iteration, we set

w1 = 0.382 + (0.618)(0.618) = 0.764,

w2 = 1 − (0.618)(0.618) = 0.618.

We observe that the point w2 was computed in the previous iteration. Thus, we only need to
compute the function value at w1: f(w1) = 28.73.

Using the fundamental region-elimination rule and observing the relation f(w1) > f(w2), we
eliminate the interval (0.764, 1).

Thus, the new bounds are aw = 0.382 and bw = 0.764, and the new interval is Lw = 0.764 − 0.382 = 0.382, which is incidentally equal to (0.618)^2!

Figure shows the final region after two iterations of this algorithm.

Step 3
Since the obtained interval is not smaller than ϵ, we continue to proceed to Step 2 after
incrementing the iteration counter k to 3.

Step 2
Here, we observe that w1 = 0.618 and w2 = 0.528, of which the point w1 was evaluated
before.

Thus, we compute f(w2) only: f(w2) = 27.43.

We also observe that f(w1) < f(w2) and we eliminate the interval (0.382, 0.528).

The new interval is (0.528, 0.764) and the new range is Lw = 0.764 − 0.528 = 0.236, which is exactly equal to (0.618)^3!

Step 3

Thus, at the end of the third iteration, Lw = 0.236. This way, Steps 2 and 3 may be continued
until the desired accuracy is achieved.

We observe that at each iteration, only one new function evaluation is necessary. After three
iterations, we have performed only four function evaluations.

Thus, the interval reduces to (0.618)^3 or 0.236.
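The iterations above can be reproduced with the golden_section_search sketch given earlier (names and tolerance are illustrative):

```python
# Reproducing the worked example with the earlier sketch
f = lambda x: x**2 + 54.0 / x
print(golden_section_search(f, 0.0, 5.0, eps=1e-3))   # close to the true minimum x* = 3
```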

B) Gradient-based Methods

 In many real-world problems, it is difficult to obtain the information about derivatives, either due to the nature of the problem or due to the computations involved in calculating the derivatives.
 Despite these difficulties, gradient-based methods are popular and are often found to be
effective. However, it is recommended to use these algorithms in problems where the
derivative information is available or can be calculated easily.
 The optimality property that at a local or a global optimum the gradient is zero can be
used to terminate the search process.

i) Newton's Method

 The goal of an unconstrained local optimization method is to achieve a point having as small a derivative as possible.
 In the Newton method, a linear approximation to the first derivative of the function is made at a point using a Taylor series expansion. That expression is equated to zero to find the next guess.
 If the current point at iteration t is x(t), the point in the next iteration is governed by the following simple equation (obtained by considering up to the linear term in the Taylor series expansion):

x(t+1) = x(t) − f′(x(t)) / f′′(x(t))
Algorithm

Step 1
Choose initial guess x(1) and a small number ϵ. Set k = 1. Compute f′(x(1)).

Step 2
Compute f′′(x(k)).

Step 3
Calculate x(k+1) = x(k) − f′(x(k)) / f′′(x(k)). Compute f′(x(k+1)).

Step 4
If |f′(x(k+1))| < ϵ, Terminate;

Else set k = k + 1 and go to Step 2.

Convergence of the algorithm depends on the initial point and the nature of the objective
function. For mathematical functions, the derivative may be easy to compute, but in practice,
the gradients have to be computed numerically.
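A minimal Python sketch of the algorithm, together with central-difference approximations of the derivatives, follows. The step size h = 0.01 is an assumption; with that choice the approximations give values close to those reported in the example below (−52.005 and 110.011 at x = 1). The function names are illustrative.

```python
def numerical_derivatives(f, h=1e-2):
    """Central-difference approximations of f' and f'' (step size h is illustrative)."""
    d1 = lambda x: (f(x + h) - f(x - h)) / (2.0 * h)
    d2 = lambda x: (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2
    return d1, d2

def newton_method(fprime, fsecond, x0, eps=1e-3, max_iter=50):
    """Newton's method for single-variable minimization.

    Iterates x(k+1) = x(k) - f'(x(k)) / f''(x(k)) until |f'(x)| < eps,
    following Steps 1-4 above.  fprime and fsecond may be exact derivatives
    or numerical approximations.
    """
    x = x0
    for _ in range(max_iter):
        d1 = fprime(x)
        if abs(d1) < eps:            # termination criterion: derivative near zero
            break
        x = x - d1 / fsecond(x)      # Newton update
    return x
```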

Example problem

Consider the following function, perform two iterations, and report the solution, considering the starting point x(1) = 1:
f(x) = x^2 + 54/x

Step 1
We choose an initial guess x(1) = 1, a termination factor ϵ = 10^−3, and an iteration counter k = 1.

We compute the derivative numerically (or symbolically, for example using the 'diff' function in MATLAB).

The computed derivative is −52.005, whereas the exact derivative at x(1) is found to be −52.

We accept the computed derivative value and proceed to Step 2.

Step 2
The exact second derivative of the function at x(1) = 1 is found to be 110.

The second derivative is computed numerically as f′′(x(1)) = 110.011, which is close to the exact value.

Step 3
We compute the next guess,

x(2) = x(1) − f′(x(1)) / f′′(x(1)),

= 1 − (−52.005)/(110.011),

= 1.473.

The derivative computed at this point is found to be f′(x(2)) = −21.944.

Step 4

Since |f′(x(2))| is not smaller than ϵ, we increment k to 2 and go to Step 2.

This completes one iteration of Newton's method.

Step 2

We begin the second iteration by computing the second derivative numerically at x(2): f′′
(x(2)) = 35.796.

Step 3

The next guess, computed using the update equation in Step 3, is x(3) = 2.086

and f′(x(3)) = −8.239 computed numerically.

Step 4

Since |f′(x(3))| is not smaller than ϵ, we set k = 3 and move to Step 2.


This is the end of the second iteration.

Step 2

The second derivative at the point is f′′(x(3)) = 13.899.

Step 3

The new point is calculated as x(4) = 2.679 and the derivative is f′(x(4)) = −2.167.
Nine function evaluations were required to obtain this point.

Step 4

Since the absolute value of this derivative is not smaller than ϵ, the search proceeds to Step 2.

Note

After three more iterations, we find that x(7) = 3.0001 and the derivative is f′(x(7)) = −4 × 10^−8, which is small enough to terminate the algorithm. Since, at every iteration, the first and second-order derivatives are calculated at a new point, a total of three function values are evaluated at every iteration.
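The example above can be reproduced with the earlier sketch (names and the step size h are illustrative):

```python
# Reproducing the Newton example with numerically computed derivatives
f = lambda x: x**2 + 54.0 / x
d1, d2 = numerical_derivatives(f, h=1e-2)
print(newton_method(d1, d2, x0=1.0, eps=1e-3))   # converges to about x* = 3
```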

ii) Other Methods


 Secant Method
 Cubic search method

Summary:

 The problem of finding the minimum point can be divided into two phases. At first, the
minimum point of the function needs to be bracketed between a lower and an upper
bound. Secondly, the minimum needs to be found as accurately as possible by keeping
the search effort enclosed in the bounds obtained in the first phase.
 The exhaustive search method requires, in general, more function evaluations to bracket
the minimum but the user has a control over the final bracketing range. On the other
hand, the bounding phase method can bracket the minimum very fast (usually
exponentially fast) but the final bracketing range may be poor.
 Once the minimum is bracketed, region-elimination methods, point estimation methods,
or gradient-based methods may be used to find a close estimate of the minimum.
 Region-elimination methods exclude some portion of the search space at every iteration
by comparing function values at two points. Among the region-elimination methods the
golden section search is the most economical.
 Gradient-based methods use derivative information, which may be computed
numerically.
 For well-behaved objective functions, the convergence to the optimum point is faster with gradient-based methods than with region-elimination methods.
 However, for any arbitrary unimodal objective function, the golden section search
method is more reliable than other methods described in this chapter.

Exercise Problem

Use three iterations and find the value of x which maximizes the given function:
f(x) = 10 + x^3 − 2x − 5 exp(x)
a) Golden section search method in the interval (−5, 5).
b) Newton's method with a starting point x(1) = 0.


