Optimization Open Method - 3
CHAPTER 13
One-Dimensional Unconstrained Optimization
This section will describe techniques to find the minimum or maximum of a function of
a single variable, f(x). A useful image in this regard is the one-dimensional, “roller coaster”–
like function depicted in Fig. 13.1. Recall from Part Two that root location was complicated
by the fact that several roots can occur for a single function. Similarly, both local and
global optima can occur in optimization. Such cases are called multimodal. In almost all
instances, we will be interested in finding the absolute highest or lowest value of a func-
tion. Thus, we must take care that we do not mistake a local result for the global optimum.
Distinguishing a global from a local extremum can be a very difficult problem for
the general case. There are three usual ways to approach this problem. First, insight into
the behavior of low-dimensional functions can sometimes be obtained graphically. Second, optima can be sought from widely varying, and perhaps randomly generated, starting guesses, with the largest of the results selected as the global optimum. Finally, the starting point associated with a local optimum can be perturbed to see whether the routine returns a better point or always returns to the same point. Although all these approaches can have utility, the fact
is that in some problems (usually the large ones), there may be no practical way to
ensure that you have located a global optimum. However, although you should always
be sensitive to the issue, it is fortunate that there are numerous engineering problems where you can locate the global optimum in an unambiguous fashion.

FIGURE 13.1
A function that asymptotically approaches zero at plus and minus ∞ and has two maximum and two minimum points in the vicinity of the origin. The two points to the right are local optima, whereas the two to the left are global.
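The second of these strategies can be sketched as follows. This is a minimal illustration only; the crude hill-climbing local optimizer, the search interval, and the number of starting guesses are assumptions made for the example, not anything prescribed in this chapter.

import random

def local_maximize(f, x0, step=0.01, tol=1e-6):
    # Crude hill-climbing local optimizer, used only to illustrate the idea.
    # It assumes a local maximum exists near the starting guess.
    x = x0
    while step > tol:
        if f(x + step) > f(x):
            x += step
        elif f(x - step) > f(x):
            x -= step
        else:
            step *= 0.5
    return x

def multistart_maximize(f, xlow, xhigh, nstarts=20, seed=0):
    # Run the local optimizer from randomly scattered starting guesses and
    # keep the best result as the estimate of the global maximum.
    random.seed(seed)
    best = None
    for _ in range(nstarts):
        x = local_maximize(f, random.uniform(xlow, xhigh))
        if best is None or f(x) > f(best):
            best = x
    return best

Even with many starting guesses, such a strategy offers no guarantee for difficult multimodal functions; it simply raises the odds of landing in the basin of the global optimum.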
Just as in root location, optimization in one dimension can be divided into bracket-
ing and open methods. As described in the next section, the golden-section search is an
example of a bracketing method that depends on initial guesses that bracket a single
optimum. This is followed by an alternative approach, parabolic interpolation, which
often converges faster than the golden-section search, but sometimes diverges.
Another method described in this chapter is an open method based on the idea from
calculus that the minimum or maximum can be found by solving f′(x) = 0. This reduces the optimization problem to finding the root of f′(x) using techniques of the sort described
in Part Two. We will demonstrate one version of this approach—Newton’s method.
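As a minimal sketch of this reduction, the quadratic f(x) = −x² + 8x − 12 from Prob. 13.1 at the end of the chapter can be maximized by applying bisection, a bracketing root-location technique of the kind covered in Part Two, to its derivative. The bracketing interval and tolerance below are assumptions chosen for the illustration.

def bisect_root(g, xl, xu, tol=1e-8):
    # Bisection on g(x); assumes g(xl) and g(xu) have opposite signs.
    while (xu - xl) > tol:
        xm = 0.5 * (xl + xu)
        if g(xl) * g(xm) <= 0.0:
            xu = xm          # root lies in the lower subinterval
        else:
            xl = xm          # root lies in the upper subinterval
    return 0.5 * (xl + xu)

# f(x) = -x**2 + 8x - 12, so f'(x) = -2x + 8; its root is the maximum of f.
fprime = lambda x: -2.0 * x + 8.0
xstar = bisect_root(fprime, 0.0, 6.0)   # f'(0) > 0 and f'(6) < 0 bracket the optimum
print(xstar)                            # -> 4.0, where f attains its maximum value of 4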
Finally, an advanced hybrid approach, Brent’s method, is described. This ap-
proach combines the reliability of the golden-section search with the speed of para-
bolic interpolation.
FIGURE 13.6
Graphical description of parabolic interpolation. (The figure shows the true function f(x), the parabolic function fit through x0, x1, and x2, and the parabolic approximation of the maximum at x3, near the true maximum.)
We should mention that just like the false-position method, parabolic interpolation
can get hung up with just one end of the interval converging. Thus, convergence can
be slow. For example, notice that in our example, 1.0000 was an endpoint for most of
the iterations.
This method, as well as others using third-order polynomials, can be formulated into
algorithms that contain convergence tests, careful selection strategies for the points to
retain on each iteration, and attempts to minimize round-off error accumulation.
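As a rough illustration of such an algorithm, the following sketch combines the parabolic step (the vertex of the parabola through the three current points, the quantity computed by Eq. (13.7)) with a simple convergence test and the sequential point-selection rule mentioned in Prob. 13.8(b). The tolerance and iteration limit are assumptions, and no attempt is made at the more careful retention strategies or round-off safeguards just described.

def parabolic_step(f, x0, x1, x2):
    # Locate the vertex of the parabola through (x0, f(x0)), (x1, f(x1)), (x2, f(x2)).
    f0, f1, f2 = f(x0), f(x1), f(x2)
    num = f0 * (x1**2 - x2**2) + f1 * (x2**2 - x0**2) + f2 * (x0**2 - x1**2)
    den = 2.0 * (f0 * (x1 - x2) + f1 * (x2 - x0) + f2 * (x0 - x1))
    return num / den    # den == 0 would mean the three points are collinear

def parabolic_optimize(f, x0, x1, x2, tol=1e-6, max_iter=50):
    # Repeat the parabolic step; select new points sequentially (as in the
    # secant method) by discarding the oldest point each iteration.
    for _ in range(max_iter):
        x3 = parabolic_step(f, x0, x1, x2)
        if abs(x3 - x2) < tol:   # simple convergence test on the change in x
            return x3
        x0, x1, x2 = x1, x2, x3
    return x3

With well-placed initial points this converges quickly, but, as noted above, the purely sequential selection rule can stall or even diverge for poorly chosen points, which is why production algorithms retain points more carefully.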
A similar open approach can be used to find an optimum of f(x) by defining a new function, g(x) = f′(x). Thus, because the same optimal value x* satisfies both

f′(x*) = g(x*) = 0

we can use the following,

x_{i+1} = x_i − f′(x_i) / f″(x_i)        (13.8)
as a technique to find the minimum or maximum of f(x). It should be noted that this
equation can also be derived by writing a second-order Taylor series for f(x) and setting
the derivative of the series equal to zero. Newton’s method is an open method similar to
Newton-Raphson because it does not require initial guesses that bracket the optimum. In
addition, it also shares the disadvantage that it may be divergent. Finally, it is usually a
good idea to check that the second derivative has the correct sign to confirm that the
technique is converging on the result you desire.
f″(x) = −2 sin x − 1/5
Thus, within four iterations, the result converges rapidly on the true value.
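A minimal sketch of the iteration of Eq. (13.8) is given below. The example function f(x) = 2 sin x − x²/10 and the starting guess x0 = 2.5 are assumptions chosen to be consistent with the second derivative shown above; they are not values quoted from this excerpt.

import math

def newton_optimize(fprime, fsecond, x0, tol=1e-6, max_iter=50):
    # Newton's method for an optimum: x_{i+1} = x_i - f'(x_i)/f''(x_i), Eq. (13.8).
    x = x0
    for _ in range(max_iter):
        dx = fprime(x) / fsecond(x)
        x = x - dx
        if abs(dx) < tol:
            break
    return x

# Assumed example: f(x) = 2 sin x - x**2/10, so f'(x) = 2 cos x - x/5
# and f''(x) = -2 sin x - 1/5.
fp  = lambda x: 2.0 * math.cos(x) - x / 5.0
fpp = lambda x: -2.0 * math.sin(x) - 0.2
xstar = newton_optimize(fp, fpp, x0=2.5)
print(xstar, fpp(xstar) < 0)   # the negative second derivative confirms a maximum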
Although Newton’s method works well in some cases, it is impractical for cases
where the derivatives cannot be conveniently evaluated. For these cases, other approaches
that do not involve derivative evaluation are available. For example, a secant-like version
of Newton’s method can be developed by using finite-difference approximations for the
derivative evaluations.
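One way such a secant-like variant might be sketched is shown below, with centered finite-difference approximations standing in for both derivatives; the step size h is an assumption and would need care in practice because of round-off error.

def newton_optimize_fd(f, x0, h=1e-4, tol=1e-6, max_iter=50):
    # Newton iteration of Eq. (13.8) with the derivatives of f replaced by
    # centered finite-difference approximations, so only f itself is evaluated.
    x = x0
    for _ in range(max_iter):
        d1 = (f(x + h) - f(x - h)) / (2.0 * h)             # approximates f'(x)
        d2 = (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)  # approximates f''(x)
        dx = d1 / d2
        x = x - dx
        if abs(dx) < tol:
            break
    return x

Applied to the same assumed example function and starting guess as above, this returns essentially the same optimum as the analytical version, at the cost of three function evaluations per iteration.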
A bigger reservation regarding the approach is that it may diverge based on the
nature of the function and the quality of the initial guess. Thus, it is usually employed
only when we are close to the optimum. As described next, hybrid techniques that use
bracketing approaches far from the optimum and open methods near the optimum attempt
to exploit the strong points of both approaches.
PROBLEMS
13.1 Given the formula

f(x) = −x² + 8x − 12

(a) Determine the maximum and the corresponding value of x for this function analytically (i.e., using differentiation).
(b) Verify that Eq. (13.7) yields the same results based on initial guesses of x0 = 0, x1 = 2, and x2 = 6.
13.2 Given
13.8 Employ the following methods to find the maximum of the function from Prob. 13.7:
(a) Golden-section search (xl = −2, xu = 1, εs = 1%).
(b) Parabolic interpolation (x0 = −2, x1 = −1, x2 = 1, iterations = 4). Select new points sequentially as in the secant method.
(c) Newton's method (x0 = −1, εs = 1%).
13.9 Consider the following function: