Golden Section Search
When a function is not differentiable, or is difficult to differentiate, we may turn to a search
technique to identify the function's minimum or maximum over a range of interest. Search techniques are
generally straightforward, but can be computationally intense. For instance, if we wanted to search
f(x) = −(x − 4.1)² over the range [3, 5], we could arbitrarily split the range into ten sections and
evaluate the function at each of the resulting points.
From these eleven evaluations, we could guess that the function has a local maximum of approximately –0.01
somewhere in the interval between 4.0 and 4.2. Of course, we could sub-divide this interval of
uncertainty further and continue to iterate until we have located our local maximum with sufficient
accuracy.
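This brute-force grid search can be sketched in Python (a sketch assuming the ten-section split just described; the variable names are ours):

```python
# Grid search for the maximum of f(x) = -(x - 4.1)**2 on [3, 5],
# using ten equal sections (eleven evaluation points).
def f(x):
    return -(x - 4.1) ** 2

a, b, n = 3.0, 5.0, 10
points = [a + i * (b - a) / n for i in range(n + 1)]
values = [f(x) for x in points]
best = max(range(len(points)), key=lambda i: values[i])
print(points[best], round(values[best], 2))   # 4.0 -0.01
```

The best grid point, x = 4.0 with f(x) ≈ –0.01, is exactly the estimate quoted above; the true maximizer x = 4.1 falls between grid points.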
To make things a bit simpler, let us assume that we are solving the following problem:
Maximise f(x)
subject to a ≤ x ≤ b
Unimodal
A function f(x) is unimodal on [a, b] if for some point x* on [a, b], f(x) is strictly increasing on [a, x*] and
strictly decreasing on [x*, b].
Note
Assuming unimodal functions will make the search easier for us, since we know that we are looking for
only one maximum (or minimum) point.
Searches are generally simple – but we want to be smart about how we do them.
We start with an interval of uncertainty (the interval in which we know that our
maximum must lie) equal to [a, b], whose length is b – a.
We pick two points in the interval [a, b] and evaluate the function at these points.
[Figure: interval [a, b] with interior test points x1 < x2]
If f(x1) < f(x2), then the maximum value of the function must be at least f(x2) > f(x1). Since
the function is unimodal, the maximum cannot lie in [a, x1]: if it did, f would be strictly
decreasing on [x1, b], which would force f(x1) > f(x2). Thus,
we may conclude that the maximum is in the range (x1, b].
Note
We cannot assume that the maximum is in the range [x1, x2]. See below:
[Figure: a case where f(x1) < f(x2) but the maximum lies beyond x2, in (x2, b]]
If f(x1) > f(x2), then we know that f(x1) is a lower bound on the maximum value. Since the
function is unimodal, the maximum cannot lie at x2 or to its right: if it did, f would be strictly
increasing on [a, x2], which would force f(x1) < f(x2). Therefore the maximum must lie in the range [a, x2).
If f(x1) = f(x2), then we know the maximum must lie in the range (x1, x2), since the
points x1 and x2 have to be on either side of the maximum.
Given this information, all we have to do is update our interval of uncertainty and
restart the process. For example, if f(x1) < f(x2), and the interval of uncertainty is [a,
b], the new interval becomes (x1, b]. If f(x1) > f(x2), and the interval of uncertainty is
[a, b], the new interval becomes [a, x2). If f(x1) = f(x2), and the interval of uncertainty
is [a, b], the new interval becomes (x1, x2).
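As a Python sketch (the helper name is ours), the three update rules read:

```python
def update_interval(a, b, x1, x2, f1, f2):
    """Update the interval of uncertainty [a, b] for a unimodal
    maximization problem, given f1 = f(x1), f2 = f(x2), a <= x1 < x2 <= b."""
    if f1 < f2:
        return x1, b      # maximum lies in (x1, b]
    if f1 > f2:
        return a, x2      # maximum lies in [a, x2)
    return x1, x2         # f1 == f2: maximum lies in (x1, x2)

# For example, with f(x1) < f(x2) on [0, 1]:
print(update_interval(0.0, 1.0, 0.382, 0.618, -1.0, -0.5))   # (0.382, 1.0)
```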
The values of x1 and x2 are not picked at random. In a golden section search, x1 and x2 are picked such that
each point sub-divides the interval of uncertainty into two parts whose lengths satisfy

1/r = r/(1 − r)

Cross-multiplying gives

1 − r = r²
r² + r − 1 = 0

Taking only the positive root from the quadratic equation, we find

r = (−1 + √5)/2 ≈ 0.618
To select x1, we subtract r(b – a) from b. (x1 is 0.618 of the interval away from b).
To select x2, we add r(b – a) to a. (x2 is 0.618 of the interval away from a).
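With r in hand, the two test points for the opening example's interval [3, 5] can be computed as follows (a Python sketch; variable names are ours):

```python
import math

# Positive root of r**2 + r - 1 = 0, i.e. r = (sqrt(5) - 1) / 2 ~ 0.618.
r = (math.sqrt(5) - 1) / 2

a, b = 3.0, 5.0              # the interval from the opening example
x1 = b - r * (b - a)         # 0.618 of the interval away from b
x2 = a + r * (b - a)         # 0.618 of the interval away from a
print(round(r, 3), round(x1, 6), round(x2, 6))   # 0.618 3.763932 4.236068
```

Note that x1 and x2 are symmetric about the midpoint of [a, b].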
Dividing the line segment up in this way gives us one obvious benefit: each time we update the interval
of uncertainty, we can re-use one of the two test points, which matters whenever it is computationally
expensive to calculate the function value.
For example, assume a search on [0, 1].

[Number line: 0, 0.382, 0.618, 1]

Iteration 0:
a = 0
b = 1
x1 = b – 0.618(b – a) = 1 – 0.618 = 0.382
x2 = a + 0.618(b – a) = 0 + 0.618 = 0.618

If f(x1) > f(x2), the new interval is [0, 0.618):
a = 0
b = 0.618
x1 = b – 0.618(b – a) = 0.618 – 0.618(0.618) = 0.236
x2 = a + 0.618(b – a) = 0 + 0.618(0.618) = 0.382   (the old x1, re-used)

If instead f(x1) < f(x2), the new interval is (0.382, 1]:
a = 0.382
b = 1
x1 = b – 0.618(b – a) = 1 – 0.618(0.618) = 0.618   (the old x2, re-used)
x2 = a + 0.618(b – a) = 0.382 + 0.618(0.618) = 0.764
The Golden Section Search Algorithm
Note that, at each iteration, the length of the interval of uncertainty is Lk = r^k (b0 − a0). Thus, we can determine the number
of iterations necessary to find x* to a given accuracy before starting the process.
Step 1
Set k = 0. Compute x1k = bk – r(bk – ak) and x2k = ak + r(bk – ak), and evaluate f(x1k) and f(x2k).
Step 2
If f(x1k) < f(x2k), set
ak+1 = x1k
bk+1 = bk
Step 3
Otherwise, set
ak+1 = ak
bk+1 = x2k
Step 4
If bk+1 – ak+1 is sufficiently small, stop. Otherwise, set k = k + 1, compute the one new test point
(re-using the surviving point from the previous iteration), and return to Step 2.
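The four steps can be collected into a short Python sketch (assuming a unimodal maximization problem; the function name and signature are ours):

```python
import math

def golden_section_max(f, a, b, tol=1e-3):
    """Golden section search for the maximum of a unimodal f on [a, b].

    After the first iteration, only one new function evaluation is
    needed per iteration, because one test point is always re-used.
    """
    r = (math.sqrt(5) - 1) / 2          # positive root of r**2 + r - 1 = 0
    # Step 1: initial test points and function values
    x1, x2 = b - r * (b - a), a + r * (b - a)
    f1, f2 = f(x1), f(x2)
    while b - a > tol:
        if f1 < f2:
            # Step 2: maximum lies in (x1, b]; old x2 becomes the new x1
            a, x1, f1 = x1, x2, f2
            x2 = a + r * (b - a)
            f2 = f(x2)
        else:
            # Step 3: maximum lies in [a, x2); old x1 becomes the new x2
            b, x2, f2 = x2, x1, f1
            x1 = b - r * (b - a)
            f1 = f(x1)
    # Step 4 (loop condition): stop once the interval is short enough
    return a, b

a, b = golden_section_max(lambda x: -(x - 4.1) ** 2, 3.0, 5.0)
print(round(a, 4), round(b, 4))
```

The re-use works because r² = 1 − r: shrinking [a, b] by the factor r places one of the old test points exactly where a new test point is needed.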
Example
Maximise f(x) = −(x − 4.1)²
subject to 3 ≤ x ≤ 5
Note
To locate the maximum within an interval of uncertainty of length 0.001, we need r^k (b0 − a0) = (0.618)^k (2) ≤ 0.001, so

k = ln(0.001/2) / ln(0.618) ≈ 15.8

Rounding up, 16 iterations are required.
Thus, we would conclude that the function reaches a maximum of approximately 0.0000 somewhere in the range (4.0998,
4.1007).
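The iteration count in the note can be checked numerically (a Python sketch; only the tolerance 0.001 and the interval [3, 5] come from the example):

```python
import math

r = (math.sqrt(5) - 1) / 2

# The interval shrinks by a factor r per iteration, so its width after
# k iterations is r**k * (b0 - a0).  For a final width of at most 0.001
# on [3, 5] (initial width 2):
k = math.ceil(math.log(0.001 / 2) / math.log(r))
print(k)                                  # 16
print(round(r ** k * 2, 6))               # final width, just under 0.001
```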
Note
The Golden Section Search may not be the most efficient search technique. It works well when f(x) is
complicated (we can take advantage of the fact that we need only do one function determination at each
interval other than iteration 0) and unimodal. You should also note that the golden section search could
be tailored to work on functions that are not unimodal and not maximization problems. However, the
way in which we have structured the algorithm above requires a unimodal maximization problem.