T2: Nonlinear Equations and Optimization
Numerical Analysis
Nonlinear equations in 1 variable
• Bracketing methods: f(x) = 0, x ∈ [a, b]
• Fixed point methods: g(x) = x, x ∈ [a, b]
• Newton/secant methods: f(x) = 0, given x_0 near a solution
• Optimization: min f(x) by setting f'(x) = 0
Bracketing methods
Nonlinear equations in 1 variable
• Our problem is to solve
  f(x) = 0
  where x ∈ [a, b] and f ∈ C^0([a, b])
Examples
• Easy: solve
  x^2 − x = 0 on x ∈ [−0.5, 2]
• Hard: solve
  x^4 + x^3 − 2x = −0.5 on x ∈ [0, 1]
• Impossible: solve
  e^x = x^e on x ∈ [1, 2]
Plots
[Figure: plots of x^2 − x, x^4 + x^3 − 2x + 0.5, and e^x − x^e.]
Bracketing Methods
The Bisection Method
• To solve f(x) = 0,
• Given a_0 < b_0 with f(a_0) f(b_0) < 0
  For k = 0, 1, 2, …, lastk
    Examine f(c_k) at c_k = (a_k + b_k)/2
    If f(c_k) = 0, then stop
    If f(a_k) f(c_k) > 0, then set a_{k+1} = c_k, b_{k+1} = b_k
    Else (f(b_k) f(c_k) > 0), set a_{k+1} = a_k, b_{k+1} = c_k
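The loop above translates directly into code. A minimal Python sketch (the name bisect, the tolerance tol, and the iteration cap are illustrative choices, not from the slides):

def bisect(f, a, b, tol=1e-10, max_iter=100):
    """Bisection on [a, b]; assumes f continuous with f(a), f(b) of opposite sign."""
    fa = f(a)
    if fa * f(b) > 0:
        raise ValueError("need f(a) and f(b) to have opposite signs")
    for _ in range(max_iter):
        c = (a + b) / 2
        fc = f(c)
        if fc == 0 or (b - a) / 2 < tol:  # practical stand-in for "f(c_k) = 0, then stop"
            return c
        if fa * fc > 0:                   # sign change lies in [c, b]
            a, fa = c, fc
        else:                             # sign change lies in [a, c]
            b = c
    return (a + b) / 2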
Bisection Theorem
• Let f ∈ C^0[a, b] and suppose f(a) f(b) < 0.
• Suppose the Bisection method is applied with a_0 = a and b_0 = b.
• Then a_k, b_k, and c_k all converge to the same point p*.
• Moreover,
  f(p*) = 0 and |p* − c_k| ≤ |b − a| / 2^{k+1}.
Example
• Use the Bisection method to seek a point x ∈ [0, 2] such that
  x sin(x) ≈ 1
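A quick hedged check using the bisect sketch above (assuming it is in scope):

import math
root = bisect(lambda x: x * math.sin(x) - 1, 0, 2)
print(root)  # ≈ 1.1142; note f(0) = -1 < 0 and f(2) = 2 sin(2) - 1 > 0 bracket the root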
Example
• Suppose you are using the Bisection method to seek a point x ∈ [0, 2] such that
  x sin(x) ≈ 1
• How many iterations would be needed (assuming no premature termination) to achieve 6 significant digits on x?
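One hedged way to answer, using the error bound above: here |b − a| = 2, so |p* − c_k| ≤ 2/2^{k+1} = 2^{−k}. Since the root is near 1.11, 6 significant digits roughly means an absolute error below 0.5 × 10^{−5}, so we need 2^{−k} ≤ 0.5 × 10^{−5}, i.e. k ≥ log_2(2 × 10^5) ≈ 17.6, so about 18 iterations.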
Accelerating the bisection method
• Suppose we are using the bisection method and we come to an iteration where
  f(a_k) = 0.0003 and f(b_k) = −20.9432
• Intuition says the root is probably much closer to a_k than to b_k, but bisection ignores the sizes of f(a_k) and f(b_k); the next method uses them.
Regula Falsi Method
• Let
  L(x) = [(f(b) − f(a)) / (b − a)] (x − a) + f(a)
• Solve L(x) = 0:
  x̂ = a − f(a)(b − a) / (f(b) − f(a))
• Set c = x̂
The Regula Falsi Method
• To solve f(x) = 0,
• Given a_0 < b_0 with f(a_0) f(b_0) < 0
  For k = 0, 1, 2, …, lastk
    Examine f(c_k) at c_k = a_k − f(a_k)(b_k − a_k) / (f(b_k) − f(a_k))
    If f(c_k) = 0, then stop
    If f(a_k) f(c_k) > 0, then set a_{k+1} = c_k, b_{k+1} = b_k
    Else (f(b_k) f(c_k) > 0), set a_{k+1} = a_k, b_{k+1} = c_k
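A matching Python sketch (names and tolerances are illustrative; since the regula falsi bracket need not shrink to zero width, this sketch stops on |f(c)| instead):

def regula_falsi(f, a, b, tol=1e-12, max_iter=100):
    """Regula falsi on [a, b]; assumes f(a) and f(b) have opposite signs."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("need f(a) and f(b) to have opposite signs")
    c = a
    for _ in range(max_iter):
        c = a - fa * (b - a) / (fb - fa)  # root of the line through (a, f(a)), (b, f(b))
        fc = f(c)
        if abs(fc) < tol:
            return c
        if fa * fc > 0:                   # sign change lies in [c, b]
            a, fa = c, fc
        else:                             # sign change lies in [a, c]
            b, fb = c, fc
    return c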
Remark
The same point c_k can be written from the other endpoint:
  c_k = b_k − f(b_k)(b_k − a_k) / (f(b_k) − f(a_k)),
which comes from solving L(x) = 0 with the line written as
  L(x) = [(f(b) − f(a)) / (b − a)] (x − b) + f(b)
Regula Falsi Theorem
• Let f ∈ C^0[a, b] and suppose f(a) f(b) < 0.
• Suppose the Regula Falsi method is applied with a_0 = a and b_0 = b.
• Then c_k converges to some point p*.
• Moreover,
  f(p*) = 0.
Fixed Point Methods
Nonlinear equations in 1 variable
• We continue to look at solving
  f(x) = 0
  where f ∈ C^0[a, b]
Definition: A fixed point of the function g is a point p such that
  g(p) = p
Fixed points
• Any problem
  Find x such that f(x) = 0
• can be rewritten as
  Find x such that g(x) = x
Example
• Show that solving
  x^2 − 3 = 0
  is the same as solving:
  a) x^2 − 3 + x = x
  b) 3/x = x
  c) (x + 3/x)/2 = x
The Fixed Point Algorithm
• Suppose we seek
  g(x) = x
• Given p_0, one thing to check is
  p_1 = g(p_0)
• If p_1 = p_0, then we have solved the problem
• If not, then maybe try
  p_2 = g(p_1), etc.
The Fixed Point Method
• To solve g(x) = x,
• Given p_0
  For k = 0, 1, 2, …, lastk
    Set
      p_{k+1} = g(p_k)
    If p_{k+1} = p_k, then stop
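A minimal Python sketch (a tolerance-based stop stands in for the exact test p_{k+1} = p_k; names are illustrative):

def fixed_point(g, p0, tol=1e-12, max_iter=100):
    """Iterate p_{k+1} = g(p_k) until successive iterates agree."""
    p = p0
    for _ in range(max_iter):
        p_new = g(p)
        if abs(p_new - p) < tol:  # practical stand-in for "p_{k+1} = p_k"
            return p_new
        p = p_new
    return p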
Example
• Let g(x) = x/2 + 1, and p_0 = 0
Example
• Let g(x) = x^3 + x + 1, and p_0 = 0
Theorem
• Let g ∈ C^0[a, b].
• Suppose p_{k+1} = g(p_k) and p_k → p* ∈ [a, b].
• Then g(p*) = p*.
Example
• Consider the function
  g(x) = x^2 + x + 1.
• Prove that the fixed point algorithm cannot converge, regardless of starting point.
Proof: Note g ∈ C^0, so if p_k → p*, then g(p*) = p*.
This would imply
  (p*)^2 + p* + 1 = p*
  (p*)^2 + 1 = 0
  (p*)^2 = −1,
which no real number satisfies.
Example
Consider the problem of solving
  x^2 − x − 3 = 0.
a) Prove that the fixed points of
   g(x) = x^2 − 3 and h(x) = 1 + 3/x
   can both be used to solve the problem.
b) Use the fixed point method on g starting at p_0 = 1 and at p_0 = 2.5 for 10 iterations.
c) Use the fixed point method on h starting at p_0 = 1 and at p_0 = 2.5 for 10 iterations.
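A hedged experiment using the fixed_point sketch above (names assumed from that sketch):

g = lambda x: x**2 - 3
h = lambda x: 1 + 3 / x
for p0 in (1.0, 2.5):
    print(fixed_point(h, p0))  # settles near the root (1 + sqrt(13))/2 ≈ 2.3028
# Iterating g instead fails: from 1.0 it cycles (1, -2, 1, ...), from 2.5 it blows up.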
Theorem
• Let g ∈ C^0[a, b].
• If g(x) ∈ [a, b] for all x ∈ [a, b],
  then g has a fixed point p* ∈ [a, b].
Theorem
• Let g ∈ C^1[a, b].
• Suppose g(x) ∈ [a, b] for all x ∈ [a, b].
• If there exists ρ such that
  0 < ρ < 1
  and
  |g'(x)| ≤ ρ for all x ∈ [a, b],
  then g has a unique fixed point in [a, b].
The Fixed Point Theorem
• Let g ∈ C^1[a, b].
• Suppose g(x) ∈ [a, b] for all x ∈ [a, b].
• If there exists ρ ∈ (0, 1) such that
  |g'(x)| < ρ for all x ∈ [a, b],
  then the fixed point iteration
  p_{k+1} = g(p_k)
  converges to the unique fixed point.
Corollary
• Let g ∈ C^1[a, b].
• Suppose g(x) ∈ [a, b] for all x ∈ [a, b].
• If there exists ρ ∈ (0, 1) such that
  |g'(x)| ≤ ρ for all x ∈ [a, b],
  then the fixed point iteration satisfies
  |p_k − p*| ≤ ρ^k |p_0 − p*|.
• Furthermore, |p_0 − p*| ≤ max{ b − p_0, p_0 − a }.
Corollary
• Let g ∈ C^1[a, b].
• Suppose g(p*) = p* for some p* ∈ (a, b).
• Suppose |g'(p*)| < 1.
• Then the fixed point iteration
  p_{k+1} = g(p_k)
  will converge to p* provided that p_0 is sufficiently close to p*.
Attractor/Repeller
Definition: If |g'(p*)| < 1, then we call p* an attractor.
Definition: If |g'(p*)| > 1, then we call p* a repeller.
Problem
Consider the problem of solving
  x^2 − x − 2 = 0.
Consider the fixed point reformulations
  g(x) = x^2 − 2 and h(x) = 1 + 2/x.
Assuming p_0 is close to the solution, which method is more likely to succeed?
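A hedged worked check using the attractor/repeller test, for the root p* = 2: g'(x) = 2x gives |g'(2)| = 4 > 1 (repeller), while h'(x) = −2/x^2 gives |h'(2)| = 1/2 < 1 (attractor). So the iteration on h is the one likely to succeed.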
Visualizing fixed point iterations
Draw the fixed point algorithm (cobweb diagram) starting at 0.1, 0.5, and 0.75 on the graph below.
[Figure: the lines y = x and y = g(x) over the interval from a = 0 to b = 1.]
Fixed Point vs Bisection
• For the Bisection method we saw
  |p* − c_k| ≤ |b − a| / 2^{k+1}
• For the Fixed Point method we saw that
  |g'(x)| < ρ implies |p_k − p*| ≤ ρ^k |p_0 − p*| ≤ ρ^k |b − a|
• So, if ρ > 1/2, then Bisection is faster
• If ρ < 1/2, then Fixed Point is faster
Rate of convergence
Example: For what k can we ensure
  |p_k − p*| < (1/10) |b − a| ?
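Hedged worked answers: for bisection, |b − a|/2^{k+1} < |b − a|/10 needs 2^{k+1} > 10, i.e. k ≥ 3; for a fixed point iteration with contraction constant ρ, ρ^k |b − a| < |b − a|/10 needs k > ln(10) / ln(1/ρ).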
Newton-Raphson and Secant Methods
Nonlinear equations in 1 variable
• We continue to look at solving
  f(x) = 0
  where f ∈ C^0[a, b]
Joseph Raphson
• The mystery man of mathematics; depending on the source,
  – his date of birth ranges from 1648 to 1668
  – his date of death ranges from 1712 to 1718
• Some sources say a portrait survives, but it might not be him, and I cannot find it
• Rumoured to be the only mathematician that Newton truly trusted
• Definitely became a member of the Royal Society of London on Dec 4th, 1689
Newton-Raphson
L(x) = f(p_k) + f'(p_k)(x − p_k)
Set L(x) = 0:
  f(p_k) + f'(p_k)(x − p_k) = 0
  x = p_k − f(p_k) / f'(p_k)
[Figure: the tangent line to f at p_k crosses the x-axis at p_{k+1}.]
Newton-Raphson Method
• To solve f(x) = 0
• Given p_0
  For k = 0, 1, 2, …, lastk
    Set
      p_{k+1} = p_k − f(p_k) / f'(p_k)
    If f(p_{k+1}) = 0, then stop (success)
    If f'(p_{k+1}) = 0, then stop (failure)
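A minimal Python sketch (names, tolerance, and iteration cap are illustrative choices):

def newton(f, fprime, p0, tol=1e-12, max_iter=50):
    """Newton-Raphson: p_{k+1} = p_k - f(p_k)/f'(p_k)."""
    p = p0
    for _ in range(max_iter):
        fp = f(p)
        if abs(fp) < tol:        # practical stand-in for "f(p) = 0" (success)
            return p
        dfp = fprime(p)
        if dfp == 0:             # "f'(p) = 0" (failure)
            raise ZeroDivisionError("Newton step undefined: f'(p) = 0")
        p -= fp / dfp
    return p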
Theorem
• Suppose f ∈ C^2[a, b].
• Suppose p* ∈ [a, b] with
  f(p*) = 0 and f'(p*) ≠ 0.
• If p_0 is sufficiently close to p*, then Newton-Raphson converges:
  p_k → p*.
Remarks
• In the proof we saw g'(p*) = 0.
• Thus g'(x) will be very small near p*.
• This suggests very fast convergence…
Example
• Apply the Newton-Raphson method to find a solution to x^2 − 8 = 0 starting at p_0 = 1.
• Compute the number of significant digits obtained at
  p_0, p_1, p_2, …, p_6
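Hedged usage of the newton sketch above on this example:

p = newton(lambda x: x * x - 8, lambda x: 2 * x, 1.0)
print(p)  # ≈ 2.8284271247461903, i.e. √8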
Theorem
• Suppose f ∈ C^2[a, b].
• Suppose p* ∈ [a, b] with
  f(p*) = 0 and f'(p*) ≠ 0.
• If p_0 is sufficiently close to p*, then Newton-Raphson converges quadratically to p*:
  |p_{k+1} − p*| ≤ M |p_k − p*|^2
Remarks
• Newton-Raphson needs f'(x) to work.
• We can avoid this by using
  f'(p_k) ≈ (f(p_k) − f(p_{k−1})) / (p_k − p_{k−1})
• This produces the Secant method
  p_{k+1} = p_k − f(p_k)(p_k − p_{k−1}) / (f(p_k) − f(p_{k−1}))
Secant Method
• To solve f(x) = 0
• Given p_0 and p_1
  For k = 1, 2, …, lastk
    Set
      p_{k+1} = p_k − f(p_k)(p_k − p_{k−1}) / (f(p_k) − f(p_{k−1}))
    If f(p_{k+1}) = 0, then stop (success)
    If f(p_k) = f(p_{k+1}), then stop (failure)
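A minimal Python sketch (names and tolerances are illustrative choices):

def secant(f, p0, p1, tol=1e-12, max_iter=50):
    """Secant method: Newton with a finite-difference slope."""
    f0, f1 = f(p0), f(p1)
    for _ in range(max_iter):
        if f1 == f0:                       # "f(p_k) = f(p_{k-1})" (failure)
            raise ZeroDivisionError("secant step undefined: equal f values")
        p2 = p1 - f1 * (p1 - p0) / (f1 - f0)
        f2 = f(p2)
        if abs(f2) < tol:                  # practical stand-in for "f(p_{k+1}) = 0"
            return p2
        p0, f0, p1, f1 = p1, f1, p2, f2
    return p1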
Theorem
• Suppose f ∈ C^2[a, b].
• Suppose p* ∈ (a, b) with
  f(p*) = 0 and f'(p*) ≠ 0.
• If p_0 and p_1 (p_0 ≠ p_1) are sufficiently close to p*, then the Secant Method converges:
  p_k → p*.
Example
• Apply the Secant method to find a solution to x^2 − 8 = 0 starting at p_0 = 1, p_1 = 2.
• Compute the number of significant digits obtained at
  p_2, p_3, …, p_6
Comparison
Relative error solving x^2 − 8 = 0 via different methods

Iteration | Bisection [a_0, b_0] = [1, 3] | Regula Falsi [a_0, b_0] = [1, 3] | Secant p_0 = 1, p_1 = 2 | Newton-Raphson p_0 = 1
    1     | 2.93e-01 | 2.77e-02 | NA       | 5.91e-01
    2     | 1.16e-01 | 8.27e-04 | 1.79e-01 | 1.10e-01
    3     | 2.77e-02 | 2.44e-05 | 2.77e-02 | 5.43e-03
    4     | 1.65e-02 | 7.17e-07 | 2.30e-03 | 1.47e-05
    5     | 5.63e-03 | 2.11e-08 | 3.24e-05 | 1.07e-10
    6     | 5.42e-03 | 6.22e-10 | 3.73e-08 | 1.57e-16
    7     | 1.07e-04 | 1.83e-11 | 6.04e-13 | NA
    8     | 2.66e-03 | 5.39e-13 | 1.57e-16 | NA

Note: the bisection error is based on the location of c_k, which actually gets further from √8 at iteration 8.
Remarks
• Newton-Raphson and Secant both need to start close to the solution to work.
  – This can be dealt with by starting with the Bisection method or Regula Falsi, then switching methods as you get close to a root (a sketch follows below).
• Newton-Raphson and Secant both need f'(p*) ≠ 0.
  – This can be dealt with by seeking a root of f(x)/f'(x).
  – See text pages 81–84.
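A hedged sketch of that hybrid strategy, reusing the newton sketch from earlier (assumed to be in scope; the switch width is an illustrative choice):

def hybrid(f, fprime, a, b, switch_width=1e-2, tol=1e-12):
    """Bisect until the bracket is narrow, then polish with Newton-Raphson."""
    fa = f(a)
    while b - a > switch_width:
        c = (a + b) / 2
        fc = f(c)
        if fa * fc > 0:      # sign change lies in [c, b]
            a, fa = c, fc
        else:                # sign change lies in [a, c]
            b = c
    return newton(f, fprime, (a + b) / 2, tol=tol)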
Remarks
• Bisection and Regula Falsi are linearly convergent
• Fixed point methods are (usually) linearly convergent
• Newton-Raphson is quadratically convergent
  – provided you start close enough to a root
• Secant is superlinearly convergent
  – provided you start close enough to a root
Example
• Consider the problem of finding a root to
  x^2/3 − x − 1
• Solve the problem:
  – Using bisection with a_0 = −2, b_0 = 0
  – Using regula falsi with a_0 = −2, b_0 = 0
  – Using the fixed point of x = x^2/3 − 1 with p_0 = −1
  – Using Newton-Raphson with p_0 = −1
  – Using Secant with p_0 = −1, p_1 = −1.5
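For concreteness, a hedged driver that runs the earlier sketches on this problem (all names, bisect, regula_falsi, fixed_point, newton, secant, assumed from those sketches):

f  = lambda x: x**2 / 3 - x - 1
df = lambda x: 2 * x / 3 - 1
print(bisect(f, -2, 0))
print(regula_falsi(f, -2, 0))
print(fixed_point(lambda x: x**2 / 3 - 1, -1))
print(newton(f, df, -1))
print(secant(f, -1, -1.5))
# all should approach the root (3 - sqrt(21))/2 ≈ -0.7913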
Relative error at various iterations
Iteration | Bisection | Regula Falsi | Fixed Point | Newton-Raphson | Secant
Optimization
Minimizing f(x)
Example
Consider
  min { |x − 1| + |x + 1| : x ∈ [−3, 3] }
and
  argmin { |x − 1| + |x + 1| : x ∈ [−3, 3] }
[Image credit: https://www.toulouse.fr/]
Fermat’s Theorem
Suppose that f ∈ C^1[a, b].
If p* ∈ (a, b) is a local minimizer of f, then f'(p*) = 0.
Minimization via critical points
• If we have access to f'(x), then we can use any of the root finding methods to solve
  f'(x) = 0
Bisection method applied to f'
• To solve min { f(x) : x ∈ [a_0, b_0] }
• Given f'(a_0) < 0 and f'(b_0) > 0
  For k = 0, 1, …, lastk
    Examine f'(c_k) at c_k = (a_k + b_k)/2
    If f'(c_k) = 0, then stop
    If f'(c_k) < 0, then set a_{k+1} = c_k, b_{k+1} = b_k
    Else (f'(c_k) > 0), set a_{k+1} = a_k, b_{k+1} = c_k
Regula Falsi applied to f'
• To solve min { f(x) : x ∈ [a_0, b_0] }
• Given f'(a_0) < 0 and f'(b_0) > 0
  For k = 0, 1, 2, …, lastk
    Examine f'(c_k) at c_k = a_k − f'(a_k)(b_k − a_k) / (f'(b_k) − f'(a_k))
    If f'(c_k) = 0, then stop
    If f'(c_k) < 0, then set a_{k+1} = c_k, b_{k+1} = b_k
    Else (f'(c_k) > 0), set a_{k+1} = a_k, b_{k+1} = c_k
Newton’s Method
• If f'' is given, then we can use Newton-Raphson on f'
• To solve min { f(x) : x ∈ [a_0, b_0] }
• Given p_0 ∈ (a_0, b_0)
  For k = 0, 1, 2, …, lastk
    Set
      p_{k+1} = p_k − f'(p_k) / f''(p_k)
    If f'(p_{k+1}) = 0, then stop (success)
    If f''(p_{k+1}) = 0, then stop (failure)
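In code this is just the earlier newton sketch applied to f' and f''. A hedged example on an assumed test function f(x) = x^4 − 3x^2 + x:

fp  = lambda x: 4 * x**3 - 6 * x + 1   # f'(x) for the assumed f
fpp = lambda x: 12 * x**2 - 6          # f''(x)
x_min = newton(fp, fpp, 1.0)           # converges to a critical point of f near 1.0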
Secant Method applied to 𝑓′
• To solve min { f(x) : x ∈ [a_0, b_0] }
• Given p_0 ∈ (a_0, b_0) and p_1 ∈ (a_0, b_0)
  For k = 1, 2, …, lastk
    Set
      p_{k+1} = p_k − f'(p_k)(p_k − p_{k−1}) / (f'(p_k) − f'(p_{k−1}))
    If f'(p_{k+1}) = 0, then stop (success)
    If f'(p_k) = f'(p_{k+1}), then stop (failure)
What if we do not have f'?
[Figure: points (a, f(a)), (b, f(b)), (c − δ, f(c − δ)), and (c + δ, f(c + δ)) on the graph of f.]
• Evaluate f(c + δ) and f(c − δ):
  – If f(c − δ) > f(c + δ), then the (local) minimizer is likely to the right
  – If f(c − δ) < f(c + δ), then the (local) minimizer is likely to the left
Dichotomous Method without derivatives
• To solve min { f(x) : x ∈ [a_0, b_0] }
  For k = 0, 1, 2, …, lastk
    Examine f(c_k − δ) and f(c_k + δ) at c_k = (a_k + b_k)/2
    If f(c_k − δ) > f(c_k + δ),
      then set a_{k+1} = c_k − δ, b_{k+1} = b_k
    Else, set a_{k+1} = a_k, b_{k+1} = c_k + δ
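A minimal Python sketch (names, δ, and stopping width are illustrative choices):

def dichotomous(f, a, b, delta=1e-7, width_tol=1e-5, max_iter=1000):
    """Dichotomous search: compare f just left/right of the midpoint to shrink the bracket."""
    for _ in range(max_iter):
        if b - a < width_tol:
            break
        c = (a + b) / 2
        if f(c - delta) > f(c + delta):  # minimizer likely to the right
            a = c - delta
        else:                            # minimizer likely to the left
            b = c + delta
    return (a + b) / 2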
Remarks
• The Dichotomous Method without derivatives is basically using an approximate derivative to guide the bracketing
• In the Dichotomous Method without derivatives, clearly δ should be small
• Assuming δ is small, each iteration reduces the bracket by a factor of about 2. Thus, after k iterations, c_k will be approximately within
  (1/2^{k+1}) (b_0 − a_0)
  of a local minimizer.
Problem
• Suppose the Dichotomous method without derivatives is used to seek a (local) minimizer of f(x) over [−3, 7].
• How many iterations are required to guarantee a solution with AE < 10^{−6}?
• How many function evaluations are required to guarantee a solution with AE < 10^{−6}?
  (assume δ is small enough that it can be ignored)
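A hedged worked answer using the bound above: b_0 − a_0 = 10, so we need 10/2^{k+1} < 10^{−6}, i.e. 2^{k+1} > 10^7, which gives k + 1 ≥ 24, so k = 23 iterations; at 2 function evaluations per iteration, that is about 46 evaluations.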
Bracketing without derivatives
• To solve min { f(x) : x ∈ [a_0, b_0] }
  For k = 0, 1, 2, …
    Examine f(c_k) and f(d_k), where
      a_k < c_k < d_k < b_k
    If b_k − a_k < ε_STOP, then stop (return best found point)
    If f(c_k) > f(d_k),
      then set a_{k+1} = c_k, b_{k+1} = b_k
    Else, set a_{k+1} = a_k, b_{k+1} = d_k
[Figure: nested brackets, with [a_1, b_1] inside [a_0, b_0] and [a_2, b_2] inside [a_1, b_1], each containing interior points c_k < d_k.]
Golden section
• In order to get the "smoothest" decrease, we would like the intervals to reduce in a nice ratio
  I_k = b_k − a_k
[Figure: interval [a_k, b_k] with interior points c_k, d_k and the next interval I_{k+1}.]
• To solve min { f(x) : x ∈ [a_0, b_0] }
  For k = 0, 1, 2, …, lastk
    Examine f(c_k) and f(d_k), where
      (d_k − a_k)/(b_k − a_k) = (b_k − c_k)/(b_k − a_k) = (−1 + √5)/2
    If f(c_k) > f(d_k),
      then set a_{k+1} = c_k, b_{k+1} = b_k
    Else (f(c_k) < f(d_k)), set a_{k+1} = a_k, b_{k+1} = d_k
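A hedged Python sketch of golden-section search (names and tolerance are illustrative); it keeps one interior point and its function value from the previous iteration:

import math

def golden_section(f, a, b, tol=1e-8):
    """Golden-section search: reuses one interior evaluation per iteration."""
    invphi = (math.sqrt(5) - 1) / 2       # ≈ 0.618
    c = b - invphi * (b - a)              # so (b - c)/(b - a) = invphi
    d = a + invphi * (b - a)              # so (d - a)/(b - a) = invphi
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc > fd:                       # minimizer lies in [c, b]
            a, c, fc = c, d, fd           # old d becomes the new c: no re-evaluation
            d = a + invphi * (b - a)
            fd = f(d)
        else:                             # minimizer lies in [a, d]
            b, d, fd = d, c, fc           # old c becomes the new d: no re-evaluation
            c = b - invphi * (b - a)
            fc = f(c)
    return (a + b) / 2

# e.g. golden_section(lambda x: (x - 2)**2, 0, 5) should return ≈ 2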
Remarks
• The key trick in Golden section search is that you get to reuse 1 function evaluation each iteration.
• Thus, each iteration requires only 1 new function call (after the first iteration, which needs 2), instead of the 2 used in Dichotomous search.
• Each iteration of Golden search reduces the bracket by a factor of about ρ ≈ 0.618.
• After k iterations, c_k will be within
  ρ^{k+1} (b_0 − a_0)
  of a local minimizer.
Remark
• A good general strategy is to start with a
bracketing method, and then switch to Secant
or Newton once you get close to a solution