Solutions of Nonlinear Equations
Oliver Mhlanga
f(x) = 0    (1)
f(x) = x^6 − x − 1 = 0    (2)
n     a_n       b_n       c_n       b_n − c_n   f(c_n)
1     1.0000    2.0000    1.5000    0.5000       8.8906
2     1.0000    1.5000    1.2500    0.2500       1.5647
3     1.0000    1.2500    1.1250    0.1250      −0.0977
4     1.1250    1.2500    1.1875    0.0625       0.6167
5     1.1250    1.1875    1.1562    0.0312       0.2333
6     1.1250    1.1562    1.1406    0.0156       0.0616
7     1.1250    1.1406    1.1328    0.0078      −0.0196
8     1.1328    1.1406    1.1367    0.0039       0.0206
9     1.1328    1.1367    1.1348    0.0020       0.0004
10    1.1328    1.1348    1.1338    0.00098     −0.0096
|r − c_n| ≤ c_n − a_n = b_n − c_n = (1/2)(b_n − a_n)    (4)
Bisection Method.
This is the error bound for c_n that is used in step 2 of the earlier algorithm. Since each bisection halves the interval, b_n − a_n = (b − a)/2^{n−1}; combining this with (4), we obtain the further bound

|r − c_n| ≤ (1/2^n)(b − a)

This shows that the iterates c_n → r as n → ∞.
To see how many iterations will be necessary, suppose we want to have

|r − c_n| ≤ ε

This will be satisfied if

(1/2^n)(b − a) ≤ ε

Solving for n, we get

n ≥ ln((b − a)/ε) / ln 2    (5)
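The bound (5) tells us in advance how many halvings are needed. A minimal Python sketch of the method behind the table above (function names and the tolerance are illustrative choices, not from the notes):

```python
import math

def bisect(f, a, b, eps):
    """Bisection: repeatedly halve [a, b], keeping the half where f changes sign."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    # number of iterations predicted by (5): n >= ln((b - a)/eps) / ln 2
    n = math.ceil(math.log((b - a) / eps) / math.log(2))
    for _ in range(n):
        c = (a + b) / 2
        if f(a) * f(c) <= 0:
            b = c          # root lies in [a, c]
        else:
            a = c          # root lies in [c, b]
    return (a + b) / 2

f = lambda x: x**6 - x - 1
root = bisect(f, 1.0, 2.0, 1e-3)   # the example above: root near 1.1347
```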
The tangent line to y = f(x) at (x_0, f(x_0)) satisfies

(y − f(x_0)) / (x − x_0) = f'(x_0)  ⇒  x = x_0 + (y − f(x_0)) / f'(x_0)

Setting y = 0 gives its x-intercept, which we take as the next iterate:

x_1 = x_0 − f(x_0) / f'(x_0)
Newton-Raphson Method.
x_{n+1} = x_n − f(x_n) / f'(x_n),   n = 0, 1, 2, ...    (6)

For f(x) = x^6 − x − 1 this gives

x_{n+1} = x_n − (x_n^6 − x_n − 1) / (6 x_n^5 − 1)    (7)
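Iteration (6) can be sketched in a few lines of Python (the stopping rule and names are my own choices):

```python
def newton(f, fprime, x0, tol=1e-12, nmax=50):
    """Newton-Raphson: x_{n+1} = x_n - f(x_n)/f'(x_n), equation (6)."""
    x = x0
    for _ in range(nmax):
        step = f(x) / fprime(x)
        x = x - step
        if abs(step) < tol:   # stop when the update is negligible
            break
    return x

root = newton(lambda x: x**6 - x - 1,     # f
              lambda x: 6 * x**5 - 1,     # f'
              1.5)                        # starting guess
```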
Convergence analysis
Let α be the root, so that f(α) = 0, and define the error e_k = x_k − α.
Then

e_{k+1} = x_{k+1} − α = x_k − f(x_k)/f'(x_k) − α = (f'(x_k) e_k − f(x_k)) / f'(x_k)

Using Taylor's theorem,

0 = f(α) = f(x_k − e_k) = f(x_k) − f'(x_k) e_k + (1/2) f''(ξ_k) e_k^2,

where ξ_k is between x_k and α, so

f'(x_k) e_k − f(x_k) = (1/2) f''(ξ_k) e_k^2

Hence

|e_{k+1}| = |f''(ξ_k)| / (2 |f'(x_k)|) · e_k^2 ≈ |f''(α)| / (2 |f'(α)|) · e_k^2 ≡ C e_k^2

This is called quadratic convergence.
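The quadratic rate can be observed numerically: successive errors satisfy e_{k+1} ≈ C e_k^2, so the ratio e_{k+1}/e_k^2 should settle near C. A small check in Python (the reference value for the root is assumed, computed separately to high accuracy):

```python
f = lambda x: x**6 - x - 1
fp = lambda x: 6 * x**5 - 1

alpha = 1.134724138      # reference root of x^6 - x - 1 (assumed accurate)
x = 1.5
errors = []
for _ in range(5):       # five Newton steps from x_0 = 1.5
    x = x - f(x) / fp(x)
    errors.append(abs(x - alpha))

# e_{k+1} / e_k^2 should settle near C = |f''(alpha)| / (2 |f'(alpha)|)
ratios = [errors[k + 1] / errors[k] ** 2 for k in range(4)]
```

Each step roughly squares the error, so the number of correct digits approximately doubles per iteration.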
Theorem
2.2 Suppose f'' is continuous and α is a simple zero of f. Then there exist a neighbourhood D of α and a constant C ∈ ℝ such that, when Newton's method is applied with starting point x_0 ∈ D, the sequence x_k generated converges to α and satisfies

|x_{k+1} − α| ≤ C |x_k − α|^2

The Secant Method.
The secant method replaces the derivative in Newton's method by the difference quotient

f'(x_n) ≈ (f(x_n) − f(x_{n−1})) / (x_n − x_{n−1})    (9)

This method requires two initial guesses, x_0 and x_1, but unlike the bisection method the two initial guesses do not need to bracket the root of the equation.
When the secant method converges, it typically converges faster than the bisection method. However, since the derivative is only approximated, as in equation (9), it typically converges more slowly than the Newton-Raphson method.
Example 2.3. Solve f(x) = x^6 − x − 1 = 0.
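Substituting (9) into Newton's formula (6) gives the secant update, sketched here in Python (names and the stopping rule are illustrative):

```python
def secant(f, x0, x1, tol=1e-12, nmax=100):
    """Secant method: Newton's iteration with f' replaced by a difference quotient."""
    for _ in range(nmax):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:                 # flat secant line: cannot continue
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        x0, x1 = x1, x2
        if abs(x1 - x0) < tol:
            break
    return x1

root = secant(lambda x: x**6 - x - 1, 1.0, 2.0)   # Example 2.3
```

Note that only one new function evaluation is needed per step, whereas Newton requires both f and f'.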
Fixed point iteration.
f(x) = 0    (11)
x = g(x)    (12)
For example, the equation x^3 + x − 1 = 0 can be written in the form x = 1/(1 + x^2), giving the iteration

x_{n+1} = 1 / (1 + x_n^2)
Let g(x) = 1/(1 + x^2). Then |g'(x)| = 2x/(1 + x^2)^2, so |g'(1)| = 1/2 < 1.
n     x_n
0     1.00000
1     0.50000
2     0.80000
3     0.60976
4     0.72897
5     0.65300
...   ...
28    0.68233
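The table of iterates can be reproduced with a few lines of Python (a sketch; the iteration count is taken from the table):

```python
g = lambda x: 1 / (1 + x * x)

x = 1.0                      # x_0
for n in range(28):          # 28 iterations, as in the table
    x = g(x)
# x is now close to the fixed point r = g(r),
# which is the real root of x^3 + x - 1 = 0
```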
Convergence Analysis
Our iteration is x_{k+1} = g(x_k). Let r be the exact root, so that r = g(r).
Define the error e_k = |x_k − r|. By the mean value theorem, e_{k+1} = |g(x_k) − g(r)| = |g'(ξ_k)| e_k for some ξ_k between x_k and r.
Observation
- if |g'(ξ)| < 1, then e_{k+1} < e_k: the error decreases (linear convergence);
- if |g'(ξ)| > 1, then e_{k+1} > e_k: the error increases and the iteration diverges.
Pseudo Code
function r = fixedpoint(g, x, tol, nmax)
  r = g(x);            % first iteration
  nit = 1;
  while abs(r - g(r)) > tol && nit < nmax
    r = g(r);
    nit = nit + 1;
  end
end
Corollary
2.1 Suppose g satisfies the conditions of the previous theorem and the sequence {x_k}_{k≥0} is produced by the fixed point iteration x_{k+1} = g(x_k) with an arbitrary initial point x_0 ∈ [a, b]. Then

|e_k| = |x_k − r| ≤ M^k max{x_0 − a, b − x_0}

and

|e_k| = |x_k − r| ≤ (M^k / (1 − M)) |x_1 − x_0|
Proof
Exercise
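The second (a priori) bound of the corollary can be checked numerically for g(x) = 1/(1 + x^2) on [0.5, 1], which g maps into itself with M = max |g'| ≈ 0.65 < 1 (the interval and the reference value r are my own choices, not from the notes):

```python
import math

g = lambda x: 1 / (1 + x * x)
gp = lambda x: -2 * x / (1 + x * x) ** 2

# |g'(x)| = 2x/(1+x^2)^2 attains its maximum on [0.5, 1] at x = 1/sqrt(3)
M = abs(gp(1 / math.sqrt(3)))        # about 0.6495 < 1
r = 0.6823278                         # fixed point, real root of x^3 + x - 1 (assumed)

x0, x1 = 1.0, g(1.0)
x = x1
bound_holds = True
for k in range(2, 20):
    x = g(x)                          # x is now x_k
    bound = M ** k / (1 - M) * abs(x1 - x0)
    bound_holds = bound_holds and abs(x - r) <= bound
```

The bound is computable from x_0 and x_1 alone, before the root is known, which is what makes it useful for deciding how many iterations to run.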