
National Higher School of Autonomous Systems Technology

Unit: Numerical Analysis 1

Sect.1 Solution of nonlinear equations


In this chapter we consider one of the most basic problems of numerical approximation,
the root-finding problem. This process involves finding a root (or solution) of an equation
of the form f(x) = 0, for a given function f. A root of this equation is also called a zero of
the function f.

The process of solving an equation numerically is different from the procedure used to
find an analytical solution. The latter is obtained by deriving an expression that has an
exact numerical value, while the former starts from an approximate solution and then
applies a numerical procedure that determines a more accurate one.

The methods used for solving equations f(x) = 0 numerically can be divided into two
groups: bracketing methods and open methods. In bracketing methods, an interval that
includes the solution is identified, so the endpoints of the interval are an upper bound and
a lower bound for the solution. Then, by using a numerical scheme, the size of the interval
is successively reduced until the distance between the endpoints is less than the desired
accuracy of the solution. In open methods, an initial estimate (one point) for the solution
is assumed; its value should be close to the actual solution. Then, by using a numerical
scheme, better values for the solution are calculated.

It is worth mentioning that bracketing methods always converge to the solution, while
open methods are usually more efficient but sometimes might not yield the solution.

[Figure: a bracketing method shrinking the intervals around the actual solution (first, second, third interval) versus an open method producing successive estimates (first, second, third estimate) of the solution.]


We shall consider two bracketing methods: the bisection method and the false position
method; and three open methods: Newton's method, the secant method, and fixed-point
iteration.

A root of an equation is usually computed in two stages. First, we find the location of a
root in the form of a crude approximation of the root. Next, we use an iterative technique
for computing a better value of the root to a desired accuracy in successive approximations/computations. This is done by using an iterative function.

Sect.2 The bisection method


Based on the Intermediate Value Theorem, the bisection method is a root-finding method
which repeatedly bisects an interval and then selects the subinterval in which a root must
lie for further processing. It is an extremely simple and robust method, but it is relatively
slow.

Thus, the bisection method is the simplest method for finding a root of an equation. It
needs two initial estimates x_a and x_b which bracket the root. Let f be a continuous function
defined on the interval [a, b], with f(a) and f(b) of opposite sign. Since f(a)f(b) < 0,
the function f changes sign on the interval [a, b] and, therefore, it has at least one zero
in the interval, i.e. a number p exists in ]a, b[ with f(p) = 0. This is a consequence of the
Intermediate Value Theorem for continuous functions, which asserts that if f is continuous
on [a, b], and if f(a) < y < f(b), then f(x) = y for some x ∈ ]a, b[.

The method calls for a repeated halving of subintervals of [a, b] and, at each step, locating
the half containing p.

To begin, set a_1 = a and b_1 = b, and let p_1 be the midpoint of [a, b], that is,
$$p_1 = a_1 + \frac{b_1 - a_1}{2} = \frac{a_1 + b_1}{2}$$
✓ If f(p_1) = 0, then p = p_1, and we are done.
✓ If f(p_1) ≠ 0, then f(p_1) has the same sign as either f(a_1) or f(b_1).
    – If f(p_1) and f(a_1) have the same sign, then p ∈ ]p_1, b_1[. Set a_2 = p_1 and b_2 = b_1.
    – If f(p_1) and f(a_1) have opposite signs, then p ∈ ]a_1, p_1[. Set a_2 = a_1 and b_2 = p_1.
✓ Then reapply the process to the interval [a_2, b_2].


An interval [a_{n+1}, b_{n+1}] containing an approximation to a root of f(x) = 0 is constructed
from an interval [a_n, b_n] containing the root by first letting
$$p_n = a_n + \frac{b_n - a_n}{2}$$
Then set
$$a_{n+1} = a_n \quad \text{and} \quad b_{n+1} = p_n \qquad \text{if } f(a_n)\, f(p_n) < 0,$$
otherwise we set
$$a_{n+1} = p_n \quad \text{and} \quad b_{n+1} = b_n.$$
We repeat this process until the latest interval (which contains the root) is as small as desired,
say ϵ. It is clear that the interval width is reduced by a factor of one-half at each step,
so at the end of the nth step the new interval [a_n, b_n] has length |b − a|/2^n. We then have
$$\frac{|b - a|}{2^n} < \epsilon \;\Longrightarrow\; \frac{1}{\ln 2}\,\ln\!\left(\frac{|b - a|}{\epsilon}\right) \le n \qquad (1)$$
Equation (1) gives the number of iterations required to achieve an accuracy ϵ. For example,
if |b − a| = 1 and ϵ = 0.001, then it can be seen that n ≥ 10.
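For instance, the bound in (1) can be checked directly; a minimal Python computation (purely illustrative, the variable names are not part of these notes):

    import math

    # smallest n with |b - a| / 2**n < eps, following Equation (1)
    b_minus_a, eps = 1.0, 0.001
    n = math.ceil(math.log(b_minus_a / eps) / math.log(2))
    print(n)                  # 10
    print(b_minus_a / 2**n)   # 0.0009765625 < 0.001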
Theorem 1.
Suppose that f is continuous on [a, b] and f(a)f(b) < 0. The bisection method generates a
sequence {p_n}_{n=1}^∞ approximating a zero p of f with
$$|p_n - p| \le \frac{b - a}{2^n}, \qquad n \ge 1.$$
Proof. For each n ≥ 1, we have
$$b_n - a_n = \frac{1}{2^{n-1}}(b - a), \qquad \text{with } p \in\, ]a_n, b_n[.$$
Since p_n = (a_n + b_n)/2 for all n ≥ 1, it follows that
$$|p_n - p| \le \frac{b_n - a_n}{2} = \frac{b - a}{2^n}.$$

It is worth noting that the bisection method has the important property that it always
converges to a solution, and for that reason it is often used as a starter for more efficient
methods.


Example 1.
Find a real root of the equation f(x) = x^3 − x − 1 = 0.
Since f(1) = −1 is negative and f(2) = 5 is positive, a root lies between 1 and 2 and, therefore,
we take x_0 = 3/2. Then
$$f(x_0) = \frac{27}{8} - \frac{3}{2} - 1 = \frac{7}{8} > 0.$$
Hence the root lies between 1 and 1.5 and we obtain
$$x_1 = \frac{1 + 1.5}{2} = 1.25.$$
We find f(1.25) = −19/64, which is negative. We therefore conclude that the root lies between
1.25 and 1.5. It follows that
$$x_2 = \frac{1.25 + 1.5}{2} = 1.375.$$
The procedure is repeated and the successive approximations are
$$x_3 = 1.3125, \quad x_4 = 1.34375, \quad x_5 = 1.328125, \quad x_6 = 1.3203125, \;\ldots$$
Example 2.
Find a real root of each of the following equations:
(a) 2x^3 − x^2 + x = 1,   (b) x^4 − 2x^3 − 4x^2 + 4x = −4,   (c) (x + 1)^2 e^{x^2 − 2} = 1.

Algorithm 1: Given a function f continuous on the interval [a_0, b_0] and such that f(a_0) f(b_0) < 0.

Algorithm 1: Bisection Method
    input : f, a, b, ϵ
    initialization: fplot(f, [a, b])
    while abs(a − b)/2 > ϵ do
        c = (a + b)/2
        if f(a) ∗ f(c) > 0 then
            a = c
        else
            b = c
        end
    end
    output: display estRoot = (a + b)/2
    % check that f(estRoot) is indeed small

[Figure: the two cases f(c) · f(b) < 0 and f(c) · f(a) < 0, showing which half of the interval is kept at each step.]
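As a concrete illustration, here is a minimal Python sketch of this scheme (the function name bisect and the stopping rule are our own choices, not part of the notes), applied to the equation of Example 1:

    def bisect(f, a, b, eps=1e-6, max_iter=100):
        # assumes f is continuous on [a, b] with f(a) * f(b) < 0
        if f(a) * f(b) > 0:
            raise ValueError("f(a) and f(b) must have opposite signs")
        for _ in range(max_iter):
            c = a + (b - a) / 2        # midpoint written as 'previous point + correction'
            if f(a) * f(c) > 0:        # root lies in [c, b]
                a = c
            else:                      # root lies in [a, c]
                b = c
            if abs(b - a) / 2 < eps:
                break
        return (a + b) / 2

    # Example 1: f(x) = x^3 - x - 1 on [1, 2]
    print(bisect(lambda x: x**3 - x - 1, 1.0, 2.0))   # approximately 1.32472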


The bound for the number of iterations for the bisection method assumes that the calculations
are performed using infinite-digit arithmetic. When implementing the method on
a computer, we need to consider the effects of round-off error. For example, the computation
of the midpoint of the interval [a_n, b_n] should be found from the equation
$$p_n = a_n + \frac{b_n - a_n}{2} \qquad \text{instead of} \qquad p_n = \frac{a_n + b_n}{2}$$
in order to adhere to the general stratagem that in numerical calculations it is best to
compute a quantity by adding a small correction term to a previous approximation.
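As an illustration of this point (a toy setup of our own, not taken from the notes), simulating three-significant-digit arithmetic with Python's decimal module shows how the naive formula can produce a "midpoint" that lies outside the interval:

    from decimal import Decimal, getcontext

    getcontext().prec = 3                 # simulate three significant digits
    a, b = Decimal("0.987"), Decimal("0.989")

    print((a + b) / 2)        # 0.99  -- the rounded sum pushes the result outside [a, b]
    print(a + (b - a) / 2)    # 0.988 -- stays inside [a, b]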

Sect.3 Fixed-Point Iteration


A fixed point for a function is a number at which the value of the function does not change
when the function is applied. Fixed-point results occur in many areas of mathematics,
and are a major tool of economists for proving results concerning equilibria.
Definition 1.
The number p is a fixed point for a given function f if f (p) = p.
Root-finding problems and fixed-point problems are equivalent in the following sense:

✓ Given a root-finding problem f(p) = 0, we can define functions g with a fixed point at
p in a number of ways, for example, as
$$g(x) = x - f(x), \qquad \text{or} \qquad g(x) = x + a\,f(x).$$

✓ Conversely, if the function g has a fixed point at p, then the function defined by
$$f(x) = x - g(x)$$
has a zero at p.

Although the problems we wish to solve are in the root-finding form, the fixed-point form
is easier to analyze, and certain fixed-point choices lead to very powerful root-finding
techniques.
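To make the equivalence concrete, a short Python sketch (our own illustrative choice of f and g, not from the notes) checks that a root of f(x) = x^3 − x − 1 is a fixed point of g(x) = x − f(x):

    # root-finding problem f(x) = 0 recast as a fixed-point problem g(x) = x
    f = lambda x: x**3 - x - 1
    g = lambda x: x - f(x)

    p = 1.3247179572447460   # approximate root of f (e.g. from the bisection method)
    print(f(p))              # close to 0
    print(g(p) - p)          # also close to 0: p is (approximately) a fixed point of g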
Example 3.
Determine any fixed points of the function g(x) = x^2 + 3x − 3.
A fixed point p for g has the property that
$$p = g(p) = p^2 + 3p - 3 \iff p^2 + 2p - 3 = 0 \iff (p + 3)(p - 1) = 0.$$


A fixed point for g occurs precisely when the graph of y = g (x) intersects the graph of y = x,
so g has two fixed points, one at p = 1 and the other at p = −3.
The following theorem gives sufficient conditions for the existence and uniqueness of a
fixed point.
Theorem 2.
1) If g is continuous on [a, b] and g (x) ∈ [a, b] for all x ∈ [a, b], then g has at least one fixed
point in [a, b].
2) If, in addition, g′(x) exists on ]a, b[ and a positive constant k < 1 exists with
$$|g'(x)| \le k, \qquad \forall x \in\, ]a, b[,$$
then there is exactly one fixed point in [a, b].


Proof. 1) If g(a) = a or g(b) = b, then g has a fixed point at an endpoint. If not, then
g(a) > a and g(b) < b. The function h(x) = g(x) − x is continuous on [a, b], with
$$h(a) = g(a) - a > 0 \qquad \text{and} \qquad h(b) = g(b) - b < 0,$$
because g(x) ∈ [a, b]. The Intermediate Value Theorem implies that there exists p ∈ ]a, b[
for which h(p) = 0. This number p is a fixed point for g because
$$0 = h(p) = g(p) - p \quad \text{implies that} \quad g(p) = p.$$
2) Suppose, in addition, that |g'(x)| ≤ k < 1 and that p and q are both fixed points in [a, b].
If p ≠ q, then the Mean Value Theorem implies that a number ξ exists between p and q,
and hence in [a, b], with
$$\frac{g(p) - g(q)}{p - q} = g'(\xi).$$
Thus
$$|p - q| = |g(p) - g(q)| = |g'(\xi)|\,|p - q| \le k\,|p - q| < |p - q|,$$
which is a contradiction. This contradiction must come from the only supposition, p ≠ q.
Hence p = q and the fixed point in [a, b] is unique.

Example 4.
Consider the function f(x) = e^x − 2x − 1 on [1, 2]. We have f(1) < 0 and f(2) > 0, so f
has a zero in [1, 2]. Let g(x) = (e^x − 1)/2; the equation g(x) = x is then equivalent to
f(x) = 0. This g is continuous, but g(2) = (e^2 − 1)/2 ∉ [1, 2], so Theorem 2 does not apply
to it on [1, 2]. Set instead g(x) = ln(2x + 1). This function is continuous and increasing, its
derivative satisfies |g'(x)| = 2/(2x + 1) ≤ 2/3 < 1 on [1, 2], and g([1, 2]) = [ln 3, ln 5] ⊂ [1, 2];
it therefore satisfies the hypotheses of Theorem 2.
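A quick numerical check of these hypotheses (an illustrative sketch; the sampling grid is our own choice) can be done as follows:

    import math

    g = lambda x: math.log(2 * x + 1)
    dg = lambda x: 2 / (2 * x + 1)

    xs = [1 + i / 1000 for i in range(1001)]          # sample points in [1, 2]
    print(min(g(x) for x in xs) >= 1)                 # True: g(x) stays >= 1
    print(max(g(x) for x in xs) <= 2)                 # True: g(x) stays <= 2
    print(max(abs(dg(x)) for x in xs) <= 2 / 3)       # True: |g'(x)| <= 2/3 on [1, 2]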


Example 5.
Show that g(x) = (x^2 − 1)/3 has a unique fixed point on the interval [−1, 1].

Since g'(x) = 2x/3, the function g is continuous and g'(x) exists on [−1, 1]. The maximum
and minimum values of g(x) on [−1, 1] occur either at the endpoints or at the critical point
x = 0. Since g(−1) = 0, g(1) = 0 and g(0) = −1/3, the absolute maximum of g on [−1, 1] is 0,
attained at x = −1 and x = 1, and the absolute minimum is −1/3, attained at x = 0; hence
g(x) ∈ [−1/3, 0] ⊂ [−1, 1] for all x ∈ [−1, 1]. Moreover,
$$|g'(x)| = \left|\frac{2x}{3}\right| \le \frac{2}{3} < 1, \qquad \forall x \in\, ]-1, 1[.$$
So g satisfies all the hypotheses of Theorem 2 and has a unique fixed point in [−1, 1].

For the function in the latter example, the unique fixed point p in the interval [−1, 1] can be
determined algebraically. If
$$p = g(p) = \frac{p^2 - 1}{3}, \qquad \text{then} \qquad p^2 - 3p - 1 = 0,$$
which, by the quadratic formula, implies that
$$p = \frac{1}{2}\left(3 - \sqrt{13}\right).$$
Note that g also has a unique fixed point p = (3 + √13)/2 for the interval [3, 4]. However,
g(4) = 5 and g'(4) = 8/3 > 1, so g does not satisfy the hypotheses of Theorem 2 on [3, 4].
This demonstrates that the hypotheses of Theorem 2 are sufficient to guarantee
a unique fixed point but are not necessary.

Example 6.
Show that Theorem 2 does not ensure a unique fixed point of g(x) = 3^{−x} on the interval
[0, 1], even though a unique fixed point on this interval does exist.
Since g'(x) = −3^{−x} ln 3 < 0 on [0, 1], the function g is strictly decreasing on [0, 1]. So
$$g(1) = \frac{1}{3} \le g(x) \le 1 = g(0), \qquad 0 \le x \le 1.$$
Thus, for x ∈ [0, 1], we have g(x) ∈ [0, 1]. The first part of Theorem 2 ensures that there is
at least one fixed point in [0, 1]. However,
$$g'(0) = -\ln 3 < -1,$$
so the condition |g'(x)| ≤ k < 1 fails on [0, 1] and Theorem 2 cannot be used to establish
uniqueness. But g is strictly decreasing, and it is clear from its graph that the fixed point
must be unique.


The convergence
To find a root p of f(x) = 0, we must therefore construct a sequence (x_r) which satisfies
two criteria:
(i) the sequence (x_r) converges to p;
(ii) x_{r+1} depends directly only on its predecessor x_r.

From (ii), we need to find some function g, so that the sequence (x_r) may be computed
from
$$x_{r+1} = g(x_r), \qquad r = 0, 1, \ldots \qquad (2)$$
given an initial value x_0.

Example 7.
Consider the equation x = e^{−x}, which is already of the form x = g(x). The graphs of y = x
and y = e^{−x} intersect at only one point x = p ∈ ]0, 1[, which is therefore the only root of
x = e^{−x}. Following (2), let us choose x_0 = 0.5 and compute x_r, r = 0, 1, ... Rounding the
iterates to three decimal places at each stage, we obtain the numbers displayed in the
following table.

    r          0       1       2       3       4       5       6       7       8       9
    x_r        0.5     0.607   0.545   0.580   0.560   0.571   0.565   0.568   0.567   0.567
    x_r − p   −0.067   0.040  −0.022   0.013  −0.007   0.004  −0.002   0.001   0.000   0.000
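These iterates can be reproduced with a few lines of Python (an illustrative sketch; the rounding to three decimals mirrors the table above):

    import math

    g = lambda x: math.exp(-x)     # iteration function for x = e^(-x)
    x = 0.5
    iterates = [x]
    for r in range(9):
        x = round(g(x), 3)         # round each iterate to three decimal places
        iterates.append(x)
    print(iterates)
    # [0.5, 0.607, 0.545, 0.58, 0.56, 0.571, 0.565, 0.568, 0.567, 0.567]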

To investigate the behaviour of errors in the general case, we return to (2) and subtract
from it the equation p = g(p). Thus
$$x_{r+1} - p = g(x_r) - g(p) = (x_r - p)\,g'(\xi), \qquad \xi \text{ between } x_r \text{ and } p. \qquad (3)$$
This last step follows from the mean value theorem, assuming that g′ exists on an interval
containing x_r and p. It is easy to see how to impose conditions on g to ensure convergence
of the one-point iterative method. Indeed, the uniqueness condition of Theorem 2, i.e.
|g'(x)| ≤ k < 1, shows that
$$|x_r - p| \le k\,|x_{r-1} - p| \le \ldots \le k^r\,|x_0 - p|, \qquad r \ge 0. \qquad (4)$$

Since 0 < k < 1, k^r → 0 as r → ∞. Therefore, we deduce from (4) that |x_r − p| → 0 as
r → ∞, and the sequence (x_r) converges to the root p. Thus, we have the following.

Corollary 1.
If g satisfies the hypotheses of Theorem 2, then
