
Week 4: NUMERICAL ANALYSIS

Solutions of Equations in One Variable


The Bisection Method

The Bisection Method


Suppose f is continuous on [a, b] and f(a), f(b) have opposite signs
Then there exists an x in (a, b) with f(x) = 0 (Intermediate Value Theorem)
Divide the interval [a, b] by computing the midpoint

p = (a + b)/2

If f(p) = 0, stop: p is a root
If f(p) has the same sign as f(a), continue with the new interval [p, b]
If f(p) has the same sign as f(b), continue with the new interval [a, p]
Repeat until the interval is small enough to approximate x well
The Bisection Method – Implementation

MATLAB Code
function p=bisection(f,a,b,tol)
% BISECTION  Approximate a root of f in [a,b] by repeated bisection.
% Assumes f is continuous and f(a), f(b) have opposite signs.
while 1
    p=(a+b)/2;                     % midpoint of the current interval
    if p-a<tol, break; end         % half the interval length is below tol
    if f(a)*f(p)>0                 % f(p) has the same sign as f(a):
        a=p;                       %   the root lies in [p,b]
    else                           % otherwise the root lies in [a,p]
        b=p;
    end
end
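A minimal usage example (the function handle and tolerance are illustrative, not from the slides): approximating √2 as the root of f(x) = x² − 2 on [1, 2].

p = bisection(@(x) x.^2 - 2, 1, 2, 1e-8)   % returns p ≈ 1.41421356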
Bisection Method

Termination Criteria
Many ways to decide when to stop:

|pN − pN−1| < ε
|pN − pN−1| / |pN| < ε
|f(pN)| < ε

None is perfect; real software typically uses a combination


Convergence

Theorem
Suppose that f ∈ C[a, b] and f(a) · f(b) < 0. The Bisection
method generates a sequence {pn}∞n=1 approximating a zero p of f
with

|pn − p| ≤ (b − a)/2^n, when n ≥ 1.

Convergence Rate
The sequence {pn}∞n=1 converges to p with rate of convergence O(1/2^n):

pn = p + O(1/2^n).
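The error bound can also be used to choose the number of iterations in advance. A short illustrative calculation in MATLAB (the values of a, b, and the tolerance are assumptions):

a = 1; b = 2; tol = 1e-8;
n = ceil(log2((b - a)/tol))   % smallest n with (b - a)/2^n <= tol, here n = 27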
Fixed Points

Fixed Points and Root-Finding


A number p is a fixed point for a given function g if g(p) = p
Given a root-finding problem f (p) = 0, there are many g with
fixed points at p:

g(x) = x − f (x)
g(x) = x + 3f (x)
...

If g has a fixed point at p, then f(x) = x − g(x) has a zero at p
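A quick illustration (the function f and the check are assumptions, not from the slides): for f(x) = x² − 2 with root p = √2, each of the functions above has p as a fixed point.

f  = @(x) x.^2 - 2;
g1 = @(x) x - f(x);        % g1(sqrt(2)) = sqrt(2)
g2 = @(x) x + 3*f(x);      % g2(sqrt(2)) = sqrt(2)
g1(sqrt(2)) - sqrt(2)      % = 0 (up to rounding error)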


Existence and Uniqueness of Fixed Points

Theorem
a. If g ∈ C[a, b] and g(x) ∈ [a, b] for all x ∈ [a, b], then g has a
fixed point in [a, b]
b. If, in addition, g'(x) exists on (a, b) and a positive constant
k < 1 exists with

|g'(x)| ≤ k, for all x ∈ (a, b),

then the fixed point in [a, b] is unique.


Fixed-Point Iteration

Fixed-Point Iteration
For initial p0, generate the sequence {pn}∞n=0 by pn = g(pn−1).
If the sequence converges to p and g is continuous, then

p = lim_{n→∞} pn = lim_{n→∞} g(pn−1) = g(lim_{n→∞} pn−1) = g(p).

MATLAB Implementation
function p=fixedpoint(g,p0,tol)
% FIXEDPOINT  Fixed-point iteration p = g(p) starting from p0.
while 1
    p=g(p0);                        % next iterate
    if abs(p-p0)<tol, break; end    % successive iterates are close enough
    p0=p;
end
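A minimal usage example (the choice g(x) = cos x and the starting value are assumptions): on [0, 1], cos maps the interval into itself and |g'(x)| = |sin x| ≤ sin 1 < 1, so the iteration converges to the unique fixed point.

p = fixedpoint(@cos, 0.5, 1e-8)   % returns p ≈ 0.73909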
Convergence of Fixed-Point Iteration
Theorem (Fixed-Point Theorem)
Let g ∈ C[a, b] be such that g(x) ∈ [a, b], for all x in [a, b].
Suppose, in addition, that g' exists on (a, b) and that a constant
0 < k < 1 exists with

|g'(x)| ≤ k, for all x ∈ (a, b).

Then, for any number p0 in [a, b], the sequence defined by
pn = g(pn−1) converges to the unique fixed point p in [a, b].

Corollary
If g satisfies the hypotheses above, then bounds for the error are
given by

|pn − p| ≤ k^n max{p0 − a, b − p0}
|pn − p| ≤ (k^n / (1 − k)) |p1 − p0|
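A short numerical check of the second bound (the example g(x) = cos x, the interval [0, 1], and k = sin 1 are assumptions, not from the slides):

g = @(x) cos(x);  p0 = 0.5;  k = sin(1);       % |g'(x)| = |sin x| <= sin(1) on [0,1]
p = fixedpoint(g, p0, 1e-12);                  % accurate fixed point p ≈ 0.739085
pn = p0;  for i = 1:10, pn = g(pn); end        % 10 steps of the plain iteration
[abs(pn - p), k^10/(1-k)*abs(g(p0) - p0)]      % actual error vs. a priori bound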
Newton’s Method

Taylor Polynomial Derivation


Suppose f ∈ C²[a, b] and p0 ∈ [a, b] approximates a solution p of
f(x) = 0 with f'(p0) ≠ 0. Expand f about p0:

f(p) = f(p0) + (p − p0) f'(p0) + ((p − p0)² / 2) f''(ξ(p))

Set f(p) = 0 and assume the (p − p0)² term is negligible:

p ≈ p1 = p0 − f(p0)/f'(p0)

This gives the sequence {pn}∞n=0:

pn = pn−1 − f(pn−1)/f'(pn−1)
Newton’s Method

MATLAB Implementation
function p=newton(f,df,p0,tol)
% NEWTON  Newton's method for f(x) = 0; df is a handle for f'.
while 1
    p=p0-f(p0)/df(p0);              % Newton step
    if abs(p-p0)<tol, break; end    % successive iterates are close enough
    p0=p;
end
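A minimal usage example (the function handles and starting value are illustrative): approximating √2 from p0 = 1.

p = newton(@(x) x.^2 - 2, @(x) 2*x, 1, 1e-12)   % returns p ≈ 1.41421356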
Newton’s Method – Convergence

Fixed Point Formulation


Newton's method is fixed-point iteration pn = g(pn−1) with

g(x) = x − f(x)/f'(x)

Theorem
Let f ∈ C²[a, b]. If p ∈ [a, b] is such that f(p) = 0 and f'(p) ≠ 0,
then there exists a δ > 0 such that Newton's method generates a
sequence {pn}∞n=1 converging to p for any initial approximation
p0 ∈ [p − δ, p + δ].
Variations without Derivatives

The Secant Method


Replace the derivative in Newton's method by

f'(pn−1) ≈ (f(pn−2) − f(pn−1)) / (pn−2 − pn−1)

to get

pn = pn−1 − f(pn−1)(pn−1 − pn−2) / (f(pn−1) − f(pn−2))
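A minimal MATLAB sketch of the Secant method in the style of the other implementations (the function name and interface are assumptions, not from the slides):

function p=secant(f,p0,p1,tol)
% SECANT  Newton-type iteration with f'(p_{n-1}) replaced by the
% difference quotient through the last two iterates.
while 1
    p=p1-f(p1)*(p1-p0)/(f(p1)-f(p0));   % secant update
    if abs(p-p1)<tol, break; end
    p0=p1;                               % keep only the two latest iterates
    p1=p;
end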

The Method of False Position (Regula Falsi)


Like the Secant method, but with a test to ensure the root is
bracketed between iterations.
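A hedged sketch of Regula Falsi in the same style (the interface is an assumption); the only change from the Secant sketch above is the sign test that keeps the root bracketed:

function p=falseposition(f,a,b,tol)
% FALSEPOSITION  Secant-type step through (a,f(a)) and (b,f(b)),
% always keeping a sign change between the interval endpoints.
pold=a;
while 1
    p=b-f(b)*(b-a)/(f(b)-f(a));          % secant step through the endpoints
    if abs(p-pold)<tol, break; end
    if f(a)*f(p)<0, b=p; else, a=p; end  % keep the root bracketed
    pold=p;
end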
Order of Convergence

Definition
Suppose {pn}∞n=0 is a sequence that converges to p, with pn ≠ p
for all n. If positive constants λ and α exist with

lim_{n→∞} |pn+1 − p| / |pn − p|^α = λ,

then {pn}∞n=0 converges to p of order α, with asymptotic error
constant λ.
An iterative technique pn = g(pn−1) is said to be of order α if the
sequence {pn}∞n=0 converges to the solution p = g(p) of order α.

Special cases
If α = 1 (and λ < 1), the sequence is linearly convergent
If α = 2, the sequence is quadratically convergent
Fixed Point Convergence

Theorem
Let g ∈ C[a, b] be such that g(x) ∈ [a, b], for all x ∈ [a, b].
Suppose g' is continuous on (a, b) and that 0 < k < 1 exists with
|g'(x)| ≤ k for all x ∈ (a, b). If g'(p) ≠ 0, then for any number p0
in [a, b], the sequence pn = g(pn−1) converges only linearly to the
unique fixed point p in [a, b].

Theorem
Let p be a solution of x = g(x). Suppose g'(p) = 0 and g'' is
continuous with |g''(x)| < M on an open interval I containing p.
Then there exists δ > 0 such that for p0 ∈ [p − δ, p + δ], the sequence
defined by pn = g(pn−1) converges at least quadratically to p, and

|pn+1 − p| < (M/2) |pn − p|².
Newton’s Method as Fixed-Point Problem

Derivation
Seek g of the form

g(x) = x − φ(x)f (x).

Find a differentiable φ giving g'(p) = 0 when f(p) = 0:

g'(x) = 1 − φ'(x)f(x) − f'(x)φ(x)

g'(p) = 1 − φ'(p) · 0 − f'(p)φ(p)

and g'(p) = 0 if and only if φ(p) = 1/f'(p). This gives Newton's
method

pn = g(pn−1) = pn−1 − f(pn−1)/f'(pn−1)
Multiplicity of Zeros

Definition
A solution p of f(x) = 0 is a zero of multiplicity m of f if for
x ≠ p, we can write f(x) = (x − p)^m q(x), where lim_{x→p} q(x) ≠ 0.

Theorem
f ∈ C¹[a, b] has a simple zero at p in (a, b) if and only if f(p) = 0,
but f'(p) ≠ 0.

Theorem
The function f ∈ C^m[a, b] has a zero of multiplicity m at the point p
in (a, b) if and only if

0 = f(p) = f'(p) = f''(p) = · · · = f^(m−1)(p), but f^(m)(p) ≠ 0.


Variants for Multiple Roots

Newton’s Method for Multiple Roots


Define µ(x) = f(x)/f'(x). If p is a zero of f of multiplicity m and
f(x) = (x − p)^m q(x), then

µ(x) = (x − p) · q(x) / (m q(x) + (x − p) q'(x))

also has a zero at p. But q(p) ≠ 0, so

q(p) / (m q(p) + (p − p) q'(p)) = 1/m ≠ 0,

and p is a simple zero of µ. Newton's method can then be applied
to µ to give

g(x) = x − f(x) f'(x) / ([f'(x)]² − f(x) f''(x))
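A minimal MATLAB sketch of this modified Newton iteration (the function name and the need to supply f'' are assumptions, not from the slides):

function p=modnewton(f,df,ddf,p0,tol)
% MODNEWTON  Newton's method applied to mu(x) = f(x)/f'(x), which has a
% simple zero wherever f has a zero of multiplicity m >= 1.
while 1
    p=p0-f(p0)*df(p0)/(df(p0)^2-f(p0)*ddf(p0));   % iteration function g(x) from above
    if abs(p-p0)<tol, break; end
    p0=p;
end

Near a multiple root this restores quadratic convergence, at the cost of evaluating f''.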
Aitken's ∆² Method

Accelerating linearly convergent sequences


Suppose {pn}∞n=0 is linearly convergent with limit p
Assume that

(pn+1 − p)/(pn − p) ≈ (pn+2 − p)/(pn+1 − p)

Solving for p gives

p ≈ (pn+2 pn − pn+1²)/(pn+2 − 2pn+1 + pn) = · · · = pn − (pn+1 − pn)²/(pn+2 − 2pn+1 + pn)

Use this to define a new, more rapidly converging sequence {p̂n}∞n=0:

p̂n = pn − (pn+1 − pn)²/(pn+2 − 2pn+1 + pn)
Delta Notation

Definition
For a given sequence {pn}∞n=0, the forward difference ∆pn is
defined by

∆pn = pn+1 − pn, for n ≥ 0

Higher powers of the operator ∆ are defined recursively by

∆^k pn = ∆(∆^(k−1) pn), for k ≥ 2

Aitken's ∆² method using delta notation

Since ∆²pn = pn+2 − 2pn+1 + pn, we can write

p̂n = pn − (∆pn)²/∆²pn, for n ≥ 0
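A minimal MATLAB sketch of this formula applied to a stored vector of iterates (the helper name and vector interface are assumptions, not from the slides):

function phat=aitken(p)
% AITKEN  Accelerated sequence phat(n) = p(n) - (Delta p(n))^2 / (Delta^2 p(n)),
% computed for every n with p(n), p(n+1), p(n+2) available.
phat=zeros(1,length(p)-2);
for n=1:length(p)-2
    dp  = p(n+1)-p(n);              % forward difference Delta p(n)
    d2p = p(n+2)-2*p(n+1)+p(n);     % second difference Delta^2 p(n)
    phat(n)=p(n)-dp^2/d2p;
end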
Convergence of Aitken's ∆² Method

Theorem
Suppose that {pn}∞n=0 converges linearly to p and that

lim_{n→∞} (pn+1 − p)/(pn − p) < 1

Then {p̂n}∞n=0 converges to p faster than {pn}∞n=0 in the sense that

lim_{n→∞} (p̂n − p)/(pn − p) = 0
Steffensen’s Method

Accelerating fixed-point iteration


Aitken's ∆² method for fixed-point iteration gives

p0, p1 = g(p0), p2 = g(p1), p̂0 = {∆²}(p0), p3 = g(p2), p̂1 = {∆²}(p1), . . .

Steffensen's method assumes p̂0 is a better approximation than p2:

p0^(0), p1^(0) = g(p0^(0)), p2^(0) = g(p1^(0)), p0^(1) = {∆²}(p0^(0)), p1^(1) = g(p0^(1)), . . .

Theorem
Suppose x = g(x) has a solution p with g'(p) ≠ 1. If there exists δ > 0
such that g ∈ C³[p − δ, p + δ], then Steffensen's method gives quadratic
convergence for p0 ∈ [p − δ, p + δ].
Steffensen’s Method

MATLAB Implementation
function p=steffensen(g,p0,tol)
% STEFFENSEN  Fixed-point iteration accelerated with Aitken's Delta^2
% formula applied to p0, g(p0), g(g(p0)) at every step.
while 1
    p1=g(p0);
    p2=g(p1);
    p=p0-(p1-p0)^2/(p2-2*p1+p0);    % Aitken's Delta^2 update
    if abs(p-p0)<tol, break; end
    p0=p;                            % restart the iteration from the accelerated value
end
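A minimal usage example (g(x) = cos x and the starting value are assumptions): the plain fixed-point iteration above converges only linearly here, while Steffensen's method reaches the same fixed point in a handful of iterations.

p = steffensen(@cos, 0.5, 1e-10)   % returns p ≈ 0.7390851332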
Zeros of Polynomials
Polynomial
A polynomial of degree n has the form P(x) = an x^n + an−1 x^(n−1) +
· · · + a1 x + a0 with coefficients ai and an ≠ 0.

Theorem (Fundamental Theorem of Algebra)

If P(x) is a polynomial of degree n ≥ 1 with real or complex
coefficients, then P(x) = 0 has at least one (possibly complex) root.

Corollary
There exist unique x1, . . . , xk and m1, . . . , mk, with m1 + · · · + mk = n and

P(x) = an (x − x1)^m1 (x − x2)^m2 · · · (x − xk)^mk.

Corollary
Let P(x), Q(x) be polynomials of degree at most n. If P(xi) = Q(xi) for
i = 1, 2, . . . , k, with k > n, then P(x) = Q(x).
Horner’s Method

Theorem (Horner's Method)

Let P(x) = an x^n + an−1 x^(n−1) + · · · + a1 x + a0. If bn = an and

bk = ak + bk+1 x0, for k = n − 1, n − 2, . . . , 1, 0,

then b0 = P(x0). Moreover, if

Q(x) = bn x^(n−1) + bn−1 x^(n−2) + · · · + b2 x + b1,

then P(x) = (x − x0)Q(x) + b0.

Computing Derivatives
Differentiation gives

P'(x) = Q(x) + (x − x0)Q'(x) and P'(x0) = Q(x0).


Horner’s Method

MATLAB Implementation
function [y,z]=horner(a,x)
% HORNER  Evaluate y = P(x) and z = P'(x) by Horner's method.
% a = [an, an-1, ..., a1, a0] holds the coefficients in descending powers.
n=length(a)-1;
y=a(1);                 % b_n = a_n
z=a(1);                 % running evaluation of Q(x) = P'(x) at x
for j=2:n
    y=x*y+a(j);         % b_k = a_k + b_{k+1}*x
    z=x*z+y;
end
y=x*y+a(n+1);           % b_0 = P(x)
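A minimal usage example (the coefficient vector is illustrative): evaluating P(x) = x³ − 2x − 5 and P'(x) at x0 = 2, e.g. for a Newton step.

[y,z] = horner([1 0 -2 -5], 2)   % y = P(2) = -1, z = P'(2) = 10
p1 = 2 - y/z                     % one Newton step, p1 = 2.1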
Deflation

Deflation
Compute an approximate root x̂1 using Newton's method. Then

P(x) ≈ (x − x̂1)Q1(x).

Apply the same procedure recursively to Q1(x) until the quadratic
factor Qn−2(x) can be solved directly.
Improve the accuracy of the computed roots with Newton's method on the original P(x).
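A minimal sketch of one deflation step (the helper name deflate, its interface, and the use of horner for the Newton steps are assumptions, not from the slides): find one root, then divide out the linear factor by synthetic division using the b-coefficients from Horner's theorem.

function [r,b]=deflate(a,x0,tol)
% DEFLATE  One approximate root r of the polynomial with coefficients a
% (descending powers), plus the coefficients b of the deflated factor Q1.
r=x0;
while 1
    [y,z]=horner(a,r);             % y = P(r), z = P'(r)
    rnew=r-y/z;                    % Newton step
    if abs(rnew-r)<tol, r=rnew; break; end
    r=rnew;
end
b=zeros(1,length(a)-1);            % synthetic division: P(x) ~ (x-r)*Q1(x)
b(1)=a(1);
for k=2:length(a)-1
    b(k)=a(k)+r*b(k-1);
end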
Müller’s Method

Müller's Method
Similar to the Secant method, but uses a parabola instead of a line
Fit the quadratic polynomial P(x) = a(x − p2)² + b(x − p2) + c
that passes through (p0, f(p0)), (p1, f(p1)), (p2, f(p2)).
Solve P(x) = 0 for p3, choosing the root closest to p2:

p3 = p2 − 2c / (b + sgn(b)·√(b² − 4ac)).

Repeat until convergence


Relatively insensitive to the initial p0, p1, p2, but e.g.
f(pi) = f(pi+1) = f(pi+2) ≠ 0 causes problems (the fitted parabola is a nonzero constant with no roots)
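A minimal MATLAB sketch of Müller's method (the function name and argument order are assumptions, not from the slides); MATLAB's sqrt returns complex values automatically, so the iteration can also locate complex roots:

function p=muller(f,p0,p1,p2,tol)
% MULLER  Fit a parabola through the last three iterates and take its
% root closest to p2 as the next approximation.
while 1
    h1=p1-p0; h2=p2-p1;                 % spacings between the stored iterates
    d1=(f(p1)-f(p0))/h1;                % divided differences for the quadratic
    d2=(f(p2)-f(p1))/h2;                %   a*(x-p2)^2 + b*(x-p2) + c
    a=(d2-d1)/(h2+h1);
    b=d2+h2*a;
    c=f(p2);
    D=sqrt(b^2-4*a*c);                  % may be complex
    if abs(b-D)<abs(b+D), E=b+D; else, E=b-D; end   % larger-magnitude denominator
    p=p2-2*c/E;
    if abs(p-p2)<tol, break; end
    p0=p1; p1=p2; p2=p;                 % shift the three stored iterates
end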
