
Numerical Analysis

Hassania School for Public Works


SALEM NAFIRI
Email: [email protected]
Website: https://sites.google.com/site/nafirisalem/

01/03/2025
Solving nonlinear equations
F(x)=0
Analytical Solutions

Analytical solutions are available for special equations only.

Analytical solution of ax² + bx + c = 0:

    roots = (−b ± √(b² − 4ac)) / (2a)

No analytical solution is available for: x − e⁻ˣ = 0

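As a quick check of the closed-form solution, here is a short Python sketch (an illustration added to the slides, not part of them) evaluating the quadratic formula; the function name is illustrative:

```python
import cmath  # complex sqrt also handles a negative discriminant

def quadratic_roots(a, b, c):
    """Roots of a*x**2 + b*x + c = 0 via the quadratic formula."""
    d = cmath.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

r1, r2 = quadratic_roots(1, -3, 2)  # x**2 - 3x + 2 = (x - 1)(x - 2)
# r1 ≈ 2, r2 ≈ 1
```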
Graphical Illustration
• Graphical illustrations are useful to provide an initial guess to
be used by other methods

Example: solve e⁻ˣ = x. Plotting y = x and y = e⁻ˣ shows that the curves intersect once; the root lies in [0,1], at approximately x ≈ 0.6.
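In the same spirit as the graphical illustration, one can tabulate f(x) = x − e⁻ˣ on a grid and look for a sign change; a minimal sketch (added here, not from the slides):

```python
import math

f = lambda x: x - math.exp(-x)

# scan [0, 1] in steps of 0.1 and record the first subinterval where f changes sign
bracket = None
for i in range(10):
    a, b = i / 10, (i + 1) / 10
    if f(a) * f(b) < 0:
        bracket = (a, b)
        break
# the sign change occurs between 0.5 and 0.6, consistent with a root near 0.567
```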
Solution Methods
Many methods are available to solve nonlinear equations.

Covered in this course:
• Bisection Method
• Newton's Method
• Fixed point iterations

Not covered here:
• Secant method
• False position method
• Muller's method
• Bairstow's method
• …
Bisection Method
• The Bisection method is one of the simplest methods for finding a zero of a
nonlinear function.
• To use the Bisection method, one needs an initial interval known to
contain a zero of the function.
• The method systematically reduces the interval: it divides the interval into
two equal parts, performs a simple sign test, and, based on the result of the
test, discards the half that cannot contain the zero.
• The procedure is repeated until the desired interval size is obtained.
Intermediate Value Theorem
• Let f(x) be defined on the interval [a,b].
• Intermediate Value Theorem: if a function is continuous on [a,b] and f(a)
and f(b) have different signs, then the function has at least one zero in the
interval [a,b].
Bisection Algorithm
Assumptions:
• f(x) is continuous on [a,b]
• f(a) f(b) < 0

Algorithm:
Loop
1. Compute the midpoint c = (a+b)/2
2. Evaluate f(c)
3. If f(a) f(c) < 0, the new interval is [a, c];
   if f(a) f(c) > 0, the new interval is [c, b]
End loop
Bisection Method

Assumptions:
• Given an interval [a,b]
• f is continuous on [a,b]
• f(a) and f(b) have opposite signs

These assumptions ensure the existence of at least one zero in the interval
[a,b], and the bisection method can be used to obtain a smaller interval
that contains the zero.
Bisection Method
(Figure: successive bisection steps; the left endpoints a0, a1, a2 move toward the root while b0 stays fixed.)
Example
(Figure: signs of f at the left endpoint, midpoint, and right endpoint over three bisection steps: + + −, then + − −, then + + −.)
Algorithm of Bisection Method
Data: f, a, b, eps
Output: alpha (approximation of the root of f on [a,b])
Step 1: c := (a+b)/2 (generation of the sequence (c_n))
Step 2: If |b − c| < eps, then alpha := c; stop.
Step 3: If f(b)·f(c) <= 0 then a := c, else b := c
Step 4: Go to step 1

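A direct Python transcription of this algorithm (a sketch with illustrative names, not code from the slides), tried on f(x) = x − cos(x), which is used again later in the course:

```python
import math

def bisection(f, a, b, eps):
    """Bisection per the algorithm above: stop when |b - c| < eps."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while True:
        c = (a + b) / 2          # Step 1: midpoint
        if abs(b - c) < eps:     # Step 2: interval small enough
            return c
        if f(b) * f(c) <= 0:     # Step 3: the root lies in [c, b]
            a = c
        else:                    # otherwise the root lies in [a, c]
            b = c

alpha = bisection(lambda x: x - math.cos(x), 0.5, 0.9, 1e-8)
# alpha ≈ 0.7390851, the root of x = cos(x)
```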
Example:
Can you use the Bisection method to find a zero of f(x) = x³ − 3x + 1 in the interval [0,1]?

Answer:
f(x) is continuous on [0,1], and f(0)·f(1) = (1)(−1) = −1 < 0.
The assumptions are satisfied, so the Bisection method can be used.
Best Estimate and Error Level
Bisection method obtains an interval that is guaranteed to contain a
zero of the function.

Questions:
• What is the best estimate of the zero of f(x)?
• What is the error level in the obtained estimate?

Best Estimate and Error Level
The best estimate of the zero of the function f(x) after the first
iteration of the Bisection method is the midpoint of the initial
interval:

    Estimate of the zero: r ≈ (a + b)/2
    Error ≤ (b − a)/2
Stopping Criteria
Two common stopping criteria

1. Stop after a fixed number of iterations


2. Stop when the absolute error is less than a specified value

How are these criteria related?

Stopping Criteria

c_n: the midpoint of the interval at the n-th iteration
(c_n is usually used as the estimate of the root).
r: the zero of the function.

After n iterations:

    error = |r − c_n| ≤ (b − a) / 2ⁿ
Stopping Criteria
Let TOL > 0 be a small number.
• One can use either |x_{n+1} − x_n| < TOL, or |x_{n+1} − x_n| / |x_n| < TOL,
or |f(x_{n+1})| < TOL.
• However, I recommend stopping only after BOTH
|x_{n+1} − x_n| < TOL and |f(x_{n+1})| < TOL hold.
• In addition, always cap the number of iterations (iter <= itmax).
• An initial guess x0 is also required.
NOTE: most methods for nonlinear equations are SENSITIVE w.r.t. the initial
guess (in particular, Newton's method).
Convergence Analysis
Given f(x), a, b, and ε: how many iterations are needed such that |x − r| ≤ ε,
where r is the zero of f(x) and x is the bisection estimate (i.e., x = c_k)?

    n ≥ (log(b − a) − log(ε)) / log(2)
Convergence Analysis – Alternative Form

    n ≥ (log(b − a) − log(ε)) / log(2)

or, equivalently,

    n ≥ log₂( width of initial interval / desired error ) = log₂( (b − a) / ε )
Example
a = 6, b = 7, ε = 0.0005.
How many iterations are needed such that |x − r| ≤ ε?

    n ≥ (log(b − a) − log(ε)) / log(2) = (log(1) − log(0.0005)) / log(2) = 10.9658

So n = 11.

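The same iteration count can be computed directly; a small sketch (added here, not from the slides):

```python
import math

a, b, eps = 6.0, 7.0, 0.0005
# smallest n with (b - a) / 2**n <= eps
n = math.ceil((math.log(b - a) - math.log(eps)) / math.log(2))
# n = 11, matching the hand computation
```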
Example
• Use the Bisection method to find a root of the equation x = cos(x)
with error bound (b − a)/2ⁿ < 0.02
(assume the initial interval [0.5, 0.9]).

Question 1: What is f (x) ?


Question 2: Are the assumptions satisfied ?

Bisection Method
Initial interval: a = 0.5, b = 0.9; f(a) = −0.3776, f(b) = 0.2784.

Iteration 1: a = 0.5, c = 0.7, b = 0.9
  f: −0.3776, −0.0648, 0.2784 → new interval [0.7, 0.9]; error ≤ (0.9 − 0.7)/2 = 0.1
Iteration 2: a = 0.7, c = 0.8, b = 0.9
  f: −0.0648, 0.1033, 0.2784 → new interval [0.7, 0.8]; error ≤ (0.8 − 0.7)/2 = 0.05
Iteration 3: a = 0.7, c = 0.75, b = 0.8
  f: −0.0648, 0.0183, 0.1033 → new interval [0.7, 0.75]; error ≤ (0.75 − 0.7)/2 = 0.025
Iteration 4: a = 0.70, c = 0.725, b = 0.75
  f: −0.0648, −0.0235, 0.0183 → new interval [0.725, 0.75]; error ≤ (0.75 − 0.725)/2 = 0.0125
Bisection Method Programming in Scilab
// Scilab program
a = 0.5; b = 0.9;
u = a - cos(a);
v = b - cos(b);
for i = 1:5
    c = (a + b)/2
    fc = c - cos(c)
    if u*fc < 0 then
        b = c; v = fc;
    else
        a = c; u = fc;
    end
end

Output:
c = 0.7000, fc = -0.0648
c = 0.8000, fc = 0.1033
c = 0.7500, fc = 0.0183
c = 0.7250, fc = -0.0235
Bisection Method
• Advantage:
  • A global method: it always converges, no matter how far the starting
    interval is from the actual root.
• Disadvantages:
  • It cannot be used to find roots when the function is tangent to the
    axis and does not pass through it (for example, f(x) = x²).
  • It converges slowly compared with other methods.
Newton’s Method
Newton-Raphson Method
(also known as Newton’s Method)

Given an initial guess of the root x0, the Newton-Raphson method uses
information about the function and its derivative at that point to find
a better guess of the root.

Assumptions:
• f(x) is continuous and its first derivative is known
• An initial guess x0 such that f'(x0) ≠ 0 is given
Newton's Method: Recurrence Formula
• Choose some initial guess x0 such that f'(x0) ≠ 0.
• Generate the sequence by x_{n+1} = x_n + v_{n+1}, where
  f(x_n) + v_{n+1} f'(x_n) = 0; equivalently, x_{n+1} = x_n − f(x_n)/f'(x_n).
Example
Use Newton's Method to find a root of f(x) = e⁻ˣ − x, with f'(x) = −e⁻ˣ − 1.
Use the initial point x0 = 1 and stop after three iterations.

Given f(x), f'(x), x0, with the assumption f'(x0) ≠ 0, iterate:

    x_{i+1} = x_i − f(x_i) / f'(x_i)

// Scilab program
function [FN] = FN(X)
    FN = exp(-X) - X
endfunction

function [FNP] = FNP(X)
    FNP = -exp(-X) - 1
endfunction

X = 1;
for i = 1:3
    X = X - FN(X)/FNP(X);
    FN(X)
end
Results

• X = 0.5379
FNX =0.0461

• X =0.5670
FNX =2.4495e-004

• X = 0.5671
FNX =6.9278e-009

Newton’s Method
• Advantage:
  • Very fast
• Disadvantages:
  • Not a global method
  • For example: Figure 3.3 (root x = 0.5)
  • Another example: Figure 3.4 (root x = 0.05)
  • In these examples, the initial point must be chosen carefully.
  • Newton's method may cycle indefinitely, hopping back and forth between
    two values.
  • For example: consider a function with root x = 0 (figure on the original slide).
How to find the initial value?
• Choose the midpoint of the interval.
• Or use linear interpolation between the endpoint values a and b.
(The formulas and worked examples appear as figures on the original slides.)
Newton’s Method for n dimensional systems
Systems of Nonlinear Equations: n-dimensional case

• Example: a plane, intersected with a sphere, intersected with a more
  complex surface:

    x + y + z = 3
    x² + y² + z² = 5
    eˣ + xy + xz = 1

• Conservation of mass coupled with conservation of energy, coupled with
  the solution to a complex problem.
Vector Notation
• We can rewrite this using vector notation:

    f(x) = 0,  f = (f₁, f₂, …, f_n),  x = (x₁, x₂, …, x_n)

(Source slide: OSU/CIS 541, March 22, 2025)

Newton’s Method for
Non-linear Systems
• Newton's method for nonlinear systems can be written as:

    x⁽ᵏ⁺¹⁾ = x⁽ᵏ⁾ − [f′(x⁽ᵏ⁾)]⁻¹ f(x⁽ᵏ⁾)

where f′(x⁽ᵏ⁾) is the Jacobian matrix.
The Jacobian Matrix
• The Jacobian contains all the partial derivatives of the set of functions:

    J = [ ∂f₁/∂x₁   ∂f₁/∂x₂   …   ∂f₁/∂x_n ]
        [ ∂f₂/∂x₁   ∂f₂/∂x₂   …   ∂f₂/∂x_n ]
        [    ⋮          ⋮      ⋱      ⋮     ]
        [ ∂f_n/∂x₁  ∂f_n/∂x₂  …   ∂f_n/∂x_n ]

• Note that these are all functions and need to be evaluated at a point
  to be useful.
Newton’s Method
• If the Jacobian is non-singular, so that its inverse exists, then we can
  apply this to Newton's method.
• We rarely want to compute the inverse, so instead we look at the problem:

    x⁽ⁱ⁺¹⁾ = x⁽ⁱ⁾ − [f′(x⁽ⁱ⁾)]⁻¹ f(x⁽ⁱ⁾) = x⁽ⁱ⁾ + h⁽ⁱ⁾
Newton’s Method
• Now we have a linear system, and we solve for h:

    J(x⁽ᵏ⁾) h⁽ᵏ⁾ = −f(x⁽ᵏ⁾)

• Repeat until h goes to zero:

    x⁽ⁱ⁺¹⁾ = x⁽ⁱ⁾ + h⁽ⁱ⁾

We will look at solving linear systems later in the course.

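A self-contained Python sketch of this scheme on the earlier three-equation example (plane, sphere, exponential surface). The linear solver, starting point, and function names are illustrative choices, and the third equation's signs are reconstructed so that (0, 1, 2) is an exact solution:

```python
import math

def solve_linear(A, rhs):
    """Solve A x = rhs by Gaussian elimination with partial pivoting."""
    n = len(rhs)
    M = [row[:] + [rhs[i]] for i, row in enumerate(A)]  # augmented matrix
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))  # pivot row
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):  # back substitution
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def F(v):
    x, y, z = v
    return [x + y + z - 3,
            x * x + y * y + z * z - 5,
            math.exp(x) + x * y + x * z - 1]

def jacobian(v):
    x, y, z = v
    return [[1.0, 1.0, 1.0],
            [2 * x, 2 * y, 2 * z],
            [math.exp(x) + y + z, x, x]]

v = [0.1, 1.1, 2.1]                 # starting guess near the root
for _ in range(20):
    h = solve_linear(jacobian(v), [-fi for fi in F(v)])  # J h = -F
    v = [vi + hi for vi, hi in zip(v, h)]                # x <- x + h
# v converges to the solution (0, 1, 2)
```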
Initial Guess
• How do we get an initial guess for the root vector in higher dimensions?
• In 2D, I need to find a region that contains the root.
• Steepest Descent is a more advanced topic not covered in this course.
  It is more stable and can be used to determine an approximate root.
Fixed Point Iteration Method
Other examples: f(x) = x² − x − 2 = 0 can be rearranged as x = g(x) with

    g(x) = x² − 2,  or  g(x) = √(x + 2),  or  g(x) = 1 + 2/x
Theorem of FPI
If g is continuous on [a,b], g([a,b]) ⊆ [a,b], and there is a constant
0 ≤ k < 1 with |g′(x)| ≤ k on [a,b], then g has a unique fixed point α in
[a,b], and the iteration x_{n+1} = g(x_n) converges to α for any x0 in [a,b].
Multiple Roots
• So far, our study of root-finding methods has assumed that the derivative
  of the function does not vanish at the root: f′(r) ≠ 0.

• What happens if the derivative does vanish at the root?
Iterative Solution
Find the root of f(x) = e⁻ˣ − x.

1. Start with a guess, say x1 = 1.
2. Generate, in general, x_{n+1} = e^(−x_n):
   a) x2 = e^(−x1) = e⁻¹ = 0.368
   b) x3 = e^(−x2) = e^(−0.368) = 0.692
   c) x4 = e^(−x3) = e^(−0.692) = 0.500

After a few more iterations we get 0.567 = e^(−0.567).
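The iteration above takes only a few lines of Python (a sketch added here, not from the slides):

```python
import math

x = 1.0                      # initial guess x1 = 1
for _ in range(50):
    x = math.exp(-x)         # x_{n+1} = e^(-x_n)
# converges to x ≈ 0.567143, where x = e^(-x)
```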
Problem: f(x) = 2x² − 4x + 1
• Find a root near x = 1.0 and near x = 2.0.
• Solution, with the rearrangement x = g(x) = ½x² + ¼:
  • Starting at x = 1: x = 0.292893 at the 15th iteration.
  • Starting at x = 2: it will not converge.
  • Why? g′(x) = x; for convergence |g′(x)| < 1 is required.
• With the rearrangement x = g(x) = √(2x − ½):
  • Starting at x = 1: x = 1.707 at iteration 19.
  • Starting at x = 2: x = 1.707 at iteration 12.
  • Why? g′(x) = (2x − ½)^(−1/2), which has magnitude less than 1 near the root.
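The contrast between the two rearrangements can be reproduced in Python (an illustrative sketch; the iteration counts are arbitrary choices, not from the slides):

```python
import math

g1 = lambda x: 0.5 * x * x + 0.25          # g1'(x) = x
g2 = lambda x: math.sqrt(2 * x - 0.5)      # g2'(x) = (2x - 1/2)**(-1/2)

x = 1.0
for _ in range(60):
    x = g1(x)        # converges: |g1'| < 1 near the small root
# x ≈ 0.292893 = 1 - sqrt(2)/2

y = 2.0
for _ in range(60):
    y = g2(y)        # converges: |g2'| < 1 near the large root
# y ≈ 1.707107 = 1 + sqrt(2)/2

z = 2.0
for _ in range(6):
    z = g1(z)        # diverges: |g1'(2)| = 2 > 1
# z has already blown up far past the roots
```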
Examples
Fixed Point Iteration
The equation f(x) = 0, where f(x) = x³ − 7x + 3, may be rearranged to give
x = (x³ + 3)/7.

Intersections of the graphs of y = x and y = (x³ + 3)/7 represent
roots of the original equation x³ − 7x + 3 = 0.

(Figure: graphs of y = x and y = (x³ + 3)/7; the three intersections are the roots.)
Fixed Point Iteration
The rearrangement x = (x³ + 3)/7 leads to the iteration

    x_{n+1} = (x_n³ + 3)/7,  n = 0, 1, 2, 3, …

To find the middle root α, let the initial approximation be x0 = 2.

    x1 = (x0³ + 3)/7 = (2³ + 3)/7 = 1.57143
    x2 = (x1³ + 3)/7 = (1.57143³ + 3)/7 = 0.98292
    x3 = (x2³ + 3)/7 = (0.98292³ + 3)/7 = 0.56423
    x4 = (x3³ + 3)/7 = (0.56423³ + 3)/7 = 0.45423  etc.

The iteration slowly converges to α = 0.441 (to 3 s.f.)
Fixed Point Iteration
The rearrangement x = (x³ + 3)/7 leads to the iteration

    x_{n+1} = (x_n³ + 3)/7,  n = 0, 1, 2, 3, …

For x0 = 2 the iteration converges to the middle root α, since |g′(α)| < 1.

    n   x_n
    0   2
    1   1.57143
    2   0.98292
    3   0.56423
    4   0.45423
    5   0.44196
    6   0.4409
    7   0.44082
    8   0.44081

(Figure: cobweb diagram of y = x and y = (x³ + 3)/7 converging to α.)

α = 0.441 (to 3 s.f.)


Fixed Point Iteration - breakdown
The rearrangement x = (x³ + 3)/7 leads to the iteration

    x_{n+1} = (x_n³ + 3)/7,  n = 0, 1, 2, 3, …

For x0 = 3 the iteration diverges from the upper root α.

    n   x_n
    0   3
    1   4.28571
    2   11.6739
    3   227.702
    4   1686559
    5   6.9E+17

(Figure: cobweb diagram showing the iterates running away from α.)

The iteration diverges because |g′(α)| > 1.


Convergence of FPI
Simple Fixed-Point Iteration: Convergence

• Fixed-point iteration converges if |g′(x)| < 1 near the fixed point
  (the slope of g is flatter than that of the line y = x).

• When the method converges, the error is roughly proportional to, or less
  than, the error of the previous step; the method is therefore called
  "linearly convergent."
More on Convergence
• Graphically the solution is at the
intersection of the two curves. We
identify the point on y2 corresponding to
the initial guess and the next guess
corresponds to the value of the argument
x where y1 (x) = y2 (x).

• Convergence of the simple fixed-point


iteration method requires that the
derivative of g(x) near the root has a
magnitude less than 1.
a) Convergent, 0≤g’<1
b) Convergent, -1<g’≤0
c) Divergent, g’>1
d) Divergent, g’<-1
75
Summary

Bisection
  Pros: Easy, reliable, convergent. One function evaluation per iteration.
  A global method: it always converges, no matter how far you start from
  the actual root. No knowledge of the derivative is needed.
  Cons: Slow. Needs an interval [a,b] containing the root, i.e.,
  f(a)f(b) < 0. Cannot be used to find roots when the function is tangent
  to the axis and does not pass through it (e.g. f(x) = x²).

Newton
  Pros: Very fast (if near the root). Two function evaluations per iteration.
  Cons: May diverge. Not a global method. Needs the derivative and an
  initial guess x0 such that f'(x0) is nonzero.

Fixed point iteration
  Pros: Fast (depends on the choice of g). One function evaluation per
  iteration. Convergent when |g'| < 1. No knowledge of the derivative is needed.
  Cons: Divergent when |g'| > 1. Needs an initial guess x0.
Summary (continued)

Secant
  Pros: Fast (slower than Newton). One function evaluation per iteration.
  No knowledge of the derivative is needed.
  Cons: May diverge. Needs two initial points x0, x1 such that
  f(x0) − f(x1) is nonzero.
Python TP: Solving f(x)=0
Algorithm of Bisection Method
Data: f, a, b, eps
Output: alpha (approximation of the root of f on [a,b])
Step 1: c := (a+b)/2 (generation of the sequence (c_n))
Step 2: If |b − c| < eps, then alpha := c; stop.
Step 3: If f(b)·f(c) <= 0 then a := c, else b := c
Step 4: Go to step 1

TP1: Exercise 1
Program in Python the Bisection method for f(x) = x − cos(x) on [a,b] = [0.5, 0.9].

Reference Scilab code and its output:

a = 0.5; b = 0.9;
u = a - cos(a);
v = b - cos(b);
for i = 1:5
    c = (a + b)/2
    fc = c - cos(c)
    if u*fc < 0 then
        b = c; v = fc;
    else
        a = c; u = fc;
    end
end

Output:
c = 0.7000, fc = -0.0648
c = 0.8000, fc = 0.1033
c = 0.7500, fc = 0.0183
c = 0.7250, fc = -0.0235
Exercise 2:
Exercise 3: Solve f(x) = 0 for
• f(x) = x² − 2, [a,b] = [1,2], TOL = 10⁻⁵
• f(x) = exp(x) − 4x, [a,b] = [1,2.5], TOL = 10⁻⁸
• f(x) = (x − 1)²⁰, [a,b] = [1,2], TOL = 10⁻⁶
• f(x) = x³ + 4x² − 10, [a,b] = [1,2], TOL = 10⁻⁵
Algorithm of Newton Method
Data: f, f', x0, eps, iter, itmax
Output: x1 (approximation of the root of f)
1. iter = 1
2. fpm := f'(x0)
3. If fpm = 0, set iter = 2 and exit
4. x1 = x0 − f(x0)/fpm
5. If |x1 − x0| < eps, then iter = 0, root := x1, stop.
6. If iter = itmax, the method is divergent; set iter = 1 and exit
7. iter = iter + 1; x0 = x1; go to step 2.
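A Python version of this algorithm (a sketch with illustrative names), tried on the TP2 function f(x) = x⁶ − x − 1 from x0 = 2:

```python
def newton(f, fprime, x0, eps=1e-12, itmax=50):
    """Newton iteration following the algorithm above."""
    for _ in range(itmax):
        fpm = fprime(x0)
        if fpm == 0.0:
            raise ZeroDivisionError("f'(x0) = 0: cannot continue")
        x1 = x0 - f(x0) / fpm          # step 4: Newton update
        if abs(x1 - x0) < eps:         # step 5: converged
            return x1
        x0 = x1
    raise RuntimeError("no convergence within itmax iterations")

f = lambda x: x**6 - x - 1
fp = lambda x: 6 * x**5 - 1
alpha = newton(f, fp, 2.0)
# alpha ≈ 1.13472, the root of x**6 = x + 1 in ]1, 2[
```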
TP2:
1. Show that the equation f(x) = x⁶ − x − 1 has a root α in ]1,2[.
2. Explain the Newton-Raphson iteration.
3. Write a Python program to approximate the root α using Newton's method.
4. For x0 = 2, give the results in a table with the following columns:
   x_n; f(x_n); α − x_n; x_{n+1} − x_n.
5. Compare with the bisection method and interpret the results.
Algorithm of Fixed Point Iteration Method
Data: g, x0, eps, iter, itmax
Output: alpha (approximation of the fixed point of g)
1. iter = 1; err = 1 + eps
2. While (iter <= itmax and err > eps) do
   a. xk = g(x0)
   b. err = |xk − x0|
   c. iter = iter + 1
   d. x0 = xk
3. If (iter > itmax), then the method is divergent; else alpha = xk.
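A Python transcription of this algorithm (an illustrative sketch), applied to g(x) = (x³ + 3)/7 from the earlier example:

```python
def fixed_point(g, x0, eps=1e-10, itmax=200):
    """Fixed point iteration following the algorithm above."""
    iter_count, err = 1, 1 + eps
    xk = x0
    while iter_count <= itmax and err > eps:
        xk = g(x0)                 # step 2a
        err = abs(xk - x0)         # step 2b
        iter_count += 1            # step 2c
        x0 = xk                    # step 2d
    if err > eps:
        raise RuntimeError("the method is divergent (itmax reached)")
    return xk

g = lambda x: (x**3 + 3) / 7
alpha = fixed_point(g, 2.0)
# alpha ≈ 0.44081, the middle root of x**3 - 7x + 3 = 0
```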
TP3:
We consider the two functions f(x)=x-x3 and g(x)=x+x3
1. Determine the fixed points of f and g in the interval [-1,1]
2. Study the convergence of the fixed point iteration for f then for g
for an initial point x0 in [-1,1].
3. Write a Python program that performs the fixed point iteration
and illustrates the results obtained in the previous question.
TP4:
We consider the following equation x’ + x = tanh(wx).
To find the equilibrium points relative to this equation, we solve x’=0.
1. Find the fixed points of this equation for w<=1.
2. Find the fixed points of this equation for w>1.
3. Are these fixed points stable or unstable?
4. Conclude.

H. Sebastian Seung, Daniel D. Lee, Ben Y. Reis, and David W. Tank. The autapse: a simple illustration of short-term
analog memory storage by tuned synaptic feedback. Journal of Computational Neuroscience, 9:171–85, 2000.
TP5: Wilkinson polynomial
The goal of this lab is to study how very small perturbations of a
polynomial's coefficients can dramatically affect its roots. This is actually
(for many people) one of the most surprising results in mathematics.
Most of the time such extreme sensitivity does not occur, but when it does,
it poses a difficult problem in numerical analysis.
TP5: Wilkinson polynomial

Observe the following polynomial, whose roots are 1, 2, …, 20:

    w(x) = (x − 1)(x − 2) ⋯ (x − 20)

Now observe the family of polynomials obtained by perturbing one of its
coefficients (classically, the coefficient of x¹⁹) by a small amount.


TP5: The Wilkinson polynomial
The behavior of the roots is illustrated in this image (red is for negative
coefficient perturbation, black for positive)

A summary and further information on this topic can be found here:


https://www.maa.org/sites/default/files/pdf/upload_library/22/Chauvenet/Wilkinson.pdf
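A back-of-the-envelope Python check (a sketch added here, not part of the lab statement) of why the roots are so sensitive: to first order, perturbing a coefficient a_k by δ moves a simple root r by about δ·rᵏ / |w′(r)|. For the root r = 20 and the coefficient of x¹⁹, note that w′(20) = 19!:

```python
import math

delta = 2.0 ** -23                  # Wilkinson's classic tiny perturbation
r, k = 20, 19
# first-order estimate of how far the root r = 20 moves
shift = delta * r**k / math.factorial(19)
# shift ≈ 5.1: a ~1e-7 change in one coefficient moves the root by about 5 units
```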
