NA3 On 5
01/03/2025
Solving nonlinear equations
F(x)=0
Analytical Solutions
Analytical solution of ax^2 + bx + c = 0:

roots = (−b ± √(b^2 − 4ac)) / (2a)
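As a quick check, a minimal Python sketch of this formula (the helper name quadratic_roots is our own, not from the slides; cmath is used so a negative discriminant yields complex roots):

import cmath

def quadratic_roots(a, b, c):
    d = cmath.sqrt(b**2 - 4*a*c)        # square root of the discriminant
    return (-b + d) / (2*a), (-b - d) / (2*a)

print(quadratic_roots(1, -3, 2))        # roots 2 and 1 (printed as complex numbers)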
Graphical Illustration
• Graphical illustrations are useful for providing an initial guess to be used by other methods.
Example: solve e^-x = x (equivalently x e^x = 1).
[Figure: graphs of y = e^-x and y = x intersecting once in [0, 1]; the root is ≈ 0.6.]
Solution Methods
Many methods are available to solve nonlinear equations:
• Bisection method
• Newton's method
• Fixed point iteration
These will be covered. Other methods include:
• Secant method
• False position method
• Muller's method
• Bairstow's method
• …
Bisection Method
• The Bisection method is one of the simplest methods for finding a zero of a nonlinear function.
• To use the Bisection method, one needs an initial interval that is known to contain a zero of the function.
• The method systematically reduces the interval: it divides the interval into two equal parts, performs a simple sign test, and, based on the result, discards the half that cannot contain the zero.
• The procedure is repeated until the desired interval size is obtained.
Intermediate Value Theorem
• Let f(x) be defined on the interval [a, b].
• Intermediate value theorem: if a function is continuous on [a, b] and f(a) and f(b) have different signs, then the function has at least one zero in the interval [a, b].
Bisection Algorithm
Assumptions:
• f(x) is continuous on [a, b]
• f(a) f(b) < 0
Algorithm:
Loop
1. Compute the midpoint c = (a + b)/2
2. Evaluate f(c)
3. If f(a) f(c) < 0 then the new interval is [a, c];
   if f(a) f(c) > 0 then the new interval is [c, b]
End loop
Bisection Method
Assumptions:
Given an interval [a,b]
f is continuous on [a,b]
f(a) and f(b) have opposite signs.
Bisection Method
[Figure: successive bisection intervals; the left endpoints a0, a1, a2 approach the root while b0 remains fixed.]
Example
[Figure: signs of f at the left endpoint, midpoint, and right endpoint over three iterations: (+, +, −), (+, −, −), (+, +, −); at each step the half-interval with the sign change is kept.]
Algorithm of Bisection Method
Data: f, a, b, eps
Output: alpha (approximation of the root of f on [a, b])
Step 1: c := (a + b)/2 (generation of the sequence (cn))
Step 2: If |b − c| < eps then alpha := c; stop.
Step 3: If f(a)·f(c) <= 0 then b := c, else a := c
Step 4: Go to step 1
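A minimal Python sketch of this algorithm (the function name bisect is our choice; the test problem x − cos(x) reappears later in the slides):

import math

def bisect(f, a, b, eps):
    # assumes f is continuous on [a, b] and f(a)*f(b) < 0
    while True:
        c = (a + b) / 2                # step 1: midpoint
        if abs(b - c) < eps:           # step 2: interval small enough
            return c
        if f(a) * f(c) <= 0:           # step 3: sign change in [a, c]
            b = c
        else:
            a = c

print(bisect(lambda x: x - math.cos(x), 0.5, 0.9, 1e-6))   # ~0.7391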
Example:
Can you use the Bisection method to find a zero of f(x) = x^3 − 3x + 1 in the interval [0, 1]?
Answer:
f(x) is continuous on [0, 1], and
f(0) · f(1) = (1)(−1) = −1 < 0.
The assumptions are satisfied, so the Bisection method can be used.
Best Estimate and Error Level
The Bisection method obtains an interval that is guaranteed to contain a zero of the function.
Questions:
• What is the best estimate of the zero of f(x)?
• What is the error level in the obtained estimate?
Best Estimate and Error Level
The best estimate of the zero of the function f(x) after the first iteration of the Bisection method is the midpoint of the initial interval:

Estimate of the zero: r ≈ (a + b) / 2
Error ≤ (b − a) / 2
Stopping Criteria
Two common stopping criteria
Stopping Criteria
After n iterations:

error = |r − cn| ≤ (b − a) / 2^n
Stopping Criteria
Let TOL > 0 be a small number.
• One can use either
  |xn+1 − xn| < TOL or |xn+1 − xn| / |xn| < TOL
  or
  |f(xn+1)| < TOL
• However, I recommend stopping only after BOTH
  |xn+1 − xn| < TOL and |f(xn+1)| < TOL
  or after a maximum number of iterations (iter <= itmax).
• Initial guess x0:
  NOTE: most methods for non-linear equations are SENSITIVE w.r.t. the initial guess... (in particular, Newton's method...)
Convergence Analysis
Given f(x), a, b, and ε:
how many iterations are needed such that |x − r| ≤ ε, where r is the zero of f(x) and x is the bisection estimate (i.e., x = ck)?

n ≥ (log(b − a) − log(ε)) / log(2)
Convergence Analysis – Alternative Form
n ≥ log2((b − a) / ε)
Example
a = 6, b = 7, ε = 0.0005
How many iterations are needed such that |x − r| ≤ ε?
n ≥ log2((7 − 6)/0.0005) = log2(2000) ≈ 10.97, so n = 11.
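This bound can be checked in Python (a one-line sketch using the standard library):

import math
print(math.ceil(math.log2((7 - 6) / 0.0005)))   # 11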
Example
• Use the Bisection method to find a root of the equation x = cos(x), i.e. f(x) = x − cos(x), with (b − a)/2^n < 0.02 (assume the initial interval [0.5, 0.9]).

Iteration 1: c = 0.7; f(0.5) = −0.3776, f(0.7) = −0.0648, f(0.9) = 0.2784, so the root is in [0.7, 0.9]; error bound (0.9 − 0.7)/2 = 0.1.
Iteration 2: c = 0.8; f(0.8) = 0.1033 > 0, so the root is in [0.7, 0.8].
Iteration 3: c = 0.75; f(0.75) = 0.0183 > 0, so the root is in [0.7, 0.75]; error bound (0.75 − 0.7)/2 = 0.025.
Bisection Method Programming in Scilab

a=.5; b=.9;
u=a-cos(a);
v=b-cos(b);
for i=1:5
  c=(a+b)/2
  fc=c-cos(c)
  if u*fc<0 then
    b=c; v=fc;
  else
    a=c; u=fc;
  end
end

Output:
c = 0.7000, fc = -0.0648
c = 0.8000, fc = 0.1033
c = 0.7500, fc = 0.0183
c = 0.7250, fc = -0.0235
Bisection Method
• Advantage:
  • A global method: it always converges, provided the initial interval contains a sign change.
• Disadvantages:
  • It cannot be used to find roots when the function is tangent to the axis and does not cross it, for example f(x) = x^2 at x = 0 (no sign change).
  • It converges slowly compared with other methods.
Newton's Method
Newton-Raphson Method (also known as Newton's Method)
Assumptions:
• f(x) is continuous and its first derivative f'(x) is known
• An initial guess x0 such that f'(x0) ≠ 0 is given
Recurrence formula
• Choose some initial guess x0 such that f'(x0) ≠ 0.
• Generate the sequence by xn+1 = xn + vn+1, where f(xn) + vn+1 f'(xn) = 0; equivalently,

xn+1 = xn − f(xn) / f'(xn)
Example
Use Newton's method to find a root of f(x) = e^-x − x, with f'(x) = −e^-x − 1. Use the initial point x0 = 1 and stop after three iterations.

Given f(x), f'(x), x0; assumption: f'(x0) ≠ 0.
For i = 0 : n, compute xi+1 = xi − f(xi)/f'(xi).

// Scilab program
function [FN] = FN(X)
  FN = exp(-X) - X
endfunction

function [FNP] = FNP(X)
  FNP = -exp(-X) - 1
endfunction

X=1;
for i=1:3
  X=X-FN(X)/FNP(X);
  FN(X)
end
Results
• X = 0.5379, FNX = 0.0461
• X = 0.5670, FNX = 2.4495e-004
• X = 0.5671, FNX = 6.9278e-009
Newton's Method
• Advantage:
  • Very fast (quadratic convergence near a simple root)
• Disadvantage:
  • Not a global method: convergence depends on the initial guess
  • For example: Figure 3.3 (root x = 0.5)
How to find the initial value?
• Choose the midpoint of the interval.
• For example, if an interval [a, b] containing the root is known, take x0 = (a + b)/2.
Newton’s Method for n dimensional systems
Systems of Non-linear Equations: n-dimensional case
Vector Notation
• We can rewrite this using vector notation:

f(x) = 0,  f = (f1, f2, …, fn),  x = (x1, x2, …, xn)

• Newton's method becomes

x^(k+1) = x^(k) − [f'(x^(k))]^-1 f(x^(k))

where f'(x^(k)) is the Jacobian matrix of f evaluated at x^(k).
The Jacobian Matrix
• The Jacobian contains all the partial derivatives of the set of functions:

J = | ∂f1/∂x1  ∂f1/∂x2  …  ∂f1/∂xn |
    | ∂f2/∂x1  ∂f2/∂x2  …  ∂f2/∂xn |
    |    ⋮        ⋮      ⋱     ⋮    |
    | ∂fn/∂x1  ∂fn/∂x2  …  ∂fn/∂xn |

• Note that these are all functions and need to be evaluated at a point to be useful.
Newton’s Method
• If the Jacobian is non-singular, so that its inverse exists, then we can apply it to Newton's method.
• We rarely want to compute the inverse explicitly, so instead we rewrite the problem:

x^(i+1) = x^(i) − [J(x^(i))]^-1 f(x^(i)) = x^(i) + h^(i)
Newton’s Method
• Now we have a linear system, which we solve for h:

J(x^(k)) h^(k) = −f(x^(k))

• Update x^(i+1) = x^(i) + h^(i) and repeat until h goes to zero.
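A minimal Python sketch of this scheme for a 2-dimensional system, using numpy; the example system F and its Jacobian J are our own, chosen for illustration:

import numpy as np

def F(x):
    # example system: x0^2 + x1^2 - 4 = 0 and x0*x1 - 1 = 0
    return np.array([x[0]**2 + x[1]**2 - 4, x[0]*x[1] - 1])

def J(x):
    # Jacobian of F
    return np.array([[2*x[0], 2*x[1]],
                     [x[1],   x[0]]])

x = np.array([2.0, 0.0])                  # initial guess
for _ in range(20):
    h = np.linalg.solve(J(x), -F(x))      # solve J(x) h = -F(x)
    x = x + h
    if np.linalg.norm(h) < 1e-12:         # stop when the step is tiny
        break
print(x)   # ~ (1.9319, 0.5176)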
Initial Guess
• How do we get an initial guess for the root vector in higher dimensions?
• In 2D, we need to find a region that contains the root.
• Steepest Descent is a more advanced topic not covered in this course. It is more stable and can be used to determine an approximate root.
Fixed Point Iteration Method
Other examples: f(x) = x^2 − x − 2 = 0 can be rearranged into the form x = g(x) with
g(x) = x^2 − 2,
or g(x) = √(x + 2),
or g(x) = 1 + 2/x.
Example: [Figure: the fixed point (root) at the intersection of y = g(x) and y = x.]
Theorem of FPI
Theorem (fixed point iteration): let g be continuous on [a, b] with g([a, b]) ⊆ [a, b]; then g has at least one fixed point in [a, b]. If, in addition, |g'(x)| ≤ k < 1 on [a, b], the fixed point α is unique and the iteration xn+1 = g(xn) converges to α for every x0 in [a, b].
Multiple Roots
• So far our study of root-finding methods has assumed that the derivative of the function does not vanish at the root: f(r) = 0 with f'(r) ≠ 0. When f'(r) = 0 as well, r is a multiple root.
Iterative Solution
Find the root of f(x) = e^-x − x, i.e. solve x = e^-x.
In general: xn+1 = e^-xn.
After a few more iterations we will get 0.567 ≈ e^-0.567.
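A quick Python sketch of this iteration (the starting point x = 1.0 is our choice):

import math

x = 1.0
for n in range(20):
    x = math.exp(-x)    # fixed point iteration xn+1 = e^-xn
print(x)                # ~0.567, where x = e^-x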
Problem: f(x) = 2x^2 − 4x + 1
• Find a root near x = 1.0 and x = 2.0
• Solution: [Figure: graphs of y = x and y = (x^3 + 3)/7 for −5 ≤ x ≤ 5; their intersections give the roots of x^3 − 7x + 3 = 0.]
Fixed Point Iteration
The rearrangement x = (x^3 + 3)/7 leads to the iteration

xn+1 = (xn^3 + 3)/7,  n = 0, 1, 2, 3, …

To find the middle root α, take the initial approximation x0 = 2.

x1 = (x0^3 + 3)/7 = (2^3 + 3)/7 = 1.57143
x2 = (x1^3 + 3)/7 = (1.57143^3 + 3)/7 = 0.98292
x3 = (x2^3 + 3)/7 = (0.98292^3 + 3)/7 = 0.56423
x4 = (x3^3 + 3)/7 = (0.56423^3 + 3)/7 = 0.45423, etc.
n   xn
0   2
1   1.57143
2   0.98292
3   0.56423
4   0.45423
5   0.44196
6   0.4409
7   0.44082
8   0.44081

The iteration converges to the middle root α ≈ 0.44081. [Figure: cobweb diagram of the iterates on y = x and y = (x^3 + 3)/7.]
Starting instead from x0 = 3, the iteration diverges:

n   xn
0   3
1   4.28571
2   11.6739
3   227.702
4   1686559
5   6.9E+17

[Figure: cobweb diagram showing the iterates moving away from the root.]
TP1: Exercise 1
Program in Python the Bisection method for f(x) = x − cos(x) on [a, b] = [0.5, 0.9] (the Scilab program and its output shown earlier can serve as a model).
Exercise 2:
Exercise 3: Solve f(x) = 0 for
• f(x) = x^2 − 2, [a, b] = [1, 2], TOL = 10^-5
• f(x) = exp(x) − 4x, [a, b] = [1, 2.5], TOL = 10^-8
• f(x) = (x − 1)^20, [a, b] = [1, 2], TOL = 10^-6
• f(x) = x^3 + 4x^2 − 10, [a, b] = [1, 2], TOL = 10^-5
Algorithm of Newton Method
Data: f, f', x0, eps, iter, itmax
Output: x1 (approximation of the root of f)
1. iter = 1
2. fpm := f'(x0)
3. If fpm = 0 then set iter = 2 and exit
4. x1 = x0 − f(x0)/fpm
5. If |x1 − x0| < eps then iter = 0, root := x1, stop.
6. If iter = itmax then the method is declared divergent; set iter = 1 and stop
7. iter = iter + 1; x0 = x1; go to step 2
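A minimal Python sketch of this algorithm (the function name newton and the status-flag convention mirror the steps above; the test function x^6 − x − 1 anticipates TP2):

def newton(f, fp, x0, eps, itmax):
    for _ in range(itmax):
        fpm = fp(x0)                  # step 2
        if fpm == 0:
            return x0, 2              # step 3: zero derivative, exit
        x1 = x0 - f(x0) / fpm         # step 4
        if abs(x1 - x0) < eps:
            return x1, 0              # step 5: converged, x1 is the root
        x0 = x1                       # step 7
    return x0, 1                      # step 6: divergent (itmax reached)

root, flag = newton(lambda x: x**6 - x - 1,
                    lambda x: 6*x**5 - 1,
                    2.0, 1e-10, 50)
print(root, flag)   # ~1.134724, flag 0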
TP2:
1. Show that the equation f(x) = x^6 − x − 1 has a root α in ]1, 2[.
2. Explain the Newton-Raphson iteration.
3. Write a Python program to approximate the root α using Newton's method.
4. For x0 = 2, give the results in a table with the following columns:
   xn; f(xn); α − xn; xn+1 − xn.
5. Compare with the bisection method and interpret the results.
Algorithm of Fixed Point Iteration Method
Data: g, x0, eps, iter, itmax
Output: alpha (approximation of the fixed point of g on [a, b])
1. iter = 1; err = 1 + eps
2. While (iter <= itmax and err > eps) do
   a. xk = g(x0)
   b. err = |xk − x0|
   c. iter = iter + 1
   d. x0 = xk
3. If (iter > itmax) then the method is divergent, else alpha = xk.
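A minimal Python sketch of this algorithm (the function name fixed_point is ours; the usage example reuses g(x) = (x^3 + 3)/7 and x0 = 2 from the slides):

def fixed_point(g, x0, eps, itmax):
    it, err = 1, 1 + eps
    while it <= itmax and err > eps:
        xk = g(x0)            # step 2a
        err = abs(xk - x0)    # step 2b
        it += 1               # step 2c
        x0 = xk               # step 2d
    if it > itmax:
        print("the method is divergent")   # step 3
    return x0

print(fixed_point(lambda x: (x**3 + 3) / 7, 2.0, 1e-6, 100))   # ~0.44081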
TP3:
We consider the two functions f(x) = x − x^3 and g(x) = x + x^3
1. Determine the fixed points of f and g in the interval [-1,1]
2. Study the convergence of the fixed point iteration for f and then for g,
for an initial point x0 in [-1, 1].
3. Write a Python program that performs the fixed point iteration
and illustrates the results obtained in the previous question.
TP4:
We consider the following equation: x' + x = tanh(wx).
To find the equilibrium points of this equation, we solve x' = 0.
1. Find the fixed points of this equation for w<=1.
2. Find the fixed points of this equation for w>1.
3. Are these fixed points stable or unstable?
4. Conclude.
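As a possible numerical starting point for question 2 (a sketch; the value w = 2 and the bracket [0.1, 2] are our choices, not given in the exercise):

import math

w = 2.0
f = lambda x: math.tanh(w * x) - x   # equilibria of x' = tanh(w x) - x

# bisection: f(0.1) > 0 and f(2) < 0, so a nonzero equilibrium lies in [0.1, 2]
a, b = 0.1, 2.0
for _ in range(60):
    c = (a + b) / 2
    if f(a) * f(c) <= 0:
        b = c
    else:
        a = c
print(c)   # ~0.9575 for w = 2 (x = 0 is also an equilibrium)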
H. Sebastian Seung, Daniel D. Lee, Ben Y. Reis, and David W. Tank. The autapse: a simple illustration of short-term
analog memory storage by tuned synaptic feedback. Journal of Computational Neuroscience, 9:171–85, 2000.
TP5: Wilkinson polynomial
The goal of this lab is to study how very small perturbations of a polynomial's coefficients can dramatically affect its roots. This is, for many people, one of the most surprising results in mathematics. Most of the time such extreme sensitivity does not occur, but it still poses a difficult problem in numerical analysis.
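As a starting point, a minimal numpy sketch of Wilkinson's classic experiment (perturbing the x^19 coefficient of the degree-20 polynomial with roots 1, …, 20 by 2^-23):

import numpy as np

coeffs = np.poly(np.arange(1, 21))   # coefficients of (x-1)(x-2)...(x-20)

perturbed = coeffs.copy()
perturbed[1] -= 2.0 ** -23           # tiny change to the x^19 coefficient

print(np.roots(coeffs).real)         # close to 1, 2, ..., 20
print(np.roots(perturbed))           # several large roots move far off and turn complex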