Numerical Computation - Computing
Consider f(x) = x - sin x - 1 = 0.
It can be written as x - 1 = sin x.
Now, we shall draw the graphs of y = (x - 1) and y = sin x.
The approximate value of the root is found to be 1.9.
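As a quick check on the graphical estimate, the sketch below tabulates f(x) = x - sin x - 1 near x = 1.9 and looks for a change of sign (the step size and print format are illustrative choices):

// Sketch: confirm the graphical estimate by locating a sign change of f(x) = x - sin(x) - 1 near 1.9
public class SignCheck {
    static double f(double x) { return x - Math.sin(x) - 1.0; }

    public static void main(String[] args) {
        for (double x = 1.80; x <= 2.001; x += 0.05) {
            System.out.printf("f(%.2f) = %+.4f%n", x, f(x));
        }
        // The sign changes between 1.90 and 1.95, so the root is close to 1.9
    }
}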
[Figure: bisection of the interval [12, 16]. Sample values: f(12) = -34.8, f(14) = -12.6, f(14.5) = -5.8, f(15) = 1.5, f(16) = 17.6. The change of sign is tracked through Iter1 [12, 16], Iter2 [14, 16], Iter3 [14, 15] and Iter4 [14.5, 15].]
The successive midpoints are:
Iteration    Midpoint
1            14.0000000000
2            15.0000000000
3            14.5000000000
4            14.7500000000
5            14.8750000000
6            14.9375000000
7            14.9062500000
8            14.8906250000
9            14.8984375000
10           14.9023437500
11           14.9003906250
12           14.8994140625
13           14.8999023438
14           14.9001464844
15           14.9000244141
16           14.8999633789
Basis of Bisection Method
[Figure: the interval [a, b] with its midpoint c. In the case shown, f(c) has the same sign as f(a), so the root lies in [c, b] and the lower endpoint is moved: a = c.]
Bisection Method
[Figure: the case where f(c) and f(b) have the same sign (here f(b) < 0); the root lies in [a, c], so the upper endpoint is moved: b = c.]
double a = 0, b = 4;                        // initial bracket [a, b]; f(a) and f(b) must have opposite signs
for (int i = 0; i < 20; i++) {
    double m = (a + b) / 2.0;               // midpoint of the current bracket
    double y_a = f(a);                      // f is the function whose root we seek (defined elsewhere)
    double y_m = f(m);
    if ( (y_m > 0 && y_a < 0) || (y_m < 0 && y_a > 0) )
    { // f(a) and f(m) have different signs: the root lies in [a, m], so move b
        b = m;
    }
    else
    { // f(a) and f(m) have the same sign: the root lies in [m, b], so move a
        a = m;
    }
    System.out.println("New interval: [" + a + " .. " + b + "]");   // print progress
}
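For completeness, here is a self-contained version with a stopping tolerance; the function f(x) = x - sin x - 1 (the earlier example) and the tolerance 1e-6 are illustrative choices, not part of the slide:

// Sketch: full bisection loop that stops once the bracket is narrower than a tolerance
public class Bisection {
    static double f(double x) { return x - Math.sin(x) - 1.0; }   // assumed example function

    public static void main(String[] args) {
        double a = 0, b = 4, tol = 1e-6;     // f(0) < 0 and f(4) > 0, so the root is bracketed
        while (b - a > tol) {
            double m = (a + b) / 2.0;
            if (f(a) * f(m) < 0) {
                b = m;                       // sign change in [a, m]
            } else {
                a = m;                       // sign change in [m, b]
            }
        }
        System.out.println("Root ~= " + (a + b) / 2.0);   // prints a value near 1.93
    }
}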
Advantages
Always convergent.
The root bracket gets halved with each iteration - guaranteed.
Drawbacks
Slow convergence.
If one of the initial guesses is close to the root, the convergence is slower.
If a function f(x) just touches the x-axis, as with f(x) = x^2, it is unable to find the lower and upper guesses.
The function may change sign although no root exists, as with f(x) = 1/x.
[Figures: plots of f(x) = x^2, which touches the x-axis at its root, and f(x) = 1/x, which changes sign at x = 0 without having a root there.]
Binary Search:
Input: a key and a sorted array.
Index:  1  2  3  4  5  6   7   8
Value:  2  3  5  7  9  10  11  12
Bisection Method:
Input: f(x) and a closed interval in which a root of the given function lies, e.g. [-12, 12].
In both cases the search space is halved at every step, so roughly i = log2 N steps are needed.
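To make the analogy concrete, here is a minimal binary search sketch over the sorted array from the table (the class and method names are illustrative):

// Sketch: binary search halves the index range the same way bisection halves [a, b]
public class BinarySearchDemo {
    static int binarySearch(int[] a, int key) {
        int lo = 0, hi = a.length - 1;
        while (lo <= hi) {
            int mid = (lo + hi) / 2;               // "midpoint" of the current index range
            if (a[mid] == key)      return mid;
            else if (a[mid] < key)  lo = mid + 1;  // key lies in the upper half
            else                    hi = mid - 1;  // key lies in the lower half
        }
        return -1;                                  // key not present
    }

    public static void main(String[] args) {
        int[] a = {2, 3, 5, 7, 9, 10, 11, 12};
        System.out.println(binarySearch(a, 9));    // prints 4 (0-based index)
    }
}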
The Bisection Method
Error Estimates
[Figure: the bracketing intervals from Iter1 through Iter4 of the example above ([12, 16], [14, 16], [14, 15], [14.5, 15]), with the change of sign marked at each step.]
The Bisection Method
Error Estimates for Bisection
At iter1 (interval [12, 16], midpoint 14, values -34.8, -12.6, 17.6): the change of sign is in [14, 16], so the true root lies inside this interval.
At iter2 (interval [14, 16], midpoint 15, values -12.6, 1.5, 17.6): the change of sign is in [14, 15], so the true root lies inside this interval.
Because the bracket is halved at every step, its width after n steps, (b - a)/2^n, bounds the absolute error in the n-th iteration.
The Bisection Method
Theorem 2.1
Suppose f is continuous on [a, b] and f(a) and f(b) have opposite signs. The bisection method generates a sequence {p_n} approximating a zero p of f with |p_n - p| <= (b - a)/2^n for n >= 1.
Example
Determine the number of iterations necessary to solve f(x) = 0 with accuracy 10^-2 using a1 = 12 and b1 = 16.
Set the error bound (b1 - a1)/2^n = 4/2^n less than or equal to the desired error 10^-2 and solve for n: 2^n >= 400, so n >= 9.
n     p_n        |p - p_n|
1     14.0000    9.0000e-01
2     15.0000    1.0000e-01
3     14.5000    4.0000e-01
4     14.7500    1.5000e-01
5     14.8750    2.5000e-02
6     14.9375    3.7500e-02
7     14.9063    6.2500e-03
8     14.8906    9.3750e-03
9     14.8984    1.5625e-03
10    14.9023    2.3437e-03
11    14.9004    3.9062e-04
12    14.8994    5.8594e-04
Remark
It is important to keep in mind that the error analysis gives only a bound for the number of iterations. In many cases this bound is much larger than the actual number required.
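A minimal sketch of the "solve for n" step (the variable names are illustrative):

// Sketch: smallest n with (b - a)/2^n <= tol, i.e. n >= log2((b - a)/tol)
public class IterationCount {
    public static void main(String[] args) {
        double a = 12, b = 16, tol = 1e-2;
        int n = (int) Math.ceil(Math.log((b - a) / tol) / Math.log(2));
        System.out.println("Iterations needed: " + n);   // prints 9, since 2^9 = 512 >= 400
    }
}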
Regula Falsi Method
If f(c) and f(b) have opposite signs, a zero lies in [c, b].
Let f(x) = 2x^3 - 2x - 5.
x      0    1    2
f(x)  -5   -5    7
The approximate root of the equation 2x^3 - 2x - 5 = 0 obtained using the False Position method is 1.60056.
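A minimal false-position sketch for this example; the bracket [1, 2] comes from the sign change in the table above, while the iteration count is an illustrative choice:

// Sketch: false position (regula falsi) on f(x) = 2x^3 - 2x - 5 over [1, 2]
public class FalsePosition {
    static double f(double x) { return 2 * x * x * x - 2 * x - 5; }

    public static void main(String[] args) {
        double a = 1, b = 2;                 // f(1) = -5 < 0 and f(2) = 7 > 0, so the root is bracketed
        double c = a;
        for (int i = 0; i < 20; i++) {
            // c is where the chord through (a, f(a)) and (b, f(b)) crosses the x-axis
            c = b - f(b) * (b - a) / (f(b) - f(a));
            if (f(a) * f(c) < 0) b = c;      // root lies in [a, c]
            else                 a = c;      // root lies in [c, b]
        }
        System.out.printf("Approximate root: %.5f%n", c);   // agrees with the quoted 1.60056 to four decimals
    }
}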
b. [1, 3.2]. Also compare the results.
Use the Bisection method to find solutions, accurate to within 10^-5, for the following problems:
3x - e^x = 0 for 1 ≤ x ≤ 2
2x + 3 cos x - e^x = 0 for 0 ≤ x ≤ 1
x^2 - 4x + 4 - ln x = 0 for 1 ≤ x ≤ 2 and for 2 ≤ x ≤ 4
Numerical Methods for Solving Nonlinear Equations
Open Methods
Algorithm Example
Iteration    x_i
1            0.000000000000000
2            0.500000000000000
3            0.566311003197218
4            0.567143165034862
5            0.567143290409781
The true value of the root is 0.56714329. Thus, the approach rapidly converges on the true root.
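The iterates in this table are consistent with the Newton-Raphson update x_{i+1} = x_i - f(x_i)/f'(x_i) applied to f(x) = e^(-x) - x, an assumption inferred from the quoted true root 0.56714329; a minimal sketch:

// Sketch (assumed example): Newton-Raphson for f(x) = e^(-x) - x with f'(x) = -e^(-x) - 1
public class NewtonRaphson {
    static double f(double x)  { return Math.exp(-x) - x; }
    static double df(double x) { return -Math.exp(-x) - 1; }

    public static void main(String[] args) {
        double x = 0.0;                            // initial guess
        for (int i = 1; i <= 5; i++) {
            System.out.printf("%d  %.15f%n", i, x);
            x = x - f(x) / df(x);                  // Newton-Raphson update
        }
        // Prints the same sequence as the table: 0.0, 0.5, 0.566311..., 0.567143...
    }
}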
Comparison with the previous two methods:
In the previous methods, we were given an interval. Here we require an initial guess of the root.
The previous two methods are guaranteed to converge; Newton-Raphson may not converge in some cases.
For many problems, the Newton-Raphson method converges faster than the above two methods.
The previous two methods do not identify repeated roots. Newton-Raphson can identify repeated roots, since it does not look for changes in the sign of f(x) explicitly.
1. Divergence at inflection points
Iteration number    x_i
0     5.0000
1     3.6560
2     2.7465
3     2.1084
4     1.6000
5     0.92589
6     -30.119
7     -19.746
...
18    0.2000
Figure 8: Divergence at an inflection point for f(x) = (x - 1)^3 + 0.512 = 0.
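A minimal sketch that reproduces the behaviour in the table (the number of iterations printed is an illustrative choice):

// Sketch: Newton-Raphson on f(x) = (x - 1)^3 + 0.512, whose only root is x = 0.2,
// started at x0 = 5; the inflection point at x = 1 sends the iterates far away before they return
public class InflectionDemo {
    static double f(double x)  { return Math.pow(x - 1, 3) + 0.512; }
    static double df(double x) { return 3 * Math.pow(x - 1, 2); }

    public static void main(String[] args) {
        double x = 5.0;
        for (int i = 0; i <= 18; i++) {
            System.out.printf("%2d  %.5f%n", i, x);
            x = x - f(x) / df(x);
        }
        // The iterates wander as far as about -30 (step 6) before settling at 0.2
    }
}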
2. Division by zero
For the equation
f(x) = x^3 - 0.03x^2 + 2.4 × 10^-6 = 0
the Newton-Raphson method reduces to
x_{i+1} = x_i - (x_i^3 - 0.03x_i^2 + 2.4 × 10^-6) / (3x_i^2 - 0.06x_i)
The denominator 3x_i^2 - 0.06x_i vanishes at x_i = 0 and x_i = 0.02, so an initial guess at or near these values causes division by zero.
4. Root Jumping
In some cases where the function f(x) is oscillating and has a number of roots, one may choose an initial guess close to a root. However, the guesses may jump and converge to some other root.
For example, f(x) = sin x = 0.
[Figure: plot of sin x for x between -2 and 10, showing an initial guess near one root leading the iteration to jump to a different root.]
Secant Method
x_{i+1} = x_i - f(x_i)(x_i - x_{i-1}) / (f(x_i) - f(x_{i-1}))
[Figure 1: Geometrical illustration of the Newton-Raphson method.]
The secant method can also be derived from geometry:
[Figure 2: Geometrical representation of the Secant method. B = (x_i, f(x_i)) and C = (x_{i-1}, f(x_{i-1})) lie above A = (x_i, 0) and D = (x_{i-1}, 0); the secant line through B and C crosses the x-axis at E = (x_{i+1}, 0).]
From the similar triangles,
AB / AE = DC / DE
which can be written as
f(x_i) / (x_i - x_{i+1}) = f(x_{i-1}) / (x_{i-1} - x_{i+1})
On rearranging, the secant method is given as
x_{i+1} = x_i - f(x_i)(x_i - x_{i-1}) / (f(x_i) - f(x_{i-1}))
Algorithm for the Secant Method
Step 1: Calculate the next estimate of the root from two initial guesses
x_{i+1} = x_i - f(x_i)(x_i - x_{i-1}) / (f(x_i) - f(x_{i-1}))
Step 2: Find the absolute relative approximate error
|ε_a| = |(x_{i+1} - x_i) / x_{i+1}| × 100
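A minimal sketch of these two steps, reusing the earlier example f(x) = 2x^3 - 2x - 5 (an illustrative choice) with initial guesses 1 and 2, which do not need to bracket the root:

// Sketch: secant iteration with the absolute relative approximate error as the stopping test
public class SecantMethod {
    static double f(double x) { return 2 * x * x * x - 2 * x - 5; }   // assumed example function

    public static void main(String[] args) {
        double xPrev = 1.0, xCur = 2.0;              // two initial guesses
        double relErrPercent = 100.0;
        while (relErrPercent > 1e-4) {
            // Step 1: next estimate from the secant formula
            double xNext = xCur - f(xCur) * (xCur - xPrev) / (f(xCur) - f(xPrev));
            // Step 2: absolute relative approximate error in percent
            relErrPercent = Math.abs((xNext - xCur) / xNext) * 100.0;
            xPrev = xCur;
            xCur = xNext;
        }
        System.out.printf("Root ~= %.5f%n", xCur);   // about 1.60060
    }
}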
Advantages
Converges fast, if it converges.
Requires two guesses that do not need to bracket the root.
Note
Convert the problem from root-finding to finding a fixed point.
Fixed-Point Iteration
Example 1
Starting with an initial guess of x0 = 0:
Iteration    x_i
1     0.0000000000000
2     1.0000000000000
3     0.3678794411714
4     0.6922006275553
5     0.5004735005636
6     0.6062435350856
7     0.5453957859750
8     0.5796123355034
9     0.5601154613611
10    0.5711431150802
11    0.5648793473910
12    0.5684287250291
Thus, each iteration brings the estimate closer to the true value of the root: 0.56714329.
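The iterates above match the fixed-point form x = g(x) with g(x) = e^(-x), an assumption inferred from the values and the quoted root (equivalent to solving e^(-x) - x = 0); a minimal sketch:

// Sketch (assumed example): fixed-point iteration x_{i+1} = g(x_i) with g(x) = e^(-x)
public class FixedPoint {
    static double g(double x) { return Math.exp(-x); }

    public static void main(String[] args) {
        double x = 0.0;                              // initial guess x0 = 0
        for (int i = 1; i <= 12; i++) {
            System.out.printf("%2d  %.13f%n", i, x);
            x = g(x);                                // next estimate
        }
        // Reproduces the table above, slowly approaching the root 0.56714329
    }
}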
Conclusion