Numerical Computation - Computing

This document discusses numerical methods for solving nonlinear equations, specifically the bisection method. It provides an overview of the bisection method, including that it is a bracketing method that finds a root within an interval where the function changes sign. It describes the iterative process of bisection and gives pseudocode for a Java implementation. Key details include that the bisection method is guaranteed to converge to a root if one exists, but that it converges slowly, with the root bracket halved each iteration. A drawback is that it only applies when the function changes sign and could overlook double roots.

CSC475: Numerical Computing

Department of Computer Science


COMSATS University, Islamabad
Recap
Numerical Methods
for Solving
Nonlinear Equations
Bracketing Methods



Root of an equation

 In a graph, the root (or zero) of a function is the x-intercept.

 Various numerical methods for root-finding:
 The Bisection Method
 The False Position Method
 Fixed-point iteration
 The Newton-Raphson Method
 The Secant Method



Graphical Method

 Step 1: Rewrite f(x) = 0 as f1(x) = f2(x).

 Step 2: Determine the point of intersection of
the graphs of y = f1(x) and y = f2(x).
 Step 3: The first approximation to a root of
f(x) = 0 can be taken as the abscissa (x-coordinate)
of the point obtained in Step 2.



Example: Graphical Method

 Consider f(x) = x − sin x − 1 = 0.
 It can be written as x − 1 = sin x.
 Now, we shall draw the graphs of y = x − 1
and y = sin x.
 The approximate value
of the root is found to be
1.9


Example: Graphical Method

 Graphical techniques are of limited practical


value because they are not precise.
 However, graphical methods can be utilized to
obtain rough estimates of roots.



Example: Graphical Method

 Aside from providing rough estimates of the


root, graphical interpretations are important
tools for understanding the properties of the
functions and anticipating the pitfalls of the
numerical methods.



Bracketing
Methods
Bisection Method
Method of False Position



Bisection Method

 The bisection method is a bracketing method for


finding a numerical solution of an equation of
the form f(x) = 0 when it is known that within a
given interval [a, b], f(x) is continuous and the
equation has a solution. When this is the case,
f(x) will have opposite signs at the endpoints of
the interval.



Bisection Method

 As shown in Fig., if f(x) is continuous and has a


solution between the points x = a and x = b ,
then either f(a) > 0 and f(b) < 0 or f(a) < 0 and
f(b) > 0.
 In other words, if there is a solution between x =a
and x = b, then f(a)f(b) < 0 .



Bisection Method
 The process of finding a solution with the bisection
method starts by finding points a and b that define an
interval where a solution exists.
 Such an interval is found either by plotting f(x) and observing a
zero crossing, or by examining the function for sign change.
 The midpoint of the interval, x_NS1, is then taken as the
first estimate for the numerical solution.
 The true solution is either in the section between points
a and x_NS1 or in the section between points x_NS1 and b.
 If the numerical solution is not accurate enough, a new
interval that contains the true solution is defined.
 The new interval is the half of the original interval that
contains the true solution, and its midpoint is taken as
the new (second) estimate of the numerical solution.
Bisection Method
 The process continues until the numerical solution is
accurate enough according to a criterion that is selected.
 The process of finding a solution with the bisection
method is illustrated in following Fig.



The Bisection Method
Example: solve f(x) = 0 on [12, 16], where f(12) = −34.8 and
f(16) = 17.6 (a change of sign, so a root lies inside).

Iter 1: midpoint 14, f(14) = −12.6; change of sign in [14, 16]
Iter 2: midpoint 15, f(15) = 1.5; change of sign in [14, 15]
Iter 3: midpoint 14.5, f(14.5) = −5.8; change of sign in [14.5, 15]

Successive midpoint estimates:
 1  14.0000000000
 2  15.0000000000
 3  14.5000000000
 4  14.7500000000
 5  14.8750000000
 6  14.9375000000
 7  14.9062500000
 8  14.8906250000
 9  14.8984375000
10  14.9023437500
11  14.9003906250
12  14.8994140625
13  14.8999023438
14  14.9001464844
15  14.9000244141
16  14.8999633789
Basis of Bisection Method

 Guaranteed to converge to a root if one exists


within the bracket.

[Figure: interval [a, b] with midpoint c; f(a) > 0 while f(c) < 0 and
f(b) < 0, so the sign change brackets the root and the interval is halved.]
Bisection Method

 Slowly converges to a root

[Figure: next iteration; the half-interval containing the change of sign
is kept (here b = c) and the process repeats.]


Algorithm and its C++ Implementation



Algorithm and its MATLAB Implementation



Algorithm and its MATLAB Implementation



Algorithm and its Python Implementation



Algorithm and its Java Implementation

// Bisection Method - Solves: x^2 - 3 = 0
public class Bisection01
{
    public static void main(String[] args)
    {
        final double epsilon = 0.00001;
        double a, b, m, y_m, y_a;

        a = 0; b = 4;

        while ( (b-a) > epsilon )
        {
            m = (a+b)/2;         // Mid point
            y_m = m*m - 3.0;     // y_m = f(m)
            y_a = a*a - 3.0;     // y_a = f(a)

            if ( (y_m > 0 && y_a < 0) || (y_m < 0 && y_a > 0) )
            {   // f(a) and f(m) have different signs: move b
                b = m;
            }
            else
            {   // f(a) and f(m) have same signs: move a
                a = m;
            }
            System.out.println("New interval: [" + a + " .. " + b + "]"); // Print progress
        }

        System.out.println("Approximate solution = " + (a+b)/2 );
    }
}


Advantages

 Always convergent
 The root bracket gets halved with each iteration
- guaranteed.



Drawbacks

 Slow convergence
 If one of the initial guesses is close to

the root, the convergence is slower



Drawback
One drawback of the bisection method is that it applies only to roots about which
f(x) changes sign. In particular, double roots can be overlooked; one should be
careful to examine f(x) in any range where it is small, so that repeated roots about
which f(x) does not change sign are evaluated by other means.

[Figure 4: If the function f(x) changes sign between two points, more than
one root of the equation f(x) = 0 may exist between the two points.]
Drawbacks

 If a function f(x) just touches the x-axis, as
f(x) = x^2 does, the method will be unable to find the
lower and upper guesses.

[Figure: f(x) = x^2 touches the x-axis at its root without changing sign.]
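The sign test at the heart of bisection makes this failure concrete: for f(x) = x^2 every product f(a)·f(b) is non-negative, so no valid starting bracket exists around the root x = 0. A minimal Java check (class and method names are illustrative, not from the slides):

```java
// For f(x) = x^2 the sign-change test that bisection relies on fails:
// f(a) * f(b) > 0 even when the bracket [a, b] contains the root x = 0.
public class DoubleRootCheck {
    static double f(double x) { return x * x; }

    // Bisection's precondition: the function changes sign on [a, b].
    public static boolean bracketsRoot(double a, double b) {
        return f(a) * f(b) < 0;
    }

    public static void main(String[] args) {
        System.out.println(bracketsRoot(-1.0, 1.0)); // prints false
    }
}
```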


Drawbacks

 A function may change sign where no root exists, as with the
singularity of f(x) = 1/x at x = 0.

[Figure: f(x) = 1/x changes sign across x = 0 but has no root there.]


Analogy of Binary Search and Bisection Method

 Binary Search:
 Input: Key, A Sorted Array

   Index: 1  2  3  4  5  6   7   8
   Value: 2  3  5  7  9  10  11  12

 Bisection Method
 Input: f(x), Closed Interval in which a root of the given
function lies, e.g. [−12, 12]


Analogy of Binary Search and Bisection Method

 Bisection Method is similar to Binary Search.

 Binary Search
 Input: Key, A Sorted Array
 Output: Location (array index) where the key is found
 Algorithm Designing Technique Employed: Decrease and Conquer

 Bisection Method
 Input: Closed Interval [a, b], f(x) whose root is to be computed
 Output: Root of the given function
 Algorithm Designing Technique Employed: Decrease and Conquer


Error Analysis
of
Bisection Method
Bisection Method

 What is the maximum error after n iterations
of the bisection method?
 How many iterations will be required to
obtain the root to the desired accuracy?



Binary search: Efficiency
 Let us first recall how to determine the time complexity of Binary Search.
 Binary Search
 Size of list (search space size) = N
 We keep dividing until we find the key
 Worst case: we keep dividing until one item is left
 Bisection Method
 Length of interval [a, b] = b − a
 The process continues until the numerical solution is accurate enough according to a selected criterion.
 What is the maximum error?


Binary search: Efficiency

 After 1 bisection: N/2 items
 After 2 bisections: N/4 = N/2^2 items
 . . .
 After i bisections: N/2^i = 1 item

  i = log_2 N


The Bisection Method

Error Estimate

At the n-th iteration:
 endpoints of the interval: a_n, b_n
 length of the interval: b_n − a_n = (b − a)/2^(n−1)

[Figure: the example on [12, 16]; Iter 1 keeps [14, 16], Iter 2 keeps
[14, 15], Iter 3 keeps [14.5, 15], tracking the change of sign each time.]
The Bisection Method

Error Estimates

At iter 1: b_1 − a_1 = b − a
At iter 2: b_2 − a_2 = (b − a)/2
At the n-th iteration: b_n − a_n = (b − a)/2^(n−1)

[Figure: the same example, with the bracketing interval halved at each
iteration.]
The Bisection Method

Error Estimates for Bisection

At iter 1: the true root lies inside the bracketing interval, so
  |x_1 − x_true| ≤ (b − a)/2
At iter 2:
  |x_2 − x_true| ≤ (b − a)/2^2
At the n-th iteration (Theorem 2.1), the absolute error satisfies
  |x_n − x_true| ≤ (b − a)/2^n

[Figure: Iter 1 brackets the root in [14, 16]; Iter 2 in [14, 15].]
The Bisection Method

Theorem 2.1
Suppose f is continuous on [a, b] and f(a)·f(b) < 0. The bisection method
generates a sequence {x_n} approximating a zero x_true of f with

  |x_n − x_true| ≤ (b − a)/2^n,  n ≥ 1.


The Bisection Method
Example (Theorem 2.1)

Remark
It is important to realize that Theorem 2.1 gives only a bound for the
approximation error and that this bound might be quite conservative.
For example:

  n     x_n       |error|
  1   14.0000   9.0000e-01
  2   15.0000   1.0000e-01
  3   14.5000   4.0000e-01
  4   14.7500   1.5000e-01
  5   14.8750   2.5000e-02
  6   14.9375   3.7500e-02
  7   14.9063   6.2500e-03
  8   14.8906   9.3750e-03
  9   14.8984   1.5625e-03
Error Bounds for Bisection Method



The Bisection Method



The Bisection Method
Theorem 2.1 — Example

Example
Determine the number of iterations necessary to solve f(x) = 0 with
accuracy 10^−2 using a_1 = 12 and b_1 = 16.

The desired error: (b − a)/2^n ≤ 10^−2, i.e. 4/2^n ≤ 10^−2.

Solve for n: 2^n ≥ 400, so n ≥ log_2 400 ≈ 8.64; hence n = 9 iterations
suffice.

  n     x_n       |error|
  1   14.0000   9.0000e-01
  2   15.0000   1.0000e-01
  3   14.5000   4.0000e-01
  4   14.7500   1.5000e-01
  5   14.8750   2.5000e-02
  6   14.9375   3.7500e-02
  7   14.9063   6.2500e-03
  8   14.8906   9.3750e-03
  9   14.8984   1.5625e-03
 10   14.9023   2.3437e-03
 11   14.9004   3.9062e-04
 12   14.8994   5.8594e-04

Remark
It is important to keep in mind that the error analysis gives only a bound
for the number of iterations. In many cases this bound is much larger than
the actual number required.
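The bound in Theorem 2.1 can be turned directly into a formula for the iteration count: solving (b − a)/2^n ≤ tol for n gives n ≥ log_2((b − a)/tol). A small Java sketch of this calculation (class and method names are illustrative, not from the slides):

```java
// Number of bisection iterations guaranteeing |error| <= tol,
// from the bound (b - a)/2^n <= tol  =>  n >= log2((b - a)/tol).
public class BisectionIterations {
    public static int required(double a, double b, double tol) {
        return (int) Math.ceil(Math.log((b - a) / tol) / Math.log(2));
    }

    public static void main(String[] args) {
        // Example from the slides: a = 12, b = 16, accuracy 10^-2
        System.out.println(required(12, 16, 1e-2)); // prints 9
    }
}
```

As the remark above notes, this is only a worst-case bound; the actual error in the example already drops below 10^−2 at iteration 5.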
Regula Falsi Method



Regula Falsi Method
 Frequently, as in the case shown in Fig., the function in
the interval [a, b] is either concave up or concave down.
 In this case, one of the endpoints of the interval stays
the same in all the iterations, while the other endpoint
advances toward the root.
 In other words, the numerical solution advances toward
the root only from one side.
 The convergence toward the solution could be faster if
the other endpoint would also "move" toward the root.
 Food for thought: how can we move the other point?



Method of False Position
A better approximation is obtained if we find the point (c, 0) where the
secant line L joining the points (a, f (a)) and (b, f (b)) crosses the x-axis
(see the image below). To find the value c, we write down two versions
of the slope m of the line L:





Method of False Position
We first use points (a, f(a)) and (b, f(b)) to get equation 1 (below),
and then use the points (c, 0) and (b, f(b)) to get equation 2 (below).
Equating these two equations we get equation 3 (below), which is easily
solved for c to get equation 4 (below):

  (1)  m = (f(b) − f(a)) / (b − a)
  (2)  m = (f(b) − 0) / (b − c)
  (3)  (f(b) − f(a)) / (b − a) = f(b) / (b − c)
  (4)  c = b − f(b)(b − a) / (f(b) − f(a))


Method of False Position
The three possibilities are the same as before in the bisection method:
If f (a) and f (c) have opposite signs, a zero lies in [a, c].

If f (c) and f (b) have opposite signs, a zero lies in [c, b].

If f (c) = 0, then the zero is c.




Method of False Position
Find a root of the equation f(x) = 2x^3 − 2x − 5 = 0 using the
False Position method (Regula Falsi method).

Solution
Here 2x^3 − 2x − 5 = 0.

Let f(x) = 2x^3 − 2x − 5.

    x    0    1    2
  f(x)  −5   −5    7

The approximate root of the equation 2x^3 − 2x − 5 = 0 using the
False Position method is 1.60056.
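Equation 4 above, combined with the three sign cases, is all the method needs. A Java sketch of the iteration for the worked example (class and method names are illustrative, not from the slides; the slides show a C++ version that is not reproduced here):

```java
// Regula falsi (false position) sketch: c is where the secant through
// (a, f(a)) and (b, f(b)) crosses the x-axis; keep the half-interval
// in which f changes sign.
public class FalsePosition {
    static double f(double x) { return 2*x*x*x - 2*x - 5; } // example from the slides

    public static double solve(double a, double b, double tol, int maxIter) {
        double c = a;
        for (int i = 0; i < maxIter; i++) {
            c = b - f(b) * (b - a) / (f(b) - f(a)); // secant x-intercept (equation 4)
            if (Math.abs(f(c)) < tol) break;        // close enough to a zero
            if (f(a) * f(c) < 0) b = c;             // zero lies in [a, c]
            else                 a = c;             // zero lies in [c, b]
        }
        return c;
    }

    public static void main(String[] args) {
        // 2x^3 - 2x - 5 = 0 with starting bracket [1, 2]: f(1) = -5, f(2) = 7
        System.out.println(solve(1, 2, 1e-6, 200)); // ~1.60056
    }
}
```

Note how one endpoint (here b = 2) stays fixed throughout, which is exactly the behavior the Regula Falsi slide describes.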


Method of False Position and its C++ Implementation



Home Tasks
 Apply the Bisection method and the Regula Falsi method to find p_3 for
f(x) = √x − cos x on [0, 1]. Also compare the results.
 Use the Bisection method and Regula Falsi to find solutions accurate to
within 10^−2 for x^3 − 7x^2 + 14x − 6 = 0 on each interval:
 a. [0, 1]
 b. [1, 3.2]
 Also compare the results.
 Use the Bisection method to find solutions, accurate to within 10^−5, for
the following problems:
 3x − e^x = 0 for 1 ≤ x ≤ 2
 2x + 3 cos x − e^x = 0 for 0 ≤ x ≤ 1
 x^2 − 4x + 4 − ln x = 0 for 1 ≤ x ≤ 2 and 2 ≤ x ≤ 4


Lecture No 4

Numerical Methods
for Solving
Nonlinear Equations
Open Methods



Today Covered

 After completing this lecture you will know:
 Newton-Raphson Method
 Convergence of Newton-Raphson Method
 Rate of Convergence of Newton-Raphson Method
 Modified Newton-Raphson Method
 Rate of Convergence of Modified Newton-Raphson Method
 Examples
 Special Cases of Newton-Raphson Method


Newton Raphson Method

 Newton's method (also called the Newton-


Raphson method) is a scheme for finding a
numerical solution of an equation of the form
f(x) = 0
 where f(x) is continuous and
differentiable
 the equation is known to have a solution near a
given point.



Newton Raphson Method

 The method is illustrated in Fig.



Newton Raphson Method

 The solution process starts by choosing point x1


as the first estimate of the solution.
 The second estimate x2 is obtained by taking
the tangent line to f(x) at the point (x1, f(x1))
and finding the intersection point of the tangent
line with the x-axis.
 The next estimate x3 is the intersection of the
tangent line to f(x) at the point (x2, f(x2)) with
the x-axis, and so on.



Newton Raphson Method

 Mathematically, for the first iteration, the slope f'(x_1) of the
tangent at point (x_1, f(x_1)) is given by:

  f'(x_1) = (f(x_1) − 0) / (x_1 − x_2)

 Solving for x_2 gives the Newton-Raphson iteration:

  x_2 = x_1 − f(x_1) / f'(x_1)


Algorithm for Newton's method



Algorithm



Example: NR



Example: True Error



Newton’s Method

 THE NEWTON-RAPHSON METHOD is a method for finding successively better
approximations to the roots (or zeroes) of a function.

Algorithm Example: approximating the root of f(x) = e^−x − x (the
function that reproduces the iterates below), starting from x_1 = 0:

  1   0.000000000000000
  2   0.500000000000000
  3   0.566311003197218
  4   0.567143165034862
  5   0.567143290409781

The true value of the root is 0.56714329. Thus, the approach rapidly
converges on the true root.
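The tangent-line update x_(i+1) = x_i − f(x_i)/f'(x_i) can be sketched in Java; the function f(x) = e^−x − x is an assumption inferred from the iterate table (the slide's formula images are not reproduced), and class and method names are illustrative:

```java
// Newton-Raphson sketch for f(x) = e^{-x} - x (assumed from the iterate
// table); each step follows the tangent line at (x_i, f(x_i)) to the x-axis.
public class NewtonRaphson {
    static double f(double x)  { return Math.exp(-x) - x; }
    static double df(double x) { return -Math.exp(-x) - 1; } // analytical f'(x)

    public static double solve(double x0, double tol, int maxIter) {
        double x = x0;
        for (int i = 0; i < maxIter; i++) {
            double xNew = x - f(x) / df(x);        // tangent-line step
            if (Math.abs(xNew - x) < tol) return xNew;
            x = xNew;
        }
        return x;
    }

    public static void main(String[] args) {
        System.out.println(solve(0.0, 1e-12, 50)); // ~0.56714329
    }
}
```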
Comparison with previous two methods:
 In the previous methods, we were given an interval.
 Here an initial guess of the root is required.
 The previous two methods are guaranteed to converge.
 Newton-Raphson may not converge in some cases.
 For many problems, the Newton-Raphson method converges
faster than the above two methods.
 The previous two methods do not identify repeated roots.
 Newton-Raphson can identify repeated roots, since it does not look
for changes in the sign of f(x) explicitly.


Comparison with previous two methods :
 Newton Raphson method requires derivative.
 Some functions may be difficult to differentiate.



Notes on Newton's method

 Illustrations of two cases where Newton's method


diverges are shown in following fig.

 When it does not converge, it is usually because the


starting point is not close enough to the solution.



Notes on Newton's method



Notes on Newton's method



Convergence of Newton-Raphson Method

 The method, when successful, works well and


converges fast.
 Convergence problems typically occur when the
value of f '(x) is close to zero in the vicinity of the
solution (where f(x) = 0).
 It is possible to show that Newton's method
converges:
 if the function f(x) and its first and second derivatives f '(x)
and f "(x) are all continuous,
 if f '(x) is not zero at the solution
 and if the starting value x1 is near the actual solution.



Convergence of Newton-Raphson Method



Advantages and Drawbacks
of Newton Raphson Method



Advantages

 Converges fast (quadratic convergence), if it


converges.
 Requires only one guess



Drawbacks

1. Divergence at inflection points

 Selection of an initial guess, or an iterated value of the root, that is
close to an inflection point of the function f(x) may start diverging
away from the root in the Newton-Raphson method.

 For example, to find the root of the equation
f(x) = (x − 1)^3 + 0.512 = 0,

 the Newton-Raphson method reduces to

  x_(i+1) = x_i − ((x_i − 1)^3 + 0.512) / (3(x_i − 1)^2)

 Table 1 shows the iterated values of the root of the equation.

 The root starts to diverge at Iteration 6 because the previous estimate
of 0.92589 is close to the inflection point at x = 1.
 Eventually, after 12 more iterations, the root converges to the exact
value of x = 0.2.


Drawbacks – Inflection Points

Table 1  Divergence near inflection point.

  Iteration     x_i
      0        5.0000
      1        3.6560
      2        2.7465
      3        2.1084
      4        1.6000
      5        0.92589
      6      −30.119
      7      −19.746
     18        0.2000

[Figure 8: Divergence at the inflection point for
f(x) = (x − 1)^3 + 0.512 = 0.]


Drawbacks – Division by Zero

2. Division by zero

For the equation
  f(x) = x^3 − 0.03x^2 + 2.4×10^−6 = 0
the Newton-Raphson method reduces to
  x_(i+1) = x_i − (x_i^3 − 0.03x_i^2 + 2.4×10^−6) / (3x_i^2 − 0.06x_i)

For x_0 = 0 or x_0 = 0.02, the denominator will equal zero.

[Figure 9: Pitfall of division by zero, or by a number near zero.]


Drawbacks – Root Jumping

4. Root Jumping

 In some cases where the function f(x) is oscillating and has a number of
roots, one may choose an initial guess close to a root. However, the guesses
may jump and converge to some other root.

 For example, for
  f(x) = sin x = 0
 choosing
  x_0 = 2.4π = 7.539822
 makes the iteration converge to x = 0
 instead of x = 2π = 6.2831853.

[Figure 11: Root jumping from the intended location of the root for
f(x) = sin x = 0.]
Secant Method



Secant Method - Derivation

Newton’s Method:
  x_(i+1) = x_i − f(x_i) / f'(x_i)        (1)

Approximate the derivative by a backward difference:
  f'(x_i) ≈ (f(x_i) − f(x_(i−1))) / (x_i − x_(i−1))        (2)

Substituting Equation (2) into Equation (1) gives the Secant method:
  x_(i+1) = x_i − f(x_i)(x_i − x_(i−1)) / (f(x_i) − f(x_(i−1)))

[Figure 1: Geometrical illustration of the Newton-Raphson method.]
The secant method can also be derived from geometry.

From the similar triangles in Figure 2,
  AB/AE = DC/DE
can be written as
  f(x_i) / (x_i − x_(i+1)) = f(x_(i−1)) / (x_(i−1) − x_(i+1))

On rearranging, the secant method is given as
  x_(i+1) = x_i − f(x_i)(x_i − x_(i−1)) / (f(x_i) − f(x_(i−1)))

[Figure 2: Geometrical representation of the Secant method.]
Algorithm for Secant Method



Step 1

Calculate the next estimate of the root from two initial guesses:
  x_(i+1) = x_i − f(x_i)(x_i − x_(i−1)) / (f(x_i) − f(x_(i−1)))

Find the absolute relative approximate error:
  |ε_a| = |(x_(i+1) − x_i) / x_(i+1)| × 100
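Steps 1 and 2 can be sketched in Java, stopping on the absolute relative approximate error. The class and method names are illustrative, and the test function x^2 − 3 = 0 is borrowed from the earlier bisection example rather than from these slides:

```java
// Secant method sketch:
//   x_{i+1} = x_i - f(x_i)(x_i - x_{i-1}) / (f(x_i) - f(x_{i-1}))
// stopping when the absolute relative approximate error (in percent)
// falls below a prespecified tolerance or maxIter is exceeded.
public class Secant {
    static double f(double x) { return x*x - 3.0; } // example: x^2 - 3 = 0

    public static double solve(double xPrev, double x, double tolPercent, int maxIter) {
        for (int i = 0; i < maxIter; i++) {
            double xNew = x - f(x) * (x - xPrev) / (f(x) - f(xPrev));
            double relErr = Math.abs((xNew - x) / xNew) * 100; // percent
            xPrev = x;
            x = xNew;
            if (relErr < tolPercent) break; // Step 2: error small enough, stop
        }
        return x;
    }

    public static void main(String[] args) {
        System.out.println(solve(1.0, 2.0, 1e-8, 100)); // ~1.7320508 (sqrt of 3)
    }
}
```

Note that the two guesses (1 and 2) happen to bracket the root here, but the secant method does not require that.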
Step 2

Find if the absolute relative approximate error is greater


than the prespecified relative error tolerance.

If so, go back to step 1, else stop the algorithm.

Also check if the number of iterations has exceeded the


maximum number of iterations.

Advantages
 Converges fast, if it converges
 Requires two guesses that do not need to
bracket the root



Secant Method

 The method uses two points in the


neighborhood of the solution to determine a
new estimate for the solution.
 the two points can be on one side of the solution
 the solution can be between the two points



Secant Method

 The two points (marked as x1 and x2 in the


figure) are used to define a straight line (secant
line), and the point where the line intersects the
x-axis (marked as x3 in the figure) is the new
estimate for the solution.



Secant Method

 The slope of the secant line is given by:

  slope = (f(x_2) − f(x_1)) / (x_2 − x_1) = (f(x_2) − 0) / (x_2 − x_3)

 which can be solved for x_3:

  x_3 = x_2 − f(x_2)(x_2 − x_1) / (f(x_2) − f(x_1))



Secant Method

 Once point x3 is determined, it is used together


with point x2 to calculate the next estimate of
the solution, x4.
 The previous equation can be generalized to an
iteration formula in which a new estimate of the
solution x_(i+1) is determined from the previous
two solutions x_i and x_(i−1).



Relationship to Newton's method

 This equation is almost identical to Eq. of


Newton's method.(How?)



Relationship to Newton's method

 In Eq. the denominator of the


second term on the right-hand side of the
equation is an approximation of the value of the
derivative of f(x) at xi.
 In Newton's formula, the denominator is
actually the derivative f'(x).

 In the secant method (unlike Newton’s


method), it is not necessary to know the
analytical form of f'(x).



Relationship to Newton's method

 Examination of the secant method shows that


when the two points that define the secant line
are close to each other, the method is actually
an approximated form of Newton's method.
 This can be seen by rewriting Eq.
in the form:

 This equation is almost identical to Eq. of


Newton's method



Example



Your Turn

Find a root of the equation x^3 − 8x − 5 = 0 using
the secant method.



Fixed-Point Iteration

Example:

solution



Fixed-Point Iteration
simple fixed-point iteration

Step 1

Step 2

Note
Convert the problem from root-finding to finding a fixed point.
Fixed-Point Iteration
Example 1

Starting with an initial guess of x_0 = 0:

  1   0.0000000000000
  2   1.0000000000000
  3   0.3678794411714
  4   0.6922006275553
  5   0.5004735005636
  6   0.6062435350856
  7   0.5453957859750
  8   0.5796123355034
  9   0.5601154613611
 10   0.5711431150802
 11   0.5648793473910
 12   0.5684287250291

Thus, each iteration brings the estimate closer to the true value of the
root: 0.56714329.
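The iterates in the table are reproduced by repeatedly applying g(x) = e^−x, i.e. x_(i+1) = e^(−x_i), whose fixed point solves x = e^−x (this g is inferred from the iterate values, since the slide's formula is an image). A Java sketch with illustrative names:

```java
// Simple fixed-point iteration sketch: x_{i+1} = g(x_i) with g(x) = e^{-x}.
// The fixed point x = g(x) is exactly the root of e^{-x} - x = 0.
public class FixedPoint {
    static double g(double x) { return Math.exp(-x); }

    public static double solve(double x0, double tol, int maxIter) {
        double x = x0;
        for (int i = 0; i < maxIter; i++) {
            double xNew = g(x);                    // one fixed-point step
            if (Math.abs(xNew - x) < tol) return xNew;
            x = xNew;
        }
        return x;
    }

    public static void main(String[] args) {
        System.out.println(solve(0.0, 1e-10, 1000)); // ~0.56714329
    }
}
```

Unlike Newton-Raphson, which reached this root in about five steps, the fixed-point iterates oscillate around it and converge only linearly, since |g'(x)| ≈ 0.567 near the root.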
Conclusion



Summary

