TOPIC: Various Iteration Methods and Comparison Between Them

The document discusses and compares various iterative methods for finding the roots of nonlinear equations. It introduces direct and iterative methods, with iterative methods being best suited for computers due to their algorithmic nature. Several iterative methods are then described in detail, including bisection, Regula Falsi (false position), and Newton-Raphson. The bisection method bisects the interval at each step while Regula Falsi uses linear interpolation. Newton-Raphson draws a tangent line to approximate the next root. The document also discusses convergence rates and termination criteria for iterative root-finding algorithms.

Uploaded by

Rashmi Sharma
Copyright
© Attribution Non-Commercial (BY-NC)

TOPIC: Various Iteration Methods And Comparison Between Them.
INTRODUCTION:
There are two types of methods for finding solutions of non-linear equations:

 Direct methods
 A direct method gives the roots of the equation in a finite number of
steps, and it gives all the roots at the same time.
For example, the roots of the quadratic equation
ax^2 + bx + c = 0, where a ≠ 0, are:

x_{1,2} = (−b ± √(b^2 − 4ac)) / (2a)
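As a concrete illustration of a direct method, the quadratic formula above can be evaluated in one step. The following Python sketch (the function name `quadratic_roots` is ours, not from the text) returns both roots at the same time, as a direct method should:

```python
import math

def quadratic_roots(a, b, c):
    """Direct method: both roots of ax^2 + bx + c = 0 (a != 0), in one step."""
    if a == 0:
        raise ValueError("not a quadratic: a must be non-zero")
    disc = b * b - 4 * a * c               # discriminant b^2 - 4ac
    if disc >= 0:
        sq = math.sqrt(disc)               # real roots
    else:
        sq = complex(0, math.sqrt(-disc))  # complex conjugate roots
    return (-b + sq) / (2 * a), (-b - sq) / (2 * a)

# For example, x^2 - 5x + 6 = 0 has roots 3 and 2.
```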

 Iterative methods
 Iterative methods are cumbersome and time consuming for solving
non-linear equations manually. Despite this, they are best suited for
use on computers, for the following reasons:
 These methods can be concisely expressed as computational
algorithms.
 It is possible to formulate algorithms which can handle a whole
class of similar problems; for example, a single algorithm can be
developed to solve any polynomial equation of degree n.
 Round-off errors are negligible in iterative methods as
compared to direct methods.

Convergence

A method is said to be convergent if the sequence of approximations it
produces approaches a solution (close to the exact solution). Otherwise,
the method is said to be divergent.

Rate of Convergence

The rate of convergence to the root is different for different methods. This
means that some methods are slow to converge and take a long time to arrive
at the root, while other methods lead to the root faster. There is, in
general, a compromise between ease of calculation and time. If e_i is the
magnitude of the error in the ith iteration, ignoring sign, then the order
of the method is n if the ratio e_{i+1} / e_i^n is approximately constant.
It is also important to note that the chosen method will converge only if
e_{i+1} < e_i.

Iterative Methods
These are also known as trial and error methods. They are based on the idea
of successive approximations: they start with one or more initial
approximations to the root and obtain a sequence of approximations by
repeating a fixed sequence of steps until a solution of reasonable accuracy
is obtained. Iterative methods generally give one root at a time.

In the case of a linear system, iterative methods cover a wide range of
techniques which use successive approximations to obtain more and more
accurate solutions to the system. Basically, there are two types of
iterative methods:

Stationary Methods:
There are four main stationary iterative methods: the Jacobi method, the
Gauss-Seidel method, the Successive Overrelaxation (SOR) method and the
Symmetric Successive Overrelaxation (SSOR) method.

Some of these methods are explained in brief:

Jacobi Method: It is based on solving for each variable locally with respect
to the other variables; one iteration of the method corresponds to solving
for every variable once. Its convergence is slow, although it is easy to
understand and implement.

Gauss-Seidel Method: It is similar to the Jacobi method, except that it uses
updated values as soon as they are available. Whenever the Jacobi method
converges, the Gauss-Seidel method converges faster, though it is still
relatively slow.

Successive Overrelaxation (SOR): It can be derived from the Gauss-Seidel
method by introducing an extrapolation parameter. It can converge faster
than Gauss-Seidel by an order of magnitude.
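As a sketch of the Jacobi method (names and structure are ours, not from the text), note that every update within a sweep uses only values from the previous sweep; this is exactly what Gauss-Seidel changes:

```python
def jacobi(A, b, x0, iterations=50):
    """Jacobi method: solve each variable from the PREVIOUS iterate,
    so one iteration solves for every variable once."""
    n = len(b)
    x = list(x0)
    for _ in range(iterations):
        # the comprehension builds the new iterate entirely from the old one
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x
```

For a diagonally dominant system, such as the one solved in the Gauss-Seidel example later in this document, the sweep converges, though more slowly than Gauss-Seidel.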

Non-Stationary Methods:

The rate of convergence of an iterative method depends on the spectrum of
the coefficient matrix. Therefore, non-stationary iterative methods often
involve a second matrix that transforms the coefficient matrix into one with
a more favourable spectrum. This transformation matrix is called a
preconditioner.

To Terminate An Iterative Procedure:

An iterative procedure is continued until the required degree of accuracy in
the solution is achieved, so we need a way to measure this degree of
accuracy. Assume that the prescribed tolerance in the root is epsilon (E).

Termination Criterion 1
Starting with x_i as the current approximation and x_{i+1} as the next
approximation, the iterative procedure terminates when the inequality

|f(x_{i+1})| ≤ E

is satisfied. The approximation x_{i+1} is taken as the approximate
solution.

Termination Criterion 2

Terminate the iterative procedure when two successive approximations differ
by an amount less than or equal to the prescribed tolerance. If x_i and
x_{i+1} are two successive approximations, the iterative procedure
terminates when the inequality

|x_{i+1} − x_i| ≤ E

is satisfied, i.e. the absolute error is less than or equal to the
prescribed tolerance. The approximation x_{i+1} is taken as the approximate
solution.

Termination Criterion 3

Terminate the iterative procedure when, for two successive approximations,
the absolute value of the relative error satisfies

|(x_{i+1} − x_i) / x_{i+1}| ≤ E,  for x_{i+1} ≠ 0,

i.e. when the absolute value of the relative error becomes less than or
equal to the prescribed tolerance. Here also the approximation x_{i+1} is
taken as the approximate solution.

Unfortunately, some problems may arise while using these termination
criteria. There are examples in which |f(x_{i+1})| becomes very small while
x_{i+1} remains far from a reasonable solution of the equation f(x) = 0. It
is also possible for the term |x_{i+1} − x_i| to become very small without
giving a reasonable solution of the equation. Of these criteria, TC3 is the
most reliable one to use.
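The three criteria can be sketched as a single helper (a hypothetical `should_stop`, ours; it assumes f, the two latest approximations, and the tolerance E are at hand):

```python
def should_stop(f, x_prev, x_next, tol):
    """Evaluate the three termination criteria for tolerance tol (E).
    Returns (TC1, TC2, TC3) as booleans."""
    tc1 = abs(f(x_next)) <= tol                                    # TC1: residual |f(x_{i+1})|
    tc2 = abs(x_next - x_prev) <= tol                              # TC2: absolute error
    tc3 = x_next != 0 and abs((x_next - x_prev) / x_next) <= tol   # TC3: relative error
    return tc1, tc2, tc3
```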

Bisection Method:

The Bisection Method is a numerical method for estimating a root of a
function f(x). It is one of the simplest and most reliable methods, but it
is not the fastest. With each iteration, the interval is bisected and the
value of the function at the midpoint is calculated. The sign of this value
is used to determine which half of the interval does not contain a root.
That half is discarded to give a new, smaller interval containing the root.
This process can be continued until the interval is sufficiently small. At
any time, the current estimate of the root is taken as the midpoint of the
interval. The bisection method has linear convergence.

When an interval contains more than one root, the bisection method finds one
of them. When an interval contains a singularity, the bisection method
converges to that singularity.

This method is based on the repeated application of the intermediate value
property. Let the function f(x) be continuous between a and b, with f(a)
negative and f(b) positive. Then the first approximation to the root is

x1 = (a + b) / 2

 If f(x1) = 0, then x1 is a root of f(x).
 If f(x1) is negative, the root lies in the interval (x1, b).
 If f(x1) is positive, the root lies in the interval (a, x1).

The new interval is designated (a1, b1), where the length of this interval
is |b − a| / 2.
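The steps above can be sketched in Python as follows (a minimal implementation, ours; it brackets the root with the sign test f(a)·f(mid) rather than assuming which endpoint is positive):

```python
def bisection(f, a, b, tol=1e-6, max_iter=100):
    """Bisection: repeatedly halve [a, b], keeping the half whose
    endpoints still have opposite signs."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        mid = (a + b) / 2
        if f(mid) == 0 or (b - a) / 2 < tol:
            return mid             # interval is sufficiently small
        if f(a) * f(mid) < 0:
            b = mid                # root lies in (a, mid)
        else:
            a = mid                # root lies in (mid, b)
    return (a + b) / 2

# e.g. bisection(lambda x: x * x - 2, 1.0, 2.0) approximates sqrt(2)
```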
METHOD OF REGULA FALSI
In the bisection method, the next estimate of the root is taken at the
midpoint; in the method of Regula Falsi, the points [x1, f(x1)] and
[x2, f(x2)] are joined by a straight line, and the point of intersection of
this straight line with the x-axis gives the first approximate value of the
root.

The Regula Falsi method (false position) is a root-finding method based on
linear interpolation. Its convergence is linear, but it is usually faster
than bisection. On each iteration a line is drawn between the endpoints
(a, f(a)) and (b, f(b)), and the point where this line crosses the x-axis is
taken in place of the midpoint. The value of the function at this point is
calculated, and its sign is used to determine which side of the interval
does not contain a root. That side is discarded to give a new, smaller
interval containing the root. The method can be continued until the interval
is sufficiently small. The best estimate of the root is taken from the
linear interpolation of the interval on the current iteration.

This is the oldest method of finding the real root of an equation f(x) = 0,
and it closely resembles the bisection method.
Difference between the False Position method and the Bisection method:
The false position method differs from the bisection method only in the
choice it makes for subdividing the interval at each iteration. False
position converges faster to the root because it applies appropriate
weighting to the initial end points x1 and x2 using information about the
function; that is, it uses the values of the function itself to arrive at
x3.
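A minimal false-position sketch in Python (ours): the bookkeeping is identical to bisection, except that the chord's x-intercept replaces the midpoint:

```python
def regula_falsi(f, a, b, tol=1e-8, max_iter=100):
    """False position: use the x-intercept of the line through
    (a, f(a)) and (b, f(b)) instead of the midpoint."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    x = a
    for _ in range(max_iter):
        x = (a * fb - b * fa) / (fb - fa)   # chord crosses the x-axis here
        fx = f(x)
        if abs(fx) <= tol:                  # residual-style termination test
            return x
        if fa * fx < 0:
            b, fb = x, fx                   # root in (a, x)
        else:
            a, fa = x, fx                   # root in (x, b)
    return x
```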
NEWTON RAPHSON METHOD:
The Newton-Raphson method uses an iterative process to approach one root of
a function. In this method, the value of the new guess is approximated by
drawing a tangent to the curve at the point whose co-ordinates are (x1, y1).
This tangent intersects the x-axis at a particular point, which gives the
new value x2.

Uses of the Newton-Raphson method:
 The Newton-Raphson method is useful in cases of large values of f'(x),
i.e. when the graph of f(x) is nearly vertical while crossing the
x-axis.
 The Newton-Raphson method is used to improve results obtained by other
methods. It is applicable to the solution of both algebraic and
transcendental equations.
 Newton's formula converges provided the initial approximation x0 is
chosen sufficiently close to the root.
 The Newton-Raphson method has a "Quadratic Convergence".
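The tangent step can be sketched as follows (a minimal version, ours; it guards against the f'(x) ≈ 0 failure mode noted in the comparison at the end of this document):

```python
def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson: follow the tangent at the current guess
    to its x-axis intercept."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) <= tol:
            return x
        d = df(x)
        if d == 0:
            raise ZeroDivisionError("f'(x) vanished; pick a better x0")
        x = x - fx / d        # tangent at (x, f(x)) meets the x-axis here
    return x

# e.g. newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, 1.5) -> sqrt(2)
```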

Method of Successive Approximation:

This method, also known as the direct substitution method, the method of
iteration, or fixed-point iteration, is applicable if the equation

f(x) = 0

can be expressed as

x = g(x)

If x1 is the initial approximation to the root, then the next approximation
to the root is given by

x2 = g(x1), and the next approximation x3 = g(x2);

in general,

x_{i+1} = g(x_i)

The iterative cycle terminates when the relative error in the new
approximation is within the prescribed tolerance.
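A minimal fixed-point sketch (ours), terminating on the relative-error criterion as the text prescribes. As an illustration, x^2 = 2 can be rearranged as x = g(x) = (x + 2/x)/2, for which |g'(√2)| = 0 < 1, so the iteration converges:

```python
def fixed_point(g, x0, tol=1e-10, max_iter=100):
    """Successive approximation: iterate x_{i+1} = g(x_i) until the
    relative error is within the prescribed tolerance."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if x_new != 0 and abs((x_new - x) / x_new) <= tol:
            return x_new
        x = x_new
    return x

# e.g. fixed_point(lambda x: (x + 2 / x) / 2, 1.0) approximates sqrt(2)
```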

Convergence Of Iterative Methods:

Convergence of an iterative method is judged by the order, or the rate, at
which the error between successive approximations to the root decreases. The
order of convergence of an iterative method is defined in terms of the
errors e_i and e_{i+1} in successive approximations. An iterative method is
said to be kth order convergent if k is the largest number such that

lim_{i→∞} |e_{i+1}| / |e_i|^k ≤ M

where M is a finite number.
In other words, the error in any step is proportional to the kth power of
the error in the preceding step. Roughly speaking, kth order convergence
means that in each iteration the number of significant digits in the
approximation increases k times.
For example, if the value of the root is good to n significant digits in the
ith iteration (|e_i| ≤ 10^(−n)), then

|e_{i+1}| ≤ M |e_i|^k ≤ M × 10^(−nk)
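This definition can be checked numerically: taking logarithms of e_{i+1} ≈ M·e_i^k for three successive errors eliminates M and gives k ≈ log(e_{i+1}/e_i) / log(e_i/e_{i−1}). A small sketch (ours):

```python
import math

def estimate_order(e0, e1, e2):
    """Empirical order of convergence from three successive error
    magnitudes, assuming e_{i+1} ~ M * e_i^k with M roughly constant."""
    return math.log(e2 / e1) / math.log(e1 / e0)
```

Feeding in the errors of a few Newton-Raphson steps for x^2 − 2 = 0 yields a value near 2, matching its quadratic order.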
CONVERGENCE OF METHOD OF SUCCESSIVE APPROXIMATIONS:

The general formula for successive approximation is

x_{i+1} = g(x_i)

Suppose y is the root of the equation f(x) = 0, and e_i and e_{i+1} are the
errors of the ith and (i+1)th iterations. Then the above formula can be
written as

y + e_{i+1} = g(y + e_i)

Now, expanding g(y + e_i) in a Taylor series around y, we get

y + e_{i+1} = g(y) + e_i g'(y) + (e_i^2 / 2) g''(y) + (e_i^3 / 6) g'''(y) + …

If e_i is very small, the terms involving e_i^2, e_i^3, … can be ignored to
obtain

e_{i+1} = e_i g'(y) + g(y) − y

Since y is the root of f(x) = 0, we have g(y) = y, and the above equation
reduces to

e_{i+1} = e_i g'(y)

Hence the successive approximation method has first order convergence, and
it converges only if |g'(y)| < 1.
Convergence of various methods:
The bisection method is first order (linearly) convergent.
The rate of convergence of the secant method is 1.618.
The rate of convergence of the Newton-Raphson method is 2.
The successive approximation method has first order convergence.
The false position method is linearly convergent.

EXAMPLE OF GAUSS SEIDEL METHOD:

Solve the following system of equations, accurate to four significant
digits:

10x1 + x2 + 2x3 = 44
2x1 + 10x2 + x3 = 51
x1 + 2x2 + 10x3 = 61

Solution:
As the system is diagonally dominant, convergence is assured. Since we want
the solution to four significant digits, the iterative process will be
terminated as soon as successive iterations produce no change in the first
four significant digits. Solving each equation for its diagonal variable:

x1 = (44 − x2 − 2x3) / 10
x2 = (51 − 2x1 − x3) / 10
x3 = (61 − x1 − 2x2) / 10

We start with the initial approximation

x1 = x2 = x3 = 0

Iteration 1
Substituting x2 = x3 = 0 in the first equation, we obtain
x1 = 4.4
Substituting x1 = 4.4 and x3 = 0 in the second equation, we obtain
x2 = 4.22
Substituting x1 = 4.4 and x2 = 4.22 in the third equation, we obtain
x3 = 4.816

Thus we obtain x1 = 4.4, x2 = 4.22, x3 = 4.816 as the new approximation at
the end of the first iteration.

ITERATION 2
Substituting x2 = 4.22 and x3 = 4.816 in the first equation, we obtain
x1 = (44 − 4.22 − 9.632) / 10 = 3.0148
Substituting x1 = 3.0148 and x3 = 4.816 in the second equation, we obtain
x2 = (51 − 6.0296 − 4.816) / 10 = 4.0154
Substituting x1 = 3.0148 and x2 = 4.0154 in the third equation, we obtain
x3 = (61 − 3.0148 − 8.0308) / 10 = 4.9954

Thus we obtain x1 = 3.0148, x2 = 4.0154, x3 = 4.9954 as the new
approximation at the end of the second iteration.

ITERATION 3
Substituting x2 = 4.0154 and x3 = 4.9954 in the first equation, we obtain
x1 = (44 − 4.0154 − 9.9908) / 10 = 2.9994
Substituting x1 = 2.9994 and x3 = 4.9954 in the second equation, we obtain
x2 = (51 − 5.9988 − 4.9954) / 10 = 4.0006
Substituting x1 = 2.9994 and x2 = 4.0006 in the third equation, we obtain
x3 = (61 − 2.9994 − 8.0012) / 10 = 4.9999

Thus we obtain x1 = 2.9994, x2 = 4.0006, x3 = 4.9999 as the new
approximation at the end of the third iteration.

ITERATION 4
Substituting x2 = 4.0006 and x3 = 4.9999 in the first equation, we obtain
x1 = (44 − 4.0006 − 9.9998) / 10 = 3.0000
Substituting x1 = 3.0000 and x3 = 4.9999 in the second equation, we obtain
x2 = (51 − 6.0000 − 4.9999) / 10 = 4.0000
Substituting x1 = 3.0000 and x2 = 4.0000 in the third equation, we obtain
x3 = (61 − 3.0000 − 8.0000) / 10 = 5.0000

Thus we obtain x1 = 3.0000, x2 = 4.0000, x3 = 5.0000 as the new
approximation at the end of the fourth iteration.

ITERATION 5
Substituting x2 = 4.0000 and x3 = 5.0000 in the first equation, we obtain
x1 = (44 − 4.0000 − 10.0000) / 10 = 3.0000
Substituting x1 = 3.0000 and x3 = 5.0000 in the second equation, we obtain
x2 = (51 − 6.0000 − 5.0000) / 10 = 4.0000
Substituting x1 = 3.0000 and x2 = 4.0000 in the third equation, we obtain
x3 = (61 − 3.0000 − 8.0000) / 10 = 5.0000

Thus we obtain x1 = 3.0000, x2 = 4.0000, x3 = 5.0000 as the new
approximation at the end of the fifth iteration.

By comparing the approximations of the fourth and fifth iterations, we find
that there is no variation in the first four significant digits, so we take
the result of the fifth iteration as the required solution.
Hence the solution correct to four significant digits is:
x1 = 3.0000
x2 = 4.0000
x3 = 5.0000
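The worked example can be reproduced with a short Gauss-Seidel sketch in Python (ours); note that each updated component is used immediately within the same sweep, unlike in the Jacobi method:

```python
def gauss_seidel(A, b, x0, tol=1e-5, max_iter=100):
    """Gauss-Seidel: sweep through the equations, using each updated
    value as soon as it is available; stop when successive sweeps
    agree to within tol."""
    n = len(b)
    x = list(x0)
    for _ in range(max_iter):
        x_old = list(x)
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
        if max(abs(x[i] - x_old[i]) for i in range(n)) <= tol:
            return x
    return x

A = [[10, 1, 2], [2, 10, 1], [1, 2, 10]]
b = [44, 51, 61]
x = gauss_seidel(A, b, [0, 0, 0])   # converges toward x1 = 3, x2 = 4, x3 = 5
```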
COMPARISON OF ITERATIVE METHODS:
 Convergence of the Bisection method is slow but steady. It is the
simplest method and never fails.
 The method of False Position is slow and first order convergent, but
its convergence is guaranteed. It is superior to the Bisection method.
 The Secant method is not guaranteed to converge, but as its order of
convergence is 1.62, it converges faster than the method of False
Position. It is considered economical as it gives rapid convergence at
low cost. The Newton-Raphson method has the fastest rate of
convergence. It is sensitive to the starting value and may diverge if
f'(x) is near zero during the iterative cycle.
 Newton's method can also be used for locating complex roots.
 LU Decomposition is superior to the Gauss Elimination method and is
often used for the solution of linear systems and for finding the
inverse of a matrix.
 The Gauss Elimination method requires more bookkeeping and is quite
time consuming; it is also more expensive to program. Crout's
triangularization method is used for the solution of linear systems and
as software for computers.
 Round-off errors get propagated in elimination methods, whereas in
iterative methods only the round-off errors committed in the final
iteration have any effect.

THANK YOU!!!
