TOPIC: Various Iteration Methods and Comparison Between Them
INTRODUCTION:
There are two types of methods for finding the roots of non-linear equations:
Direct methods
A direct method gives the roots of the equation in a finite number of steps, and it gives all the roots at the same time.
For example, the roots of the quadratic equation ax^2 + bx + c = 0, where a ≠ 0, are:
x_{1,2} = (−b ± √(b^2 − 4ac)) / (2a)
Iterative methods
Iterative methods are cumbersome and time-consuming for solving non-linear equations manually. Despite this, they are best suited for use on computers, for the following reasons:
These methods can be concisely expressed as computational algorithms.
It is possible to formulate algorithms that can handle a class of similar problems; for example, a single algorithm can be developed to solve any polynomial equation of degree n.
Round-off errors are negligible in iterative methods as compared to direct methods.
Convergence
A method is said to be convergent if it produces a sequence of approximations that approaches the exact solution. Otherwise, the method is said to be divergent.
Rate of Convergence
The convergence rate to the root differs from method to method: some methods converge slowly and take a long time to arrive at the root, while others lead to the root faster. In general there is a compromise between ease of calculation and time. If e_i is the magnitude of the error in the i-th iteration, ignoring sign, then the order of convergence is n if e_{i+1} / e_i^n is approximately constant.
It is also important to note that the chosen method will converge only if e_{i+1} < e_i.
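The order of convergence can be checked numerically: if e_{i+1} / e_i^n is roughly constant, then n ≈ log(e_{i+1}/e_i) / log(e_i/e_{i-1}). A minimal Python sketch (the example function f(x) = x^2 − 2 and the use of Newton's method are illustrative assumptions, chosen because the root √2 is known and the errors can be measured directly):

```python
import math

def estimate_order(errors):
    """Estimate the order of convergence n from the last three error
    magnitudes, using n ~ log(e2/e1) / log(e1/e0)."""
    e0, e1, e2 = errors[-3:]
    return math.log(e2 / e1) / math.log(e1 / e0)

# Newton's method on f(x) = x^2 - 2, whose exact root is sqrt(2).
root = math.sqrt(2.0)
x = 1.0
errors = []
for _ in range(4):
    x = x - (x * x - 2.0) / (2.0 * x)   # one Newton step
    errors.append(abs(x - root))        # measured error e_i

print(estimate_order(errors))  # close to 2, i.e. quadratic convergence
```

The logarithmic formula follows directly from assuming e_{i+1} ≈ C · e_i^n for some constant C.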
Iterative Methods
These are also known as trial-and-error methods. They are based on the idea of successive approximations: they start with one or more initial approximations to the root and obtain a sequence of approximations by repeating a fixed sequence of steps until a solution of reasonable accuracy is obtained. Iterative methods generally give one root at a time.
Stationary Method:
There are four main stationary iterative methods: the Jacobi method, the Gauss-Seidel method, the Successive Over-Relaxation (SOR) method and the Symmetric Successive Over-Relaxation (SSOR) method.
Gauss-Seidel: It is similar to the Jacobi method, but it uses updated values as soon as they are available. Whenever the Jacobi method converges, the Gauss-Seidel method converges faster, though it is still relatively slow.
Successive Over-Relaxation (SOR): It can be derived from the Gauss-Seidel method by introducing an extrapolation parameter. In terms of order of magnitude, it converges faster than Gauss-Seidel.
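The difference between the two update rules can be sketched in Python. The 3×3 system below is an illustrative assumption (any diagonally dominant matrix would do); it is not taken from the text:

```python
def jacobi_step(x, A, b):
    # Jacobi: every component uses only values from the previous iteration.
    n = len(b)
    return [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)]

def gauss_seidel_step(x, A, b):
    # Gauss-Seidel: each component uses the freshest values available.
    n = len(b)
    x = list(x)
    for i in range(n):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
    return x

def solve(step, A, b, tol=1e-10, max_iter=500):
    x = [0.0] * len(b)
    for k in range(1, max_iter + 1):
        x_new = step(x, A, b)
        if max(abs(u - v) for u, v in zip(x_new, x)) <= tol:
            return x_new, k
        x = x_new
    return x, max_iter

# A diagonally dominant system, chosen for illustration; its solution is (1, 2, 3).
A = [[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]]
b = [2.0, 4.0, 10.0]
_, k_jacobi = solve(jacobi_step, A, b)
_, k_gs = solve(gauss_seidel_step, A, b)
print(k_jacobi, k_gs)  # Gauss-Seidel needs fewer iterations
```

For this matrix the spectral radius of the Gauss-Seidel iteration is the square of the Jacobi one, which is why it takes roughly half as many iterations.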
Non-Stationary Method:
Termination Criterion 2
The iteration stops when
|x_{i+1} − x_i| ≤ E
is satisfied, i.e. when the absolute error is less than or equal to the prescribed tolerance. The approximation x_{i+1} is then taken as the approximate solution.
Termination Criterion 3
The iteration stops when
|(x_{i+1} − x_i) / x_{i+1}| ≤ E, for x_{i+1} ≠ 0,
i.e. when the absolute value of the relative error becomes less than or equal to the prescribed tolerance. Here also the approximation x_{i+1} is taken as the approximate solution.
Unfortunately, some problems may arise while using these termination criteria. There are examples in which |f(x_{i+1})| becomes very small although x_{i+1} remains far from a reasonable solution of the equation f(x) = 0. It is also possible for the term |x_{i+1} − x_i| to become very small without giving a reasonable solution of the equation. Of these criteria, TC3 is a good one to use, as it is more reliable.
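As a sketch, the two usable tests can be combined in one loop: use the relative test (TC3) when the iterate is away from zero, and fall back to the absolute test (TC2) otherwise. The fixed-point function g used in the example is an illustrative assumption:

```python
def iterate(g, x0, tol=1e-8, max_iter=100):
    """Generic iteration loop that stops on the relative-error test
    |x_{i+1} - x_i| / |x_{i+1}| <= tol (Termination Criterion 3),
    falling back to the absolute test (Criterion 2) when x_{i+1} is zero."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        abs_err = abs(x_new - x)                 # Criterion 2 quantity
        if x_new != 0.0:
            if abs_err / abs(x_new) <= tol:      # Criterion 3
                return x_new
        elif abs_err <= tol:
            return x_new
        x = x_new
    raise RuntimeError("did not converge")

# g(x) = (x + 2/x) / 2 is a fixed-point form whose fixed point is sqrt(2).
print(iterate(lambda x: (x + 2.0 / x) / 2.0, 1.0))  # ~ 1.41421356
```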
Bisection Method:
When an interval contains more than one root, the bisection method can find
one of them. When an interval contains a singularity, the bisection method
converges to that singularity.
False Position Method:
This is the oldest method of finding the real root of an equation f(x) = 0, and it closely resembles the bisection method.
Difference between False position method and bisection method:
The false position method differs from the bisection method only in the choice it makes for subdividing the interval at each iteration. False position converges faster to the root because it applies an appropriate weighting to the initial end points x1 and x2 using information about the function, i.e. the data of the problem: it uses the function values at the end points to arrive at x3.
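A minimal sketch of both bracketing methods, assuming an example equation x^3 − x − 2 = 0 (not from the text). Note how false position replaces the midpoint with the x-intercept of the chord, weighted by the function values at the end points:

```python
def bisection(f, a, b, tol=1e-10, max_iter=200):
    """Bisection: repeatedly halve the bracketing interval [a, b]."""
    fa = f(a)
    for k in range(1, max_iter + 1):
        m = (a + b) / 2.0
        fm = f(m)
        if fm == 0.0 or (b - a) / 2.0 <= tol:
            return m, k
        if fa * fm < 0.0:          # root lies in [a, m]
            b = m
        else:                       # root lies in [m, b]
            a, fa = m, fm
    return m, max_iter

def false_position(f, a, b, tol=1e-10, max_iter=200):
    """False position: the new point is where the chord through
    (a, f(a)) and (b, f(b)) crosses the x-axis."""
    fa, fb = f(a), f(b)
    x_old = a
    for k in range(1, max_iter + 1):
        x = (a * fb - b * fa) / (fb - fa)   # weighted by function values
        fx = f(x)
        if fx == 0.0 or abs(x - x_old) <= tol:
            return x, k
        if fa * fx < 0.0:
            b, fb = x, fx
        else:
            a, fa = x, fx
        x_old = x
    return x, max_iter

f = lambda x: x**3 - x - 2          # single real root near x = 1.5214
print(bisection(f, 1.0, 2.0))
print(false_position(f, 1.0, 2.0))
```

Both return the root and the number of iterations used, so the two methods can be compared directly on the same bracket.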
NEWTON RAPHSON METHOD:
The Newton-Raphson method uses an iterative process to approach one root of a function. In this method, the new approximation to the root is obtained by drawing a tangent at the point whose coordinates are (x1, y1). This tangent intersects the x-axis at a point which gives the new value x2.
Uses of the Newton-Raphson method:
The Newton-Raphson method is useful for large values of f'(x), i.e. when the graph of f(x) is nearly vertical while crossing the x-axis.
The Newton-Raphson method is used to improve the result obtained by other methods. It is applicable to the solution of both algebraic and transcendental equations.
Newton's formula converges provided the initial approximation x0 is chosen sufficiently close to the root.
The Newton-Raphson method has "quadratic convergence".
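The tangent-line update x_{i+1} = x_i − f(x_i)/f'(x_i) can be sketched as follows; the transcendental example cos(x) = x is an illustrative assumption:

```python
import math

def newton_raphson(f, df, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson: follow the tangent at (x_i, f(x_i)) down to the
    x-axis to get x_{i+1} = x_i - f(x_i) / f'(x_i)."""
    x = x0
    for _ in range(max_iter):
        dfx = df(x)
        if dfx == 0.0:
            raise ZeroDivisionError("f'(x) vanished; the method may diverge")
        x_new = x - f(x) / dfx
        if abs(x_new - x) <= tol * max(1.0, abs(x_new)):
            return x_new
        x = x_new
    raise RuntimeError("did not converge; try a closer starting value")

# Works for transcendental equations too, e.g. cos(x) = x:
root = newton_raphson(lambda x: math.cos(x) - x,
                      lambda x: -math.sin(x) - 1.0,
                      x0=1.0)
print(root)  # ~ 0.7390851332
```

The explicit check on f'(x) reflects the sensitivity noted above: if the derivative is near zero during the cycle, the step becomes huge and the iteration may diverge.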
SUCCESSIVE APPROXIMATION METHOD:
The equation f(x) = 0 can be expressed as
x = g(x)
If x1 is the initial approximation to the root, then the next approximation to the root is given by
x2 = g(x1), and the next approximation is x3 = g(x2);
in general,
x_{i+1} = g(x_i)
The iterative cycle will terminate when the relative error in the new
approximation is within the prescribed tolerance.
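A sketch of the successive approximation cycle; the rearrangement g(x) = √(2x + 3) of f(x) = x^2 − 2x − 3 = 0 is an illustrative assumption, chosen so that |g'(root)| = 1/3 < 1 near the root x = 3:

```python
import math

def successive_approximation(g, x0, tol=1e-10, max_iter=200):
    """Successive approximation: iterate x_{i+1} = g(x_i) until the
    relative error is within the prescribed tolerance."""
    x = x0
    for i in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) <= tol * max(1.0, abs(x_new)):
            return x_new, i + 1
        x = x_new
    raise RuntimeError("did not converge; check that |g'(root)| < 1")

# x^2 - 2x - 3 = 0 rearranged as x = sqrt(2x + 3); converges to the root x = 3.
root, iters = successive_approximation(lambda x: math.sqrt(2.0 * x + 3.0), 4.0)
print(root, iters)
```

The choice of rearrangement matters: a different g with |g'(root)| > 1 would make the same iteration diverge, which is exactly the condition derived below.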
x_{i+1} = g(x_i)
Suppose y is the root of the equation f(x) = 0, and e_i and e_{i+1} are the errors of the i-th and (i+1)-th iterations. Then the above formula can be written as:
y + e_{i+1} = g(y + e_i)
Expanding the right-hand side in a Taylor series about y:
y + e_{i+1} = g(y) + e_i g'(y) + (e_i^2 / 2) g''(y) + (e_i^3 / 6) g'''(y) + …
If e_i is very small, then the terms involving e_i^2, e_i^3, … can be ignored to obtain
e_{i+1} = e_i g'(y) + g(y) − y
Since y is the root of f(x) = 0, therefore g(y) = y, and the above equation reduces to
e_{i+1} = e_i g'(y)
Hence the successive approximation method has first-order convergence, and converges only if |g'(y)| < 1.
Convergence of various methods:
The bisection method is linearly (first-order) convergent.
The rate of convergence of the secant method is 1.618.
The rate of convergence of the Newton-Raphson method is 2.
The successive approximation method has first order convergence.
The false position method is linearly convergent.
Iteration 1
Substituting x2 = x3 = 0 in the first equation, we obtain
x1 = 4.4
Thus, we obtain
x1 = 4.4, x2 = 4.22, x3 = 4.816
Iteration 2
Thus, we obtain
x1 = 4.0154, x2 = 3.0148, x3 = 5.0955
Iteration 3
Now substituting x2 = 3.0148 and x3 = 5.0955 in the first equation, we obtain
x1 = 3.0794
Thus, we obtain
x1 = 3.0794, x2 = 3.9746, x3 = 4.9971
Iteration 4
Thus, we obtain
x1 = 3.0031, x2 = 3.9997, x3 = 4.8001
Iteration 5
Thus, we obtain
x1 = 3.0400, x2 = 4.0120, x3 = 4.8360
Iteration 6
Thus, we obtain
x1 = 3.0316, x2 = 4.0101, x3 = 4.9948
Iteration 7
Thus, we obtain
x1 = 3.0000, x2 = 4.0001, x3 = 5.0000
Iteration 8
Thus, we obtain
x1 = 3.0000, x2 = 4.0000, x3 = 5.0000
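Iterations of this shape can be sketched in code. Since the original system of equations is not reproduced in this text, the 3×3 diagonally dominant system below is a hypothetical one constructed to have the same solution (3, 4, 5):

```python
def gauss_seidel(A, b, iters):
    """Gauss-Seidel sweeps: each x[i] is updated in place, so later
    components in the same sweep already use the fresh values."""
    n = len(b)
    x = [0.0] * n
    history = []
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
        history.append(tuple(round(v, 4) for v in x))
    return history

# Hypothetical diagonally dominant system with solution (3, 4, 5);
# the actual system used in the worked example above is not shown there.
A = [[10.0, 1.0, 1.0], [1.0, 10.0, 1.0], [1.0, 1.0, 10.0]]
b = [39.0, 48.0, 57.0]
for step in gauss_seidel(A, b, 6):
    print(step)   # converges toward (3.0, 4.0, 5.0)
```

Starting from (0, 0, 0), the first sweep already gives (3.9, 4.41, 4.869), and each subsequent sweep shrinks the error by a large constant factor, mirroring the pattern of the tabulated iterations.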
COMPARISON OF ITERATIVE METHODS:
Convergence of the bisection method is slow but steady. It is the simplest method and never fails.
The method of false position is slow and first-order convergent, but its convergence is guaranteed. It is superior to the bisection method.
The secant method is not guaranteed to converge, but since its order of convergence is 1.62, it converges faster than the method of false position. This method is considered economical, as it gives rapid convergence at low cost.
The Newton-Raphson method has the fastest rate of convergence. It is sensitive to the starting value, and it may diverge if f'(x) is near zero during the iterative cycle.
Newton's method can also be used for locating complex roots.
LU decomposition is superior to the Gauss elimination method and is often used for the solution of linear systems and for finding the inverse of a matrix.
The Gauss elimination method requires more recording and is quite time-consuming in its operations, and it is more expensive to program. Crout's triangularization method is used for the solution of linear systems and as software for computers.
Rounding-off errors get propagated in the elimination method, whereas in the iteration method only the rounding-off errors committed in the final iteration have any effect.