
See discussions, stats, and author profiles for this publication at: https://www.researchgate.net/publication/337647386

ITERATIVE METHODS WITH BETTER APPLICABILITY TO SOLVE NONLINEAR EQUATIONS

Research · November 2019
DOI: 10.13140/RG.2.2.18198.29765
CITATIONS: 0 · READS: 91

4 authors, including: Saksham Garg, Siddarth Goyal and Pranshu Bhagat (Indian Institute of Technology Delhi).

All content following this page was uploaded by Saksham Garg on 30 November 2019.


ITERATIVE METHODS WITH BETTER
APPLICABILITY TO SOLVE NONLINEAR EQUATIONS
Pranshu Bhagat Saksham Garg Siddarth Goyal Parth Jain
2018CH10235 2018CH10927 2018CH10726 2018CH10230
November 2019

Abstract
The Newton-Raphson method is a widely used iterative method for solving nonlinear equations.
In this paper, we discuss four further numerical methods for solving nonlinear equations:
Steffensen's method (and a modified Steffensen's method), which is a derivative-free variant of
the Newton-Raphson method, the Soleymani-Khratti-Karimi (SKK) method, and the
Sharma-Kumar-Sharma (SKS) method. The SKK and SKS methods have fourth-order convergence.
We also compare the results obtained from these four methods on several examples.
Numerical tests show that these methods are comparable to the well-known existing methods
and in some cases give better results.

1 Introduction
Nonlinear equations arise in a wide variety of problems. A real-gas equation of state is an
example of a nonlinear equation in V:

    (P + a/V^2)(V - b) - RT = 0    (1)
Here, the symbols have their usual meanings. If we know the values of P, T, a and b, and we want
to find the value of V, we can do so by applying numerical methods such as the Newton-Raphson
method, the fixed-point iteration method and other popular methods. Nonlinear equations are also
used extensively in mass and energy balances, and a great deal of nonlinear equation solving is
required in computer science and mathematics. Because of such immense applications of nonlinear
equations, we need better numerical methods to solve them. The Newton-Raphson method is a widely
used and preferred method for this purpose.

Undoubtedly, the Newton-Raphson method is a good method for finding the roots of a nonlinear
equation, but here we study various improved versions of it. One improvement is to increase the
order of convergence of the method. Another is to calculate the derivative of the function
internally, without explicitly providing the derivative function to the computer. This not only
makes the algorithm more versatile, but also opens up new applications.

2 Methods’ description
The derivation of these methods is not discussed here, but it can easily be found in the
literature. We discuss the results attained and draw comparisons (error versus number of
iterations, and accuracy) between all these methods. The following section gives a brief
description of each method.

2.1 Newton Raphson Method


The Newton-Raphson Method (NRM) iteratively finds an approximation to a root of the function
using the following formula:

    x_{n+1} = x_n - f(x_n)/f'(x_n)    (2)

This iteration is repeated until the absolute difference between the true value (x_true) and x_n
is less than the tolerance value (eps_tol), i.e.

    |x_true - x_n| < eps_tol    (3)

This method has quadratic convergence and is quite fast. But it has some drawbacks: to find the
root computationally, we have to provide the derivative function to the algorithm explicitly, and
higher-order methods are available.

2.2 Steffensen’s Method


Steffensen's Method (SM) is a modification of the Newton-Raphson method in which the derivative
of the function is approximated numerically. The basic formula remains the same:

    x_{n+1} = x_n - f(x_n)/g(x_n)

where g(x_n), which replaces f'(x_n) in equation (2), is calculated using the formula:

    g(x_n) = f(x_n + f(x_n))/f(x_n) - 1    (4)

Now, if we substitute h = f(x_n) in equation (4), we get:

    g(x_n) = (f(x_n + h) - f(x_n))/h    (5)

which is the forward-difference derivative formula. In this paper we also consider a g(x_n) based
on the central-difference derivative scheme:

    g(x_n) = (f(x_n + h) - f(x_n - h))/(2h)    (6)

We call this the Modified Version of SM and use it to generate our results.

Another, smarter way to solve the equation is to use Aitken's delta-squared process. Here, we
consider the error p - p_n, where p is the final answer and p_n is the nth term of the sequence
{p_n}. With this approximation, we can derive the following formula:

    p_{n+3} = p_n - (p_{n+1} - p_n)^2 / (p_{n+2} - 2 p_{n+1} + p_n)    (7)

This modification to SM increases the rate of convergence while assuming only that the sequence
{p_n} converges linearly. Although the resulting method has quadratic convergence, similar to
NRM, it increases the applicability of the approach, because the root can be found
computationally without providing the derivative function to the algorithm. The above process is
repeated iteratively until equation (3) is satisfied.

2.3 Soleymani Khratti Karimi Method
The Soleymani-Khratti-Karimi Method (SKKM) is a two-step method with a predictor-corrector
approach to problem solving:

    y_n = x_n - (2/3) f(x_n)/f'(x_n)    (8)

    x_{n+1} = x_n - [2 f(x_n) / (f'(x_n) + f'(y_n))] [1 + (f(x_n)/f'(x_n))^4]
              [2 - (7/4) f'(y_n)/f'(x_n) + (3/4) (f'(y_n)/f'(x_n))^2]    (9)

This is a fourth-order method. We calculate y_n as a predictor value in the first step. In the
second step, we calculate x_{n+1} using f'(y_n) and f'(x_n). Each iterate obtained from SKKM is
typically much more accurate than the corresponding NRM iterate, thanks to the higher order of
convergence. The above process is repeated iteratively until equation (3) is satisfied.

2.4 Sharma Kumar Sharma Method


The Sharma-Kumar-Sharma Method (SKSM) is also a two-step method with a predictor-corrector
approach to problem solving:

    y_n = x_n - (2/3) f(x_n)/f'(x_n)    (10)

    x_{n+1} = x_n - [-1/2 + (9/8) f'(x_n)/f'(y_n) + (3/8) f'(y_n)/f'(x_n)] f(x_n)/f'(x_n)    (11)

This is also a fourth-order method. We calculate y_n as a predictor value in the first step. In
the second step, we calculate x_{n+1} using f'(y_n) and f'(x_n). Each iterate obtained from SKSM
is typically much more accurate than the corresponding NRM iterate. The above process is repeated
iteratively until equation (3) is satisfied.

3 Problem formulation
We solved five different nonlinear equations with the help of the methods stated above and
compared their errors. Their Computational Order of Convergence (COC) can be computed as:

    COC ≈ ln|(x_{n+1} - x_n)/(x_n - x_{n-1})| / ln|(x_n - x_{n-1})/(x_{n-1} - x_{n-2})|    (12)
The five functions with their initial guesses are given as follows:

    f1(x) = x^3 + 5x^2 - 3,        x0 = 1.5    (13a)
    f2(x) = -x^2 + 2 sin^2 x + 1,  x0 = 1.2    (13b)
    f3(x) = -e^x + x^2 - 3x + 2,   x0 = 3      (13c)
    f4(x) = cos 2x - x,            x0 = 1.4    (13d)
    f5(x) = (x - 2)^3 - 1,         x0 = 2      (13e)

4 Numerical Analysis
This section presents the results obtained on implementing the above algorithms. Implementing
the algorithms in C++ for equation (13a), we get the following results:

Steffensen's method
Root: 0.723957
Error:
1: 101.188
2: 94.9161
3: 88.3559
4: 81.4834
5: 74.274
6: 66.7047
7: 58.7582
8: 50.4312
9: 41.75
10: 32.8013
11: 23.7903
12: 15.1443
13: 7.65533
14: 2.4786
15: 0.317669
16: 0.00574676
17: 8.23318e-06

SKS Method
Root: 0.723963
Error:
1: 6.95273
2: 0.000864484

SKK Method
Root: 0.723956
Error:
1: 0.661931
2: 0

Modified Steffensen's Method
Root: 0.723958
Error:
1: 33.378
2: 4.96227
3: 0.139338
4: 0.00015643

Newton Raphson Method
Root: 0.723957
Error:
1: 33.3668
2: 4.95509
3: 0.138005
4: 0.000115265

Similar results were also obtained on solving the other test equations (13b)-(13e). The error
versus number of iterations curves are obtained as follows:

Figure 1: Steffensen’s Method

Figure 2: Newton Raphson Method

Figure 3: Modified Steffensen’s Method

These graphs were generated in Microsoft Excel. Because the SKS and SKK methods converged in
such a small number of steps, it was not possible to draw their line graphs. Clearly, however,
the SKS and SKK methods give very accurate results in very few steps. The Modified Steffensen's
method is far better than the general form of Steffensen's method, as it reduces the number of
steps significantly while also giving better results. The rate of convergence is in the order:

    SM < Modified SM < NRM < SKSM < SKKM    (14)

Although SM and Modified SM converge more slowly than NRM, they allow us to use the following
functions to calculate the derivative:

float G(float x, float h)
{
    // Central-difference approximation of f'(x)
    return (func(x + h) - func(x - h)) / (2 * h);
}

And:

float G(float x, float h)
{
    // Forward-difference approximation of f'(x)
    return (func(x + h) - func(x)) / h;
}

Instead of:

float deriv(float x)
{
    // Explicit derivative of f1(x) = x^3 + 5x^2 - 3
    return (3 * x * x) + (10 * x);
}

There is clearly a difference in usage: the numerical versions need no hand-coded derivative,
which, from a broader perspective, increases the methods' range of applications.

5 Results and Discussion


In this paper, we have discussed several methods of solving nonlinear equations iteratively that
improve on the Newton-Raphson method. The results are displayed in the Numerical Analysis
section. Although only the analysis of equation (13a) is presented here, the same result holds
true for the other test equations as well. These methods are more accurate than the
Newton-Raphson method and can thus be used in wider applications. Although the SM and Modified
SM methods are slower than NRM, they are better in another respect: they do not require the
derivative function. The order of efficiency and accuracy of the above methods is:

    SM < Modified SM < NRM < SKSM < SKKM

6 Conclusion
A conclusion can be drawn regarding the correctness and accuracy of the selected research paper,
including the figures obtained. The given methods are very fast and converge rapidly (at most 4
iterations), making computer programs faster and hence leading to quicker results. This has
great benefits in physical, economic, biological and chemical fields, with applications such as
real-gas theory and pendulum theory. In the paper the focus was on single-variable nonlinear
equations, but this can be extended to multi-variable systems as well, with further applications
such as image processing, population theory and predator-prey theory (Lotka-Volterra systems).
The methods are also highly accurate, with errors of the order of 10^-10000.

7 Self Assessment
In this paper we learnt that there are better methods of solving nonlinear equations than the
Newton-Raphson method: Steffensen's Method, the Soleymani-Khratti-Karimi Method, the
Sharma-Kumar-Sharma Method, and an innovative Modified Steffensen's Method. These methods are
faster and more efficient than the Newton-Raphson method, as they converge to the solution very
quickly with errors of the order of 10^-6. A C++ code was prepared that implemented all five
methods on five different functions so as to find the solutions and compare the accuracies. In
the code we used the central-difference method to calculate first and second derivatives, and
for this we used the value of ε as 10^-6. Thus the basic knowledge of the course was very
helpful for this project. The project taught us new and different ideas and techniques for
finding the roots of a nonlinear equation. It also taught us the importance, ease and accuracy
of the numerical approach to solving equations.

8 References
Kalyanasundaram Madhu and Jayakumar Jayaraman (2016). Higher Order Methods for Nonlinear
Equations and Their Basins of Attraction. Mathematics 2016, 4, 22, pp. 5-6.

9 Bibliography
• Wikipedia: The Free Encyclopedia
https://en.wikipedia.org/wiki/Steffensen's_method

10 C++ Codes
The codes for the above algorithms can be found (and forked) at the following links:
• Newton Raphson: https://onlinegdb.com/H1iCmYVjH

• Modified Steffensen: https://onlinegdb.com/rkFl4F4sB

• SKK: https://onlinegdb.com/S1iW4FNiH

• Steffensen: https://onlinegdb.com/S1mQ4KEjB

• SKS: https://onlinegdb.com/Sy69VFEsB
