The Newton-Raphson Method: A Detailed Analysis
1. Introduction
The Newton-Raphson method, named after Isaac Newton and Joseph Raphson, is a powerful iterative method for
finding approximate roots of real-valued functions. Its primary advantage lies in its quadratic convergence
rate: the number of correct digits in the approximation roughly doubles with each iteration, provided
the starting guess is sufficiently close to the true root. The method is widely applicable in fields such as
computational mathematics, engineering, physics and economics.
The general problem addressed by the Newton-Raphson method is the equation f(x) = 0, where f(x) is a
continuous and differentiable function. The goal is to find a root of the function, i.e. a value of x for which
f(x) = 0. This paper examines the method's formulation, convergence behaviour and limitations, as well as its
practical applications, and works through a numerical example.
2. Mathematical Formulation
The Newton-Raphson method approximates the root of a function by iteratively refining an initial guess, x0,
using the following formula:
xn+1 = xn − f(xn)/f′(xn)
where xn is the current approximation, f(xn) is the value of the function at xn, and f′(xn) is the value of its
derivative at xn.
The method generates a sequence of approximations x0, x1, x2, … that, ideally, converges to the true root of the
equation.
The formula follows from the first-order (tangent-line) approximation of f about the current approximation xn:
f(x) ≈ f(xn) + f′(xn)(x − xn)
To find the root, we set f(x)=0 (the point where the function intersects the x-axis):
0=f(xn)+f′(xn)(x−xn)
x = xn − f(xn)/f′(xn)
This is the Newton-Raphson formula, which gives a new approximation x for the root based on the current
approximation xn, the function value f(xn), and the derivative f′(xn).
The method is iterative. To refine the estimate, we update the value of xn in the following way:
xn+1 = xn − f(xn)/f′(xn)
By repeating this process, the sequence {xn} converges to the true root of f(x)=0 if the initial guess is
sufficiently close to the root and the function behaves well.
This derivation shows how the Newton-Raphson method uses the first-order approximation of f(x) to iteratively
improve the estimate of the root.
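To make the iteration concrete, the following short Python sketch implements the update rule above. It is only an illustration: the function name newton_raphson, the tolerance tol and the iteration cap max_iter are hypothetical choices, not anything prescribed by the method itself.

def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    """Minimal sketch of the Newton-Raphson iteration xn+1 = xn - f(xn)/f'(xn)."""
    x = x0
    for _ in range(max_iter):
        dfx = df(x)
        if dfx == 0:
            raise ZeroDivisionError("f'(x) is zero; the method breaks down here")
        x_new = x - f(x) / dfx        # the Newton-Raphson update
        if abs(x_new - x) < tol:      # stop once successive iterates agree
            return x_new
        x = x_new
    return x                          # best estimate after max_iter steps

For example, newton_raphson(lambda x: x**2 - 2, lambda x: 2*x, 1.0) returns an approximation of the square root of 2.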
3. Convergence Analysis
For the Newton-Raphson method to converge to the root, the function f(x) must satisfy certain conditions. These
include:
1. Initial Guess: The initial approximation x0 must be sufficiently close to the true root. If the initial
guess is too far from the root, the method may fail to converge or may converge to the wrong root.
2. Continuity and Differentiability: The function f(x) must be continuous and differentiable at the root
and in a neighbourhood around it. The derivative f ′(x) must not be zero at the root, as division by zero
would cause the method to fail.
3. Local Convergence: If the initial guess is close enough to the root and the function behaves "well", the
Newton-Raphson method exhibits quadratic convergence. This means that the error in the
approximation decreases roughly quadratically with each iteration.
The Newton-Raphson method is renowned for its quadratic convergence when the initial guess is sufficiently
close to the true root. This means that the error between the true root xr and the approximation xn decreases
quadratically with each iteration, leading to very rapid convergence.
Error Behaviour
Let xr be the true root, and xn be the nth approximation to the root. The error after the nth iteration is given by:
en = |xn − xr|
Quadratic convergence means that the error after the next iteration en+1 is approximately proportional to the
square of the current error en. Mathematically, this can be expressed as:
en+1 ≈ C·en^2
where C is a constant that depends on the function f(x) and its behaviour near the root. This relationship shows
that, as the iterations progress, the error decreases very quickly. The number of correct digits in the
approximation roughly doubles with each iteration.
Fast Convergence: The quadratic rate of convergence means that the Newton-Raphson method rapidly
improves the accuracy of the approximations once the iterations are close to the root.
Doubling of Correct Digits: With each iteration, the number of correct digits in the approximation
typically doubles. For example, if x0 starts with 1 correct digit, x1 may have 2 correct digits, x2 may
have 4 correct digits and so on.
However, this rapid convergence assumes that the initial guess is sufficiently close to the root and that f(x) is
well-behaved (i.e., f′(x) is not zero or very small near the root).
Summary:
The Newton-Raphson method achieves quadratic convergence: near the root, the error en is roughly squared at
each iteration, so the approximation to the root becomes highly accurate very quickly. This
characteristic makes the method extremely efficient when the initial guess is close to the true root.
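The quadratic error relation en+1 ≈ C·en^2 can also be observed numerically. The short Python sketch below is illustrative only: it uses the test function f(x) = x^2 − 2, whose exact root √2 is known, and prints the ratio en+1/en^2 at each step, which settles near a constant as the theory predicts.

import math

f = lambda x: x**2 - 2            # test function with known root sqrt(2)
df = lambda x: 2 * x
root = math.sqrt(2)

x = 1.0                           # initial guess
errors = [abs(x - root)]
for _ in range(5):
    x = x - f(x) / df(x)          # Newton-Raphson update
    errors.append(abs(x - root))

for e_n, e_next in zip(errors, errors[1:]):
    if e_n > 0 and e_next > 0:    # skip steps already at machine precision
        print(f"e_n = {e_n:.2e}, ratio e_(n+1)/e_n^2 = {e_next / e_n**2:.3f}")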
4. Numerical Example
Consider the equation f(x) = e^x − 3x^2 = 0. We apply the Newton-Raphson method with the initial guess x0 = 0.6.
Steps:
1. Define the function and its derivative:
f(x) = e^x − 3x^2
f ′(x) = e^x − 6x
2. Iteration Formula: The Newton-Raphson iteration formula is:
xn+1 = xn − f(xn)/f′(xn)
3. First Iteration:
o Initial guess: x0=0.6
o Compute f(x0) ≈ 0.7421 and f ′(x0) ≈ −1.7779
x1 = 0.6 − 0.7421/(−1.7779) ≈ 0.6 + 0.4174 = 1.0174
4. Second Iteration:
o Current approximation: x1 = 1.0174
o Compute f(x1) ≈ −0.3392 and f ′(x1) ≈ −3.3373
x2 = 1.0174 − (−0.3392)/(−3.3373) ≈ 1.0174 − 0.1016 = 0.9158
5. Third Iteration:
o Current approximation: x2 = 0.9158
o Compute f(x2) ≈ −0.0179 and f ′(x2) ≈ −2.9962
x3 = 0.9158 − (−0.0179)/(−2.9962) ≈ 0.9158 − 0.0060 = 0.9098
6. Fourth Iteration:
o Current approximation: x3 = 0.9098
o Compute f(x3) ≈ 0.0006 and f ′(x3) ≈ −2.9750
x4 = 0.9098 − 0.0006/(−2.9750) ≈ 0.9098 + 0.0002 = 0.9100
Result:
After four iterations, we have x4 ≈ 0.9100, which matches the actual root of the equation to four decimal places.
Verification:
Substituting x4 back into the function gives f(x4) ≈ 0; this small residual confirms that x4 ≈ 0.9100 is a very accurate approximation of the root.
Discussion:
The Newton-Raphson method converged quickly to a solution close to the real root.
This example shows that when the initial guess is reasonable, the method finds highly accurate
solutions within a few iterations.
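For reference, the iterations above can be reproduced with a short script such as the sketch below; because it does not round the intermediate values to four decimal places, its printed iterates may differ from the hand-computed ones in the last digit.

import math

def f(x):
    return math.exp(x) - 3 * x**2     # f(x) = e^x - 3x^2

def df(x):
    return math.exp(x) - 6 * x        # f'(x) = e^x - 6x

x = 0.6                               # initial guess x0
for n in range(1, 5):
    x = x - f(x) / df(x)              # Newton-Raphson update
    print(f"x{n} = {x:.4f}")          # approaches the root near 0.9100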
5. Advantages and Limitations
The Newton-Raphson method is known for its rapid convergence, particularly when the initial guess is close to
the true root. Its quadratic convergence rate ensures that the number of correct digits increases exponentially
with each iteration.
The method can be applied to a wide range of nonlinear equations and is often used to solve transcendental
equations (e.g. trigonometric, exponential and logarithmic functions), which are difficult to solve analytically.
The Newton-Raphson method is straightforward to implement, requiring only the function and its derivative. In
most cases the method converges quickly, making it an efficient choice for solving real-valued equations
numerically.
5.1 Sensitivity to the Initial Guess
The method is highly sensitive to the initial guess. A poor choice of x0 can lead to divergence, slow convergence
or convergence to a root other than the desired one. Therefore, the method may require careful
selection of the initial guess or multiple attempts with different starting points.
5.2 Requirement of the Derivative
The method requires the calculation of the first derivative of the function. For some functions, this can be
computationally expensive or may not be straightforward to obtain. In these cases, derivative-free methods like
the Secant method may be more suitable.
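As an illustration of such a derivative-free alternative, here is a minimal sketch of the Secant method; the parameter names x0, x1, tol and max_iter are arbitrary choices for this example. It replaces f ′(xn) with a finite-difference estimate built from the two most recent iterates.

def secant(f, x0, x1, tol=1e-10, max_iter=50):
    """Minimal sketch of the secant method (no derivative required)."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 - f0 == 0:                       # flat secant line: cannot continue
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # secant update
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1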
5.3 Failure at Critical Points
If f ′(x) = 0 at or near the root, the update breaks down because it divides by a value that is zero or very small.
Similarly, if the function has singularities or inflection points close to the root, the method may fail to converge.
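A tiny illustration of this failure mode, using the arbitrarily chosen example f(x) = cos x with the starting point x0 = 0 (where f ′(0) = 0): the very first update divides by zero, while a starting point away from the critical point behaves normally.

import math

def newton_step(f, df, x):
    return x - f(x) / df(x)          # raises ZeroDivisionError when df(x) == 0

f, df = math.cos, lambda x: -math.sin(x)

try:
    newton_step(f, df, 0.0)          # f'(0) = 0, so the update is undefined
except ZeroDivisionError:
    print("failure: derivative is zero at the starting point")

x = 1.0                              # a starting point away from the critical point
for _ in range(6):
    x = newton_step(f, df, x)
print(round(x, 6))                   # about 1.570796, i.e. pi/2, a root of cos x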
5.4 Multiple Roots
In cases where the function has multiple roots, the Newton-Raphson method may converge to different roots
depending on the initial guess. Special techniques, such as deflation or the use of global search algorithms, may
be required to find all of them.
6. Applications
The Newton-Raphson method is widely used in various scientific and engineering disciplines for solving
nonlinear equations. Some key applications include:
Root-finding in Engineering: The method is often used in computational fluid dynamics, structural
analysis, and control systems to solve nonlinear equations arising from physical models.
Optimization: The method is applied in optimization algorithms to find the maximum or minimum of
functions, particularly in machine learning and economics.
Computational Chemistry: In computational chemistry, the Newton-Raphson method is used to solve
nonlinear equations that arise in quantum mechanics and molecular simulations.
7. Advantages Over Other Root-Finding Methods
The Newton-Raphson method offers several distinct advantages over other root-finding methods, particularly
in terms of speed, efficiency and applicability to smooth functions. Below are some key advantages:
Key Advantage: The Newton-Raphson method converges quadratically near the root, which means
that the number of correct digits roughly doubles with each iteration, provided the initial guess is close
to the true root.
Comparison: This is much faster than methods like the Bisection method (linear convergence) and the
Secant method (superlinear convergence), especially for smooth functions.
Key Advantage: When the function f(x) is smooth and its derivative f′(x) is easily computable, the
Newton-Raphson method provides very efficient and fast root-finding.
Comparison: Other methods like Bisection or False Position are slower because they rely on interval
reduction (linear convergence) and cannot take advantage of the function's local behaviour in the same
way.
Key Advantage: Unlike methods such as Bisection and False Position, the Newton-Raphson method
does not require an initial bracketing interval. You only need a single initial guess, which can be
much more convenient.
Comparison: Bisection and False Position require a bracketing interval with opposite signs at the
endpoints, which may not always be easy to identify.
Key Advantage: The Newton-Raphson method directly uses the derivative f′(x), which means that it
can incorporate more information about the function's local behaviour to converge more rapidly.
Comparison: In contrast, methods like Secant approximate the derivative and therefore do not fully
utilize the information that an exact derivative could provide. Similarly, Fixed-Point Iteration requires
rearranging the equation into a form that may not directly exploit the function’s local properties.
Key Advantage: When the function f(x) is differentiable, the Newton-Raphson method is highly
versatile and can be applied to a wide variety of equations (including transcendental, algebraic and
trigonometric equations).
Comparison: Methods like Fixed-Point Iteration require special rearrangement of the function into a
suitable form x = g(x), which may not always be possible. Bisection and False Position are less flexible,
requiring a sign change between two initial points.
Key Advantage: When a good initial guess is available, fewer iterations are required to achieve a
high level of accuracy compared to methods with linear convergence like the Bisection method.
Comparison: The Secant method may take more iterations than Newton-Raphson to achieve similar
accuracy and Bisection will generally be much slower.
Key Advantage: For functions where the derivative f′(x) is easy to compute and the function is
smooth, Newton-Raphson is highly effective and fast.
Comparison: The Secant method does not require the derivative, but it still generally converges more
slowly than Newton-Raphson. The Bisection and False Position methods do not make use of the local
information available from the derivative, making them slower for smooth functions.
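To make these convergence-rate comparisons concrete, the sketch below counts the iterations Newton-Raphson and Bisection need to locate √2, the root of the illustrative test equation x^2 − 2 = 0, to a tolerance of 1e-10; the exact counts depend on the tolerance, starting guess and bracket chosen here.

def newton_iters(f, df, x, tol=1e-10, max_iter=100):
    """Count Newton-Raphson iterations until successive iterates agree to tol."""
    for n in range(1, max_iter + 1):
        x_new = x - f(x) / df(x)
        if abs(x_new - x) < tol:
            return n
        x = x_new
    return max_iter

def bisection_iters(f, a, b, tol=1e-10, max_iter=200):
    """Count bisection iterations until the bracket [a, b] shrinks below tol."""
    for n in range(1, max_iter + 1):
        if abs(b - a) < tol:
            return n
        m = (a + b) / 2
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return max_iter

f = lambda x: x**2 - 2
df = lambda x: 2 * x
print("Newton-Raphson:", newton_iters(f, df, 1.0))      # about 5 iterations
print("Bisection:     ", bisection_iters(f, 1.0, 2.0))  # about 35 iterations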
8. Conclusion
The Newton-Raphson method is a powerful and efficient technique for solving nonlinear equations. With its
quadratic convergence rate, the method can rapidly approximate roots, provided the initial guess is sufficiently
close to the true root. While it has some limitations, such as its sensitivity to the initial guess and its requirement
for the derivative of the function, it remains one of the most widely used methods in numerical analysis.
Understanding the method's strengths, limitations, and appropriate applications can greatly enhance its utility in
solving complex real-world problems.