Derivation of The Method
The Method
[Figure: The first two iterations of the secant method. The red curve shows the function f and the blue lines are the secants.]
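The recurrence relation referred to below is the standard secant update, obtained by drawing a line through the two most recent points on the graph of f and taking the root of that line:

    x_{n+1} = x_n - f(x_n)\,\frac{x_n - x_{n-1}}{f(x_n) - f(x_{n-1})}
            = \frac{x_{n-1}\,f(x_n) - x_n\,f(x_{n-1})}{f(x_n) - f(x_{n-1})}.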
As can be seen from the recurrence relation, the secant method requires two initial values, x0 and x1, which should ideally be chosen to lie close to the root. The secant line through the points (x_0, f(x_0)) and (x_1, f(x_1)) is

    y = \frac{f(x_1) - f(x_0)}{x_1 - x_0}\,(x - x_1) + f(x_1).

We find the root of this line, the value of x such that y = 0, by solving the following equation for x:

    0 = \frac{f(x_1) - f(x_0)}{x_1 - x_0}\,(x - x_1) + f(x_1).
The solution is

    x = x_1 - f(x_1)\,\frac{x_1 - x_0}{f(x_1) - f(x_0)}.
We then use this value of x as x2 and repeat the process using x1 and x2 instead
of x0 and x1. We continue this process, solving for x3, x4, etc., until we reach a
sufficiently high level of precision (a sufficiently small difference between xn
and xn-1).
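As a quick numerical illustration (the function f(x) = x^2 - 4 and the starting values x_0 = 1, x_1 = 3 are chosen here purely for the example; they do not come from the text above):

    x_2 = x_1 - f(x_1)\,\frac{x_1 - x_0}{f(x_1) - f(x_0)}
        = 3 - 5\cdot\frac{3 - 1}{5 - (-3)} = 1.75,

    x_3 = 1.75 - (-0.9375)\cdot\frac{1.75 - 3}{-0.9375 - 5} \approx 1.9474,

and the iterates converge to the root x = 2.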
...
Convergence
The iterates xn of the secant method converge to a root of f if the initial values x0 and x1 are sufficiently close to the root. The order of convergence is

    \alpha = \frac{1 + \sqrt{5}}{2} \approx 1.618,

the golden ratio.
This result only holds under some technical conditions, namely that f be twice
continuously differentiable and the root in question be simple (i.e., with
multiplicity 1).
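A brief sketch of where the golden ratio comes from (here e_n = x_n - r denotes the error at step n, with r the root; this notation is introduced only for this sketch): under the conditions above, the errors satisfy approximately

    e_{n+1} \approx C\, e_n e_{n-1},

so postulating |e_{n+1}| \propto |e_n|^{\alpha}, and hence |e_{n-1}| \propto |e_n|^{1/\alpha}, gives

    \alpha = 1 + \frac{1}{\alpha} \;\Longrightarrow\; \alpha^2 = \alpha + 1 \;\Longrightarrow\; \alpha = \frac{1 + \sqrt{5}}{2}.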
If the initial values are not close to the root, then there is no guarantee that the
secant method converges.
The secant method does not require that the root remain bracketed, as the bisection method does, and hence it does not always converge. The false position method uses the same formula as the secant method. However, it does not apply the formula to xn−1 and xn, like the secant method, but to xn and to the last iterate xk such that f(xk) and f(xn) have opposite signs. This means that the false position method always converges, provided the initial interval brackets a root; a sketch of this bracket-keeping update follows.
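A minimal sketch of the update just described, in C. The function name falsi, its signature, and the stopping rule are illustrative assumptions, not code from this text; it presumes f is continuous with f(a)·f(b) < 0 on the initial interval [a, b].

#include <math.h>

/* False position: the secant formula applied to a sign-changing
   bracket [a, b]. Illustrative sketch only. */
double falsi(double (*f)(double), double a, double b, double eps, int m)
{
    double fa = f(a), fb = f(b);
    for (int n = 0; n < m; n++) {
        /* Same formula as the secant step, applied to the bracket ends. */
        double c = b - fb * (b - a) / (fb - fa);
        double fc = f(c);
        if (fabs(fc) < eps)
            return c;
        /* Keep the endpoint whose sign differs from f(c), so a root
           always remains bracketed. */
        if (fa * fc < 0) { b = c; fb = fc; }
        else             { a = c; fa = fc; }
    }
    return b - fb * (b - a) / (fb - fa);   /* best estimate after m steps */
}

Because one endpoint of opposite sign is always retained, the interval always contains a root, which is what guarantees convergence.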
The recurrence formula of the secant method can be derived from the formula for Newton's method,

    x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)},

by approximating the derivative with a finite difference over the last two iterates.
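Concretely, replacing the derivative by the difference quotient

    f'(x_n) \approx \frac{f(x_n) - f(x_{n-1})}{x_n - x_{n-1}}

and substituting into Newton's formula yields exactly the secant recurrence stated at the beginning of this section.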
If we compare Newton's method with the secant method, we see that Newton's method converges faster (order 2 against α ≈ 1.618). However, Newton's method requires the evaluation of both f and its derivative at every step, while the secant method requires only the evaluation of f. Therefore, the secant method may well be faster in practice. For instance, if we assume that evaluating f takes as much time as evaluating its derivative and we neglect all other costs, we can do two steps of the secant method (decreasing the logarithm of the error by a factor α² ≈ 2.6) for the same cost as one step of Newton's method (decreasing the logarithm of the error by a factor 2), so the secant method is faster. If, however, f and its derivative can be evaluated in parallel, Newton's method regains the advantage: it is faster in wall-clock time, even though it requires more function evaluations in total.
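For reference, the exact value behind "α² ≈ 2.6" follows from the relation α² = α + 1 derived above:

    \alpha^2 = \alpha + 1 = \frac{3 + \sqrt{5}}{2} \approx 2.618 > 2.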
The following C code finds the positive number x where cos(x) = x^3. The problem is transformed into the root-finding problem f(x) = cos(x) − x^3 = 0.
In the code below, the secant iteration continues until one of two conditions occurs:
1. |x_{n+1} − x_n| < ε, or
2. n > m,
where ε is the prescribed tolerance and m is the maximum number of iterations.
#include <stdio.h>
#include <math.h>

/* f(x) = cos(x) - x^3; its positive root solves cos(x) = x^3. */
double f(double x)
{
    return cos(x) - x*x*x;
}
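The listing above defines only f; the following main() is a minimal sketch of the missing driver, reconstructed to match the session transcript below. The iteration cap m = 100 is an assumption not present in the original.

int main(void)
{
    double x0, x1, x2, eps;
    int n = 0, m = 100;   /* m: assumed maximum number of iterations */

    printf("The equation is cos(x) - x^3 = 0\n");
    printf("Enter the first coordinate of the initial approximation:\n");
    scanf("%lf", &x0);
    printf("Enter the second coordinate of the initial approximation:\n");
    scanf("%lf", &x1);
    printf("Enter the accuracy level or tolerance expected:\n");
    scanf("%lf", &eps);

    /* Secant iteration: stop when |x_{n+1} - x_n| < eps or n > m. */
    do {
        x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0));
        x0 = x1;
        x1 = x2;
        n++;
    } while (fabs(x1 - x0) >= eps && n <= m);

    printf("The output is: %.15f\n", x1);
    return 0;
}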
OUTPUT

The equation is cos(x) - x^3 = 0
Enter the first coordinate of the initial approximation:
0
Enter the second coordinate of the initial approximation:
1
Enter the accuracy level or tolerance expected:
5E-11
The output is: 0.865474033101614