Numeric Method
Working Procedure (Muller's Method)
1. Using the location theorem, find an interval (x_0, x_2) in which a root of the equation
   f(x) = 0 lies. We may take x_0, x_1, x_2 as the initial approximations to the root of
   f(x) = 0, where x_1 = (x_0 + x_2)/2.
2. Compute f_i = f(x_i), i = 0, 1, 2.
3. Compute d_i = x_{i+1} - x_i, i = 0, 1. For the above choice of x_0, x_1, x_2 we have
   d_0 = d_1 = (x_2 - x_0)/2; otherwise d_0 and d_1 may differ.
4. Compute g_i = (f_{i+1} - f_i)/d_i, i = 0, 1.
5. Set i = 0.
6. Compute h_i = f(x_{i+2}, x_{i+1}, x_i)
   = [f(x_{i+2}, x_{i+1}) - f(x_{i+1}, x_i)]/(x_{i+2} - x_i) = (g_{i+1} - g_i)/(d_{i+1} + d_i).
7. Compute c_i = g_{i+1} + d_{i+1} h_i.
8. Compute d_{i+2} = -2 f_{i+2} / (c_i ± sqrt(c_i² - 4 f_{i+2} h_i)), choosing the sign in the
   denominator so as to make its numerical value greater.
9. Set x_{i+3} = x_{i+2} + d_{i+2}.
10. Compute f_{i+3} = f(x_{i+3}) and g_{i+2} = (f_{i+3} - f_{i+2})/d_{i+2}.
11. Set i = i + 1 and repeat steps 6-10 until the required accuracy is obtained, i.e. until
    |x_{i+1} - x_i| is sufficiently small.
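The working procedure can be translated almost line for line into code. Below is a minimal Python sketch; the function name, the tolerance and the iteration cap are illustrative choices, not part of the original procedure.

```python
import cmath  # the square root may become complex during the iteration

def muller(f, x0, x1, x2, tol=1e-6, max_iter=50):
    """Approximate a root of f(x) = 0 from three initial guesses (steps 1-11 above)."""
    for _ in range(max_iter):
        f0, f1, f2 = f(x0), f(x1), f(x2)
        d0, d1 = x1 - x0, x2 - x1                    # step 3
        g0, g1 = (f1 - f0) / d0, (f2 - f1) / d1      # step 4
        h = (g1 - g0) / (d1 + d0)                    # step 6: second divided difference
        c = g1 + d1 * h                              # step 7
        disc = cmath.sqrt(c * c - 4 * f2 * h)
        den = c + disc if abs(c + disc) > abs(c - disc) else c - disc
        d2 = -2 * f2 / den                           # step 8: larger denominator chosen
        x3 = x2 + d2                                 # step 9
        if abs(x3 - x2) < tol:                       # step 11: stopping test
            return x3
        x0, x1, x2 = x1, x2, x3                      # shift the points and repeat
    return x2

# The problem stated below: x^3 - 4x + 1 = 0, with a root in (-3, -2)
root = muller(lambda x: x**3 - 4*x + 1, -3.0, -2.5, -2.0)
print(round(root.real, 4))   # about -2.1149
```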
Problem: Find the negative root of the equation x³ - 4x + 1 = 0 that lies between -3
and -2, correct to 4 decimal places, using Muller's method.
BIRGE-VIETA METHOD:
Using this method, we can find a real root of a polynomial equation f(x) = 0.
We start with an initial approximation r_0 to the root r and use the Newton-Raphson method
to improve it. If r_1 is a closer approximation to the root, then
r_1 = r_0 - f(r_0)/f'(r_0).
In the Birge-Vieta method we do not compute f(r_0) and f'(r_0) directly from f(x) and f'(x),
but obtain them by synthetic division:

r_0 | a_0   a_1          a_2          ...   a_{n-2}        a_{n-1}        a_n
    |       r_0 b_0      r_0 b_1      ...   r_0 b_{n-3}    r_0 b_{n-2}    r_0 b_{n-1}
    | b_0   b_1          b_2          ...   b_{n-2}        b_{n-1}        b_n
    |       r_0 c_0      r_0 c_1      ...   r_0 c_{n-3}    r_0 c_{n-2}
    | c_0   c_1          c_2          ...   c_{n-2}        c_{n-1}

Here b_n = f(r_0) and c_{n-1} = f'(r_0), so that
r_1 = r_0 - b_n / c_{n-1}.
Using r_1 in the place of r_0 and proceeding as before, we get a new set of values of b_n
and c_{n-1}, and hence a better approximation to r is given by
r_2 = r_1 - b_n / c_{n-1}.
Thus the Birge-Vieta method provides the iterative formula
r_{k+1} = r_k - b_n / c_{n-1},
where b_n and c_{n-1} are obtained from the synthetic division corresponding to r_k.
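A short Python sketch of this iteration, built on the two synthetic divisions shown above (the function names and the stopping tolerance are illustrative choices):

```python
def birge_vieta_step(a, r):
    """One Birge-Vieta iteration: two synthetic divisions give b_n = f(r), c_{n-1} = f'(r)."""
    b = [a[0]]
    for coeff in a[1:]:
        b.append(coeff + r * b[-1])          # last entry is b_n = f(r)
    c = [b[0]]
    for coeff in b[1:-1]:
        c.append(coeff + r * c[-1])          # last entry is c_{n-1} = f'(r)
    return r - b[-1] / c[-1]

def birge_vieta(a, r, tol=1e-4, max_iter=50):
    """Iterate until two successive approximations agree to within tol."""
    for _ in range(max_iter):
        r_new = birge_vieta_step(a, r)
        if abs(r_new - r) < tol:
            return r_new
        r = r_new
    return r

# The worked problem below: x^3 + 2x^2 + 10x - 20 = 0, starting from r0 = 1
print(round(birge_vieta([1, 2, 10, -20], 1.0), 4))   # about 1.3688
```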
Problem:
Find the root of the equation x³ + 2x² + 10x - 20 = 0 that is near 1, using the Birge-Vieta
method, correct to 4 decimal places.
Let us take r_0 = 1.

1      | 1    2        10        -20
       |      1         3         13
       | 1    3        13         -7       = b_n
       |      1         4
       | 1    4        17                  = c_{n-1}

r_1 = r_0 - b_n/c_{n-1} = 1 - (-7)/17 = 1.4118

1.4118 | 1    2        10        -20
       |      1.4118    4.8168    20.9184
       | 1    3.4118   14.8168     0.9184  = b_n
       |      1.4118    6.8100
       | 1    4.8236   21.6268             = c_{n-1}

r_2 = r_1 - b_n/c_{n-1} = 1.4118 - 0.9184/21.6268 = 1.3693

1.3693 | 1    2        10        -20
       |      1.3693    4.6136    20.0104
       | 1    3.3693   14.6136     0.0104  = b_n
       |      1.3693    6.4886
       | 1    4.7386   21.1022             = c_{n-1}

r_3 = r_2 - b_n/c_{n-1} = 1.3693 - 0.0104/21.1022 = 1.3688

1.3688 | 1    2        10        -20
       |      1.3688    4.6112    19.9998
       | 1    3.3688   14.6112    -0.0002  = b_n
       |      1.3688    6.4848
       | 1    4.7376   21.0960             = c_{n-1}

r_4 = r_3 - b_n/c_{n-1} = 1.3688 + 0.0002/21.0960 = 1.3688

Hence the required root, correct to 4 decimal places, is 1.3688.
GRAEFFE'S ROOT SQUARING METHOD:
Using this method, we can find all the roots of a polynomial equation with real
coefficients, i.e. an equation of the form
a_0 x^n + a_1 x^{n-1} + a_2 x^{n-2} + .... + a_n = 0        .....(1)
Each root-squaring step replaces the coefficients a_0, a_1, ..., a_n by new coefficients
b_0, b_1, ..., b_n computed column by column as follows:

a_0      a_1          a_2          a_3          .....   a_n
a_0²     a_1²         a_2²         a_3²         .....   a_n²
         -2a_0 a_2    -2a_1 a_3    -2a_2 a_4    .....
                      +2a_0 a_4    +2a_1 a_5
                                   -2a_0 a_6    .....
b_0      b_1          b_2          b_3          .....   b_n
Case 2:
After a few squarings, if the magnitude of a coefficient B_i is half the square of the
magnitude of the corresponding coefficient in the previous equation, this indicates
that α_i is a double root.
Since R_i = B_i / B_{i-1} and R_{i+1} = B_{i+1} / B_i, we have
R_i R_{i+1} = B_{i+1} / B_{i-1},
and for a double root R_i ≈ R_{i+1}, so that
α_i^(2^{m+1}) ≈ R_i R_{i+1} = B_{i+1} / B_{i-1}.
From this we can get the double root α_i. Its sign can be found, as before, by substitution
in (1).
Case 3:
If α_k, α_{k+1} are two complex conjugate roots, this makes the coefficient of x^{n-k}
fluctuate both in magnitude and in sign under successive squarings. The modulus ρ_k of the
pair is given by
ρ_k^(2^{m+1}) = B_{k+1} / B_{k-1}.
If the equation possesses only the two complex roots p ± iq, we have
α_1 + α_2 + ... + α_{k-1} + 2p + α_{k+2} + ... + α_n = -a_1/a_0.
This gives the value of p. Since ρ_k² = p² + q² and ρ_k is already known, q is found from
this relation.
Problem:
Solve x³ - x² - x - 2 = 0 using Graeffe's method.
Solution:

m   2^m
0    1    1    -1              -1               -2
          1     1               1                4
                2              -4
1    2    1     3              -3                4
          1     9               9               16
                6             -24
2    4    1    15             -15               16
          1   225             225              256
               30            -480
3    8    1   255            -255              256
          1  65025          65025            65536
              510         -130560
4   16    1  65535         -65535            65536
          1  4.2948362×10^9  4.2948362×10^9   4.2949673×10^9
             131070         -8.5898035×10^9
5   32    1  4.2949673×10^9  -4.2949673×10^9  4.2949673×10^9

From the above, we see that the magnitude of the coefficient B_1 has become constant (to the
figures retained), while the coefficient B_2 fails to settle down in the regular way. Hence
the root α_1 of largest magnitude is real and the remaining two roots form a complex
conjugate pair. For the real root, α_1^32 = B_1/B_0 ≈ 4.2949673×10^9, giving |α_1| = 2;
substitution in the equation shows that the root is x = 2.
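A small Python sketch of one root-squaring step, applied to the example above (the function name and the number of squarings are illustrative; the sign of each root is still determined by substitution, as in the text):

```python
def graeffe_square(a):
    """One root-squaring step applied to the coefficient list a0, a1, ..., an."""
    n = len(a) - 1
    b = []
    for k in range(n + 1):
        s = a[k] ** 2
        sign, j = -2, 1                      # -2*a[k-1]*a[k+1] + 2*a[k-2]*a[k+2] - ...
        while k - j >= 0 and k + j <= n:
            s += sign * a[k - j] * a[k + j]
            sign, j = -sign, j + 1
        b.append(s)
    return b

# The example above: x^3 - x^2 - x - 2 = 0
a = [1, -1, -1, -2]
m = 5                                        # five squarings, so 2^m = 32
for _ in range(m):
    a = graeffe_square(a)
print(a)                                     # B1 settles near 4.295e9, B2 stays negative
print(abs(a[1] / a[0]) ** (1 / 2**m))        # magnitude of the real root, approximately 2
```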
Methods of solving a system of simultaneous linear algebraic equations are of two types:
(i) direct methods and (ii) indirect or iterative methods.
Direct methods include the Gauss elimination method, the Gauss-Jordan method, the method of
triangularization and Crout's method.
Iterative methods include the Jacobi method of iteration (Gauss-Jacobi method), the
Gauss-Seidel method and the relaxation method.
The system
a_11 x_1 + a_12 x_2 + a_13 x_3 = b_1
a_21 x_1 + a_22 x_2 + a_23 x_3 = b_2
a_31 x_1 + a_32 x_2 + a_33 x_3 = b_3
will be solvable by these iterative methods if
|a_11| > |a_12| + |a_13|
|a_22| > |a_21| + |a_23|
|a_33| > |a_31| + |a_32|
Such a system is called diagonally dominant.
Note: This condition is sufficient but not necessary.
GAUSS JACOBI METHOD:
Consider the system of equations
a_1 x + b_1 y + c_1 z = d_1
a_2 x + b_2 y + c_2 z = d_2          ......... (1)
a_3 x + b_3 y + c_3 z = d_3
Let us assume
|a_1| > |b_1| + |c_1|
|b_2| > |a_2| + |c_2|
|c_3| > |a_3| + |b_3|
Then the iterative method can be used for the system (1). Solving the three equations for
x, y, z in turn,
x = (1/a_1)(d_1 - b_1 y - c_1 z)
y = (1/b_2)(d_2 - a_2 x - c_2 z)
z = (1/c_3)(d_3 - a_3 x - b_3 y)
If x^(0), y^(0), z^(0) are the initial values of x, y, z respectively, then
x^(1) = (1/a_1)(d_1 - b_1 y^(0) - c_1 z^(0))
y^(1) = (1/b_2)(d_2 - a_2 x^(0) - c_2 z^(0))
z^(1) = (1/c_3)(d_3 - a_3 x^(0) - b_3 y^(0))
Proceeding in the same way, if the r-th iterates are x^(r), y^(r), z^(r), the iteration
scheme reduces to
x^(r+1) = (1/a_1)(d_1 - b_1 y^(r) - c_1 z^(r))
y^(r+1) = (1/b_2)(d_2 - a_2 x^(r) - c_2 z^(r))
z^(r+1) = (1/c_3)(d_3 - a_3 x^(r) - b_3 y^(r))
The procedure is continued till convergence is assured.
GAUSS SEIDEL METHOD:
In this method the first iterates are
x^(1) = (1/a_1)(d_1 - b_1 y^(0) - c_1 z^(0))
y^(1) = (1/b_2)(d_2 - a_2 x^(1) - c_2 z^(0))
z^(1) = (1/c_3)(d_3 - a_3 x^(1) - b_3 y^(1))
In finding the values of the unknowns, we use the latest available values on the right-hand
side. If x^(r), y^(r), z^(r) are the r-th iterates, then the iteration scheme is
x^(r+1) = (1/a_1)(d_1 - b_1 y^(r) - c_1 z^(r))
y^(r+1) = (1/b_2)(d_2 - a_2 x^(r+1) - c_2 z^(r))
z^(r+1) = (1/c_3)(d_3 - a_3 x^(r+1) - b_3 y^(r+1))
This process of iteration is continued until convergence is assured.
Note 1: This method will not work for all systems of equations; it converges only for
special systems (for example, diagonally dominant systems).
Note 2: The iteration method is self-correcting.
Note 3: The iteration is stopped when the values of x, y, z start repeating with the
required degree of accuracy.
Example: consider the (diagonally dominant) system
10x - 5y - 2z = 3
-4x + 10y - 3z = 3
x + 6y + 10z = -3
Starting from x^(0) = y^(0) = z^(0) = 0, the Gauss-Seidel scheme gives
x^(1) = (1/10)(3 + 5(0) + 2(0)) = 0.3
y^(1) = (1/10)(3 + 4(0.3) + 3(0)) = 0.42
z^(1) = (1/10)(-3 - (0.3) - 6(0.42)) = -0.582
Proceeding in this way,
x^(7) = (1/10)(3 + 5(0.285039017) + 2(-0.5051728)) = 0.3414849
y^(7) = (1/10)(3 + 4(0.3414849) + 3(-0.5051728)) = 0.28504212
z^(7) = (1/10)(-3 - (0.3414849) - 6(0.28504212)) = -0.5051737
so that, correct to four decimal places, x = 0.3415, y = 0.2850, z = -0.5052.
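A minimal Python sketch of the Gauss-Seidel scheme above, applied to the system of the worked example (the function name and tolerance are illustrative; the Gauss-Jacobi variant would differ only in using the previous iterates throughout the right-hand side):

```python
def gauss_seidel(A, b, x0, tol=1e-7, max_iter=100):
    """Gauss-Seidel iteration: always use the latest available values on the right."""
    n = len(b)
    x = list(x0)
    for _ in range(max_iter):
        x_old = list(x)
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
        if max(abs(x[i] - x_old[i]) for i in range(n)) < tol:
            break
    return x

# The diagonally dominant system of the worked example above
A = [[10, -5, -2],
     [-4, 10, -3],
     [ 1,  6, 10]]
b = [3, 3, -3]
print([round(v, 5) for v in gauss_seidel(A, b, [0, 0, 0])])
# about [0.34149, 0.28504, -0.50517]
```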
There are several different notations for the single set of finite differences,
described in the preceding Step. We introduce each of these three notations in
terms of the so-called shift operator, which we will define first.
Consequently, if the shift operator E is defined by
E f(x) = f(x + h),
then the forward difference operator is given by
Δ = E - 1,
the backward difference operator by
∇ = 1 - E^(-1),
and the central difference operator by
δ = E^(1/2) - E^(-1/2).
5. Differences display
Checkpoint
4. What is the definition of the shift operator?
5. How are the forward, backward, and central difference
operators defined?
6. When are the forward, backward, and central difference
notations likely to be of special use?
EXERCISES
7. Construct a table of differences for the polynomial
1. ;
2. ;
3. .
8. For the difference table of f (x) = ex for x = 0.1(0.05)0.5
determine to six significant digits the quantities (taking x0 = 0.1 ):
1. ;
2. ;
3. ;
4. ;
5. ;
9. Prove the statements:
1. ;
2. ;
3. ;
4. .
FINITE DIFFERENCES 3
Polynomials
For a polynomial of degree n with leading coefficient a_n, tabulated at equal intervals h,
repeated differencing yields constant n-th differences
Δⁿf = n! a_n hⁿ,
while all higher differences vanish.
In passing, the student may recall that in the differential calculus the increment
Δf(x) = f(x + h) - f(x) is related to the derivative of f(x) at the point x.
2. Example
Construct for f(x) = x³ with x = 5.0(0.1)5.5 the difference table:

x      f(x)       Δf       Δ²f      Δ³f
5.0    125.000    7.651    0.306    0.006
5.1    132.651    7.957    0.312    0.006
5.2    140.608    8.269    0.318    0.006
5.3    148.877    8.587    0.324
5.4    157.464    8.911
5.5    166.375

Since in this case n = 3, a_n = 1, h = 0.1, we find Δ³f = 3! a_n h³ = 6(0.1)³ = 0.006, in
agreement with the table.
Note that round-off error noise may occur; for example, consider the tabulation of f(x) = x3 for
5.0(0.1)5.5, rounded to two decimal places:
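A short Python sketch that builds the forward difference table for this rounded tabulation (the helper name `difference_table` is an illustrative choice):

```python
def difference_table(values):
    """Successive forward differences of a list of tabulated values."""
    table = [list(values)]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return table

xs = [5.0 + 0.1 * i for i in range(6)]        # x = 5.0(0.1)5.5
fs = [round(x ** 3, 2) for x in xs]           # values rounded to 2D, as in the text
for row in difference_table(fs):
    print([round(v, 2) for v in row])
# exact third differences of x^3 with h = 0.1 would be 3!*(0.1)**3 = 0.006;
# rounding the tabulated values to 2D introduces visible noise in them
```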
Whenever the higher differences of a table become small (allowing for round-off noise), the function
represented may be approximated well by a polynomial. For example, reconsider the difference table
of 6D for f(x) = e^x with x = 0.1(0.05)0.5:
Since the third differences are comparable with the estimate for round-off error (cf. the
table in STEP 12), we say that third differences are constant within round-off error, and
deduce that a cubic approximation is appropriate for e^x over the range 0.1 < x < 0.5. An
example in which polynomial approximation is inappropriate occurs when f(x) = 10^x for
x = 0(1)4, as is shown by the next table:
Although the function f(x) = 10^x is `smooth', the large tabular interval (h = 1) produces large higher
order finite differences. It should also be understood that there exist functions that cannot usefully be
tabulated at all, at least in certain neighbourhoods; for example, f(x) = sin(1/x) near the origin x = 0.
Nevertheless, these are fairly exceptional cases.
EXERCISES
1. Construct a difference table for the polynomial f(x) = x^4 for x = 0(0.1)1 when
   a. the values of f are exact;
   b. the values of f have been rounded to 3D.
   Compare the fourth difference round-off errors with the estimate ±6.
2. Find the degree of the polynomial which fits the data in the table:
INTERPOLATION 1
Linear and quadratic interpolation
1. Linear interpolation
so that
The first differences are almost constant locally, so that the table is
suitable for linear interpolation. For example,
2. Quadratic interpolation
Given three adjacent points x_j, x_{j+1} = x_j + h and x_{j+2} = x_j + 2h, suppose that
f(x) can be approximated by the quadratic
f(x_j + sh) ≈ a + bs + cs²,
where a, b, and c are chosen so that the quadratic agrees with f at the three points. Thus
a = f_j, a + b + c = f_{j+1}, a + 2b + 4c = f_{j+2},
whence
f(x_j + sh) ≈ f_j + s Δf_j + (1/2) s(s - 1) Δ²f_j.
Checkpoint
1. What process obtains an untabulated value of a function?
2. When is linear interpolation adequate?
3. When is quadratic interpolation needed and adequate?
EXERCISES
4. Obtain an estimate of sin(0.55) by linear interpolation of f (x) =
sin x over the interval [0.5, 0.6] using the data:
1. Linear interpolation,
2. quadratic interpolation.
6. The entries in a table of tan x are:
INTERPOLATION 2
Newton interpolation formulae
The linear and quadratic interpolation formulae are based on first and
second degree polynomial approximations. Newton derived general
forward and backward difference interpolation formulae for
tables with constant interval h. (For tables with variable interval, we can use an
interpolation procedure involving divided differences.)
Given a set of values f(x_0), f(x_1), . . ., f(x_n) with x_j = x_0 + jh, we have two
interpolation formulae of order n available: Newton's forward difference formula
f(x_0 + sh) ≈ f_0 + s Δf_0 + [s(s - 1)/2!] Δ²f_0 + ...... + [s(s - 1)......(s - n + 1)/n!] Δⁿf_0,
and Newton's backward difference formula
f(x_n + sh) ≈ f_n + s ∇f_n + [s(s + 1)/2!] ∇²f_n + ...... + [s(s + 1)......(s + n - 1)/n!] ∇ⁿf_n.
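As an illustration, here is a hedged Python sketch that builds the leading forward differences and evaluates Newton's forward formula; the table of e^x to 5D for x = 0.10(0.05)0.40 is the one referred to in the exercise below (function names are illustrative):

```python
from math import exp, factorial

def newton_forward(x0, h, f_values, x):
    """Evaluate Newton's forward difference formula from equally spaced values."""
    diffs, row = [], list(f_values)
    while row:                                   # leading differences Delta^k f_0
        diffs.append(row[0])
        row = [row[i + 1] - row[i] for i in range(len(row) - 1)]
    s = (x - x0) / h
    result, s_product = 0.0, 1.0
    for k, d in enumerate(diffs):
        result += s_product * d / factorial(k)   # term s(s-1)...(s-k+1)/k! * Delta^k f_0
        s_product *= (s - k)
    return result

# Table of e^x to 5D for x = 0.10(0.05)0.40, as in the exercise below
xs = [0.10 + 0.05 * i for i in range(7)]
fs = [round(exp(x), 5) for x in xs]
print(round(newton_forward(0.10, 0.05, fs, 0.14), 5))   # estimate of e^0.14, about 1.15027
```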
Checkpoint
1. What is the relationship between the forward and backward
linear and quadratic interpolation formulae (for a table of
constant interval h) and Newton's interpolation formulae?
2. When do you use Newton's forward difference formula?
3. When do you use Newton's backward difference formula?
EXERCISES
4. From a difference table of f (x) = ex to 5D for x = 0.10(0.05)0.40,
estimate:
1. e0.14 by means of Newton's forward difference formula;
2. e0.315 by means of Newton's backward difference
formula.
5. Show that for j = 0, 1, 2, . . .,
INTERPOLATION 3
Lagrange interpolation formula
The linear and quadratic interpolation formulae correspond to first and second
degree polynomial approximations, respectively. We have discussed
Newton's forward and backward interpolation formulae and noted that
higher order interpolation corresponds to higher degree polynomial
approximation. In this Step we consider an interpolation formula attributed to
Lagrange, which does not require function values at equal intervals.
Lagrange's interpolation formula has the disadvantage that the degree of the
approximating polynomial must be chosen at the outset; an alternative
approach is discussed in the next Step. Thus, Lagrange's formula is mainly of
theoretical interest for us here; in passing, we mention that there are some
important applications of this formula beyond the scope of this book - for
example, the construction of basis functions to solve differential equations
using a spectral (discrete ordinate) method.
1. Procedure
Given the n + 1 points (x_0, f_0), (x_1, f_1), . . ., (x_n, f_n), define for k = 0, 1, . . ., n
L_k(x) = Π_{j≠k} (x - x_j)/(x_k - x_j).
Hence,
P_n(x) = Σ_{k=0}^{n} L_k(x) f_k,
i.e., the (unique) interpolating polynomial. Note that for x = x_j all terms
in the sum vanish except the j-th, which is f_j; L_k(x) is called the k-th
Lagrange interpolation coefficient, and the identity Σ_{k=0}^{n} L_k(x) = 1 provides a
useful check.
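A brief Python sketch of the procedure, evaluating the Lagrange coefficients directly; the data are those of the exercise at the end of this Step (the function name is an illustrative choice):

```python
def lagrange(xs, fs, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, fs) at x."""
    total = 0.0
    for k, (xk, fk) in enumerate(zip(xs, fs)):
        Lk = 1.0
        for j, xj in enumerate(xs):
            if j != k:
                Lk *= (x - xj) / (xk - xj)       # k-th Lagrange coefficient L_k(x)
        total += fk * Lk
    return total

# Data of the exercise below
xs = [-2, -1, 1, 3, 4]
fs = [46, 4, 4, 156, 484]
print(lagrange(xs, fs, 0))                       # estimate of f(0); essentially 0 here
```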
2. Example
3. Notes of caution
EXERCISE
Given that f (-2) = 46, f (-1 ) = 4, f ( 1 ) = 4, f (3) = 156, and f (4) = 484, use
Lagrange's interpolation formula to estimate the value of f(0).
INTERPOLATION 4*
Divided differences*
1. Divided differences
where
Note that the remainder term R vanishes at x0, x1, . . . , xn, whence we
infer that the other terms on the right-hand side constitute the
interpolating polynomial or, equivalently, the Lagrange polynomial. If
the required degree of the interpolating polynomial is not known in
advance, it is customary to arrange the points x1, . . . , xn, according to
their increasing distance from x and add terms until R is small enough.
3. Example
From the tabulated function in Section 1, we will estimate f (2) and f(4),
using Newton's divided difference formula and find the corresponding
interpolating polynomials.
The third divided difference being constant, we can fit a cubic through
the five points. By Newton's divided difference formula, using x0 = 0,
x1 = 1, x2 = 3, and x3 = 6, the interpolation cubic becomes:
so that
and
As expected, the two interpolating polynomials are the same cubic, i.e.,
x³ - 8x + 1.
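A short Python sketch of Newton's divided difference formula. Since the original table from Section 1 is not reproduced here, the values are regenerated from the cubic x³ - 8x + 1 obtained above, at the nodes x = 0, 1, 3, 6 used in the example (function names are illustrative):

```python
def divided_difference_coeffs(xs, fs):
    """Leading divided differences f[x0], f[x0,x1], f[x0,x1,x2], ..."""
    table = [list(fs)]
    for order in range(1, len(xs)):
        prev = table[-1]
        table.append([(prev[i + 1] - prev[i]) / (xs[i + order] - xs[i])
                      for i in range(len(prev) - 1)])
    return [row[0] for row in table]

def newton_divided(xs, fs, x):
    """Evaluate Newton's divided difference interpolation formula at x."""
    coeffs = divided_difference_coeffs(xs, fs)
    result, product = 0.0, 1.0
    for k, c in enumerate(coeffs):
        result += c * product
        product *= (x - xs[k])
    return result

# Nodes used in the example; values regenerated from the cubic found above
xs = [0, 1, 3, 6]
fs = [x**3 - 8*x + 1 for x in xs]                # 1, -6, 4, 169
print(newton_divided(xs, fs, 2), newton_divided(xs, fs, 4))   # f(2) = -7 and f(4) = 33
```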
As it stands, this expression is not very useful, because it involves the
unknown quantity f[x_0, x_1, . . ., x_n, x]. However, it may be shown (cf., for
example, Conte and de Boor (1980)) that, if all the points x_0, x_1, . . ., x_n and x lie
in an interval (a, b) and f is (n + 1)-times differentiable on (a, b), then
there exists a point ξ in (a, b) such that
f[x_0, x_1, . . ., x_n, x] = f^(n+1)(ξ)/(n + 1)!
This formula may be useful when we know the function generating the
data and wish to find lower and upper error bounds. For example, let
there be given sin 0 = 0, sin(0.2) = 0.198669, and sin(0.4) = 0.389418 to
6D (where the arguments in the sine function are in radians). Then we
can form the divided difference table:
where 0 < ξ < 0.4. For f(x) = sin x one has f'''(x) = -cos x, so that
0.921 ≤ |f'''(ξ)| ≤ 1. It then follows that the magnitude of the interpolation error is
bounded below by (0.921/3!)|x(x - 0.2)(x - 0.4)| and above by (1/3!)|x(x - 0.2)(x - 0.4)|,
which for the point interpolated here gives bounds of approximately 0.00046 and 0.00050.
The absolute value of the actual error is 0.000492, which is within
these bounds.
5. Aitken's method
and, obviously,
Next, since
one finds
noting that
Checkpoint
1. What major practical advantage has Newton's divided
difference interpolation formula over Lagrange's formula?
2. How are the tabular points usually ordered for interpolation by
Newton's divided difference formula or Aitken's method?
3. Are divided differences actually used in interpolation by
Aitken's method?
EXERCISES
1. Use Newton's divided difference formula to show that it is
quite invalid to interpolate from the points
.
2.
3. Given that use Newton's divided
difference formula to estimate the value of e0.25. Find lower and
upper bounds on the magnitude of the error and verify that the actual
magnitude is within the calculated bounds.
4. Given that f(-2) = 46, f(-1) = 4, f(1) = 4, f(3) = 156, and f(4) =
484, estimate the value of f (0) from
a. Newton's divided difference formula, and
b. Aitken's method.
Comment on the validity of this interpolation.
c. Given that f (0) = 2.3913, f( 1 ) = 2.3919, f (3) = 2.3938, and f
(4) = 2.3951, use Aitken's method to estimate the value of f(2).
INTERPOLATION 5*
Inverse interpolation *
Instead of the value of a function f (x) for a certain x, one might seek the value
of x which corresponds to a given value of f (x), a process referred to as inverse
interpolation. For example, the reader may have contemplated the possibility
of obtaining roots of f (x) = 0 by inverse interpolation.
where
In order to illustrate this statement, consider the first table of f (x) = sin x
in STEP 22 and let us seek the value of x for which f (x) = 0.2.
Obviously, 10? > x
(Note that it is unnecessary to carry many digits in the first estimates of
.) Consequently,
3. Divided differences
and determine the value of x for which f(x) = 0.2. Ordering according to
increasing distance from f(x) = 0.2, one finds the divided difference table
(entries multiplied by 100);
,
Hence,
Aitken's scheme could also have been used here! However, by either
method, we note that any advantage in accuracy gained by the use of
iterative inverse interpolation may not justify the additional
computational demand.
Checkpoint
1. Why may linear inverse interpolation be either tedious or
impractical?
2. What is the usual method for checking inverse interpolation?
3. What is the potential advantage of inverse interpolation, using
either divided differences or Aitken's scheme, compared with
the iterative method? What is a likely disadvantage?
EXERCISES
4. Use linear inverse interpolation to find the root of x + cos x = 0
correct to 4D.
5. Solve 3xex =1 to 3D.
6. Given a table of values of a cubic f without knowledge of its
specific form:
find x for which f(x) = 10, 20 and 40, respectively. Check your
answers by (direct) interpolation. Finally, obtain the equation of
the cubic and use it to recheck your answers.
NUMERICAL DIFFERENTIATION:
When y is tabulated at equal intervals h, Newton's forward difference formula gives, at x = x_0,
dy/dx = (1/h)[Δy_0 - (1/2)Δ²y_0 + (1/3)Δ³y_0 - ......]
d²y/dx² = (1/h²)[Δ²y_0 - Δ³y_0 + (11/12)Δ⁴y_0 - ......]
and Newton's backward difference formula gives, at x = x_n,
dy/dx = (1/h)[∇y_n + (1/2)∇²y_n + (1/3)∇³y_n + ......]
d²y/dx² = (1/h²)[∇²y_n + ∇³y_n + (11/12)∇⁴y_n + ......]
d³y/dx³ = (1/h³)[∇³y_n + (3/2)∇⁴y_n + ......]
Problem: Find the first two derivatives of y = x^(1/3) at x = 50 and x = 56, given the table below:
x             50       51       52       53       54       55       56
y = x^(1/3)   3.6840   3.7084   3.7325   3.7563   3.7798   3.8030   3.8259
Solution: Since we require f'(x) at x = 50 we use Newton's forward formula, and to get
f'(x) at x = 56 we use Newton's backward formula. The difference table is:

x      y        Δy        Δ²y       Δ³y
50     3.6840
51     3.7084   0.0244
52     3.7325   0.0241   -0.0003
53     3.7563   0.0238   -0.0003    0
54     3.7798   0.0235   -0.0003    0
55     3.8030   0.0232   -0.0003    0
56     3.8259   0.0229   -0.0003    0
At x = 50, using the forward difference formula:
dy/dx = (1/h)[Δy_0 - (1/2)Δ²y_0 + (1/3)Δ³y_0 - ......]
      = (1/1)[0.0244 - (1/2)(-0.0003) + (1/3)(0)]
      = 0.02455
d²y/dx² = (1/h²)[Δ²y_0 - Δ³y_0 + ......]
        = 1 × [-0.0003] = -0.0003
At x = 56, using the backward difference formula:
(dy/dx)_{x=56} = (1/h)[∇y_n + (1/2)∇²y_n + (1/3)∇³y_n + ......]
              = (1/1)[0.0229 + (1/2)(-0.0003) + 0]
              = 0.02275
(d²y/dx²)_{x=56} = (1/h²)[∇²y_n + ∇³y_n + ......]
                 = [-0.0003] = -0.0003
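The computation above can be checked with a few lines of Python that form the differences and apply the leading terms of the two formulas (variable names are illustrative):

```python
# First and second derivatives at the ends of an equally spaced table (h = 1),
# using the leading terms of the difference formulas quoted above.
ys = [3.6840, 3.7084, 3.7325, 3.7563, 3.7798, 3.8030, 3.8259]   # y = x^(1/3), x = 50..56
h = 1.0

d1 = [ys[i + 1] - ys[i] for i in range(len(ys) - 1)]     # first differences
d2 = [d1[i + 1] - d1[i] for i in range(len(d1) - 1)]     # second differences
d3 = [d2[i + 1] - d2[i] for i in range(len(d2) - 1)]     # third differences

dy_dx_50 = (d1[0] - d2[0] / 2 + d3[0] / 3) / h           # forward formula at x = 50
d2y_dx2_50 = (d2[0] - d3[0]) / h**2
dy_dx_56 = (d1[-1] + d2[-1] / 2 + d3[-1] / 3) / h        # backward formula at x = 56
print(round(dy_dx_50, 5), round(d2y_dx2_50, 5))          # about 0.02455 and -0.0003
print(round(dy_dx_56, 5))                                # about 0.02275
```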
Problem: Given the following data, find y'(6) and the maximum value of y.
x:   0    2    3    4    7    9
y:   4    26   58   112  466  922
Solution: Since the arguments are not equally spaced, we use Newton's divided difference
formula (or even Lagrange's formula). The divided difference table is:

x    f(x)   1st d.d.   2nd d.d.   3rd d.d.   4th d.d.
0    4
2    26     11
3    58     32         7
4    112    54         11         1
7    466    118        16         1          0
9    922    228        22         1          0
y = f(x) = f(x_0) + (x - x_0) f(x_0, x_1) + (x - x_0)(x - x_1) f(x_0, x_1, x_2) + ....
  = 4 + (x - 0)(11) + (x - 0)(x - 2)(7) + (x - 0)(x - 2)(x - 3)(1)
  = x³ + 2x² + 3x + 4
Therefore y'(x) = 3x² + 4x + 3 and y'(6) = 135.
y(x) has an extremum where y'(x) = 0, i.e. 3x² + 4x + 3 = 0. But the roots of this equation
are imaginary, so there is no extreme value in the range; in fact, y is an increasing curve.
NUMERICAL INTEGRATION 1
The trapezoidal rule
It is well known that the definite integral ∫_a^b f(x) dx may be interpreted as the area
under the curve y = f(x) for a ≤ x ≤ b and may be evaluated by subdivision of the interval
and summation of the component areas. This additive property of the definite integral
permits evaluation in a piecewise sense. For any subinterval [x_j, x_{j+1}] of the interval
[a, b], we may approximate f(x) by the interpolating polynomial P_n(x). Taking P_1 (a
straight line) on each strip of width h, we obtain the approximation
∫ from x_j to x_{j+1} of f(x) dx ≈ (h/2)(f_j + f_{j+1}),
such that b = a + Nh. Then one can use the additive property to obtain the composite
trapezoidal rule
∫_a^b f(x) dx ≈ h[(1/2)f_0 + f_1 + f_2 + ...... + f_{N-1} + (1/2)f_N].
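A minimal Python sketch of the composite trapezoidal rule (the integrand e^x on [0, 1] is only an illustration, not the example used later in this Step):

```python
from math import exp

def trapezoidal(f, a, b, N):
    """Composite trapezoidal rule with N strips of width h = (b - a)/N."""
    h = (b - a) / N
    total = 0.5 * (f(a) + f(b))
    for j in range(1, N):
        total += f(a + j * h)
    return h * total

# Illustration only: integrate e^x over [0, 1]; the exact value is e - 1 = 1.718282...
for N in (4, 8, 16):
    print(N, round(trapezoidal(exp, 0.0, 1.0, N), 6))
```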
2. Accuracy
3. Example
using the trapezoidal rule and the data in STEP 20. If we use T(h) to
denote the approximation with strip width h, we obtain
EXERCISES
4. Estimate the value of the integral
using the trapezoidal rule and the data given in Exercise 2 of the
preceding Step.
5. Use the trapezoidal rule with h = 1,0.5, and 0.25 to estimate the
value of the integral
NUMERICAL INTEGRATION 2
Simpson's Rule
1. Simpson's Rule
A parabolic arc is fitted to the curve y = f(x) at the three tabular points
Hence, if N = (b - a)/h is even, one obtains Simpson's Rule:
∫_a^b f(x) dx ≈ (h/3)[f_0 + 4f_1 + 2f_2 + 4f_3 + ...... + 2f_{N-2} + 4f_{N-1} + f_N],
where f_j = f(a + jh), j = 0, 1, ......, N.
2. Accuracy
then
.
One may reformulate the quadrature rule for a pair of strips by replacing
f_{j+2} = f(x_{j+1} + h) and f_j = f(x_{j+1} - h) by their Taylor series; thus
Note that the error bound is proportional to h^4, compared with h^2 for the
cruder trapezoidal rule. Note that Simpson's rule is exact for cubics!
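A minimal Python sketch of the composite Simpson's rule (again, the integrand e^x on [0, 1] is only an illustration, not the example of this Step):

```python
from math import exp

def simpson(f, a, b, N):
    """Composite Simpson's rule; the number of strips N must be even."""
    if N % 2:
        raise ValueError("N must be even")
    h = (b - a) / N
    total = f(a) + f(b)
    for j in range(1, N):
        total += (4 if j % 2 else 2) * f(a + j * h)
    return h * total / 3

# Illustration only: integrate e^x over [0, 1]; the exact value is e - 1 = 1.7182818...
print(round(simpson(exp, 0.0, 1.0, 8), 7))
```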
3. Example
whence it is 0.000 000 8 for h = 0.15 and 0.000 000 01 for h = 0.05.
Note that the truncation error is negligible; within round-off error, the
estimate is 0.32148(6).
Checkpoint
1. What is the degree of the approximating polynomial corresponding to
Simpson's Rule?
2. What is the error bound for Simpson's rule?
3. Why is Simpson's Rule well suited for implementation on a computer?
EXERCISES
1. Estimate by numerical integration the value of the integral
to 4D.
Estimate to 5D the resulting error, given that the true value of the
integral is 0.26247.
ORDINARY DIFFERENTIAL EQUATIONS 1
Single-step methods
is obtained. Even then a lot more effort is required to extract the value of y,
corresponding to one value of x. In such situations, it is preferable to use a
numerical approach from the start!
Partial differential equations are beyond the scope of this text, but in this and
the next Step we shall take a brief look at some methods for solving the single
first-order ordinary differential equation
for a given initial value y(x0) = y0. The first-order differential equation and the
given initial value constitute a first-order initial value problem. The numerical
solution of this problem involves an estimation of the values of y(x) at, as a
rule, equidistant points x1, x2,..., xN. For the sake of convenience, we shall
assume that these points are indeed equidistant and use h to denote the constant
step length. In practice, it is sometimes desirable to adjust the step length as
the numerical method proceeds. For instance, one may wish to use a smaller
step length when a point is reached at which the derivative is especially large.
The numerical methods for first-order initial value problems may be used
(in a slightly modified form) to solve higher-order differential equations. A
simple (optional) introduction to this topic.
1. Taylor series
This is Euler's method. However, unless the step length h is very small,
the truncation error will be large and the results inaccurate.
2. Runge-Kutta methods
The first has the same order of accuracy as the Taylor series method with
p = 2 and is usually presented in three steps:
k_1 = h f(x_n, y_n),
k_2 = h f(x_n + h, y_n + k_1),
y_{n+1} = y_n + (k_1 + k_2)/2.
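Hedged Python sketches of Euler's method and of the second-order Runge-Kutta step given above; the test problem y' = -x y², y(0) = 2 is taken from Exercise 2 below (function names, and the use of this problem for both methods, are illustrative choices):

```python
def euler(f, x0, y0, h, n):
    """Euler's method: y_{k+1} = y_k + h f(x_k, y_k)."""
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)
        x += h
    return y

def rk2(f, x0, y0, h, n):
    """Second-order Runge-Kutta step: average of the two slopes k1 and k2."""
    x, y = x0, y0
    for _ in range(n):
        k1 = h * f(x, y)
        k2 = h * f(x + h, y + k1)
        y += 0.5 * (k1 + k2)
        x += h
    return y

# Exercise 2 below: y' = -x*y^2, y(0) = 2, step h = 0.2, estimate y(1)
f = lambda x, y: -x * y * y
print(round(euler(f, 0.0, 2.0, 0.2, 5), 4), round(rk2(f, 0.0, 2.0, 0.2, 5), 4))
# the exact solution is y = 2/(1 + x^2), so y(1) = 1
```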
3. Example
whence
Since
we find
Thus,
and
:
.
Checkpoint
1. For each of the two types of method outlined in this Step, state the main
disadvantage?
2. Why might we expect higher order methods to be more accurate?
EXERCISES
1. Obtain estimates of y(0.8) of the solution of the initial value problem
a. Euler's method,
b. the fourth-order Taylor series method,
c. the second-order Runge-Kutta method with h = 0.1.
d. Compare the accuracy of the three methods.
2. Use Euler's method with step length h = 0.2 to estimate the value of
y(1), given that y' = -xy2 and y(0) = 2.
ORDINARY DIFFERENTIAL EQUATIONS 2
Multi-step methods
As has been mentioned earlier, the methods covered in the last Step are
classified as single-step methods, because the only value of the approximate
solution used in the construction of y_{n+1} is y_n, the result of the preceding step. In
contrast, multi-step methods use earlier values like y_{n-1}, y_{n-2}, . . ., in order to
reduce the number of times that f(x, y) or its derivatives have to be evaluated.
1. Introduction
We will not go into the various ways in which multi-step methods are
used, but it is obvious that we will need more than one starting value,
which may be obtained by first using a single-step method (cf the
preceding Step). One advantage of a multi-step method is that we need
to evaluate only once f(x, y) to obtain yn+1, since fn-1, fn-2, . . ., will already
have been computed. In contrast, any Runge-Kutta method (i.e.,
single-step method) involves more than one function evaluation at each
step, which in the case of complicated functions f (x, y) can be
computationally expensive. Thus, the comparative efficiency of multi-
step methods is often attractive, but such methods may be numerically
unstable.
2. Stability
.
Working to 5D, we construct the following table which allows us to
compare the consequent estimates yn with the analytic values:
It is seen that the estimates get worse as n increases. Not only do the
approximations alternate in sign after x_6, but their magnitudes also grow.
Further calculation shows that y_20 has the value 77.824 55, with an error
which is more than a million times larger than the analytic value!
Checkpoint
1. What distinguishes an explicit multi-step method from an implicit
method?
2. State one advantage of multi-step methods.
EXERCISES
1. Apply the second-order Adams-Bashforth Method with h = 0.1 to the
problem: y' = -5y, y(0) = 1 and obtain the approximations y2, . . ., y10,
using y1 = 0.606 53. Confirm that the approximations do not exhibit the
unstable behaviour of the mid-point method as shown by the table of
Section 2.
2. Retaining up to six digits, use the second-order Adams-Bashforth
Method with step length h=0.1 to estimate y(0.5), given that y' = x + y,
y(0) =1. Use y2 =1.11 as second starting value.
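A minimal Python sketch of the second-order Adams-Bashforth method, y_{n+1} = y_n + (h/2)(3f_n - f_{n-1}), set up for Exercise 1 above (the function name is illustrative; the starter value y_1 = 0.60653 is the one given in the exercise):

```python
def adams_bashforth2(f, x0, y0, y1, h, n_steps):
    """Second-order Adams-Bashforth: y_{n+1} = y_n + (h/2)(3 f_n - f_{n-1})."""
    xs = [x0, x0 + h]
    ys = [y0, y1]                    # y1 comes from a single-step starter method
    for n in range(1, n_steps):
        f_n = f(xs[n], ys[n])
        f_nm1 = f(xs[n - 1], ys[n - 1])
        ys.append(ys[n] + 0.5 * h * (3 * f_n - f_nm1))
        xs.append(xs[n] + h)
    return xs, ys

# Exercise 1 above: y' = -5y, y(0) = 1, h = 0.1, starter value y1 = 0.60653
xs, ys = adams_bashforth2(lambda x, y: -5 * y, 0.0, 1.0, 0.60653, 0.1, 10)
print([round(y, 5) for y in ys])     # compare with the analytic values e^(-5x)
```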
In the preceding two Steps, we have discussed numerical methods for solving
the first-order initial value problem y' = f(x, y), y(x0) = y0. However,
ordinary differential equations which arise in practice are often of higher
order. For example, a more realistic differential equation for the motion of a
pendulum is given by
which may have to be solved subject to given values of y(x0) = y0 and y'(x0) =
y'0. (In order to ensure notational consistency with the preceding two Steps, we
have changed the variables from and t to y and x, respectively.) In general, an
initial value problem for an n-th order differential equation may assume the
form:
We shall see how this n-th order initial value problem may be rewritten as a
system of first-order initial value problems, which leads us to numerical
procedures for the solution of the general initial value problem which turn
out to be extensions of the numerical methods considered in the preceding
two Steps.
Note that an initial value problem for a more general system of n first-
order differential equations is given by:
for j =1, 2, . . ., n.
2. Numerical methods for first-order systems
For the sake of simplicity, we shall consider only the case n = 2, when
the second-order initial value problem
Now we recall that Euler's method for solving y' = f(x, y) was
y_{n+1} = y_n + h f(x_n, y_n),
with the given starting (initial) value y(x_0) = y_0. The analogous method for solving the
system y' = v, v' = g(x, y, v), with starting values y(x_0) = y_0 and v(x_0) = y'_0, is
given by
y_{n+1} = y_n + h v_n
and
v_{n+1} = v_n + h g(x_n, y_n, v_n).
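A brief Python sketch of this extension of Euler's method to the system y' = v, v' = g(x, y, v). The test problem y'' = -y, y(0) = 0, y'(0) = 1 is an assumed illustration (it is not the example of this Step), chosen because its exact solution sin x is easy to compare against:

```python
from math import sin

def euler_system(g, x0, y0, v0, h, n):
    """Euler's method for y'' = g(x, y, y'), written as the system y' = v, v' = g."""
    x, y, v = x0, y0, v0
    for _ in range(n):
        y, v = y + h * v, v + h * g(x, y, v)   # both updates use the old values
        x += h
    return y, v

# Assumed illustration: y'' = -y, y(0) = 0, y'(0) = 1, exact solution y = sin x
y1, v1 = euler_system(lambda x, y, v: -y, 0.0, 0.0, 1.0, 0.2, 5)
print(round(y1, 4), round(v1, 4))              # rough estimates of y(1) and y'(1)
print(round(sin(1.0), 4))                      # exact y(1) = 0.8415 for comparison
```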
3. Numerical example
and
Checkpoint
1. How may an n-th order initial value problem be written as a system of
n first-order initial value problems?
2. How may a numerical method for solving a first-order initial value
problem be extended to solve a system of two first-order initial value
problems?
EXERCISE
Apply Euler's method with step length h = 0.2 to obtain approximations y(1)
and y'(1) for the second-order initial value problem: