Numerical Differentiation and Differential Equations
Numerical differentiation
By Taylor's theorem,

$$f(x+h) = f(x) + hf'(x) + \frac{h^2}{2!}f''(x) + \frac{h^3}{3!}f'''(x) + \frac{h^4}{4!}f^{(4)}(x) + \cdots \qquad (6.1)$$

$$f(x-h) = f(x) - hf'(x) + \frac{h^2}{2!}f''(x) - \frac{h^3}{3!}f'''(x) + \frac{h^4}{4!}f^{(4)}(x) - \cdots \qquad (6.2)$$
Rearranging Eq. (6.1) gives

$$f'(x) = \frac{f(x+h) - f(x)}{h} - \frac{h}{2!}f''(x) - \frac{h^2}{3!}f'''(x) - \cdots,$$

$$f'(x) = \frac{f(x+h) - f(x)}{h} + O(h). \qquad (6.3)$$
This latter equation is the Forward Difference approximation to $f'(x)$.
Rearranging Eq. (6.2) gives

$$f'(x) = \frac{f(x) - f(x-h)}{h} + \frac{h}{2!}f''(x) - \frac{h^2}{3!}f'''(x) + \cdots,$$

$$f'(x) = \frac{f(x) - f(x-h)}{h} + O(h).$$
This equation is the Backward Difference approximation to $f'(x)$.
Subtracting Eq. (6.2) from Eq. (6.1) and rearranging gives

$$f'(x) = \frac{f(x+h) - f(x-h)}{2h} - \frac{h^2}{3!}f'''(x) - \frac{h^4}{5!}f^{(5)}(x) - \cdots,$$

$$f'(x) = \frac{f(x+h) - f(x-h)}{2h} + O(h^2). \qquad (6.4)$$
Eq. (6.4) is called the Central Difference approximation to $f'(x)$. The parameter $h$ is called the step size.
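All three formulas are straightforward to implement. The following Python sketch (the function names are illustrative) evaluates each approximation for a given $f$, $x$ and $h$:

```python
def forward_diff(f, x, h):
    """Forward difference approximation to f'(x); error O(h)."""
    return (f(x + h) - f(x)) / h

def backward_diff(f, x, h):
    """Backward difference approximation to f'(x); error O(h)."""
    return (f(x) - f(x - h)) / h

def central_diff(f, x, h):
    """Central difference approximation to f'(x); error O(h^2)."""
    return (f(x + h) - f(x - h)) / (2 * h)
```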
Example
Let $f(x) = \ln x$ and $x = x_0 = 1.8$. Approximate $f'(1.8)$ by the Forward Difference formula and estimate the error:

$$f'(1.8) \approx \frac{f(1.8+h) - f(1.8)}{h}, \qquad h > 0,$$

with error

$$\frac{|h f''(\xi)|}{2} = \frac{|h|}{2\xi^2} \le \frac{|h|}{2(1.8)^2}, \qquad \text{where } 1.8 < \xi < 1.8 + h.$$

Using instead the Central Difference approximation,

$$f'(1.8) \approx \frac{f(1.8+h) - f(1.8-h)}{2h}, \qquad h > 0,$$

with error

$$\frac{h^2 |f^{(3)}(\xi)|}{3!} = \frac{h^2}{3|\xi|^3} \le \frac{h^2}{3(1.8)^3}, \qquad \text{where } 1.8 - h < \xi < 1.8 + h.$$

The results for the central difference are summarised in the following table for various values of $h$.
| $h$ | $f(1.8+h)$ | $f(1.8-h)$ | $\frac{f(1.8+h)-f(1.8-h)}{2h}$ | $\frac{h^2}{3(1.8)^3}$ |
|---|---|---|---|---|
| 0.1 | 0.64185389 | 0.53062825 | 0.5561282 | $5.719 \times 10^{-4}$ |
| 0.01 | 0.59332685 | 0.58221562 | 0.5555615 | $5.7155 \times 10^{-6}$ |
| 0.001 | 0.58834207 | 0.58723096 | 0.555555 | $5.7155 \times 10^{-8}$ |
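As a check, the central-difference column of the table can be reproduced with a few lines of Python (a sketch; `math.log` is the natural logarithm used for $f$):

```python
import math

f = math.log      # f(x) = ln x
x0 = 1.8
exact = 1 / x0    # f'(1.8) = 1/1.8

for h in (0.1, 0.01, 0.001):
    central = (f(x0 + h) - f(x0 - h)) / (2 * h)  # O(h^2) approximation
    bound = h**2 / (3 * x0**3)                   # error bound h^2 / (3(1.8)^3)
    print(f"h={h:<6} central={central:.7f} "
          f"error={abs(central - exact):.3e} bound={bound:.3e}")
```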
We can also obtain an approximation to the second derivative. For that we add Eqs. (6.1) and (6.2):

$$f(x+h) + f(x-h) = 2f(x) + h^2 f''(x) + \frac{h^4}{12} f^{(4)}(x) + \cdots,$$

and solving for $f''(x)$ we get

$$f''(x) = \frac{f(x+h) - 2f(x) + f(x-h)}{h^2} - \frac{h^2}{12} f^{(4)}(x) - \cdots = \frac{f(x+h) - 2f(x) + f(x-h)}{h^2} + O(h^2).$$
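This second-difference formula is equally simple to code; a minimal Python sketch, applied to $f(x) = \ln x$ at $x = 1.8$ (for which $f''(1.8) = -1/1.8^2 \approx -0.30864$):

```python
import math

def second_diff(f, x, h):
    """Central difference approximation to f''(x); error O(h^2)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

for h in (0.1, 0.01, 0.001):
    print(f"h={h:<6} f''(1.8) ~ {second_diff(math.log, 1.8, h):.7f}")
```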
Example
Following the previous example, $f''(1.8)$ can be approximated in the same way for $f(x) = \ln x$, with the error again decreasing like $h^2$.

Now consider a function known only through the following table of values $f_i = f(x_i)$:
| $x_i$ | $f_i$ |
|---|---|
| 2.0 | 0.123060 |
| 2.1 | 0.105706 |
| 2.2 | 0.089584 |
| 2.3 | 0.074764 |
| 2.4 | 0.061277 |
| 2.5 | 0.049126 |
| 2.6 | 0.038288 |
| 2.7 | 0.028722 |
| 2.8 | 0.020371 |
Using central differences with $h = 0.1$, we have

$$f'(2.4) = \frac{f(2.5) - f(2.3)}{2(0.1)} + O(h^2) = -0.12819 + C_1(0.1)^2 + \hat{C}_1(0.1)^4 + \cdots.$$

Repeating the same procedure with $h = 0.2$, we have

$$f'(2.4) = \frac{f(2.6) - f(2.2)}{2(0.2)} + O(h^2) = -0.12824 + C_2(0.2)^2 + \hat{C}_2(0.2)^4 + \cdots.$$
The values of $C_1$ and $C_2$ are not identical, but we assume they are the same; it can be shown that we make an error of only $O(h^4)$ in taking the two values as equal. Based on this assumption, we can eliminate the $C$'s and get

$$f'(2.4) = -0.12819 + \frac{1}{(0.2/0.1)^2 - 1}\left[-0.12819 - (-0.12824)\right] = -0.12817 + O(h^4).$$
Given two estimates of a value that have errors of $O(h^n)$, where the $h$'s are in the ratio of 2 to 1, we can extrapolate to a better estimate of the exact value as

$$\text{Better estimate} = \text{more accurate} + \frac{1}{2^n - 1}(\text{more accurate} - \text{less accurate}),$$

where the more accurate value is the one computed with the smaller value of $h$.
From the table of data, one can calculate an approximation with $h = 0.4$ and then extrapolate using the approximation with $h = 0.2$ to obtain a second $O(h^4)$ estimate. A second extrapolation can then be performed on these two $O(h^4)$ approximations to give an $O(h^6)$ approximation. The extrapolations are laid out in table form, as illustrated in the example below.
The rule stated above is a general rule, so it may be used for second-derivative approximations as well. For example, let us approximate $f''(2.4)$. Taking $h = 0.1$ first, the central difference approximation gives

$$f''(2.4) = \frac{f(2.5) - 2f(2.4) + f(2.3)}{(0.1)^2} + O(h^2) = 0.13360 + O(h^2). \qquad (6.5)$$
Since the data was constructed using $f(x) = e^{-x}\sin x$, we can compute the true values $f'(2.4) = -0.12817$ and $f''(2.4) = 0.13379$. The difference in the results for $f''(2.4)$ is due to round-off error.
Example
Build a Richardson table for $f(x) = x^2 \cos x$ to evaluate $f'(1)$. Start with $h = 0.1$.
| $h$ | Initial estimate | First-order est. | Second-order est. | Third-order est. |
|---|---|---|---|---|
| 0.1 | 0.226736 | | | |
| 0.05 | 0.236031 | 0.239129 | | |
| 0.025 | 0.238358 | 0.239133 | 0.239134 | |
| 0.0125 | 0.238938 | 0.239132 | 0.239132 | 0.239132 |
Richardson's technique indicates convergence when two successive values on the same line are the same. Note that the table is oriented differently from that of the tabulated-data problem; this is done to indicate the decreasing value of $h$.
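A Python sketch of how such a Richardson table can be built (the function names and layout are illustrative, not prescribed by the notes):

```python
import math

def central_diff(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)

def richardson_table(f, x, h0, levels=4):
    """Richardson table for f'(x): row k uses step h0/2^k,
    column j holds the j-th order extrapolate."""
    table = []
    for k in range(levels):
        row = [central_diff(f, x, h0 / 2**k)]
        for j in range(1, k + 1):
            prev = table[k - 1][j - 1]   # less accurate (larger h)
            curr = row[j - 1]            # more accurate (smaller h)
            # central-difference errors are O(h^(2j)): factor 2^(2j) - 1
            row.append(curr + (curr - prev) / (2**(2 * j) - 1))
        table.append(row)
    return table

f = lambda x: x**2 * math.cos(x)
for row in richardson_table(f, 1.0, 0.1):
    print("  ".join(f"{v:.6f}" for v in row))
```

Running this reproduces the table above, converging to $f'(1) = 2\cos 1 - \sin 1 \approx 0.239134$.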
Chapter 8
Consider the initial value problem

$$y' = f(x, y), \qquad y(a) = \eta. \qquad (8.1)$$

If we use the forward difference approximation

$$f'(x) = \frac{f(x+h) - f(x)}{h} + O(h)$$

to approximate $dy/dx$ at a point $x_i$, then

$$y_i' = \frac{1}{h}(y_{i+1} - y_i),$$

and substituting $y_i' = f(x_i, y_i)$ we obtain the algorithm for Euler's method:

$$y_{i+1} = y_i + h f(x_i, y_i). \qquad (8.2)$$

Thus, given $(x_0, y_0) = (a, \eta)$, we can calculate $(x_i, y_i)$ for $i = 1, \ldots, n$. Since the new value $y_{i+1}$ can be calculated from known values of $x_i$ and $y_i$, this method is explicit.
Expanding $y_{i+1}$ in a Taylor series about $x_i$ gives

$$y_{i+1} = y_i + h y_i' + \frac{h^2}{2!} y_i'' + \frac{h^3}{3!} y_i''' + \cdots,$$

and since $y_i' = f_i$,

$$y_{i+1} = y_i + h f_i + \frac{h^2}{2!} f_i' + \frac{h^3}{3!} f_i'' + \cdots.$$

If we subtract Eq. (8.2) from this we obtain

$$\varepsilon_E = \frac{h^2}{2!} f_i' + \frac{h^3}{3!} f_i'' + \cdots = \frac{h^2}{2!} f'(\xi),$$

where $x_i < \xi < x_{i+1}$. Thus the error per step is $O(h^2)$ and is called the Local Truncation Error (LTE). If we integrate over $n$ steps, the Global Truncation Error is $O(h)$.
Example
Solve the IVP

$$x'(t) = 2\sqrt{x}, \qquad x(0) = 4,$$

to compute $x(1)$ using Euler's method with 4 and 8 steps respectively. Display all the intermediate steps in the form of a table, including the exact solution and the error.
With 4 steps:

| $k$ | $t_k$ | $x_k$ | $x(t_k)$ | Error |
|---|---|---|---|---|
| 0 | 0 | 4 | 4 | 0 |
| 1 | 0.25 | 5.000 | 5.063 | 0.063 |
| 2 | 0.50 | 6.118 | 6.250 | 0.132 |
| 3 | 0.75 | 7.355 | 7.563 | 0.208 |
| 4 | 1.00 | 8.711 | 9.000 | 0.289 |
With 8 steps (every second step shown):

| $k$ | $t_k$ | $x_k$ | $x(t_k)$ | Error |
|---|---|---|---|---|
| 0 | 0 | 4 | 4 | 0 |
| 2 | 0.25 | 5.030 | 5.063 | 0.033 |
| 4 | 0.50 | 6.182 | 6.250 | 0.068 |
| 6 | 0.75 | 7.456 | 7.563 | 0.107 |
| 8 | 1.00 | 8.852 | 9.000 | 0.148 |
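A Python sketch of Euler's method applied to this IVP; the exact solution $x(t) = (t+2)^2$ supplies the error column:

```python
import math

def euler(f, t0, x0, t_end, n):
    """Euler's method with n steps; returns the lists of t_k and x_k."""
    h = (t_end - t0) / n
    ts, xs = [t0], [x0]
    for _ in range(n):
        xs.append(xs[-1] + h * f(ts[-1], xs[-1]))
        ts.append(ts[-1] + h)
    return ts, xs

f = lambda t, x: 2 * math.sqrt(x)
exact = lambda t: (t + 2) ** 2      # solves x' = 2*sqrt(x), x(0) = 4

for n in (4, 8):
    ts, xs = euler(f, 0.0, 4.0, 1.0, n)
    print(f"n={n}: x(1) ~ {xs[-1]:.3f}, error = {abs(exact(1.0) - xs[-1]):.3f}")
```

Note that doubling the number of steps roughly halves the error, consistent with the $O(h)$ global truncation error.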
If the slope used over the step is instead taken as the average of the slopes at the two ends, we obtain the trapezoidal formula

$$y_{k+1} = y_k + \frac{h}{2}\bigl(f(t_k, y_k) + f(t_{k+1}, y_{k+1})\bigr).$$
This is an implicit method: even when $y_k$ is known, $y_{k+1}$ cannot in general be determined directly, because the term $f(t_{k+1}, y_{k+1})$ makes the update a non-linear equation in $y_{k+1}$ (if $f$ is non-linear in $y$), which must be solved by an iterative method such as Newton's method. A way out is to estimate $y_{k+1}$ using Euler's method; that estimate is called a predictor and is denoted $p_{k+1}$. We can now write the complete algorithm for the improved Euler's method:
$$
\begin{aligned}
t_0 &= a, \\
\text{Predictor:} \quad p_{k+1} &= y_k + h f(t_k, y_k), \\
t_{k+1} &= t_k + h, \\
\text{Corrector:} \quad y_{k+1} &= y_k + \frac{h}{2}\bigl(f(t_k, y_k) + f(t_{k+1}, p_{k+1})\bigr), \qquad k = 0, 1, \ldots, n-1.
\end{aligned}
$$
Example
Solve the IVP

$$x'(t) = 2\sqrt{x}, \qquad x(0) = 4,$$

to compute $x(1)$ using the improved Euler method with 4 and 8 steps respectively, and compare your results with those of Euler's method. (Partially solved in class.)
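A sketch of the predictor-corrector loop in Python (an illustrative implementation, not the in-class solution):

```python
import math

def improved_euler(f, t0, y0, t_end, n):
    """Improved Euler: Euler predictor followed by a trapezoidal corrector."""
    h = (t_end - t0) / n
    t, y = t0, y0
    for _ in range(n):
        p = y + h * f(t, y)                          # predictor (Euler step)
        y = y + h / 2 * (f(t, y) + f(t + h, p))      # corrector
        t += h
    return y

f = lambda t, x: 2 * math.sqrt(x)
for n in (4, 8):
    y = improved_euler(f, 0.0, 4.0, 1.0, n)
    print(f"n={n}: x(1) ~ {y:.4f}, error = {abs(9.0 - y):.4f}")
```

The errors are markedly smaller than Euler's and shrink roughly by a factor of four when $h$ is halved, reflecting the method's second-order accuracy.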
A general explicit one-step method has the form

$$y_{i+1} = y_i + h\,\phi(x_i, y_i; h).$$

In Euler's method we have $\phi(x_i, y_i; h) = f_i = y_i'$: we use the slope at the point $(x_i, y_i)$ to extrapolate and obtain $y_{i+1}$. The Runge-Kutta methods extend this. Instead of just calculating the slope at $(x_i, y_i)$, we can take a 'weighted' average of the slopes at $(x_i, y_i)$ and at intermediate points. For example,
$$
\begin{aligned}
q_1 &= f(x_i, y_i), \\
q_2 &= f\!\left(x_i + \tfrac{h}{2},\, y_i + \tfrac{h}{2} q_1\right), \\
y_{i+1} &= y_i + h q_2,
\end{aligned}
$$

which has LTE of $O(h^3)$.
We can include more sample points in $\phi(x_i, y_i; h)$ to increase the accuracy. The most widely used formula is the fourth-order RK method:
$$
\begin{aligned}
q_1 &= f(x_i, y_i), \\
q_2 &= f\!\left(x_i + \tfrac{h}{2},\, y_i + \tfrac{h}{2} q_1\right), \\
q_3 &= f\!\left(x_i + \tfrac{h}{2},\, y_i + \tfrac{h}{2} q_2\right), \\
q_4 &= f(x_i + h,\, y_i + h q_3), \\
y_{i+1} &= y_i + \frac{h}{6}(q_1 + 2q_2 + 2q_3 + q_4).
\end{aligned}
$$

This method has LTE of $O(h^5)$.
Example
Approximate $y(0.4)$ using RK4 with 2 steps for the following non-autonomous IVP:

$$y' = f(t, y) = ty + y + t^2, \qquad t_0 = 0, \quad y_0 = 2, \qquad h = 0.2.$$
Step 1:

$$
\begin{aligned}
q_1 &= f(t_0, y_0) = t_0 y_0 + y_0 + t_0^2 = 0 + 2 + 0 = 2, \\
y_0 + \tfrac{h}{2} q_1 &= 2 + (0.1)(2) = 2.2, \\
q_2 &= f\!\left(t_0 + \tfrac{h}{2},\, y_0 + \tfrac{h}{2} q_1\right) = 0.1(2.2) + 2.2 + 0.1^2 = 2.43, \\
y_0 + \tfrac{h}{2} q_2 &= 2.243, \\
q_3 &= f\!\left(t_0 + \tfrac{h}{2},\, y_0 + \tfrac{h}{2} q_2\right) = 2.4773, \\
y_0 + h q_3 &= 2.4955, \\
q_4 &= f(t_0 + h,\, y_0 + h q_3) = 3.0346, \\
y_1 &= y_0 + \frac{h}{6}(q_1 + 2q_2 + 2q_3 + q_4) = 2.4950.
\end{aligned}
$$
Step 2:
Proceeding in the same manner for the second step, we find

$$y(0.4) \approx y_2 = 3.2566.$$
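Both steps can be verified with a short Python sketch:

```python
def rk4_step(f, t, y, h):
    """One step of the classical fourth-order Runge-Kutta method."""
    q1 = f(t, y)
    q2 = f(t + h / 2, y + h / 2 * q1)
    q3 = f(t + h / 2, y + h / 2 * q2)
    q4 = f(t + h, y + h * q3)
    return y + h / 6 * (q1 + 2 * q2 + 2 * q3 + q4)

f = lambda t, y: t * y + y + t**2
t, y, h = 0.0, 2.0, 0.2
for step in (1, 2):
    y = rk4_step(f, t, y, h)
    t += h
    print(f"step {step}: y({t:.1f}) ~ {y:.4f}")
```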
The methods considered above are called single-step methods, since they calculate the solution at $x_{i+1}$ from known information at $x_i$ alone. This approach ignores other points which may already have been calculated; linear multistep methods (LMMs) attempt to take more than one point into consideration.
If we take the central difference approximation

$$y_i' = \frac{1}{2h}(y_{i+1} - y_{i-1}),$$

we obtain the explicit two-step formula $y_{i+1} = y_{i-1} + 2h f_i$. Alternatively, integrating $y' = f$ over $[x_{i-1}, x_{i+1}]$ using Simpson's rule gives

$$y_{i+1} - y_{i-1} = \frac{h}{3}(f_{i-1} + 4f_i + f_{i+1}).$$
This time an estimate of $f_{i+1}$ is required on the right-hand side; this makes the formula an implicit method. Both formulas above are two-step methods.
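A minimal Python sketch of the explicit two-step formula (often called the midpoint or leapfrog method), using a single Euler step to generate the second starting value (a common choice assumed here, not prescribed by the notes):

```python
import math

def two_step_explicit(f, t0, y0, t_end, n):
    """Explicit two-step method y_{i+1} = y_{i-1} + 2h f_i.

    The extra starting value y_1 is generated with one Euler step.
    """
    h = (t_end - t0) / n
    ys = [y0, y0 + h * f(t0, y0)]           # y_0 and the Euler-started y_1
    for i in range(1, n):
        ys.append(ys[i - 1] + 2 * h * f(t0 + i * h, ys[i]))
    return ys[-1]

f = lambda t, x: 2 * math.sqrt(x)           # same IVP as before, x(0) = 4
print(f"x(1) ~ {two_step_explicit(f, 0.0, 4.0, 1.0, 8):.4f} (exact 9.0)")
```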