Numerical Differentiation: Finite Differences
The derivative of a function f at the point x is defined as the limit of a difference quotient:
\[
f'(x) = \lim_{h \to 0} \frac{f(x + h) - f(x)}{h}.
\]
In other words, the difference quotient \(\frac{f(x + h) - f(x)}{h}\) is an approximation of the derivative \(f'(x)\), and this approximation gets better as \(h\) gets smaller.
How does the error of the approximation depend on h?
Taylor’s theorem with remainder gives the Taylor series expansion
\[
f(x + h) = f(x) + h f'(x) + \frac{h^2}{2!} f''(\xi),
\]
where \(\xi\) is some number between \(x\) and \(x + h\). Rearranging gives
\[
\frac{f(x + h) - f(x)}{h} - f'(x) = \frac{h}{2} f''(\xi),
\]
which tells us that the error of the difference quotient \(\frac{f(x + h) - f(x)}{h}\) is proportional to \(h\) to the power 1, so it is said to be a “first-order” approximation.
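The first-order behavior can be checked numerically: halving \(h\) should roughly halve the error. A minimal Python sketch (the function name is illustrative, not from the notes):

```python
import math

def forward_diff(f, x, h):
    """First-order forward difference approximation of f'(x)."""
    return (f(x + h) - f(x)) / h

# Error of the forward difference for f = sin at x = 1; the exact value is cos(1).
x = 1.0
errors = [abs(forward_diff(math.sin, x, h) - math.cos(x))
          for h in (1e-2, 5e-3, 2.5e-3)]

# First-order behavior: halving h should roughly halve the error.
ratios = [errors[i] / errors[i + 1] for i in range(len(errors) - 1)]
```

Each ratio comes out close to 2, consistent with an error proportional to \(h\).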
If \(h > 0\), say \(h = \Delta x\) where \(\Delta x\) is a finite (as opposed to infinitesimal) positive number, then
\[
\frac{f(x + \Delta x) - f(x)}{\Delta x}
\]
is called the first-order or \(O(\Delta x)\) forward difference approximation of \(f'(x)\).
If \(h < 0\), say \(h = -\Delta x\) where \(\Delta x > 0\), then
\[
\frac{f(x) - f(x - \Delta x)}{\Delta x}
\]
is called the first-order or \(O(\Delta x)\) backward difference approximation of \(f'(x)\).
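As a sketch of the two one-sided formulas (illustrative Python, not part of the notes), applied to \(f(x) = e^x\) at \(x = 1\), whose exact derivative is \(e\): the leading error terms have opposite signs, so their average is noticeably more accurate.

```python
import math

def forward_diff(f, x, dx):
    """First-order forward difference: (f(x + dx) - f(x)) / dx."""
    return (f(x + dx) - f(x)) / dx

def backward_diff(f, x, dx):
    """First-order backward difference: (f(x) - f(x - dx)) / dx."""
    return (f(x) - f(x - dx)) / dx

x, dx = 1.0, 1e-4
exact = math.exp(1.0)          # d/dx e^x = e^x
fwd = forward_diff(math.exp, x, dx)
bwd = backward_diff(math.exp, x, dx)

# The leading error terms are +(dx/2) f''(x) and -(dx/2) f''(x), so averaging
# the two one-sided differences recovers the centered difference.
centered = (fwd + bwd) / 2
```

Both one-sided errors are of size roughly \(\Delta x \, e / 2\); the averaged (centered) value is several orders of magnitude closer to \(e\).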
Here are some commonly used second- and fourth-order “finite difference” formulas for approximating first and second derivatives:

\(O(\Delta x^2)\) centered difference approximations:
\[
f'(x) \approx \frac{f(x + \Delta x) - f(x - \Delta x)}{2\Delta x}, \qquad
f''(x) \approx \frac{f(x + \Delta x) - 2 f(x) + f(x - \Delta x)}{\Delta x^2}.
\]

\(O(\Delta x^4)\) centered difference approximations:
\[
f'(x) \approx \frac{-f(x + 2\Delta x) + 8 f(x + \Delta x) - 8 f(x - \Delta x) + f(x - 2\Delta x)}{12\Delta x}, \qquad
f''(x) \approx \frac{-f(x + 2\Delta x) + 16 f(x + \Delta x) - 30 f(x) + 16 f(x - \Delta x) - f(x - 2\Delta x)}{12\Delta x^2}.
\]
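The orders of accuracy can be compared directly. A minimal Python sketch (function names are illustrative), using \(f = \sin\), so that \(f''(x) = -\sin x\) exactly:

```python
import math

def d2_centered_o2(f, x, dx):
    """O(dx^2) centered approximation of f''(x)."""
    return (f(x + dx) - 2 * f(x) + f(x - dx)) / dx**2

def d2_centered_o4(f, x, dx):
    """O(dx^4) centered approximation of f''(x)."""
    return (-f(x + 2 * dx) + 16 * f(x + dx) - 30 * f(x)
            + 16 * f(x - dx) - f(x - 2 * dx)) / (12 * dx**2)

x, dx = 0.5, 1e-2
exact = -math.sin(x)           # f = sin, so f''(x) = -sin(x)
err2 = abs(d2_centered_o2(math.sin, x, dx) - exact)
err4 = abs(d2_centered_o4(math.sin, x, dx) - exact)
```

With \(\Delta x = 10^{-2}\) the fourth-order formula is several orders of magnitude more accurate than the second-order one at the same step size.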
In science and engineering applications it is often the case that an exact formula for f (x) is not known.
We may only have a set of data points (x1 , y1 ), (x2 , y2 ), . . . , (xn , yn ) available to describe the functional
dependence y = f (x). If we need to estimate the rate of change of y with respect to x in such a situation,
we can use finite difference formulas to compute approximations of f 0 (x). It is appropriate to use a
forward difference at the left endpoint x = x1 , a backward difference at the right endpoint x = xn , and
centered difference formulas for the interior points.
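The endpoint/interior strategy above can be sketched as follows (illustrative Python; the helper name derivative_from_data is an assumption, not from the notes):

```python
def derivative_from_data(xs, ys):
    """Approximate dy/dx at each data point (xs[i], ys[i]): forward difference
    at the left endpoint, backward difference at the right endpoint, and
    centered differences at the interior points. Assumes xs is sorted."""
    n = len(xs)
    d = [0.0] * n
    d[0] = (ys[1] - ys[0]) / (xs[1] - xs[0])                      # forward
    d[-1] = (ys[-1] - ys[-2]) / (xs[-1] - xs[-2])                 # backward
    for i in range(1, n - 1):
        d[i] = (ys[i + 1] - ys[i - 1]) / (xs[i + 1] - xs[i - 1])  # centered
    return d

# Data sampled from y = x^2, whose exact derivative is 2x.
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [x**2 for x in xs]
slopes = derivative_from_data(xs, ys)   # -> [0.5, 1.0, 2.0, 3.0, 3.5]
```

Note that the centered differences are exact for this quadratic at the interior points, while the one-sided endpoint values carry the expected \(O(\Delta x)\) error.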