Two - Numerical Methods For Computational Fluid Dynamics
The general second-order partial differential equation in two independent variables may be written as

$$A\frac{\partial^2 u}{\partial t^2} + B\frac{\partial^2 u}{\partial x\,\partial t} + C\frac{\partial^2 u}{\partial x^2} + D\frac{\partial u}{\partial t} + E\frac{\partial u}{\partial x} + Fu = G$$
In this equation, the coefficients A to G may be zero, constant, functions of x and t only, or
functions of x, t, and u. In addition, the coefficients A to E may be functions of ∂u/∂t or ∂u/∂x as
well. Here we have used x and t as independent variables, implying space and time. However,
any two physical or mathematical variables can be used. General second-order partial differential
equations are classified as parabolic, elliptic or hyperbolic depending on the following relationship
among the coefficients:
If B² - 4AC > 0, the equation is hyperbolic.
If B² - 4AC = 0, the equation is parabolic.
If B² - 4AC < 0, the equation is elliptic.
These partial differential equations (PDE) have nothing to do with conic sections. It is just that
the conditions for the different behavior of the PDEs are the same as the conditions that
determine the kind of curve that results from a plot of the equation At² + Bxt + Cx² + Dt + Ex + G = 0.
You should be able to verify the classifications of common partial differential equations that are
given below.
The wave equation, ∂²u/∂t² = c² ∂²u/∂x², is hyperbolic.
The conduction equation, ∂u/∂t = ∂²u/∂x², is parabolic.
Laplace's equation, ∂²u/∂x² + ∂²u/∂y² = 0, is elliptic.
Poisson's equation, ∂²u/∂x² + ∂²u/∂y² = G(x, y, u), is elliptic.
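As a quick check of these classifications, the short Python sketch below applies the B² - 4AC test to the coefficients of the model equations above; the coefficient values are read off by matching each equation to the general form, and c = 1 is an arbitrary illustrative choice.

```python
def classify(A, B, C):
    """Classify a second-order PDE from the coefficients of its highest derivatives."""
    disc = B**2 - 4.0*A*C
    if disc > 0:
        return "hyperbolic"
    if disc == 0:
        return "parabolic"
    return "elliptic"

c = 1.0
# Wave equation u_tt - c^2 u_xx = 0:  A = 1, B = 0, C = -c^2
print("wave:      ", classify(1.0, 0.0, -c**2))   # hyperbolic
# Conduction equation u_t - u_xx = 0: no u_tt or u_xt term, so A = B = 0, C = -1
print("conduction:", classify(0.0, 0.0, -1.0))    # parabolic
# Laplace equation u_xx + u_yy = 0 (y playing the role of t): A = 1, B = 0, C = 1
print("Laplace:   ", classify(1.0, 0.0, 1.0))     # elliptic
```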
The classification of these partial differential equations has implications for the kinds of boundary
conditions needed. Elliptic problems require a closed boundary with boundary conditions
specified at all points of the boundary. Hyperbolic and parabolic equations have an open
boundary. The coordinate for this boundary is time and the nature of the open boundary matches
our common sense that we cannot have future results affecting earlier ones, except in science
fiction. In addition, hyperbolic equations provide a set of space curves known as characteristics.
The solutions of hyperbolic equations propagate along these characteristic curves and this can
provide an important constraint on numerical solutions of these equations.
Problems with more than two independent variables have a behavior that is a mixture of the
classifications above. For example, the conduction equation in two space dimensions, ∂u/∂t = ∂²u/∂x² + ∂²u/∂y², has elliptic behavior in the x-y plane, but parabolic behavior in the x-t and y-t planes. Fluid dynamics equations can have a variety of behaviors. The steady Navier-Stokes
equations are elliptic equations, requiring boundary conditions at all parts of the region defining
the flow. The transient Navier-Stokes equations behave in mixed ways. Subsonic flows have
elliptic behavior in the space dimensions, but parabolic behavior in the time direction. Supersonic
flows have a hyperbolic nature in the time domain, but elliptic behavior in the space domains only.
The differences in the behavior of the differential equations require differences in the numerical
approaches to solving different flow environments.
Various kinds of boundary conditions can arise in fluid dynamics and related problems. If the
dependent variable is specified at the boundary, the boundary condition is called a Dirichlet
boundary condition or a boundary condition of the first kind. If the gradient of the dependent
variable is specified, the boundary condition is called a Neumann boundary condition or a
boundary condition of the second kind. When the boundary conditions are specified as a
relationship between the dependent variable and its gradient, the boundary conditions are
classified as being of the third kind or mixed.
The three different types of boundary conditions can be illustrated in a heat transfer problem
where we are solving a differential equation for temperature. A specified boundary temperature
would be a Dirichlet boundary condition. A specified boundary heat flux, which is proportional to
the temperature gradient by Fourier's law, would be a Neumann or second-kind boundary
condition. Convection boundary conditions are mixed or third-kind boundary conditions. In
such a boundary condition, the external convective heat flux is set equal to the internal
conductive heat flux. This gives a mixed boundary condition such as h_conv(T∞ - T) = -k ∂T/∂x at x = 0.
All numerical methods convert the differential equation and boundary conditions into a set of
simultaneous linear algebraic equations. The differential equation provides an accurate
description of the dependent variable at every point in the region where the equation applies. The
numerical approach provides approximate numerical values of the dependent variable at a set of
discrete points in the region. The values of the dependent variable at these points are found by
solving the simultaneous algebraic equations.
Two fundamentally different approaches are used to derive the algebraic equations from the
differential equations: finite differences and finite elements. In the finite-difference approach,
numerical approximations to the derivatives occurring in the differential equations are used to
replace the derivatives at a set of grid nodes. In the finite-element approach, the region is divided
into elements and an approximate behavior for the dependent variable over the small element is
used. Both of these methods will be explored in these notes, following a discussion of some
fundamental ideas.
Although much of the original work in CFD used finite differences, some codes currently used in
practice use a finite element approach. The main reason for this is the ease with which the finite
element method may be applied to irregular geometries. However, the most popular CFD codes
use an intermediate approach, known as finite volumes, which is based on the use of the integral
balance equations. The equations that result from this approach are similar to finite-difference
equations, but the geometry considerations are similar to those of finite elements. CFD codes
that are based on finite differences or finite volumes have come to use unstructured grids, which
allow the same flexibility as finite-element codes for handling complex geometries.
Finite-difference grids
In a finite-difference grid, a region is subdivided into a set of discrete points. The spacing
between the points may be uniform or non-uniform. For example, a grid in the x direction, xmin ≤ x ≤ xmax, may be written as follows. First we place a series of N+1 nodes numbered from zero to N
in this region. The coordinate of the first node, x0, equals xmin. The final grid node, xN = xmax. The
spacing between any two grid nodes, xi and xi-1, has the symbol Δxi. These relations are
summarized as equation [2-1].

x0 = xmin        xN = xmax        xi - xi-1 = Δxi        [2-1]
A non-uniform grid, with different spacing between different nodes, is illustrated below.
------------------------------------------~ ~-----------------------
x0 x1 x2 x3 xN-2 xN-1 xN
For a uniform grid, all values of Δxi are the same. In this case, the uniform grid spacing in a one-dimensional problem is usually given the symbol h; i.e., h = xi - xi-1 for all values of i.
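The sketch below builds both kinds of one-dimensional grid described above; the node count, limits, and non-uniform spacings are arbitrary illustrative values.

```python
# Uniform grid: N + 1 nodes from x_min to x_max with constant spacing h.
N, x_min, x_max = 10, 0.0, 1.0
h = (x_max - x_min) / N
x_uniform = [x_min + i*h for i in range(N + 1)]   # x_0 = x_min, x_N = x_max

# Non-uniform grid: the spacings dx_i may all differ; each node is the previous node plus dx_i.
dx = [0.05, 0.05, 0.10, 0.10, 0.20, 0.20, 0.30]   # arbitrary spacings that sum to x_max - x_min
x_nonuniform = [x_min]
for dxi in dx:
    x_nonuniform.append(x_nonuniform[-1] + dxi)   # x_i = x_{i-1} + dx_i

print(x_uniform[0], x_uniform[-1])                # first and last nodes of the uniform grid
print(x_nonuniform[0], x_nonuniform[-1])          # both grids end at x_max (up to roundoff)
```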
In two space dimensions a grid is required for both the x and y directions, which results in the
following grid and geometry definitions, assuming that there are M+1 grid nodes in the y direction.

x0 = xmin        xN = xmax        xi - xi-1 = Δxi
y0 = ymin        yM = ymax        yj - yj-1 = Δyj        [2-2]

For a three-dimensional transient problem there would be four independent variables: the three
space dimensions, x, y, and z, and time. Each of these variables would be defined at discrete
points, i.e.

x0 = xmin        xN = xmax        xi - xi-1 = Δxi
y0 = ymin        yM = ymax        yj - yj-1 = Δyj        [2-3]
z0 = zmin        zK = zmax        zk - zk-1 = Δzk
t0 = tmin        tL = tmax        tn - tn-1 = Δtn
Any dependent variable such as u(x,y,z,t) in a continuous representation would be defined only at
discrete grid points in a finite-difference representation. The following notation is used for the set
of discrete values of dependent variables.
$$u_{ijk}^{n} = u(x_i, y_j, z_k, t_n) \qquad [2-4]$$
For steady-state problems, the n superscript is omitted. For problems with only one or two space
dimensions two or one of the directional subscripts may be omitted. The general use of the
notation remains. The subscripts (and superscript) on the dependent variable represent a
particular point in the region, (xi, yj, zk, tn) where the variable is defined.
Finite-difference Expressions Derived from Taylor Series
The Taylor series provides a simple tool for deriving finite-difference approximations. It also gives
an indication of the error caused by the finite difference expression. Recall that the Taylor series
for a function of one variable, f(x), expanded about some point x = a, is given by the infinite
series,
$$f(x) = f(a) + \frac{df}{dx}\bigg|_{x=a}(x-a) + \frac{1}{2!}\frac{d^2 f}{dx^2}\bigg|_{x=a}(x-a)^2 + \frac{1}{3!}\frac{d^3 f}{dx^3}\bigg|_{x=a}(x-a)^3 + \cdots \qquad [2-5]$$
The x = a subscript on the derivatives reinforces the fact that these derivatives are evaluated at
the expansion point, x = a. We can write the infinite series using a summation notation as
follows:
$$f(x) = \sum_{n=0}^{\infty}\frac{1}{n!}\frac{d^n f}{dx^n}\bigg|_{x=a}(x-a)^n \qquad [2-6]$$
In the equation above, we use the definitions 0! = 1! = 1 and the definition of the zeroth
derivative as the function itself, i.e., d⁰f/dx⁰|x=a = f(a).
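The following sketch evaluates the truncated series in equation [2-6] for f(x) = e^x, chosen because every derivative at the expansion point is simply e^a; the expansion point, evaluation point, and truncation levels are arbitrary illustrative values.

```python
import math

def taylor_exp(x, a, m):
    """Sum the first m+1 terms (n = 0..m) of the Taylor series of e^x about x = a."""
    # Every derivative of e^x is e^x, so d^n f/dx^n at x = a is just e^a.
    return sum(math.exp(a) * (x - a)**n / math.factorial(n) for n in range(m + 1))

a, x = 1.0, 1.5
for m in (1, 2, 4, 8):
    approx = taylor_exp(x, a, m)
    print(f"m = {m}: approximation = {approx:.8f}, truncation error = {math.exp(x) - approx:.2e}")
```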
If the series is truncated after some finite number of terms, say m terms, the omitted terms are
called the truncation error. These omitted terms are also an infinite series. This is illustrated
below.
$$f(x) = \underbrace{\sum_{n=0}^{m}\frac{1}{n!}\frac{d^n f}{dx^n}\bigg|_{x=a}(x-a)^n}_{\text{Terms used}} + \underbrace{\sum_{n=m+1}^{\infty}\frac{1}{n!}\frac{d^n f}{dx^n}\bigg|_{x=a}(x-a)^n}_{\text{Truncation error}} \qquad [2-7]$$
In this equation the second sum represents the truncation error, εm, from truncating the series
after m terms.

$$\varepsilon_m = \sum_{n=m+1}^{\infty}\frac{1}{n!}\frac{d^n f}{dx^n}\bigg|_{x=a}(x-a)^n \qquad [2-8]$$
The theorem of the mean can be used to show that the infinite-series truncation error can be
expressed in terms of the first term in the truncation error, that is
$$\varepsilon_m = \frac{1}{(m+1)!}\frac{d^{m+1} f}{dx^{m+1}}\bigg|_{x=\xi}(x-a)^{m+1} \qquad [2-9]$$
Here the subscript, x = ξ, on the derivative indicates that this derivative is no longer evaluated at
the known point x = a, but is to be evaluated at x = ξ, an unknown point between x and a. Thus,
the price we pay for reducing the infinite series for the truncation error to a single term is that we
lose the certainty about the point where the derivative is evaluated. In principle, this would allow
us to compute a bound on the error by finding the value of ξ, between x and a, that made the
error computed by equation [2-9] a maximum. In practice, we do not usually know the exact
functional form, f(x), let alone its (m+1)th derivative.
In using Taylor series to derive the basic finite-difference expressions, we start with uniform one-dimensional grid spacing. The difference, Δxi, between any two grid points is the same and is
given the symbol, h. This uniform grid can be expressed as follows.

Δxi = xi - xi-1 = h    or    xi = x0 + ih    for all i = 0, ..., N        [2-10]

Various increments in x at any point along the grid can be written as follows:

xi+1 - xi-1 = xi+2 - xi = 2h        xi-1 - xi = xi - xi+1 = -h        xi-1 - xi+1 = xi - xi+2 = -2h        [2-11]
Using the increments in x defined above and the notation fi = f(xi), the following Taylor series can
be written using expansion about the point x = xi to express the values of f at some specific grid
points, xi+1, xi-1, xi+2, and xi-2. The conventional Taylor series expression for f(x) in equation [2-5]
can be adapted for use in finite differences by writing an expansion equation about a particular
grid point, x = xi, to determine the value of f(x) at another grid point, xi+k. From equation [2-10], we
see that xi+k = xi + kh so that f(xi+k) = f(xi + kh). The difference, Δx, in the independent variable, x,
between the evaluation point, xi + kh, and the expansion point, xi, is equal to kh. Using xi = a as
the expansion point and kh as Δx allows us to rewrite equation [2-5] as shown below.
$$f(x_i + kh) = f(x_i) + \frac{df}{dx}\bigg|_{x=x_i}(kh) + \frac{1}{2!}\frac{d^2 f}{dx^2}\bigg|_{x=x_i}(kh)^2 + \frac{1}{3!}\frac{d^3 f}{dx^3}\bigg|_{x=x_i}(kh)^3 + \cdots \qquad [2-12]$$
The next step is to use the notation that f(xi + kh) = fi+k, and the following notation for the nth
derivative, evaluated at x = xi.

$$f_i' = \frac{df}{dx}\bigg|_{x=x_i} \qquad f_i'' = \frac{d^2 f}{dx^2}\bigg|_{x=x_i} \qquad \cdots \qquad f_i^{(n)} = \frac{d^n f}{dx^n}\bigg|_{x=x_i} \qquad [2-13]$$
With these notational changes, the Taylor series in equation [2-12] can be written as follows.
$$f_{i+k} = f_i + f_i'\,(kh) + f_i''\,\frac{(kh)^2}{2!} + f_i'''\,\frac{(kh)^3}{3!} + \cdots \qquad [2-14]$$
Finite-difference expressions for various derivatives can be obtained by writing the Taylor series
shown above for different values of k, combining the results, and solving for the derivative. The
simplest example of this is to use only the series for k = 1.
$$f_{i+1} = f_i + f_i'\,h + f_i''\,\frac{h^2}{2} + f_i'''\,\frac{h^3}{6} + \cdots \qquad [2-15]$$
We can rearrange this equation to solve for the first derivative, fi'; recall that this is the first
derivative at the point x = xi.
$$f_i' = \frac{f_{i+1} - f_i}{h} - f_i''\,\frac{h}{2} - f_i'''\,\frac{h^2}{6} - \cdots = \frac{f_{i+1} - f_i}{h} + O(h) \qquad [2-16]$$
The first term to the right of the equal sign gives us a simple expression for the first derivative; it is
simply the difference in the function at two points, f(xi + h) - f(xi), divided by h, which is the
difference in x between those two points. The remaining terms in the first form of the equation
are an infinite series. That infinite series gives us an equation for the error that we would have if
we used the simple finite difference expression to evaluate the first derivative.
As noted above, we can replace the infinite series for the truncation error by the leading term in
that series. Remember that we pay a price for this replacement; we no longer know the point at
which the leading term is to be evaluated. Because of this we often write the truncation error as
shown in the second equation. Here we use a capital oh followed by the grid size in parentheses.
In general the grid size is raised to some power. (Here we have the first power of the grid size, h = h¹.) In general we would have the notation O(hⁿ). This notation tells us how the truncation
error depends on the step size. This is an important concept. If the error is proportional to h,
cutting h in half would cut the error in half. If the error is proportional to h², then cutting the step
size in half would reduce the error by a factor of four. When the truncation error is written with this O(hⁿ)
notation, we call n the order of the error. In two calculations, with step sizes h1 and h2, we
expect the following relation between the truncation errors, ε1 and ε2, for the calculations.
$$\frac{\varepsilon_2}{\varepsilon_1} \approx \left(\frac{h_2}{h_1}\right)^{n} \qquad [2-17]$$
We use the approximation sign (≈) rather than the equality sign in this equation because the error
term also includes an unknown factor of some higher order derivative, evaluated at some
unknown point in the region. The approximation shown in equation [2-17] would be an equality if
this other factor were the same for both step sizes.
Another important idea about the order of the error is that an nth-order finite-difference expression
will give an exact value for the derivative of an nth-order polynomial. Because a Taylor series is a
polynomial series, it can represent a polynomial exactly if a sufficient number of terms are used.
This is illustrated further below.
The expression for the first derivative that we derived in equation [2-16] is said to have a first
order error. We can obtain a similar finite difference approximation by writing the general series
in equation [2-14] for k = -1. This gives the following result.
$$f_{i-1} = f_i - f_i'\,h + f_i''\,\frac{h^2}{2} - f_i'''\,\frac{h^3}{6} + \cdots \qquad [2-18]$$
We can rearrange this equation to solve for the first derivative, fi'; recall that this is the first
derivative at the point x = xi.
$$f_i' = \frac{f_i - f_{i-1}}{h} + f_i''\,\frac{h}{2} - f_i'''\,\frac{h^2}{6} + \cdots = \frac{f_i - f_{i-1}}{h} + O(h) \qquad [2-19]$$
Here again, as in equation [2-16], we have a simple finite-difference expression for the first
derivative that has a first-order error. The expression in equation [2-16] is called a forward
difference. It gives an approximation to the derivative at point i in terms of values at that point
and points forward (in the +x direction) of that point. The expression in equation [2-19] is called a
backwards difference for similar reasons.
An expression for the first derivative that has a second-order error can be found by subtracting
equation [2-18] from equation [2-15]. When this is done, terms with even powers of h cancel
giving the following result.
$$f_{i+1} - f_{i-1} = 2 f_i'\,h + 2 f_i'''\,\frac{h^3}{6} + 2 f_i^{(5)}\,\frac{h^5}{120} + \cdots \qquad [2-20]$$
Solving this equation for the first derivative gives the following result.
$$f_i' = \frac{f_{i+1} - f_{i-1}}{2h} - f_i'''\,\frac{h^2}{6} - f_i^{(5)}\,\frac{h^4}{120} - \cdots = \frac{f_{i+1} - f_{i-1}}{2h} + O(h^2) \qquad [2-21]$$
The finite-difference expression for the first derivative in equation [2-21] is called a central
difference. The point at which the derivative is evaluated, xi, is central to the two points (xi+1 and
xi-1) at which the function is evaluated. The central difference expression provides a higher order
(more accurate) expression for the first derivative as compared to the forward or backward
derivatives. There is only a small amount of extra work (a division by 2) in getting this more
accurate result. Because of their higher accuracy, central differences are usually preferred in
finite difference expressions.
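A short sketch of the three difference formulas derived so far, applied to f(x) = e^x at x = 1, the same test function used in Table 2-1 below; because the exact derivative is also e^x, the first-order error of the one-sided formulas and the second-order error of the central formula can be seen directly.

```python
import math

f = math.exp            # test function; its exact derivative is also exp(x)

def forward(f, x, h):   # equation [2-16], error O(h)
    return (f(x + h) - f(x)) / h

def backward(f, x, h):  # equation [2-19], error O(h)
    return (f(x) - f(x - h)) / h

def central(f, x, h):   # equation [2-21], error O(h^2)
    return (f(x + h) - f(x - h)) / (2*h)

x, exact = 1.0, math.exp(1.0)
for h in (0.4, 0.2, 0.1):
    print(f"h = {h}: forward error  = {abs(forward(f, x, h) - exact):.4e}, "
          f"backward error = {abs(backward(f, x, h) - exact):.4e}, "
          f"central error  = {abs(central(f, x, h) - exact):.4e}")
```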
Central difference expressions are not possible at the start or end of a region. It is possible to
get higher order finite difference expressions for such points by using more complex expressions.
For example, at the start of a region, x = x0, we can write the Taylor series in equation [2-14] for
the first two points in from the boundary, x1 and x2, expanding around the boundary point, x0.
$$f_1 = f_0 + f_0'\,h + f_0''\,\frac{h^2}{2} + f_0'''\,\frac{h^3}{6} + \cdots \qquad [2-22]$$
$$f_2 = f_0 + f_0'\,(2h) + f_0''\,\frac{(2h)^2}{2} + f_0'''\,\frac{(2h)^3}{6} + \cdots \qquad [2-23]$$
These equations can be combined to eliminate the h² terms. To start, we multiply equation [2-22]
by 4 and subtract it from equation [2-23].
$$f_2 - 4f_1 = \left[f_0 + f_0'\,(2h) + f_0''\,\frac{(2h)^2}{2} + f_0'''\,\frac{(2h)^3}{6} + \cdots\right] - 4\left[f_0 + f_0'\,h + f_0''\,\frac{h^2}{2} + f_0'''\,\frac{h^3}{6} + \cdots\right]$$
This equation can be simplified as follows

$$f_2 - 4f_1 = -3f_0 - 2h\,f_0' + \frac{4h^3}{6}\,f_0''' + \cdots \qquad [2-24]$$
When this equation is solved for the first derivative at the start of the region a second order
accurate expression is obtained.
$$f_0' = \frac{-3f_0 + 4f_1 - f_2}{2h} + \frac{h^2}{3}\,f_0''' + \cdots = \frac{-3f_0 + 4f_1 - f_2}{2h} + O(h^2) \qquad [2-25]$$
A similar equation can be found at the end of the region, x = xN, by obtaining the Taylor series
expansions about the point x = xN, for the values of f(x) at x = xN-1 and x = xN-2. This derivation
parallels the derivation used to obtain equation [2-25]. The result is shown below.
$$f_N' = \frac{3f_N - 4f_{N-1} + f_{N-2}}{2h} + \frac{h^2}{3}\,f_N''' + \cdots = \frac{3f_N - 4f_{N-1} + f_{N-2}}{2h} + O(h^2) \qquad [2-26]$$
Equations [2-25] and [2-26] give second-order accurate expressions for the first derivative. The
expression in equation [2-25] is a forward difference; the one in equation [2-26] is a backwards
difference.
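A sketch of the second-order boundary formulas in equations [2-25] and [2-26], again tested on f(x) = e^x so the result can be checked against the exact derivative; the end points and step size are the same illustrative values as in Table 2-1.

```python
import math

def d1_start(f0, f1, f2, h):
    """Second-order forward difference at the first node, equation [2-25]."""
    return (-3.0*f0 + 4.0*f1 - f2) / (2.0*h)

def d1_end(fN, fNm1, fNm2, h):
    """Second-order backward difference at the last node, equation [2-26]."""
    return (3.0*fN - 4.0*fNm1 + fNm2) / (2.0*h)

h, x0, xN = 0.1, 0.6, 1.4
print(d1_start(math.exp(x0), math.exp(x0 + h), math.exp(x0 + 2*h), h), "vs exact", math.exp(x0))
print(d1_end(math.exp(xN), math.exp(xN - h), math.exp(xN - 2*h), h), "vs exact", math.exp(xN))
```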
The evaluation of three expressions for the first derivative is shown in Table 2-1. These are (1)
the second-order, central-difference expression from equation [2-21], (2) the first-order, forward-difference from equation [2-16], and (3) the second-order, forward-difference from equation [2-25]. The first derivative is evaluated for f(x) = e^x. For this function, the first derivative df/dx = e^x.
Since we know the exact value of the first derivative, we can calculate the error in the finite
difference results.
In Table 2-1, the results are computed for three different step sizes: h = 0.4, h = 0.2 and h = 0.1.
The table also shows the ratio of the error as the step size is changed. The next-to-last column
shows the ratio of the error for h = 0.4 to the error for h = 0.2. The final column shows the ratio of
the error for h = 0.2 to the error for h = 0.1.
Table 2-1
Tests of Finite-Difference Formulae to Compute the First Derivative of f(x) = exp(x)

Results using second-order central differences (equation [2-21])

| x | f(x) = exact f'(x) | f'(x), h = 0.4 | Error, h = 0.4 | f'(x), h = 0.2 | Error, h = 0.2 | f'(x), h = 0.1 | Error, h = 0.1 | Error ratio (h=.4)/(h=.2) | Error ratio (h=.2)/(h=.1) |
|---|---|---|---|---|---|---|---|---|---|
| 0.6 | 1.8221 | | | | | | | | |
| 0.7 | 2.0138 | | | | | 2.0171 | 0.0034 | | |
| 0.8 | 2.2255 | | | 2.2404 | 0.0149 | 2.2293 | 0.0037 | | 4.01 |
| 0.9 | 2.4596 | | | 2.4760 | 0.0164 | 2.4637 | 0.0041 | | 4.01 |
| 1.0 | 2.7183 | 2.7914 | 0.0731 | 2.7364 | 0.0182 | 2.7228 | 0.0045 | 4.02 | 4.01 |
| 1.1 | 3.0042 | | | 3.0242 | 0.0201 | 3.0092 | 0.0050 | | 4.01 |
| 1.2 | 3.3201 | | | 3.3423 | 0.0222 | 3.3257 | 0.0055 | | 4.01 |
| 1.3 | 3.6693 | | | | | 3.6754 | 0.0061 | | |
| 1.4 | 4.0552 | | | | | | | | |

Results using first-order forward differences (equation [2-16])

| x | f(x) = exact f'(x) | f'(x), h = 0.4 | Error, h = 0.4 | f'(x), h = 0.2 | Error, h = 0.2 | f'(x), h = 0.1 | Error, h = 0.1 | Error ratio (h=.4)/(h=.2) | Error ratio (h=.2)/(h=.1) |
|---|---|---|---|---|---|---|---|---|---|
| 0.6 | 1.8221 | 2.2404 | 0.4183 | 2.0171 | 0.1950 | 1.9163 | 0.0942 | 2.15 | 2.07 |
| 0.7 | 2.0138 | 2.4760 | 0.4623 | 2.2293 | 0.2155 | 2.1179 | 0.1041 | 2.15 | 2.07 |
| 0.8 | 2.2255 | 2.7364 | 0.5109 | 2.4637 | 0.2382 | 2.3406 | 0.1151 | 2.15 | 2.07 |
| 0.9 | 2.4596 | 3.0242 | 0.5646 | 2.7228 | 0.2632 | 2.5868 | 0.1272 | 2.15 | 2.07 |
| 1.0 | 2.7183 | 3.3423 | 0.6240 | 3.0092 | 0.2909 | 2.8588 | 0.1406 | 2.15 | 2.07 |
| 1.1 | 3.0042 | | | 3.3257 | 0.3215 | 3.1595 | 0.1553 | | 2.07 |
| 1.2 | 3.3201 | | | 3.6754 | 0.3553 | 3.4918 | 0.1717 | | 2.07 |
| 1.3 | 3.6693 | | | | | 3.8590 | 0.1897 | | |
| 1.4 | 4.0552 | | | | | | | | |

Results using second-order forward differences (equation [2-25])

| x | f(x) = exact f'(x) | f'(x), h = 0.4 | Error, h = 0.4 | f'(x), h = 0.2 | Error, h = 0.2 | f'(x), h = 0.1 | Error, h = 0.1 | Error ratio (h=.4)/(h=.2) | Error ratio (h=.2)/(h=.1) |
|---|---|---|---|---|---|---|---|---|---|
| 0.6 | 1.8221 | 1.6895 | 0.1327 | 1.7938 | 0.0283 | 1.8156 | 0.0066 | 4.69 | 4.32 |
| 0.7 | 2.0138 | | | 1.9825 | 0.0313 | 2.0065 | 0.0072 | | 4.32 |
| 0.8 | 2.2255 | | | 2.1910 | 0.0346 | 2.2175 | 0.0080 | | 4.32 |
| 0.9 | 2.4596 | | | 2.4214 | 0.0382 | 2.4508 | 0.0088 | | 4.32 |
| 1.0 | 2.7183 | | | 2.6761 | 0.0422 | 2.7085 | 0.0098 | | 4.32 |
| 1.1 | 3.0042 | | | | | 2.9934 | 0.0108 | | |
| 1.2 | 3.3201 | | | | | 3.3082 | 0.0119 | | |
| 1.3 | 3.6693 | | | | | | | | |
| 1.4 | 4.0552 | | | | | | | | |
For the second-order formulae, the error ratios in the last two columns of Table 2-1 are about 4,
showing that the second-order error increases by a factor of 4 as the step size is doubled. For
the first order expression, these ratios are about 2. This shows that the error increases by the
same factor as the step size for the first order expressions. The expected values of the error
ratios are only obtained in the limit of very small step sizes. We see that the values in the last
column of this table (where the actual values of h are smaller than they are in the next-to-last
column) are closer to the ideal error ratio.
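The error ratios in the last two columns of Table 2-1 can be reproduced with a few lines of code; the sketch below doubles and halves the step size around each test value and prints the ratio of successive errors, which should approach 2 for the first-order formula and 4 for the second-order central formula.

```python
import math

x, exact = 1.0, math.exp(1.0)
err_fwd1 = lambda h: abs((math.exp(x + h) - math.exp(x)) / h - exact)          # O(h) formula
err_cent = lambda h: abs((math.exp(x + h) - math.exp(x - h)) / (2*h) - exact)  # O(h^2) formula

for h in (0.4, 0.2, 0.1):
    print(f"h = {h}: first-order error ratio  = {err_fwd1(2*h) / err_fwd1(h):.2f}, "
          f"second-order error ratio = {err_cent(2*h) / err_cent(h):.2f}")
```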
Truncation errors are not the only kind of error that we encounter in finite difference expressions.
As the step sizes get very small, the terms in the numerator of the finite difference expressions
become very close to each other. We lose significant figures when we do the subtraction. For
example, consider the previous problem of finding the numerical derivative of f(x) = e^x. Pick x = 1
as the point where we want to evaluate the derivative. With h = 0.1 we have the following data for
calculating the derivative by the central-difference formula in equation [2-21].

$$f'(x) = f_i' \approx \frac{f_{i+1} - f_{i-1}}{2h} = \frac{f(x+h) - f(x-h)}{2h} = \frac{3.004166 - 2.459603}{2(0.1)} = 2.722815$$
Since the first derivative of e^x is e^x, the correct value of the derivative at x = 1 is e^1 = 2.718282; so
the error in this value of the first derivative is 4.5×10⁻³. For h = 0.0001, the numerical value of the
first derivative is found as follows.

$$f'(x) \approx \frac{f(x+h) - f(x-h)}{2h} = \frac{2.7185536702 - 2.7180100139}{2(0.0001)} = 2.718281832990$$
Here, the error is 4.5×10⁻⁹. This looks like our second-order error. We cut the step size by a
factor of 1,000 and our error decreased by a factor of 1,000,000, as we would expect for a
second-order error. We are starting to see potential problems in the subtraction of the two
numbers in the numerator. Because the first four digits are the same, we have lost four
significant figures in doing this subtraction. What happens if we decrease h by a factor of 1,000
again? Here is the result for h = 10⁻⁷.
$$f'(x) \approx \frac{f(x+h) - f(x-h)}{2h} = \frac{2.71828210028724 - 2.71828155660388}{2(0.0000001)} = 2.7182851763$$
Our truncation analysis leads us to expect another factor of one million in the error reduction as
we decrease the step size by 1,000. This should give us an error of 4.5×10⁻¹⁵. However, we find
that the actual error is 5.9×10⁻⁹. We see the reason for this in the numerator of the finite
difference expression. As the difference between f(x+h) and f(x-h) shrinks, we are taking the
difference of nearly equal numbers. This kind of error is called roundoff error because it results
from the necessity of a computer to round off real numbers to some finite size. (These
calculations were done with an Excel spreadsheet, which has about 15 significant figures.) Figure
2-1 shows the effect of step size on error for a large range of step sizes.
For the large step sizes to the right of Figure 2-1, the plot of error versus step size appears to be
a straight line on this log-log plot. This is consistent with equation [2-17]. If we take logs of both
sides of that equation and solve for n, we get the following result.
$$n = \frac{\log\left(\dfrac{\varepsilon_2}{\varepsilon_1}\right)}{\log\left(\dfrac{h_2}{h_1}\right)} = \frac{\log(\varepsilon_2) - \log(\varepsilon_1)}{\log(h_2) - \log(h_1)} \qquad [2-27]$$
Equation [2-27] shows that the order of the error is just the slope of a log(error) versus log(h) plot.
If we take the slope of the straight-line region on the right of Figure 2-1, we get a value of
approximately two for the slope, confirming the second-order error for the central difference
expression that we are using here. However, we also see that as the step size reaches about
10⁻⁵, the error starts to level off and then increase. At very small step sizes the numerator of the
finite-difference expression becomes zero on a computer and the error is just the exact value of
the derivative.
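A sketch of the kind of experiment behind Figure 2-1: sweep the step size over many decades, record the central-difference error for f(x) = e^x at x = 1, and estimate the order n from equation [2-27] using successive step sizes. For moderate step sizes the estimated slope is close to 2; once h falls to roughly 10⁻⁵ and below, roundoff in the subtraction takes over and the slope estimate loses its meaning.

```python
import math

x, exact = 1.0, math.exp(1.0)

def central_error(h):
    return abs((math.exp(x + h) - math.exp(x - h)) / (2*h) - exact)

steps = [10.0**(-k) for k in range(1, 13)]           # h = 1e-1 down to 1e-12
errors = [central_error(h) for h in steps]

for (h1, e1), (h2, e2) in zip(zip(steps, errors), zip(steps[1:], errors[1:])):
    if e1 == 0.0 or e2 == 0.0:
        continue                                     # difference underflowed; no slope to estimate
    # Equation [2-27]: n = [log(e2) - log(e1)] / [log(h2) - log(h1)]
    n = (math.log(e2) - math.log(e1)) / (math.log(h2) - math.log(h1))
    print(f"h = {h2:.0e}: error = {e2:.2e}, estimated order n = {n:5.2f}")
```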
Figure 2-1. Effect of Step Size on Error. (Log-log plot of the error in the central-difference derivative versus step size; the error axis spans roughly 1E-11 to 1E+01 and the step-size axis spans roughly 1E-17 to 1E-01.)
Final Observations on Finite-Difference Expressions from Taylor Series
The notes above have focused on the general approach to the derivation of finite-difference
expressions using Taylor series. Such derivations lead to an expression for the truncation error.
That error is due to omitting the higher order terms in the Taylor series. We have characterized
that truncation error by the power or order of the step size in the first term that is truncated. The
truncation error is an important factor in the accuracy of the results. However, we also saw that
very small step sizes lead to roundoff errors that can be even larger than truncation errors.
The use of Taylor series to derive finite difference expressions can be extended to higher order
derivatives and expressions that are more complex, but have a higher order truncation error.
One expression that will be important for subsequent course work is the central-difference
expression for the second derivative. This can be found by adding equations [2-15] and [2-18].
$$f_{i+1} + f_{i-1} = 2f_i + 2f_i''\,\frac{h^2}{2} + 2f_i^{(4)}\,\frac{h^4}{24} + \cdots \qquad [2-28]$$
We can solve this equation to obtain a finite-difference expression for the second derivative.
$$f_i'' = \frac{f_{i+1} - 2f_i + f_{i-1}}{h^2} - f_i^{(4)}\,\frac{h^2}{12} - \cdots = \frac{f_{i+1} - 2f_i + f_{i-1}}{h^2} + O(h^2) \qquad [2-29]$$
Although we have been deriving expressions here for ordinary derivatives, we will apply the same
expressions to partial derivatives. For example, the expression in equation [2-29] for the second
derivative could represent d²f/dx² or ∂²f/∂x².
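A sketch of the second-derivative formula in equation [2-29], again checked against f(x) = e^x, whose second derivative is also e^x; halving h cuts the error by roughly a factor of four, as expected for a second-order expression.

```python
import math

def second_derivative(f, x, h):
    """Central-difference second derivative, equation [2-29], error O(h^2)."""
    return (f(x + h) - 2.0*f(x) + f(x - h)) / h**2

x, exact = 1.0, math.exp(1.0)
for h in (0.2, 0.1, 0.05):
    err = abs(second_derivative(math.exp, x, h) - exact)
    print(f"h = {h}: second-derivative error = {err:.3e}")
```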
The Taylor series we have been using here have considered x as the independent variable.
However, these expressions can be applied to any coordinate direction or time.
Although we have used Taylor series to derive the finite-difference expressions, they could also
be derived from interpolating polynomials. In this approach, one uses numerical methods for
developing polynomial approximations to functions, then takes the derivatives of the
approximating polynomials to approximate the derivatives of the functions. A finite-difference
expression with an nth-order error that gives the value of any quantity should be able to represent
the given quantity exactly for an nth-order polynomial.*
The expressions that we have considered are for constant step size. It is also possible to write
the Taylor series for variable step size and derive finite difference expressions with variable step
sizes. Such expressions have lower-order truncation error terms for the same amount of work in
computing the finite difference expression.
Although accuracy tells us that we should normally prefer central-difference expressions for
derivatives, we will see that for some convection terms special one-sided differences, known as
upwind differences, will be used.
In solving differential equations by finite-difference methods, the differential equation is replaced
by its finite difference equivalent at each node. This gives a set of simultaneous algebraic
equations that are solved for the values of the dependent variable at each grid point.
*
If a second-order polynomial is written as y = a + bx + cx², its first derivative at a point x = x0 is given by the
following equation: [dy/dx]x=x0 = b + 2cx0. If we use the second-order central-difference expression in
equation [2-21] to evaluate the first derivative, we get the same result, as shown below:

$$\frac{dy}{dx}\bigg|_{x=x_0} \approx \frac{y(x_0+h) - y(x_0-h)}{2h} = \frac{[a + b(x_0+h) + c(x_0+h)^2] - [a + b(x_0-h) + c(x_0-h)^2]}{2h} = \frac{2bh + 4cx_0 h}{2h} = b + 2cx_0$$
We can obtain the exact solution of equation [2-35] without specifying values for a, TA, TB, and L.
However, we have to specify these values (as well as a value of x) to compute a numerical value
for T. In the numerical solution process we cannot obtain an analytical form like equation [2-35];
instead, we must specify numerical values for the various parameters before we start to solve the
set of algebraic equations shown above. For this example, we choose a = 2, TA = 0, TB = 1, and L
= 1; we can then determine the solution for different values of N. Table 2-2 compares the exact
and numerical solution for N = 10.
The error in Table 2-2 is defined as the absolute value of the difference between the exact and
the numerical solution. This error is seen to vary over the solution domain. At the boundaries,
where the temperatures are known, the error is zero. As we get further from the boundaries, the
error increases, becoming a maximum at the midpoint of the region.
Table 2-2
Comparison of Exact and Numerical Solution for Equation [2-33]

| i | x | Exact Solution | Numerical Solution | Error |
|---|---|---|---|---|
| 0 | 0 | 0 | 0 | 0 |
| 1 | 0.1 | 0.21849 | 0.21918 | 0.00070 |
| 2 | 0.2 | 0.42826 | 0.42960 | 0.00134 |
| 3 | 0.3 | 0.62097 | 0.62284 | 0.00187 |
| 4 | 0.4 | 0.78891 | 0.79115 | 0.00224 |
| 5 | 0.5 | 0.92541 | 0.92783 | 0.00242 |
| 6 | 0.6 | 1.02501 | 1.02739 | 0.00238 |
| 7 | 0.7 | 1.08375 | 1.08585 | 0.00211 |
| 8 | 0.8 | 1.09928 | 1.10088 | 0.00160 |
| 9 | 0.9 | 1.07099 | 1.07188 | 0.00089 |
| 10 | 1 | 1 | 1 | 0 |

* These equations are said to form a tridiagonal matrix. In a matrix format, only the main diagonal and the
ones above it and below it have nonzero coefficients. A simple algorithm for solving such a set of equations
is given in the appendix to these notes.
In problems with a large number of numerical results, it is useful to have a single measure of the
error. (This is a typical problem any time we desire a single measure for a set of numbers. In the
parlance of vectors, we speak of the vector norm as a single number that characterizes the size
of the vector. We can use a similar terminology here and speak of the norm of the error.) Two
measures of the overall error are typically used. The first is the largest absolute error. For the
data in Table 2-2, that error is 0.00242. Another possible definition of the error norm is the root-
mean-square (RMS) error, defined as in equation [2-36]. In that equation, N refers to the number
of values that contribute to the error measure; boundary temperatures, which are given, not
calculated, would not be included in the sums.
$$\varepsilon_{RMS} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\varepsilon_i^2} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(T_{exact} - T_{numerical}\right)_i^2} \qquad [2-36]$$
The data in Table 2-2 have nine unknown temperatures. Plugging the data for those
temperatures into equation [2-36] and using N = 9 gives the RMS error as 0.00183. If we repeat
the solution using N = 100, the maximum error is 0.0000241, and the RMS error is 0.0000173. In
both measures of the error we have achieved a reduction in error by a factor of 100 as we
decrease the grid spacing by a factor of 10. This is an indication of the second-order error that
we used in converting the differential equation into a finite-difference equation.
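A sketch of the two error norms applied to the interior values of Table 2-2; the boundary values are excluded because they are specified, not computed. The printed maximum and RMS errors should be close to the 0.00242 and 0.00183 quoted above, up to rounding of the tabulated digits.

```python
import math

# Interior values from Table 2-2 (i = 1..9): exact and numerical temperatures.
exact     = [0.21849, 0.42826, 0.62097, 0.78891, 0.92541, 1.02501, 1.08375, 1.09928, 1.07099]
numerical = [0.21918, 0.42960, 0.62284, 0.79115, 0.92783, 1.02739, 1.08585, 1.10088, 1.07188]

errors = [abs(e - n) for e, n in zip(exact, numerical)]
max_error = max(errors)
rms_error = math.sqrt(sum(err**2 for err in errors) / len(errors))   # equation [2-36]

print(f"maximum error = {max_error:.5f}")   # about 0.00242
print(f"RMS error     = {rms_error:.5f}")   # about 0.00183
```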
An important problem in computational fluid dynamics and heat transfer is determining the
gradients at the wall that provide important physical quantities such as the heat flux. We want to
examine the error in the heat flux in the problem that we have solved. We first have to calculate
the exact solution for the heat flux. Since this problem involved heat generation, we will also be
interested in the integrated (total) heat generation over the region.
The wall gradients of the exact solution in equation [2-35] can be found by taking the
first derivative of that solution.
$$\frac{dT}{dx} = a\,\frac{T_B - T_A\cos(aL)}{\sin(aL)}\cos(ax) - a\,T_A\sin(ax) \qquad [2-37]$$
The boundary heat transfer at x = 0 and x = L is found by evaluating this expression at those
points and multiplying by the thermal conductivity, k.
$$q_{x=L} = -k\,\frac{dT}{dx}\bigg|_{x=L} = -ka\,\frac{T_B - T_A\cos(aL)}{\sin(aL)}\cos(aL) + ka\,T_A\sin(aL) \qquad [2-38]$$
$$q_{x=0} = -k\,\frac{dT}{dx}\bigg|_{x=0} = -ka\,\frac{T_B - T_A\cos(aL)}{\sin(aL)}\cos(0) + ka\,T_A\sin(0) = -ka\,\frac{T_B - T_A\cos(aL)}{\sin(aL)} \qquad [2-39]$$
Equation [2-38] can be simplified by using a common denominator, sin(aL), and using the trig
identity sin²x + cos²x = 1.
$$q_{x=L} = \frac{ka}{\sin(aL)}\left[T_A\cos^2(aL) - T_B\cos(aL) + T_A\sin^2(aL)\right] = ka\,\frac{T_A - T_B\cos(aL)}{\sin(aL)} \qquad [2-40]$$
The heat flow at x = 0 is entering the region; the heat flow at x = L is leaving the region. The total
heat leaving the region, qx=L - qx=0, must be equal to the total heat generated. Since the heat
source term for the differential equation was equal to bT, the total heat generation for the region,
Qgen,tot, must be equal to the integral of bT over the region. Using the exact solution in equation
[2-35], we find the total heat generated as follows.
$$Q_{gen,tot} = \int_0^L bT\,dx = b\int_0^L\left[\frac{T_B - T_A\cos(aL)}{\sin(aL)}\sin(ax) + T_A\cos(ax)\right]dx \qquad [2-41]$$
Performing the indicated integration and evaluating the result at the specified upper and lower
limits gives the following result.
$$Q_{gen,tot} = \frac{b}{a}\left[\frac{T_B - T_A\cos(aL)}{\sin(aL)}\bigl(1 - \cos(aL)\bigr) + T_A\sin(aL)\right] \qquad [2-42]$$
We can simplify this slightly by using a common denominator of sin(aL) and using the same trig
identity, sin²x + cos²x = 1, used previously.
$$Q_{gen,tot} = \frac{b}{a\sin(aL)}\left[T_B - T_B\cos(aL) - T_A\cos(aL) + T_A\cos^2(aL) + T_A\sin^2(aL)\right] = \frac{b\,(T_B + T_A)}{a\sin(aL)}\bigl[1 - \cos(aL)\bigr] \qquad [2-43]$$
The result for the total heat generated can be compared to the total heat flux for the region, found
by subtracting equation [2-39] from equation [2-40].
$$q_{x=L} - q_{x=0} = ka\,\frac{T_A - T_B\cos(aL)}{\sin(aL)} + ka\,\frac{T_B - T_A\cos(aL)}{\sin(aL)} \qquad [2-44]$$
To make the comparison between the heat flux and the heat generated we need to use the
definition a² = b/k given in the paragraph before equation [2-35]. If we make this substitution in
equation [2-44] for the net heat flux and do some rearrangement, we obtain equation [2-45],
which confirms that the net heat outflow equals the heat generated, as shown in equation [2-43].
$$q_{x=L} - q_{x=0} = \frac{ka}{\sin(aL)}\Bigl[\bigl(T_A - T_B\cos(aL)\bigr) + \bigl(T_B - T_A\cos(aL)\bigr)\Bigr] = \frac{b\,(T_B + T_A)}{a\sin(aL)}\bigl[1 - \cos(aL)\bigr] \qquad [2-45]$$
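The energy balance in equation [2-45] is easy to verify numerically. The sketch below uses the same parameter values as the example (a = 2, TA = 0, TB = 1, L = 1) together with an assumed thermal conductivity k = 1 (any positive value would do, with b following from a² = b/k); the net heat outflow and the total generation should agree.

```python
import math

a, TA, TB, L = 2.0, 0.0, 1.0, 1.0   # example values used in the text
k = 1.0                              # assumed conductivity; b follows from a^2 = b/k
b = a**2 * k

sin_aL, cos_aL = math.sin(a*L), math.cos(a*L)

q_L = k*a*(TA - TB*cos_aL) / sin_aL                 # equation [2-40]
q_0 = -k*a*(TB - TA*cos_aL) / sin_aL                # equation [2-39]
Q_gen = b*(TB + TA)*(1.0 - cos_aL) / (a*sin_aL)     # equation [2-43]

print(f"net heat outflow q_L - q_0 = {q_L - q_0:.6f}")
print(f"total heat generated       = {Q_gen:.6f}")
print(f"boundary gradients: dT/dx(0) = {-q_0/k:.4f}, dT/dx(L) = {-q_L/k:.4f}")
```

The last line also reproduces the exact boundary gradients (2.1995 and -0.9153) that are compared with the numerical values in Table 2-3 below.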
In the finite difference result we need to address the computation of the gradients at x = 0 and x =
L. To do this we will use the second-order expressions for the first derivative given in equations
[2-25] and [2-26].
$$\frac{dT}{dx}\bigg|_{x=0} \approx \frac{-T_2 + 4T_1 - 3T_0}{2h} = \frac{-0.42960 + 4(0.21918) - 3(0)}{2(0.1)} = 2.2357 \qquad [2-46]$$
$$\frac{dT}{dx}\bigg|_{x=L} \approx \frac{3T_N - 4T_{N-1} + T_{N-2}}{2h} = \frac{3(1) - 4(1.07188) + 1.10088}{2(0.1)} = -0.9332 \qquad [2-47]$$
The exact and numerical values for the boundary temperature gradients are compared for both h
= 0.1 and h = 0.01 in Table 2-3. For both gradients, we observe the expected relationship
between error and step size for a second-order method: cutting the step size by a factor of ten
reduces the error by a factor of 100 (approximately).
Table 2-3
Comparison of Exact and Numerical Boundary Gradients in Solution to Equation [2-33]

| Location of Gradient | Exact Solution | Step Size | Numerical Solution | Error |
|---|---|---|---|---|
| x = 0 | 2.1995 | h = 0.1 | 2.2357 | 0.03618 |
| x = 0 | 2.1995 | h = 0.01 | 2.1999 | 0.00036 |
| x = L = 1 | -0.9153 | h = 0.1 | -0.9332 | 0.01786 |
| x = L = 1 | -0.9153 | h = 0.01 | -0.9155 | 0.00021 |
Application of Finite Elements to a Simple Differential Equation
Here we apply a finite-element method, known as the Galerkin method,* to the solution of
equation [2-33]. In applying finite elements to a one-dimensional problem, we divide the region
between x = 0 and x = L into N elements that are straight lines of length h. (For this one-dimensional problem, the finite element points are the same as the finite difference points.) For
this example, we will use a simple linear polynomial approximation for the temperature. We give
this approximation the symbol T̂ to distinguish it from the true temperature, T. If we substitute
our polynomial approximation into the differential equation, we can write a differential equation
based on the approximation functions. The Galerkin method that we will be using is one method
in a class of methods known as the method of weighted residuals. In these methods one seeks
to set the integral of the approximate differential equation, times some weighting function, wi(x), to
zero over the entire region. In our case, we would try to satisfy the following equation for some
number of weighting functions, equal to the number of elements.
$$\int_0^L w_i(x)\left[\frac{d^2\hat T}{dx^2} + a^2\,\hat T\right]dx = 0 \qquad i = 1, \ldots, N \qquad [2-48]$$
In this approach, we are using an integral approximation. If we used the exact differential
equation, we would automatically satisfy equation [2-48] since the exact differential equation is
identically zero over the region.
In this one-dimensional case, the geometry is simple enough so that we do not have to use the
dimensionless ξ-η coordinate system. We have the same one-dimensional grid in this case that
we had in the finite difference example: a set of grid nodes starting with x = x0 = 0 on the left side
and ending at x = xN = L on the right. (We will later consider the region between node i and node
i+1 as a linear element.)

* As noted above, the method of weighted residuals is just one approach to finite elements. This approach
is used here, rather than a variational approach, because variational approaches cannot be applied to the
nonlinear equations that occur in fluid mechanics problems.

We can write our approximate solution over the entire region in terms of
the (unknown) values of T at nodal points, labeled as Ti, and a set of basis functions, φi(x), as
follows.

$$\hat T(x) = \sum_{i=0}^{N} T_i\,\phi_i(x) \qquad [2-49]$$

The basis functions used in this equation have the general property that φi(xj) = δij. That is, the
ith basis function is one at the point x = xi and is zero at all other nodal points, xj, where j ≠ i. This
property of the basis functions has the result that T̂(xi) = Ti.

$$\phi_i(x) = \begin{cases} 0 & x \le x_{i-1} \\ \dfrac{x - x_{i-1}}{x_i - x_{i-1}} & x_{i-1} \le x \le x_i \\ \dfrac{x_{i+1} - x}{x_{i+1} - x_i} & x_i \le x \le x_{i+1} \\ 0 & x \ge x_{i+1} \end{cases} \qquad [2-50]$$
We have formally defined the basis functions for the entire interval, but they are nonzero only for
the two elements that border the node with the coordinate xi. A diagram of some basis functions
is shown in Figure 2-3. We see that the basis function, φi, is zero until x = xi-1. At this point it
starts a linear increase until it reaches a value of one at x = xi. It then decreases until a value of
zero at the point where x = xi+1. (The difference in the line structure for the various basis functions
in Figure 2-3 is intended to distinguish the various functions that intersect. There is no other
difference in the different basis functions shown in the figure.) We see that the basis functions
from equation [2-50] shown in Figure 2-3 satisfy the condition that φi(xj) = δij.*
Figure 2-3. Linear basis functions, φi-2 through φi+2, for a set of elements in one dimension (plotted over the nodes xi-2 through xi+2).
Over a single element, say the one between xi and xi+1, there are only two basis functions that are
nonzero: φi and φi+1. This property is repeated for each element. Thus the approximate
temperature over the element depends on two basis functions and the two values (still unknown)
of T at the ends of the element.
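A sketch of the element-level interpolation just described (equations [2-50] and [2-51] below); the element end points and nodal temperatures are arbitrary illustrative values.

```python
x_I, x_II = 0.3, 0.4                 # element end points (illustrative)
T_I, T_II = 0.62097, 0.78891         # illustrative nodal temperatures

def phi_I(x):                        # second nonzero piece of the node-I hat function
    return (x_II - x) / (x_II - x_I)

def phi_II(x):                       # first nonzero piece of the node-II hat function
    return (x - x_I) / (x_II - x_I)

def T_hat(x):                        # element interpolation, equation [2-51]
    return T_I*phi_I(x) + T_II*phi_II(x)

for x in (x_I, 0.375, x_II):
    # phi_I + phi_II = 1 everywhere in the element; T_hat matches the nodal values at the ends.
    print(f"x = {x}: phi_I = {phi_I(x):.3f}, phi_II = {phi_II(x):.3f}, T_hat = {T_hat(x):.5f}")
```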
*
In multidimensional problems, the basis function can depend on all the coordinates. However, it still has
the property that the basis function for node i is one at the coordinate for node i and is zero for any other
nodal coordinates. If we write the multidimensional coordinate as the vector, xi, we have the general,
multidimensional result that φi(xj) = δij.
We need to apply the Galerkin analysis to obtain a set of algebraic equations for the unknown
values of T. The result will be very similar to that for finite differences. We will obtain a set of
tridiagonal matrix equations to be solved for the unknown values of Ti. The desired set of
algebraic equations is obtained by substituting equation [2-49] into [2-48] and carrying out the
indicated differentiation and integration. The details of this work are presented below.
In finite elements it is necessary to distinguish between a numbering system for one element and
a global numbering system for all nodes in the region. In our case, the global system is a line
with nodes numbered from 0 to N. Our elements have only two nodes; we will use Roman
numerals, I and II, for the two ends of our element. We will continue to use the Arabic numbers 0
through N for our entire grid. Over a single element, then, we write the approximate temperature
equation [2-49] as follows.
$$\hat T(x) = T_I\,\phi_I(x) + T_{II}\,\phi_{II}(x) = T_I\,\frac{x_{II} - x}{x_{II} - x_I} + T_{II}\,\frac{x - x_I}{x_{II} - x_I} \qquad x_I \le x \le x_{II} \qquad [2-51]$$
[2-51]
The second equation in [2-51] recognizes that there are four parts to the definition of the basis
functions in equation [2-50], only two of which are nonzero. Here we consider only the second
nonzero part of I and the first nonzero part of I+1. We will substitute the element equation in [2-
51] instead of the general equation in [2-49] into equation [2-48] to derive our desired result.
Equation [2-48] is a general equation for the method of weighted residuals. The Galerkin method
is one of the methods in the class of weighted residual methods. In this method, the basis
functions are used as the weighting functions. Since the individual elemental weighting function
is defined to be zero outside the element, we need only apply equation [2-48] to one element
when we are considering the weighting function for the element. For the Galerkin method then,
where the weighting function is the basis function, we can write equation [2-48] for each element
as follows.
$$\int_{x_I}^{x_{II}} \phi_i\left[\frac{d^2\hat T}{dx^2} + a^2\,\hat T\right]dx = 0 \qquad i = I, II \qquad [2-52]$$
At this point, we apply integration by parts to the second derivative term in equation [2-52]. This
does two things for us. It gets rid of the second derivative, which allows us to use linear
polynomials. If we did not do so, taking the second derivative of our linear polynomial would give
us zero. It also identifies the boundary gradient as a separate term in the analysis. (In a
multidimensional problem, we would use Green's theorem; this has a similar effect, in multiple
dimensions, to integration by parts in one dimension.) The general formula for integration by
parts is shown below.
$$\int_a^b u\,dv = uv\Big|_a^b - \int_a^b v\,du \qquad [2-53]$$
We have to manipulate the second derivative term in equation [2-52] to get it into the form shown
in [2-53]. We do this by noting that the second derivative is just the derivative of the first
derivative and we can algebraically cancel dx terms as shown below.
$$\int_{x_I}^{x_{II}} \phi_i\,\frac{d^2\hat T}{dx^2}\,dx = \int_{x_I}^{x_{II}} \phi_i\,\frac{d}{dx}\!\left(\frac{d\hat T}{dx}\right)dx = \int_{x_I}^{x_{II}} \phi_i\,d\!\left(\frac{d\hat T}{dx}\right) \qquad [2-54]$$
The final integral in equation [2-54] has the form of the integration by parts formula in equation [2-53]. We can identify u = φi and v = dT̂/dx, which gives

$$\int_{x_I}^{x_{II}} \phi_i\,d\!\left(\frac{d\hat T}{dx}\right) = \phi_i\,\frac{d\hat T}{dx}\bigg|_{x_I}^{x_{II}} - \int_{x_I}^{x_{II}} \frac{d\hat T}{dx}\,\frac{d\phi_i}{dx}\,dx \qquad [2-55]$$
With this result, we can rewrite equation [2-52] as follows.
$$\phi_i\,\frac{d\hat T}{dx}\bigg|_{x_I}^{x_{II}} - \int_{x_I}^{x_{II}} \frac{d\phi_i}{dx}\,\frac{d\hat T}{dx}\,dx + a^2\!\int_{x_I}^{x_{II}} \phi_i\,\hat T\,dx = 0 \qquad i = I, II \qquad [2-56]$$
We need to evaluate this equation using the linear basis functions in equation [2-50] that we have
selected for this analysis. The basis functions have the following first derivatives for the element
under consideration. Once the derivatives of the basis functions are known we can compute the
derivatives of the approximate temperature. In taking the derivatives of the approximate
temperature polynomial, we consider the nodal values, TI and TII, to be constants.
$$\frac{d\phi_I}{dx} = \frac{-1}{x_{II} - x_I} \quad\text{and}\quad \frac{d\phi_{II}}{dx} = \frac{1}{x_{II} - x_I} \qquad\Longrightarrow\qquad \frac{d\hat T}{dx} = T_I\,\frac{d\phi_I}{dx} + T_{II}\,\frac{d\phi_{II}}{dx} = \frac{T_{II} - T_I}{x_{II} - x_I} \qquad [2-57]$$
We can substitute the temperature polynomial and the basis functions, and their derivatives, into
equation [2-56]. We will show the details for i = I, on a term-by-term basis, starting with the first
term in equation [2-56]. The basis function, φI, is zero at the upper limit of this evaluation, x = xII,
so we only have the lower limit where φI = 1. Rather than substituting the interpolation polynomial
for the approximate temperature gradient, we replace this term by the actual gradient. We can
then handle boundary conditions that use this gradient. If we do not have a gradient boundary
condition, we can use the resulting equations to compute the gradient.
$$\phi_I\,\frac{d\hat T}{dx}\bigg|_{x_I}^{x_{II}} = \frac{x_{II} - x}{x_{II} - x_I}\,\frac{d\hat T}{dx}\bigg|_{x_I}^{x_{II}} = \frac{x_{II} - x_{II}}{x_{II} - x_I}\,\frac{d\hat T}{dx}\bigg|_{x=x_{II}} - \frac{x_{II} - x_I}{x_{II} - x_I}\,\frac{d\hat T}{dx}\bigg|_{x=x_I} = -\frac{dT}{dx}\bigg|_{x=x_I} \qquad [2-58]$$
The middle integral with two derivative terms becomes the simple integral of a constant
$$\int_{x_I}^{x_{II}} \frac{d\phi_I}{dx}\,\frac{d\hat T}{dx}\,dx = \int_{x_I}^{x_{II}} \frac{-1}{x_{II} - x_I}\,\frac{T_{II} - T_I}{x_{II} - x_I}\,dx = -\frac{(T_{II} - T_I)\,x}{(x_{II} - x_I)^2}\bigg|_{x_I}^{x_{II}} = -\frac{T_{II} - T_I}{x_{II} - x_I} \qquad [2-59]$$
The final term in equation [2-56] requires the most work for integration. The last step in this
integration is left as an exercise for the interested reader.
$$a^2\!\int_{x_I}^{x_{II}} \phi_I\,\hat T\,dx = a^2\!\int_{x_I}^{x_{II}} \frac{x_{II} - x}{x_{II} - x_I}\left[T_I\,\frac{x_{II} - x}{x_{II} - x_I} + T_{II}\,\frac{x - x_I}{x_{II} - x_I}\right]dx = \frac{a^2}{(x_{II} - x_I)^2}\int_{x_I}^{x_{II}}\left[T_I\,(x_{II} - x)^2 + T_{II}\,(x_{II} - x)(x - x_I)\right]dx = \frac{a^2}{6}\,(2T_I + T_{II})(x_{II} - x_I) \qquad [2-60]$$
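Because the integrand in equation [2-60] is a quadratic in x, Simpson's rule over the element evaluates it exactly, so the closed-form result can be checked with a few lines of code; the element end points, nodal values, and a are arbitrary illustrative numbers.

```python
a = 2.0
x_I, x_II, T_I, T_II = 0.3, 0.4, 0.62097, 0.78891   # illustrative element data
h = x_II - x_I

def integrand(x):
    """a^2 * phi_I(x) * T_hat(x): the quantity integrated in equation [2-60]."""
    phi_I  = (x_II - x) / h
    phi_II = (x - x_I) / h
    return a**2 * phi_I * (T_I*phi_I + T_II*phi_II)

# Simpson's rule is exact for this quadratic integrand.
x_mid = 0.5*(x_I + x_II)
simpson = (h/6.0)*(integrand(x_I) + 4.0*integrand(x_mid) + integrand(x_II))

closed_form = (a**2/6.0)*(2.0*T_I + T_II)*h          # right-hand side of equation [2-60]
print(simpson, closed_form)                          # the two values agree
```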
We can substitute the results of equations [2-58] to [2-60] back into equation [2-56] to get the
result for using φI as the weighting function.
$$\phi_I\,\frac{d\hat T}{dx}\bigg|_{x_I}^{x_{II}} - \int_{x_I}^{x_{II}} \frac{d\phi_I}{dx}\,\frac{d\hat T}{dx}\,dx + a^2\!\int_{x_I}^{x_{II}} \phi_I\,\hat T\,dx = -\frac{dT}{dx}\bigg|_{x=x_I} + \frac{T_{II} - T_I}{x_{II} - x_I} + \frac{a^2}{6}\,(2T_I + T_{II})(x_{II} - x_I) = 0 \qquad [2-61]$$
If we rearrange the last equation in [2-61] we get the following relationship between TI and TII
(and the gradient at x = xI) for our element, based on using φI as the weighting function.
$$\left[\frac{a^2}{3}(x_{II} - x_I) - \frac{1}{x_{II} - x_I}\right]T_I + \left[\frac{a^2}{6}(x_{II} - x_I) + \frac{1}{x_{II} - x_I}\right]T_{II} = \frac{dT}{dx}\bigg|_{x=x_I} \qquad [2-62]$$
If we repeat the analysis that led to equation [2-61], using φII as the weighting function, we get the
following result, in place of equation [2-61].
$$\phi_{II}\,\frac{d\hat T}{dx}\bigg|_{x_I}^{x_{II}} - \int_{x_I}^{x_{II}} \frac{d\phi_{II}}{dx}\,\frac{d\hat T}{dx}\,dx + a^2\!\int_{x_I}^{x_{II}} \phi_{II}\,\hat T\,dx = \frac{dT}{dx}\bigg|_{x=x_{II}} - \frac{T_{II} - T_I}{x_{II} - x_I} + \frac{a^2}{6}\,(T_I + 2T_{II})(x_{II} - x_I) = 0 \qquad [2-63]$$
Rearranging this equation gives us the second relationship between TI and TII for our elements;
this one contains the gradient at x = xII.
$$\left[\frac{a^2}{6}(x_{II} - x_I) + \frac{1}{x_{II} - x_I}\right]T_I + \left[\frac{a^2}{3}(x_{II} - x_I) - \frac{1}{x_{II} - x_I}\right]T_{II} = -\frac{dT}{dx}\bigg|_{x=x_{II}} \qquad [2-64]$$
There are only two different coefficients in equations [2-62] and [2-64]. We can simplify the
writing of these equations by assigning a separate symbol for the two different coefficients.
$$\alpha = \frac{a^2}{3}(x_{II} - x_I) - \frac{1}{x_{II} - x_I} \qquad\qquad \beta = \frac{a^2}{6}(x_{II} - x_I) + \frac{1}{x_{II} - x_I} \qquad [2-65]$$
With these definitions, we can write our element equations, [2-62] and [2-64] as follows.
$$\alpha\,T_I + \beta\,T_{II} = \frac{dT}{dx}\bigg|_{x=x_I} \qquad\qquad \beta\,T_I + \alpha\,T_{II} = -\frac{dT}{dx}\bigg|_{x=x_{II}} \qquad [2-66]$$
We have this pair of equations for each of our elements. We need to consider how the element
equations all fit together to construct a system of equations for the region. To do this we return to
the global numbering system for the region that was used in the finite-difference analysis. In the
global system, the first element lies on the left-hand boundary, between x0 and x1. The next
element lies between x1 and x2. We start by writing the element equations in [2-66] for these two
elements using the global numbering scheme. For the first element, the element index, I,
corresponds to the global index 0 and the element index II corresponds to the global index 1.
With this notation, the element equations in [2-66] become.
$$\alpha\,T_0 + \beta\,T_1 = \frac{dT}{dx}\bigg|_{x=x_0} \qquad\qquad \beta\,T_0 + \alpha\,T_1 = -\frac{dT}{dx}\bigg|_{x=x_1} \qquad [2-67]$$
Both equations in [2-67] have the boundary temperature, T0, and the first equation has the
temperature gradient on the left-hand boundary. In this example, we know that the boundary
temperature is given as T0 = TA from the boundary condition from the original problem in equation
[2-33]. However the boundary gradient (at x = x0) is unknown in our example. In problems with
other kinds of boundary conditions we may know the boundary gradient, but not T0 or we may
have a second relationship between T0 and the boundary gradient.
The second equation in [2-67] has an internal gradient as one of the terms. We can eliminate this
gradient term by examining the element equations for the second element, written in the global
numbering system. From equation [2-66], we have the following equations for the second
element.
$$\alpha\,T_1 + \beta\,T_2 = \frac{dT}{dx}\bigg|_{x=x_1} \qquad\qquad \beta\,T_1 + \alpha\,T_2 = -\frac{dT}{dx}\bigg|_{x=x_2} \qquad [2-68]$$
We eliminate the temperature gradient at x = x1 by adding the second equation from the equation
pair in [2-67] to the first equation in [2-68]. This gives the following equation.
$$\beta\,T_0 + 2\alpha\,T_1 + \beta\,T_2 = 0 \qquad [2-69]$$
This process of canceling the internal gradient that appears in two element equations for adjacent
elements can be continued indefinitely. Let us look at one more example. The second equation
in equation pair [2-68] has the gradient at x = x2. This equation from the second element can be
added to the first equation for the third element, which is shown below.
$$\alpha\,T_2 + \beta\,T_3 = \frac{dT}{dx}\bigg|_{x=x_2} \qquad\qquad \beta\,T_2 + \alpha\,T_3 = -\frac{dT}{dx}\bigg|_{x=x_3} \qquad [2-70]$$
The result is similar to equation [2-69].
$$\beta\,T_1 + 2\alpha\,T_2 + \beta\,T_3 = 0 \qquad [2-71]$$
This process can continue until we reach the final element between xN-1 and xN. Again, we will
have the usual pair of element equations from [2-66] that we can write in the global numbering
system for the final element.
$$\alpha\,T_{N-1} + \beta\,T_N = \frac{dT}{dx}\bigg|_{x=x_{N-1}} \qquad\qquad \beta\,T_{N-1} + \alpha\,T_N = -\frac{dT}{dx}\bigg|_{x=x_N} \qquad [2-72]$$
The first equation in this pair will be used to eliminate an internal gradient term from an element
equation for the previous element. That will produce the following equation.
$$\beta\,T_{N-2} + 2\alpha\,T_{N-1} + \beta\,T_N = 0 \qquad [2-73]$$
The second equation, in equation pair [2-72] contains the unknown gradient at the right hand
boundary and one unknown temperature, TN-1.
We can write all the equations for our global set of elements as shown below. Here we use the
shorthand symbols G0 and GN for the unknown temperature gradients at x = x0 and x = xN.
(1)      G0 - βT1 = αT0
(2)      2αT1 + βT2 = -βT0
(3)      βT1 + 2αT2 + βT3 = 0
(4)      βT2 + 2αT3 + βT4 = 0
         [(N - 7) additional similar equations, with T4 through TN-4 multiplied by 2α, go here.]
(N-2)    βTN-4 + 2αTN-3 + βTN-2 = 0
(N-1)    βTN-3 + 2αTN-2 + βTN-1 = 0
(N)      βTN-2 + 2αTN-1 = -βTN
(N+1)    βTN-1 + GN = -αTN
This shows that we have a system of N+1 simultaneous linear algebraic equations. We have
arranged them to be solved for N-1 unknown temperatures and two unknown boundary
temperature gradients. (Recall that T0 and TN are the known boundary temperatures, TA and TB,
respectively.) If we had a problem where one or both gradients were specified as the
boundary conditions, we could simply rearrange the first or last equation to solve for the unknown
boundary temperature in terms of the known gradient.*
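A sketch of how this system can be set up and solved for the example problem (a = 2, TA = 0, TB = 1, L = 1, N = 10 elements). Following the approach noted in the footnote to this section, it solves only the interior equations (2) through (N) for the unknown temperatures, using numpy for the linear algebra, and then recovers the boundary gradients G0 and GN from equations (1) and (N+1); the results can be compared with Tables 2-2 through 2-4.

```python
import numpy as np

a, TA, TB, L, N = 2.0, 0.0, 1.0, 1.0, 10
h = L / N
alpha = a**2*h/3.0 - 1.0/h          # equation [2-65]
beta  = a**2*h/6.0 + 1.0/h

# Interior equations (2)..(N): beta*T[i-1] + 2*alpha*T[i] + beta*T[i+1] = 0 for i = 1..N-1,
# with the known boundary temperatures T0 = TA and TN = TB moved to the right-hand side.
A = np.zeros((N - 1, N - 1))
rhs = np.zeros(N - 1)
for row in range(N - 1):
    A[row, row] = 2.0*alpha
    if row > 0:
        A[row, row - 1] = beta
    if row < N - 2:
        A[row, row + 1] = beta
rhs[0]  = -beta*TA
rhs[-1] = -beta*TB

T = np.empty(N + 1)
T[0], T[-1] = TA, TB
T[1:-1] = np.linalg.solve(A, rhs)

# Boundary gradients from equations (1) and (N+1).
G0 = alpha*T[0] + beta*T[1]
GN = -alpha*T[-1] - beta*T[-2]

print(T)          # nodal temperatures; compare with the values in Table 2-2
print(G0, GN)     # boundary gradients; the exact values are 2.1995 and -0.9153 (Table 2-3)
```

A general dense solver is used here only for brevity; in practice the tridiagonal structure of these equations would be exploited, as discussed in the appendix.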
The errors in the finite-element method (FEM) solution are compared to the errors in the finite-difference method (FDM) solution for four cases in Table 2-4. In this table the finite-element results
are shown for N = 10 and N = 100 elements. For the finite-difference results, the value of N
refers to the number of gaps between grid nodes. (For example, with N = 10, we have 11 nodes,
but only 10 gaps between those 11 nodes.) The results in Table 2-4 also look at the differences
caused by the parameter, a, that appears in the original differential equation [2-33]. In that
equation, there was a heat generation term that was proportional to the temperature, with a
proportionality constant, b. The parameter a equals the square root of the ratio b/k, where k is
the thermal conductivity. Thus, larger values of the heating parameter, a, imply a stronger heat
source (for a given thermal conductivity). Larger values of a also imply a more complex
problem. If a = 0, equation [2-33] has a simple linear solution, T = TA + (TB - TA)x/L, with a
constant temperature gradient, (TB - TA)/L. We expect both kinds of numerical methods to
perform well for a problem with such a simple solution.
*
We could just solve the N-1 equations numbered from (2) to (N), inclusive, for the N-1 unknown
temperatures and then compute the gradients. Again, the tridiagonal matrix algorithm, discussed in the
appendix, can be used to solve this set of equations. If we had a specified gradient boundary condition,
then the equation for that boundary temperature would have to be part of the simultaneous solution.
Table 2-4
Comparison of Errors in Finite-Difference (FDM) and Finite-Element (FEM) Solutions to Equation [2-33]

| Error measure | a=2, N=100, FDM | a=2, N=100, FEM | a=2, N=10, FDM | a=2, N=10, FEM | a=0.2, N=100, FDM | a=0.2, N=100, FEM | a=0.2, N=10, FDM | a=0.2, N=10, FEM |
|---|---|---|---|---|---|---|---|---|
| RMS error in T | 1.73E-05 | 1.73E-05 | 1.83E-03 | 1.80E-03 | 6.21E-10 | 6.21E-10 | 6.52E-08 | 6.52E-08 |
| Maximum error in T | 2.41E-05 | 2.41E-05 | 2.42E-03 | 2.38E-03 | 8.62E-10 | 8.62E-10 | 8.60E-08 | 8.60E-08 |
| Error in dT/dx at x = 0 | 3.63E-04 | 7.02E-05 | 3.62E-02 | 6.99E-03 | 1.34E-06 | 2.24E-09 | 1.34E-04 | 2.25E-07 |
| Error in dT/dx at x = L | 2.14E-04 | 9.59E-05 | 1.79E-02 | 9.53E-03 | 1.31E-06 | 4.47E-09 | 1.31E-04 | 4.46E-07 |
The first two rows of error data in Table 2-4 examine the errors in the computed temperatures.
Both the maximum error and the RMS error discussed previously are shown. Based on these
values, there is almost no difference in the methods. Indeed, the temperature profiles computed
by both methods, such as the results in Table 2-2, are nearly the same for both finite differences
and finite elements.
This similarity in the temperature results comes from a similarity in the equations used to relate
points in the region away from the boundaries in the two methods. Equation [2-34] for the finite-difference method can be written as Ti-1 + (a²h² - 2)Ti + Ti+1 = 0. The equation for the typical point
in the finite element method, of which equations [2-69], [2-71], and [2-73] are examples, can be
written as Ti-1 + (2α/β)Ti + Ti+1 = 0. The only difference between the finite-difference and finite-element equations, written in this fashion, is the difference between the coefficients of the Ti term.
To compare these terms on a common basis, we can set xII - xI = h in equation [2-65] and
compute the ratio 2α/β as follows.
$$\frac{2\alpha}{\beta} = \frac{2\left[\dfrac{a^2 h}{3} - \dfrac{1}{h}\right]}{\dfrac{a^2 h}{6} + \dfrac{1}{h}} = \frac{\dfrac{2a^2 h^2}{3} - 2}{1 + \dfrac{a^2 h^2}{6}} \qquad [2-74]$$
Using long division, we can write the final fraction as an infinite series,
$$\frac{\dfrac{2a^2 h^2}{3} - 2}{1 + \dfrac{a^2 h^2}{6}} = a^2 h^2 - 2 - \frac{a^4 h^4}{6} + \frac{a^6 h^6}{36} - \frac{a^8 h^8}{216} + \cdots \qquad [2-75]$$
We see that the first two terms in the infinite series for the Ti coefficient in the finite-element
method are the same as the Ti coefficient for the finite-difference method. For fine grid sizes and
small values of the heating parameter, a, the Ti coefficients for the two methods will be nearly the
same. Even for the largest ha product considered in Table 2-4 (ha = 0.2), the magnitude of the Ti
coefficient is 1.96 for finite differences and 1.9604 for finite elements. The similarity in the equations for the center
of the region, in this one-dimensional case, makes the temperature solutions very nearly the
same for the finite-difference and finite-element methods.
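The closeness of the two coefficients is easy to check numerically. The short C++ sketch below is an added illustration (not part of the original notes); it evaluates the finite-difference coefficient, a²h² - 2, the finite-element ratio reconstructed in equation [2-74], and a truncation of the series in equation [2-75] for the values of the product ha that appear in Table 2-4.

#include <cstdio>

int main()
{
    // Compare the coefficient of Ti in the finite-difference equation,
    // a^2 h^2 - 2, with the finite-element ratio 2*beta/alpha from
    // equation [2-74] and the three-term truncation of the series [2-75].
    double ha[] = { 0.002, 0.02, 0.2 };   // values of the product h*a in Table 2-4
    for ( int k = 0; k < 3; k++ )
    {
        double x = ha[k] * ha[k];         // x = a^2 h^2
        double fdm = x - 2.0;
        double fem = ( 2.0 * x / 3.0 - 2.0 ) / ( 1.0 + x / 6.0 );
        double series = x - 2.0 - x * x / 6.0;
        printf( "ha = %6.3f   FDM = %10.6f   FEM = %10.6f   series = %10.6f\n",
                ha[k], fdm, fem, series );
    }
    return 0;
}

For ha = 0.2 the two coefficients differ by only a few hundredths of a percent, which is why the temperature solutions in Table 2-4 are nearly identical.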
The errors in the boundary gradients are smaller with the finite-element method. This is
particularly true for the smaller value of the heating parameter. This can be an important factor if we are
interested in computing wall heat fluxes or viscous stresses at the wall.
Conclusions
Finite differences and finite elements provide two different approaches to the numerical analysis
of differential equations. In a broad sense, each of these approaches is a method for converting
a differential equation, which applies at every point in a region, into a set of algebraic equations
that apply at a set of discrete points in the domain.
In a finite-difference method, the coefficients in the algebraic equations are based on finite-
difference expressions that apply at individual points on a grid. The coefficients in the
finite-element equations are based on integrals over individual elements in the grid. For elements
that are more complex, the integrals may be evaluated numerically.
In the one-dimensional problem considered here, the algebraic equations were particularly simple
to solve. In multidimensional problems, we will have a more complex set of algebraic equations
to solve.
Finite difference expressions can be derived from Taylor series. This approach leads to an
expression for the truncation error that provides us with knowledge of how this error depends on
the step size. This is called the order of the error.
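As an added illustration (not from the original notes), the short C++ program below shows this behavior for the familiar central-difference approximation to a first derivative, [f(x+h) - f(x-h)]/(2h); because the expression is second order, halving the step size reduces the error by roughly a factor of four.

#include <cstdio>
#include <cmath>

int main()
{
    // Central-difference estimate of the derivative of sin(x) at x = 1.
    // The exact derivative is cos(1). The error should drop by about a
    // factor of four each time the step size h is halved, which is the
    // signature of a second-order truncation error.
    const double x = 1.0;
    const double exact = cos( x );
    double h = 0.1;
    for ( int k = 0; k < 4; k++ )
    {
        double approx = ( sin( x + h ) - sin( x - h ) ) / ( 2.0 * h );
        printf( "h = %8.5f   error = %12.4e\n", h, fabs( approx - exact ) );
        h /= 2.0;
    }
    return 0;
}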
In finite-element approaches, we can use different order polynomials to obtain higher-order
representations in our approximate solutions. In both finite-difference and finite-element
approaches there is a tradeoff between higher order and required work. In principle, there should
be different combinations of order and grid spacing that give the same accuracy. A higher-order
finite-difference expression or a higher-order finite-element polynomial should require fewer grid
nodes to get the same accuracy. The basic question is how much extra work is required to use
the higher order expressions compared to repeating the work for the lower order expressions
more frequently on a finer grid.
In finite-difference approaches, we need to be concerned about both truncation errors and
roundoff errors. Roundoff errors were more of a concern in earlier computer applications where
limitations on available computer time and memory restricted the size of real words, for practical
applications, to 32 bits. This corresponds to the single precision type in Fortran or the float type
in C/C++. With modern computers, it is possible to do routine calculations using 64-bit real
words. This corresponds to the double precision type in Fortran* or the double type in C/C++.
The 32-bit real word allows about 7 significant figures; the 64-bit real word allows about 15
significant figures.
* Also known as real(8) or real(KIND=8) in Fortran 90 and later versions; single precision is typed as real,
real(4) or real(KIND=4) in these versions of Fortran.
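As an added, minimal check of these figures (not part of the original notes), the C++ standard header <limits> reports how many decimal digits each floating-point type can hold reliably.

#include <iostream>
#include <limits>

int main()
{
    // digits10 is the number of decimal digits that can be stored in the
    // type and recovered unchanged: 6 for a 32-bit float (about 7 figures
    // in practice) and 15 for a 64-bit double.
    std::cout << "float  digits10 = " << std::numeric_limits<float>::digits10  << "\n";
    std::cout << "double digits10 = " << std::numeric_limits<double>::digits10 << "\n";
    return 0;
}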
Appendix Solving Tridiagonal Matrix Equations
A general system of tridiagonal matrix equations may be written in the following format.
Ai xi-1 + Bi xi + Ci xi+1 = Di [2A-1]
This provides not only a representation of the general tridiagonal equation, but also suggests a data
structure for storing the matrix on a computer: each diagonal of the matrix is stored as a one-
dimensional array. For the most general solution of 100 simultaneous equations, we would have
100² = 10,000 coefficients and 100 right-hand-side terms. In the tridiagonal matrix formulation
with 100 unknowns, we would have only 400 nonzero terms considering both the coefficients and
the right-hand side terms.
In order to maintain the tridiagonal structure, the first and last equation in the set will have only
two terms. These equations may be written as shown below. These equations show that neither
A0 nor CN is defined.
B0 x0 + C0 x1 = D0 [2A-2]
AN xN-1 + BN xN = DN [2A-3]
The set of equations represented by equation [2-34] (and shown on page 12) is particularly
simple. In that set of equations, all Ai = Ci = 1, all Bi = (a²h² - 2), and all Di = 0. In the general
form that we are solving here, the coefficients in one equation may all be different, and a given
coefficient, say A, may have different values in different equations. To start the solution process,
we solve equation [2A-2] for x0 in terms of x1 as follows.
x0 = [C0 / B0] x1 + [D0 / B0] [2A-4]
The solution to the tridiagonal matrix set of equations, known as the Thomas algorithm,1 seeks to
find an equation like [2A-4] for each of the other unknowns in the set. The general equation that
we are seeking gives the value of xi in terms of xi+1 in the form shown below.
xi = Ei xi+1 + Fi [2A-5]
By comparing equations [2A-4] and [2A-5], we see that we already know E0 = -C0 / B0 and F0 =
D0 / B0. To get equations for subsequent values of Ei and Fi, we write equation [2A-5] for the
previous point, xi-1 = Ei-1 xi + Fi-1, and substitute this into the general equation, [2A-1].
Ai [Ei-1 xi + Fi-1] + Bi xi + Ci xi+1 = Di [2A-6]
We can rearrange this equation to solve for xi.
x_i = -\frac{C_i}{B_i + A_i E_{i-1}}\, x_{i+1} + \frac{D_i - A_i F_{i-1}}{B_i + A_i E_{i-1}}        [2A-7]
By comparing equations [2A-5] and [2A-7], we see that the general expressions for Ei and Fi are
given in terms of the already known equation coefficients, Ai, Bi, Ci, and Di, and the previously
computed values of E and F.
1 This algorithm is sometimes called the tri-diagonal-matrix algorithm (TDMA).
E_i = \frac{-C_i}{B_i + A_i E_{i-1}} \quad\text{and}\quad F_i = \frac{D_i - A_i F_{i-1}}{B_i + A_i E_{i-1}}        [2A-8]
We still have to get an equation for the final point, xN. We will not calculate xN until we have
completed the process of computing the values of E and F up through equation N-1. At that
point, we will know the coefficients in the following equation:
xN-1 = EN-1 xN + FN-1 [2A-9]
We will also know the coefficients AN, BN, and DN in the original matrix equation, given by [2A-3].
We can solve equations [2A-3] and [2A-9] simultaneously for xN.
x_N = \frac{D_N - A_N F_{N-1}}{B_N + A_N E_{N-1}}        [2A-10]
We see that the right-hand side of this equation is the same as the right-hand side of the equation
for FN in equation [2A-8].
The Thomas algorithm is a simple one to implement in a computer program. The code below
provides a C++ function to implement the calculations shown in this appendix. This function uses
separate arrays for Ei, Fi, and xi. However, it is possible to save computer storage by overwriting
the input arrays with the results for Ei, Fi, and xi. This is possible because the input data are not
required for the Thomas algorithm after their initial use in the computation of Ei and Fi.
void tdma( double *a, double *b, double *c, double *d,
double *x, int N )
{
// Generic subroutine to solve a set of simultaneous linear equations that
// form a tridiagonal matrix. The general form of the equations to be solved is
// a[i] * x[i-1] + b[i] * x[i] + c[i] * x[i+1] = d[i]
// The index, i, runs from 0 to N. The values of a[0] and c[N] are not defined.
// The user must define the one-dimensional arrays a, b, c, and d.
// The user passes these arrays and a value for N to this function.
// The function returns the resulting values of x to the user.
// All arrays are declared as pointers in the calling program to allow
// allocation of the arrays at run time.
double *e = new double[N+1]; // Allocate storage for working arrays
double *f = new double[N+1];
e[0] = -c[0]/b[0]; // Get values of e and f for initial node
f[0] = d[0]/b[0];
for ( int i = 1; i < N; i++) // Get values of e and f for nodes 1 to N-1
{
e[i] = -c[i] / ( b[i] + a[i] * e[i-1] );
f[i] = ( d[i] - a[i] * f[i-1] ) / ( b[i] + a[i] * e[i-1] );
}
// All e and f values have now been found. Start with the calculation of x[N].
// Then get remaining values by back substitution in a for loop.
x[N] = (d[N] - a[N] * f[N-1] ) / ( b[N] + a[N] * e[N-1] );
for ( int i = N-1; i >= 0; i-- )
{
x[i] = e[i] * x[i+1] + f[i];
}
delete[] e; // Free memory used for allocated arrays
delete[] f;
}
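As a usage sketch (an added example, not part of the original notes), the fragment below sets up and solves the simple system from equation [2-34] with fixed boundary temperatures; the variable names and the numerical values chosen for L, TA, TB, and the heating parameter are for illustration only.

#include <cstdio>

void tdma( double *a, double *b, double *c, double *d, double *x, int N );

int main()
{
    const int N = 10;                              // nodes run from 0 to N
    const double L = 1.0, TA = 300.0, TB = 400.0;  // illustrative length and boundary temperatures
    const double heatParam = 2.0;                  // heating parameter a in equation [2-33]
    const double h = L / N;                        // uniform grid spacing

    double a[N+1], b[N+1], c[N+1], d[N+1], T[N+1];

    b[0] = 1.0;  c[0] = 0.0;  d[0] = TA;           // boundary equation: T[0] = TA
    for ( int i = 1; i < N; i++ )                  // interior equations from [2-34]
    {
        a[i] = 1.0;
        b[i] = heatParam * heatParam * h * h - 2.0;
        c[i] = 1.0;
        d[i] = 0.0;
    }
    a[N] = 0.0;  b[N] = 1.0;  d[N] = TB;           // boundary equation: T[N] = TB

    tdma( a, b, c, d, T, N );                      // solve the tridiagonal system

    for ( int i = 0; i <= N; i++ )
        printf( "x = %5.2f   T = %10.4f\n", i * h, T[i] );
    return 0;
}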