
CHAPTER 13

VERIFICATION AND VALIDATION OF COMPUTATIONAL HEAT TRANSFER

DOMINIQUE PELLETIER
Canada Research Chair
Département de Génie Mécanique, École Polytechnique de Montréal
Montréal, PQ, Canada

PATRICK J. ROACHE
Consultant
1215 Apache Drive
Socorro, New Mexico

13.1 INTRODUCTION 418


13.2 TERMINOLOGY OF VERIFICATION AND VALIDATION 418
13.3 SEQUENCE OF VERIFICATION AND VALIDATION 419
13.4 VERIFICATION OF CODES 420
13.5 VERIFICATION OF CALCULATIONS 422
13.5.1 Grid-Convergence Studies and the Grid-Convergence Index 423
13.5.2 Calculating Observed Rate of Convergence 424
13.5.3 Oscillatory Convergence 425
13.5.4 Noisy and Degraded Convergence Rates 425
13.5.5 Recent Confirmations of Grid-Convergence Index 426
13.5.6 The Fs : U95 Empirical Correlation 428
13.5.7 Determining Conservatism of Any Uncertainty Estimator:
Statistical Significance 429
13.5.8 Grid-Convergence Index for Unstructured Grids and Noisy
Convergence Rate 429
13.5.9 Grid-Convergence Index Is Not an Error Estimator 430
13.5.10 Theoretical Expectations for Grid-Convergence Index and
Richardson Extrapolation 431
13.6 SINGLE-GRID A POSTERIORI ERROR ESTIMATORS 432
13.6.1 Conservation Checks 432
13.6.2 Error Transport Equation 432
13.6.3 High- and Low-Order Methods on One Grid 432

13.7 AUXILIARY ALGEBRAIC EVALUATIONS: ZHU-ZIENKIEWICZ ESTIMATORS 433


13.7.1 Background on Estimators 433
13.7.2 Implementation of Zhu-Zienkiewicz Estimators 433
13.8 COMPARISON OF GRID-CONVERGENCE INDEX AND SINGLE-GRID
A POSTERIORI ESTIMATORS 434
13.9 VERIFICATION WITHIN SOLUTION ADAPTATION 435
13.10 FUTURE WORK: THE Fs : U95 CORRELATION FOR ANY ERROR ESTIMATOR 437
13.11 VALIDATION 438

13.1 INTRODUCTION

In all subfields of computational science and engineering, including heat transfer and fluid dynamics, the last decade has seen an increasing awareness of the importance of quality issues, including Verification and Validation. Many journals and professional societies have implemented policy statements designed to enforce standards, and much agreement has been achieved on the most important terminology and basic methods [1]. This chapter summarizes some of the most widely recognized of these.

13.2 TERMINOLOGY OF VERIFICATION AND VALIDATION

In the semantics of the subject, the three most important terms, and the most universally agreed upon, are Verification of Codes, Verification of Calculations, and Validation. We capitalize these terms (1) to emphasize their foundational importance, (2) to indicate that they are technical terms herein, not just vague words from common usage in which verification, validation, and confirmation are synonymous in a thesaurus, and (3) to emphasize the radical distinction (often neglected) between Verification of Codes vs. Verification of Calculations. Note that "Verification and Validation," or V&V, covers three subjects, not two. A better term for Code Verification might have been "ascertaining code correctness." The double use of "Verification" for two subjects is unfortunate but well entrenched.
Verification of a Code must involve error evaluation, from a known solution, whereas Verification of a Calculation involves error estimation (since we do not know the exact answer). Both Verifications are purely mathematical activities, with no concern whatever for the accuracy of physical laws; that is the concern of Validation.

The term "uncertainty" has been used with regrettable ambiguity. The most common and easily cured ambiguity involves numerical uncertainty vs. physical parameter uncertainty. Also, "uncertainty" has often been used when an "error estimate" actually was intended. And, while an error estimate is unquestionably appropriate, numerical "uncertainty" (a probabilistic statement comparable to that used for experimental results) is a somewhat debatable concept since any one calculation is deterministic rather than probabilistic. A probabilistic sense can be developed by considering an ensemble or population of similar problems [2].
In a general qualitative sense, the difference between any two numerical solutions gives some indication of the numerical error. For example, a simple uniform grid can provide one solution, and an adapted grid can provide another. If we compare the two solutions and find very little difference, our confidence is increased. However, this error indicator is only qualitative; we cannot very well assure ourselves that the error is bounded by the difference in the solutions. (In fact, comparison of this vague indicator with grid-convergence results has shown that it is not acceptable, being very optimistic.) Next, the difference between any two error indicators (or error estimates) gives some indication of the uncertainty of the error indicator, again in a qualitative sense. In the methods given here, these evaluations will be made quantitative.
Numerical error and/or uncertainty have nothing to do with physical parameter uncertainty, e.g., what effect our inaccurate knowledge of thermal conductivity or geometry has on the results of the simulation. Parameter uncertainty is part of the conceptual modeling or science of the problem; it results in an error only for a single calculation that assumes specific values. When better procedures are followed, the sensitivity of results to these parameters is established by computational experiments, and ideally a parameter distribution is modeled and sampled [3]. Then the parameter uncertainty is properly not an error of the simulation, but rather is an answer obtained from the simulation. In fact, the prediction of parameter sensitivity and uncertainty effects is one of the most powerful capabilities of simulations. See [4] and Chapter 14 of this handbook.
Sensitivity involves the dependence of the solution on variations in parameter values, and is determined solely by mathematics and the computational solutions. Sensitivity calculations can be performed by computational solutions including nonlinear effects [3], not restricted to small perturbations or independent parameter variations. Alternately, one may restrict the assessment to the linear response regime and use a variety of analytical/computational techniques essentially based on a limit/differential approach. It is not necessary to assume that the basic problem is linear, only that the sensitivity is linear about a base solution, in the sense of small perturbations. Linear methods are, of course, much cheaper as well as easier to interpret, providing a conceptually clear separation of modeling of parameter distributions (which involves physical modeling) vs. sensitivity (which involves only mathematics). For a nonlinear approach, input parameter distributions and output sensitivity are necessarily intertwined because the output depends nonlinearly on the amplitude of parameters. Combined linear/nonlinear analysis is possible, with some input parameters treated linearly and some nonlinearly [3]. For nonlinear multiparameter problems, Monte Carlo techniques for sampling the input parameter distributions can greatly improve the efficiency of describing the range of expected outputs. Stratified Monte Carlo (Latin Hypercube) can improve sampling of the distribution extremes if these are important to the project analysis [3].
The terms "grid independence" and/or "numerically exact" arc sometimes used as convenient
shorthand expressions for the situation in which grid convergence has been achieved. This
abusive tcnninology (a nonlinear iterative calculation could never be "exact") is excusable only
with some understood, though perhaps vague, idea of a tolerance.
Other related terms such as confirmation. benchmarking, certi fication. quality assurance (QA).
etc. arc discussed at some length in [ll Other recommended recent broad publications on
the subjects include the AIAA Guide [51 and the comprehensive reviews by Oberkampf and
Trucano 16. 71.

13.3 SEQUENCE OF VERIFICATION AND VALIDATION

It is highly recommended, and indeed should be patently obvious, that in any project one proceed in this order: Verification of Code, Verification of Calculations, and Validation. That is, first we Verify that the code is correct. (This is the sense of "verification" that is used in the broader world, e.g., IEEE standards [5].) Only then does it make sense to pursue Verification of a Calculation, i.e., determining via numerical error estimation and uncertainty calculation that our discretization is adequately accurate for our application. It should also be patently obvious
that it makes no sense to speak of a Calculation Verification unless we are first working with a Verified, correct code. It does no good to achieve grid independence with an erroneous code!

Although certain coding errors may be detected during a Verification of Calculation, generally these studies cannot be relied upon for Verification of Codes. This is easily demonstrated by taking a correct (say) second-order accurate code, and deliberately introducing a coding error of a factor of 2 or 10 in one of the terms. The code solution probably will still converge and even display a second-order convergence rate; the code solution just converges to a wrong answer [1].

Only after Verification of the Code and Verification of the Calculations should we compare the results of our calculations, with their associated numerical error bands (and, preferably, parameter uncertainty bands also), with experimental values, including their associated experimental error bands. Otherwise, we could be led to a false Validation from compensating errors.

We consider each of these three activities in turn, with emphasis on the second, Verification of Calculations.

13.4 VERIFICATION OF CODES

As noted, Verification of a Code involves error evaluation from a known solution. (By contrast, Verification of a Calculation will involve error estimation, since we do not know the exact solution.) Both Verifications are purely mathematical activities, with no concern whatever for the physical accuracy of the continuum mathematical model. This is a very important point for appreciating the recommended approach to Verification of Codes.

This powerful methodology for Code Verification has been developed and demonstrated in a variety of codes. It is only applicable to codes based on solving partial differential equations, usually nonlinear systems of PDEs. (The method could be extended to integro-differential equations.) For some models, the method can be set up with no special code requirements, but we consider here the most general and easy to apply method, which does require two code features that might not be already built in.

What is needed for Code Verification is an exact, analytical solution to a nontrivial problem. This seems difficult for nonlinear systems of PDEs, but in fact is easy, if we do not proceed forward, but backward.
The Method of Manufactured Solutions (MMS) starts at the end, with a solution. For example, one could choose hyperbolic tangents; these are convenient to evaluate and contain derivatives of all orders. We do not want a linear solution, since it would not exercise all the terms in our PDEs.

It is strongly recommended that one not use infinite series solutions; they are ill-behaved and typically require more careful numerical work to evaluate accurately than the code problem we started with. There need be no concern about boundary conditions or domain, just the solution. Especially, there is no concern about "realism." Physical realism is not important to the "engineering proof" of code correctness, since only mathematics is involved. All we want to do is turn on all the terms in the mathematical model. In fact, unrealistic solutions typically are better for Code Verification, since realistic solutions will often contain small parameters and boundary layers in which some terms are negligible, so that errors in these terms might go undetected [1, 8, 9].
Note: Neither the mathematical model nor the physical domain has been specified yet, only the solution. One can use tanh or a favorite approximate analytical solution for incompressible flow over airfoils, groundwater transport in porous media, plastic buckling of a column, reacting chemistry, magnetohydrodynamics, etc. We write the problem symbolically as a nonlinear (system) operator L, which might represent something as complicated as the full Navier-Stokes equations with a turbulence model:

L[u(x, y, z, t)] = 0     (13.1)

Denote the manufactured solution by

u = M(x, y, z, t)     (13.2)

We will now change the problem to a new one such that the solution is exactly the manufactured solution M. The most general and straightforward approach is to add a source term to the original problem:

L[u(x, y, z, t)] = Q(x, y, z, t)     (13.3)

The required source term is solved from

Q(x, y, z, t) = L[M(x, y, z, t)]     (13.4)

Boundary values, for any boundary condition to be tested, are determined from M. For example, specified values of dependent variables and gradients along y = 0 are evaluated as

u(x, 0, z, t) = M(x, 0, z, t)
∂u/∂y (x, 0, z, t) = ∂M/∂y (x, 0, z, t)     (13.5)

This works for nonlinear boundary conditions as well. Note that the solution M might be conveniently written in Cartesian coordinates and applied in boundary-fitted coordinates.
Our experiences with many short courses and seminars indicate that the process is often confusing at first glance, but after a little thought it becomes obvious. (See especially the tutorial/review article [9].) If one is intimidated by all the differentiation involved, one may use symbolic manipulation. It is not even necessary to look at the complex continuum equations for M and then encode them. One just uses the code-writing capability of a symbolic manipulation code (Mathematica, Maple, etc.) to produce a source code segment (in Fortran, C, etc.) for the source term.
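To make this concrete, here is a minimal sketch (ours, not from the references; the operator and the manufactured solution are illustrative assumptions) using the SymPy package to manufacture the source term Q = L[M] of Eq. (13.4) for a 1D unsteady heat equation and emit compilable Fortran for it:

```python
# Minimal MMS sketch: manufacture Q = L[M] for the 1D unsteady heat equation
#   L[u] = du/dt - alpha * d2u/dx2
# with an arbitrary smooth tanh manufactured solution M(x, t).
import sympy as sp

x, t, alpha = sp.symbols("x t alpha")
M = sp.tanh(3 * x - t)                        # manufactured solution (illustrative)
Q = sp.diff(M, t) - alpha * sp.diff(M, x, 2)  # required source term, Eq. (13.4)

# Emit a source code segment for the code being Verified:
print(sp.fcode(sp.simplify(Q), standard=95))
```

The same few lines extend mechanically to nonlinear systems; only the symbolic operator changes.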
Armed with this nontrivial exact analytical solution M, we perform grid-convergence tests on the code applied to the modified problem and Verify not only that it converges, but at what rate it converges. Many details and issues are addressed in [1, Sections 3.8-3.11], including the issue of astronomical numbers of option combinations. Briefly, there are ways to pare down the combinations. Obviously, the option combinations must be tested to be Verified, by MMS or any other means. If complete coverage of all options is not feasible, then the code can be Verified only for a specified set of option combinations. There is nothing unique to MMS here, and MMS will reduce the number of required tests for complete coverage of the option matrix.
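The rate test itself is a one-line calculation; a small sketch with illustrative numbers (our example, not from the references):

```python
# Observed order of accuracy from discretization errors measured against the
# exact manufactured solution M on two grids with refinement ratio r > 1:
#   p_obs = ln(E_coarse / E_fine) / ln(r)
import math

def observed_order(err_coarse: float, err_fine: float, r: float) -> float:
    return math.log(err_coarse / err_fine) / math.log(r)

# Halving h (r = 2) should cut the error ~4x for a second-order scheme:
print(observed_order(4.0e-3, 1.0e-3, 2.0))  # -> 2.0
```

The code is Verified at its claimed order when this observed rate approaches the theoretical rate as the grid is refined.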
Knupp and Salari [10, Chapter 3] exercised MMS in a blind study, in which one author (not the code builder) sabotaged a previously Verified computational fluid dynamics code (compressible and incompressible, steady and unsteady, Navier-Stokes) developed by the other, deliberately introducing errors. Then the original code author tested the sabotaged code with the MMS. In all, 21 cases were studied, and all 10 of the order-of-accuracy mistakes, i.e., all that could prevent the governing equations from being correctly solved, were successfully detected.

Other errors that may affect efficiency or robustness but not the final answer may or may not be detected, but neither are these the concern of Verification as the term is used herein; see [1, Sections 3.14-3.15].

The MMS approach has also been used to Verify not only the basic solution code but also solution-adaptive mesh generation [12]. For such applications, realistic MMS solutions are preferable. There is nothing in the method that requires lack of realism. One may start with an analytic solution of a simplified yet realistic problem and, by adding the appropriate source terms, convert it to an exact solution of the modified full-equation problem [1]. (Realistic MMS solutions will also be needed for numerical experiments to evaluate methods for Verification of Calculations; see Section 13.10.)
The MMS method works very well, as attested by dozens of practitioners on difficult and practical problems. A related theorem is unlikely, but it is evident that the process is self-correcting [10]. One might make a mistake and obtain false-positive results (i.e., an indication of a mistake when in fact the code is correct), and counter-examples can be contrived, but practical false-negative results appear almost beyond reasonable doubt.
The requirements of MMS are that the code being Verified must be capable of handling (1) distributed source terms Q(x, y, z, t) and (2) general nonhomogeneous time-dependent boundary conditions B(x, y, z, t). These are not difficult algorithmic requirements. Source terms are particularly easy, since they do not involve spatial differencing. If the code requires some enhancements for MMS to be used, it is definitely worth the trouble. The alternative to MMS is the typical haphazard, piecemeal, and never-ending approach of partial Code Verifications with various highly simplified problems that still leave the user unconvinced. For more complete details and many real-world examples of nontrivial Code Verifications by MMS, from compressible flow turbulence models to radiation transport, see [1, 8-12]. See also [13] for further motivation on designing codes to facilitate V&V.

13.5 VERIFICATION OF CALCULATIONS

The goal of publication standards and project quality assurance on the reporting of grid-convergence studies should be the inclusion of reasonable error bars on the calculated solution, similar to the goals of experimental work. The experimental community typically intends to achieve 95% confidence (~20:1 odds, or ~2σ for a Gaussian distribution, where σ is the standard deviation) in its error bars (or uncertainty). As in Coleman [14], the estimated computational or experimental solution value S is related to the true value T (either computational or physical) by the definition of Uncertainty U95,

|S − T| ≤ U95     (13.6)

in 95% of the cases. This level of confidence or certainty (95%) has often been stated as the goal of computational engineering, but is often not consistently addressed. As in the experimental case, the goal is not to obtain error bounds, since a true bound will include outliers in the scatter and therefore will usually be excessively conservative. A "bound" would correspond to 100% certainty or U100. Likewise, error estimation alone is not the final goal, since error estimation by itself provides only a 50% error bar (see discussion in Section 13.5.10). A 50% error bar is not sufficiently conservative, nor consistent with experimental practice. The data scatter arises because of the population of computational problems considered; any one calculation is deterministic, but the estimated error bars will be stochastic over an ensemble of problems [2].

We consider three categories of approaches to Verification of Calculations: grid-convergence studies using the GCI, single-grid a posteriori error estimators, and solution-adaptive mesh
generation. Most of the observations on the first category (GCI) are equally applicable to the other two.

13.5.1 Grid-Convergence Studies and the Grid-Convergence Index

Grid-convergence studies are the most widely used, most straightforward, and arguably most reliable method of obtaining an error estimate and, from that, an uncertainty estimate or error bars. They are equally applicable to FEM, FVM, and FDM.
The Grid-Convergence Index or GCI [1, 2, 8, 15, 16] is designed consistent with this goal of achieving error bars. It is based, as are related methods, on the generalized Richardson Extrapolation [1] (RE) but includes a factor of safety Fs. In [1], the Summary Recommendations were proposed (as of publication date of 1998) based on experience with simple model problems and on examination of results of some carefully performed systematic grid-convergence studies in the open literature. These recommendations have since been confirmed by other studies (see [2, 17, 18]) and are the basis of an interim recommendation by the ASME CFD Committee for a new publication standard for the ASME Journal of Fluids Engineering [19]. Although further widespread applications and confirmation over statistically significant numbers of cases is desirable, it is unlikely that the parameters suggested here will change significantly.
Instead of the common practice of reporting simply the difference ε between the fine-grid and coarse-grid solutions, one reports the Grid-Convergence Index, defined as GCI = Fs × |E1|, where E1 is the error estimate from generalized RE:

GCI[fine grid] = Fs |E1| = Fs |ε| / (r^p − 1)     (13.7a)

ε = (f2 − f1) / f1     (13.7b)

and where f1 and f2 are the respective solution values (local values or integrated functionals) on the finest grid and the second finest grid in the sequence, r is the grid-refinement ratio (r > 1), and p is the rate of convergence.
Condition 1: For the minimal two-grid (Ngrid = 2) convergence study using only a theoretical (nonverified) p, use Fs = 3. This places all grid-convergence studies, using whatever grid-refinement factor r (not necessarily r = 2, nor even integer) and whatever convergence rate p, on a roughly equal footing with reporting the value ε from a study with grid doubling using a second-order method (r = 2, p = 2). In this case, GCI = |ε|.

Condition 2: For a three- (or more) grid-convergence study, which allows experimental determination of the observed or apparent rate of convergence p, and if the observed p is reasonable, use Fs = 1.25. (Obviously, the confidence in the error bar so obtained will be higher if the observed p is close to a theoretical value, e.g., observed p = 1.97 for a theoretically second-order method.)
The GCI is very convenient to apply to solution functionals like the integrated heat transfer. For pointwise values, it is necessary to obtain fine- and coarse-grid values that are colocated. Depending on possible noninteger grid-refinement ratios, this may require interpolation, which should be done with interpolants of higher order than the order of the discretization.
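As a concrete illustration, the following minimal sketch of Eq. (13.7) (ours, with illustrative numbers) computes the GCI for a solution functional:

```python
# GCI on the fine grid from a fine-grid value f1 and coarse-grid value f2,
# Eq. (13.7a,b), with the safety factor of Eq. (13.7c): Fs = 3 for a two-grid
# study with theoretical p, Fs = 1.25 for three or more grids with observed p.
def gci_fine(f1: float, f2: float, r: float, p: float, n_grids: int = 2) -> float:
    eps = (f2 - f1) / f1                  # relative difference, Eq. (13.7b)
    fs = 3.0 if n_grids == 2 else 1.25
    return fs * abs(eps) / (r**p - 1.0)

# Grid doubling with a second-order method and Fs = 3 gives GCI = |eps|:
print(gci_fine(f1=10.0, f2=10.3, r=2.0, p=2.0))  # -> 0.03
```

The returned value is a relative uncertainty, so the error bar on the fine-grid value is approximately f1 × (1 ± GCI).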
An aside: Practitioners are remarkably reluctant to admit that a "theoretical rate of convergence p" or "formal p" is itself often ambiguous. Once we go beyond discretizations for linear scalar equations, there is often more than one level of analysis for the leading terms, so that "formal p" is not unique. (See [1] for examples.)

In a shorthand statement, omitting the caveats of the above discussion, the recommendation is

Fs = 3 for Ngrid = 2
Fs = 1.25 for Ngrid ≥ 3     (13.7c)

As Eça and Hoekstra [20, p. 80] observed of the GCI: "Its main difficulty is the choice of the value of the safety factor, which has to be essentially based on common sense." This is also its main advantage, in our view, and their evaluation is that the GCI "seems to be viable and robust."

13.5.2 Calculating Observed Rate of Convergence


To calculate the observed (or apparent) p, three grids are necessary. These must be geometrically similar in order for the grid-refinement factor r to be defined strictly. Since one ordinarily uses the finest grid solution as the final solution, the process should not be viewed as grid refinement, but rather as grid coarsening. Thus, the well-known "curse of dimensionality" [21] on operation count becomes a blessing. In a 3D time-dependent problem using optimally efficient methods, a grid halving costs 1/16 or ~6% of the base fine-grid calculation. The third grid in the triplet costs an additional (1/16)², for a total of ~7%. However, it is often the case in practical problems that such a coarse grid is outside the asymptotic range, so r = 2 is often impractical. Also, although grid-refinement (coarsening) factors closer to 1 are more expensive, these provide sharper estimates [1], limited by errors due to round-off and incomplete iteration, both of which contribute noise. In [1] it was recommended that a practical minimum limit on r was roughly 1.1, i.e., roughly 10% coarsening. In [19] a limit of 1.3 was recommended. A good grid sequence recommended in [22] is an overall doubling over the grid triplet, with the intermediate grid at r = √2. Literally, this is impossible due to the irrational nature of √2, but it may be well approximated (e.g., grids with 100, 141, and 200 intervals per direction).
In regard to incomplete iteration error, i.e., the residual error remaining due to iterative solution of (usually) nonlinear algebraic systems, [19] recommends that iterative convergence be achieved with at least three orders of magnitude decrease (from the initial guess) in normalized residuals to avoid the residuals causing noisy p. This must be regarded as only a rough rule of thumb, since it takes into account neither initial residual size nor the grid size. The basic idea, of course, is to make the iteration residual negligible with respect to discretization error; therefore, it is clear that higher-accuracy solutions with finer grids will be more demanding of iteration errors. For a more rational analysis of iteration residual interaction with observed p, see Wilson and Stern [23].
With representative grid spacings h1 < h2 < h3 (that is: fine grid, intermediate grid, and coarse grid) and the grid-refinement ratios r21 = h2/h1, r32 = h3/h2, one calculates the observed order p of the calculation from

p = [1 / ln(r21)] [ln |ε32/ε21| + q(p)]     (13.8a)

q(p) = ln[(r21^p − s) / (r32^p − s)]     (13.8b)

s = sign(ε32/ε21)     (13.8c)

where ε21 = f2 − f1, ε32 = f3 − f2, and fk denotes the simulation value of the variable on the kth grid. For nonuniform grids, one is free to choose the spatial location for the representative grid spacing but, obviously, it must be at the same location for all grids. Note that only the ratios are used, and only to calculate r. If r varies over the sequence of grids, the equation is nonlinear and may be solved iteratively; direct substitution or fixed-point iteration is effective [1, 19]. For r = constant, q(p) = 0 and the solution is a direct (noniterative) formula.
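A minimal sketch of Eq. (13.8) (ours; the solution values are illustrative) using fixed-point iteration for the general case r21 ≠ r32:

```python
# Observed convergence rate from a three-grid study (f1 fine, f2 intermediate,
# f3 coarse), Eq. (13.8), solved by fixed-point iteration as in [1, 19].
import math

def observed_p(f1, f2, f3, r21, r32, iters=50):
    eps21, eps32 = f2 - f1, f3 - f2
    s = math.copysign(1.0, eps32 / eps21)          # Eq. (13.8c)
    p = 2.0                                        # initial guess
    for _ in range(iters):
        q = math.log((r21**p - s) / (r32**p - s))  # Eq. (13.8b)
        p = (math.log(abs(eps32 / eps21)) + q) / math.log(r21)  # Eq. (13.8a)
    return p

# For constant r the formula is direct: q(p) = 0 and one pass suffices.
print(observed_p(1.01, 1.04, 1.16, 2.0, 2.0))  # -> 2.0
```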

13.5.3 Oscillatory Convergence

The grid-convergence sequence of solutions is not always monotone. Oscillatory convergence can be caused by inadequate coarse-grid resolution (being outside the asymptotic range), mixed-order discretization [24], shocks, interface tracking, etc. The following 3-grid test for observed convergence type (expanded from [25]) is based on ratios of successive differences of solution values. With subscripts 1, 2, 3 referring to fine-, medium-, and coarse-grid solutions, we calculate the discriminating ratio R and recognize four apparent convergence conditions (see the sketch following the list):

R = ε21/ε32 = (f2 − f1) / (f3 − f2)     (13.9)

Monotone convergence for 0 < R < 1
Oscillatory convergence for R < 0 and |R| < 1
Monotone divergence for R > 1
Oscillatory divergence for R < 0 and |R| > 1
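A minimal implementation of this test (ours; the values are illustrative):

```python
# Classify the apparent convergence type of a three-grid sequence
# (f1 fine, f2 medium, f3 coarse) by the discriminating ratio of Eq. (13.9).
def convergence_type(f1: float, f2: float, f3: float) -> str:
    R = (f2 - f1) / (f3 - f2)
    if 0 < R < 1:
        return "monotone convergence"
    if R > 1:
        return "monotone divergence"
    return "oscillatory convergence" if abs(R) < 1 else "oscillatory divergence"

print(convergence_type(1.00, 1.01, 1.05))  # R = 0.25 -> "monotone convergence"
```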

The issue of possible misinformation has been discussed hypothetically in [26] and in real calculations in [20, 27-33]. True oscillatory convergence can possibly appear, depending on the sampling from just a 3-grid sample, to be either oscillatory, monotone diverging, or monotone converging. Moreover, an oscillatory diverging sequence can possibly appear likewise [2]. Actually, the only conclusive 3-grid test result is that demonstrating oscillation (with no indication of it being oscillatory diverging or oscillatory converging). As a practical matter, such behavior is usually detected during exploratory calculations. Hypothetically, with nonlinear chaotic solutions a possibility, any kind of nonregular solution sequence is conceivable. The only way to rigorously determine convergence would be to perform a complete grid sequence, e.g., 51 × 51, 52 × 52, 53 × 53, ..., 98 × 98, 99 × 99, .... Not only is this economically infeasible, it would fail because of corruption by round-off error and incomplete iteration error (as noted in [1]).

13.5.4 Noisy and Degraded Convergence Rates


Even if convergence is monotone, the observed convergence rate p can be noisy. If the observed p is indeed close to the theoretical p for the method, e.g., observed p = 1.97 for a nominally second-order method, one may proceed with some confidence. However, it must be remembered that a variety of factors can cause noisy p, i.e., a different grid triplet can produce a different observed p. Furthermore, fortuitous sampling of the possible grid triplets can produce a misleading observed p ≈ 2 when, in fact, more complete calculations show considerable noise. See Eça and Hoekstra [20, 27-33] for examples. Nevertheless, in the spirit of the targeted 95% certainty, if the observed p is close to 2, one may proceed with some confidence. A more scrupulous approach is to verify that the observed p is approximately constant by calculating p for at least two separate grid triplets. This requires performing a minimum of four grid calculations, which
would allow as many as four grid triplets and observed p's. Note that four grids (a, b, c, d) give four possible grid triplets of (a, b, c), (a, b, d), (a, c, d), (b, c, d). However, Eça and Hoekstra [20, 27] limit their grid triplets to r ≈ 2, which may eliminate some triplets from a four-grid set.

The consequences of noisy p should be kept in perspective. Noisy p does not necessarily indicate an unstable algorithm or divergence. The methods and even the grid resolution may perhaps be adequate for accuracy. Noisy p just makes the error estimation and error banding somewhat problematical.

Another possible contributor to noisy observed p, or simply degraded p (e.g., observed p ≈ 1.2 for a theoretically second-order method), is the use of any kind of interface tracking or moving boundaries and/or re-meshing algorithms. These can be a challenge to achieving and convincingly demonstrating second-order convergence rates even in the Code Verification stage. Likewise, the presence of singularities can degrade the observed convergence rate, even though the coding may be demonstrably error-free.

When observed p over four or more grids is not constant, the recommended procedure is to use a least-squares determination of an effective p. (See below.)

13.5.5 Recent Confirmations of Grid-Convergence Index

Two major studies of the GCI [27, 34], and several more limited evaluations cited in [2, 17], have confirmed the summary recommendations on Fs and show more generality and robustness of the GCI than might be expected. These are considered in more detail in [2, 18]. The highlights of these two major studies are given here.

Conservatism of the GCI in Studies of Cadafalch et al. The study by Cadafalch et al. [18, 34] is important for its careful consideration of grid-convergence issues for a variety of meaningful problems. The authors treated the following problems: 2D driven cavity (laminar), and variants with 2D inclined walls, with five levels of refinement; 3D driven cavity (laminar) with four levels of refinement; axisymmetric turbulent flow (low-Re k-ε model) through a compressor valve, tanh stretching, zonal refinement, power-law advection differencing, with five levels of refinement; 3D premixed methane/air laminar flat flame on a perforated burner, with seven levels of refinement; free convection heat transfer from an isothermal cylinder in a square duct, three zones, tanh stretching of body-fitted grid, with five levels of refinement; 2D linear advection-diffusion model problem, rotated 1D exact solution, with six levels of refinement.
The authors [34] conclude the following:

1. For the linear model problem with exact solution: "The GCI has predicted the real absolute discretization error for all the studied situations quite well."
2. For all problems: "The certainty of the error band estimator has been checked comparing its value to the "exact" [reference value, highest-order method on finest grid] absolute error of the numerical solutions, always obtaining very reasonable values."

Further examination [18] of these studies gives these results:

3. Confirms that the recommended Fs = 1.25 used with 3-grid studies to determine the observed p is roughly compatible with the target U95 error bar.
4. Confirms that UDS is not only less accurate than higher-order methods but is less reliable, i.e., the error estimates and error bars are not as reliable (or "credible").
5. Suggests that reliable GCI may be calculated even though as many as ~1/3 of the nodal values are known to be converging nonmonotonically.
6. Suggests that there is no necessity to discard results with observed p < 1, probably because p is increasing as convergence is approached, so that the lag effect [18] makes the error estimator and/or error bar more conservative. That is, the fine-grid calculation has a p larger than the average (observed) p over the three grids. This leads to excessively conservative GCI for SMART calculations, but this is not an impediment to publication standards or other acceptance.

Conservatism of the GCI in Studies of Eça and Hoekstra: Least-Squares Approach

In [20, 27-33] Eça and Hoekstra take an exhaustive look at grid convergence for several CFD problems from the laminar 2D driven cavity to 3D turbulent free-surface flows, using as many as 24 grid sets (grid triplets). (They are serious.) Briefly, they found [20, p. 80] in their test cases that the GCI with Fs = 1.25 "seems to be viable and robust."
They demonstrate that grid convergence can be remarkably consistent with theory for simple problems (the well-behaved Laplace equation, for which virtually any grid is within the asymptotic regime, gives observed p = 2.00), but for realistic CFD problems (RANS solutions for the Wigley hull and the KVLCC2 tanker) convergence is often not monotone and the observed p often involves significant scatter (noise) and is undependable. Chance grid sets may show observed p ≈ 2, but other nearby sets fail. This is not unique to their problems or codes, but is (we believe) representative of computational engineering.

The authors show that the major contributor to noisy convergence is the difficulty of attaining geometric similarity of the grids with noninteger grid refinement and especially multiblock grid generation. The latter appears to be an unavoidable limitation. Without strict geometric similarity, the grid-refinement factor r is not defined strictly. Numerical interpolation and/or quadrature is also a contributor. There are two very positive conclusions. Turbulence modeling per se is not a contributor if [33] switching functions are not used, although the common presence of switches, e.g., sublayer definitions, can reduce observed p to first order [1] and cause noisy p. Also, the Reynolds number per se does not have a significant effect on the intensity of the scatter in observed p.
When observed p over four or more grids is not constant, Eça and Hoekstra have developed a least-squares procedure. This requires a minimum of four grid solutions for determination of effective convergence rates, which provide improved error estimation for the difficult problems. For difficult realistic problems, more than the minimum four grids may be necessary; they obtain [33] "fairly stable results using about 6 grids with total refinement ratio near 2." We recommend this procedure for noisy-p problems, with the additional step of limiting the maximum p used in the GCI to the theoretical p. It clearly would be imprudent to calculate the GCI with observed p > theoretical p. Although such superconvergence can occur, and would be appropriate to use if one were actually using the extrapolated solution (see [1] for discussion), it is recommended for uncertainty calculations that max p = theoretical p be used. On the other hand, there seems to be no reason to categorically reject 0 < observed p < 1. If observed p is < 1, it probably means that the coarsest grid is somewhat outside the asymptotic range, and the resulting uncertainty estimate of the GCI will be overly conservative [2, 18]. (See below.) This is not an impediment to publication or reporting.
The least-squares approach has been applied to several models of convergence [27, 32], including the one-term expansion with unknown order p considered herein. Other possibilities considered [27] were one-, two-, or three-term expansions with fixed exponents. (For example, a two-term expansion with p's = 1 and 2 could be appropriate for mixed-order discretization arising from first-order advection terms and second-order diffusion terms, or perhaps directional bias.) The simplest method works as well and is recommended, as follows. The assumed one-term expansion of the discretization error is

f_i = f∞ + α h_i^p     (13.10)

The least-squares approach is based on minimizing the function

S(f∞, α, p) = Σ_{i=1..Ng} [f_i − (f∞ + α h_i^p)]²     (13.11)

where the notation f∞ (not that of [27, 32]) suggests the limit of fine resolution (in the absence of round-off error). Setting the derivatives of S with respect to f∞, α, p equal to zero leads to

f∞ = (Σ f_i − α Σ h_i^p) / Ng     (13.12a)

α = (Ng Σ f_i h_i^p − Σ f_i Σ h_i^p) / (Ng Σ h_i^{2p} − (Σ h_i^p)²)     (13.12b)

Σ f_i h_i^p ln(h_i) − f∞ Σ h_i^p ln(h_i) − α Σ h_i^{2p} ln(h_i) = 0     (13.12c)

with all sums running over i = 1, ..., Ng.

The last equation is nonlinear and is solved iteratively by a false-position method for observed p. The number of grids Ng must be > 3, and Eça and Hoekstra consider only 0 < p < 8. For use in calculating an uncertainty estimator as in the GCI, we further recommend restricting the max p used to the theoretical p.
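A compact sketch of Eqs. (13.10)-(13.12) (ours; it brackets the root of Eq. (13.12c) by bisection rather than the false-position method cited above, and the data are synthetic):

```python
# Least-squares fit of f_i = f_inf + a * h_i**p over Ng >= 4 grid solutions.
import numpy as np

def least_squares_p(h, f, p_lo=0.01, p_hi=8.0, tol=1e-10):
    h, f = np.asarray(h, float), np.asarray(f, float)
    n = len(f)

    def residual(p):  # left-hand side of Eq. (13.12c), with a and f_inf profiled out
        hp = h**p
        a = (n * (f * hp).sum() - f.sum() * hp.sum()) / \
            (n * (hp * hp).sum() - hp.sum()**2)       # Eq. (13.12b)
        f_inf = (f.sum() - a * hp.sum()) / n          # Eq. (13.12a)
        return ((f - f_inf - a * hp) * hp * np.log(h)).sum()

    lo, hi, r_lo = p_lo, p_hi, residual(p_lo)   # assumes a sign change in [lo, hi]
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        r_mid = residual(mid)
        if r_lo * r_mid <= 0:
            hi = mid
        else:
            lo, r_lo = mid, r_mid
    return 0.5 * (lo + hi)

# Synthetic check: data generated with p = 2 is recovered.
h = np.array([1.0, 0.8, 0.6, 0.5])
print(least_squares_p(h, 3.0 + 0.4 * h**2))  # -> ~2.0
```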

13.5.6 The Fs : U95 Empirical Correlation


There is no need to preserve the initial summary recommendations of [1] of Fs = 1.25 for Ngrid ≥ 3, but it does not appear likely that ongoing refinement of the correlation will produce a recommended value far from this (e.g., Fs = 1.1 or Fs = 1.5 appear unlikely) for first- and second-order methods. More evaluation for a wide range of problems is desirable, of course. A precise optimum value of Fs could not even be defined; any rough value would likely depend on physical problem subsets such as internal vs. external flows, shocks, etc., and perhaps on discretization methods such as FEM vs. FVM, but more likely on the order of the discretization method and specifically on the order of the first neglected terms (since these determine [1] the order of accuracy of RE), etc. Of these, the most likely sensitive factor would be p. Since higher-order methods are not only more accurate but their error estimates are more reliable [1], it is likely that Fs for U95 would be < 1.25 for (say) fourth-order methods. Testing of the Fs : U95 correlation for a wider range of practical problems is desirable, but there is not much value in refining excessively what will always remain a fuzzy estimate for any particular problem. Equation (13.7c) already provides the small step needed to go beyond error estimators to the Calculation Verification that is needed prior to Validation, i.e., from error estimation to error bars.

13.5.7 Determining Conservatism of Any Uncertainty Estimator: Statistical Significance
Attempts to assess the adequacy of the conservatism of the GCI or any uncertainty estimator by looking at the "conservatism" of the GCI for individual problems miss the point. The only way to assess this is to consider an ensemble of problems, as in [1, 20, 27-34] and herein. From the beginning of the GCI, computer users have resisted using it because of its conservatism. Since the computational community wants 95% confidence, we must expect "over-conservatism" on some simulations. The degree of overconservatism using Fs = 1.25 on any one problem is only a weak predictor of the confidence level, i.e., of whether or not the U95 error bar has been met. The test of conservatism is not the conservatism of one calculation but conservatism over a population of many problems.

If the "true" error of a single problem is overestimated by (say) 25% (which will always occur with fine enough resolution on well-behaved problems), this says little about whether the uncertainty U is indeed U95. The only way to assess this is to consider an ensemble of problems. Since we are targeting 95% certainty, we must consider something on the order of 20 different problems just to expect to get one case in which the GCI (or another U95 estimator) "fails," i.e., is nonconservative. (It is not actually a failure; if it never "failed" by being unconservative, this would be a kind of failure, i.e., it would demonstrate that Fs was too large, producing U100 instead of the targeted U95.)
Of course, many more than 20 problems are required for any statistical significance. In fact, this assembling of problems was begun in [1] and was the basis of the summary recommendations (as of 1998) to use Fs = 3 for minimal 2-grid studies with theoretical p, and to use Fs = 1.25 for minimal 3-grid studies with observed p. Note that we (and others) have stretched the definition of "different problem" or "case" to include the same physical problem with multiple-grid sequences or Reynolds numbers or numerical methods. (For example, a 4-grid sequence could provide 6 "problems" of 2-grid studies and 4 "problems" of 3-grid studies.) However, the signs of higher derivatives are expected to remain the same for the same physical problem, so that sampling over physical problems is also required. This is done to some extent in [1, 2, 17] and herein. As noted above, the new data by Cadafalch et al. [34], Eça and Hoekstra [20, 27-33], and other more limited studies [1, 2, 17] are still supportive of the conclusion given in [1] that the GCI method with Fs = 1.25 correlates roughly with the desired U95.

Note that with the GCI approach, a fixed percentage of a three-grid error estimate (e.g., 25% of the error estimate for Fs = 1.25) is used to calculate an uncertainty of the error estimate regardless of how close solutions are to the asymptotic range. This is correct and intended, but has caused some confusion. This is precisely what is needed by the definition of "Uncertainty U" that we use. This goal and definition must apply even well into the asymptotic regime. For example, [1] contains examples of real problems in which the observed order of convergence = theoretical order = 2 to the most precision that could be reasonably expected (e.g., 2.01, 1.98, ...). This excellent convergence behavior does not obviate the need for a U95 error band. High accuracy is not to be confused with lack of uncertainty in the error estimate. Ideally, Fs would be tuned to the accuracy level, with Fs smaller far into the asymptotic range, as noted by Wilson and Stern [23]. But we would always need Fs > 1, and the additional empiricism would require an extensive and perhaps impractical program to assess Fs with statistical significance.

13.5.8 Grid-Convergence Index for Unstructured Grids and Noisy Convergence Rate

Unstructured base grids may be treated in this framework without further empiricism only if the grid refinement is structured. This is difficult to achieve and more limiting, requiring
integer grid refinement. When unstructured refinement (coarsening) is used, the grid sets are not geometrically similar, and the strict concept of a grid-refinement ratio is lost. Using an old-fashioned engineering approach, one can define an effective r based only on the ratio of the numbers of nodes or elements [1, 19],

effective r = (N1/N2)^(1/D)     (13.13)

(where D is the dimensionality of the problem and N1, N2 are the node or element counts of the fine and coarse grids) and use this effective r in the calculation of the GCI.
This is crude but certainly preferable to the common reporting of unstructured grid-convergence studies with only ε; at least, the effective GCI shows some normalization of ε by an effective r and p. Pelletier and Ignat [35] showed that this generalization of the GCI for unstructured grids "performs well," at least for global norms. Later examination of a sequence of unstructured meshes obtained with a solution-adaptive method [36] also indicates second-order convergence for global norms using such an effective r. The same approach would be applicable to meshless methods. However, common sense again indicates that one cannot cut this very close; observed p over various grid triplets are sure to be noisy, and Fs = 3 is recommended for all nonadaptive unstructured-grid (and meshless) convergence studies. (For adaptive unstructured grid methods, see Section 13.9.) It is to be expected that, as new discretization approaches such as meshless methods are developed, error estimation and banding would lag behind in development.
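Eq. (13.13) is trivial to apply; a one-function sketch (ours, with illustrative mesh sizes):

```python
# Effective grid-refinement ratio for unstructured grids, Eq. (13.13),
# from the node (or element) counts of the fine and coarse grids.
def effective_r(n_fine: int, n_coarse: int, dim: int) -> float:
    return (n_fine / n_coarse) ** (1.0 / dim)

# Example: 3D meshes with 1.6M and 0.2M elements give effective r = 8**(1/3) = 2.
print(effective_r(1_600_000, 200_000, dim=3))  # -> ~2.0
```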
The method is straightforward to apply to a functional like the Nusselt number or to global norms, but pointwise values present a difficulty. Colocated values are necessary and must be determined by interpolation, preferably of higher-order accuracy than the PDE method. This is problematical for unstructured and meshless methods.

13.5.9 Grid-Convergence Index Is Not an Error Estimator

It is important to note the obvious: the GCI with Fs > 1 is not an error estimator, but an uncertainty estimator. If only an estimate is given in a report, without an explicit statement of probability, the only rational approach in the absence of additional information is to treat the error estimate as a 50% uncertainty. The distinctions are significant.
The uncertainty U95 is not an error estimate, which we denote generally by Ee. (The notation used herein for the error estimate based on RE is E1, but there are other possibilities for Ee.) Although Ee and U95 are not unrelated, the definition of U95 in Eq. (13.6) does not involve an error estimate; if we were given all the data (experimental or computational), we could calculate U95 without ever calculating an Ee. We could also examine a subset of the ensemble, calculate a U95 (subset) directly (again without calculating Ee) and use it as an estimate of U95 for the entire ensemble. (This would lead into consideration of biased estimators, which is not an issue for the subject at hand; we are not presuming to model the variance.) Although the recommendations for going from an error estimator to an error bar or uncertainty involve nothing more than multiplying the absolute value of an error estimate by a factor of safety, as in Eq. (13.7), there are other approaches possible for estimating U95. Some of these involve using two error estimators, and others perhaps are conceivable [19] without utilizing any error estimator. The essential point is that the stated goals of the community are not 50% uncertainties, but something larger, usually U95. There are other distinctions between error estimators Ee and error bars Eb, some of which distinctions are universal and some of which are particular to the GCI and similar methods. See discussion in [2].
Whereas the RE error estimator is ordered (asymptotically exact), the Fs used in the GCI is not. Fs is not determined by theory but by empirical correlations of numerical experiments aimed to achieve GCI ≈ U95 for a statistically significant population of problems. This Fs : U95 correlation is now [2] based on some hundreds of numerical experiments and is recognized as good engineering practice [19]. Still, one must bear in mind that the correlation does not necessarily improve as the grid is refined. GCI with Fs = 1.25 or 3 or other may give a good estimate of U95 for some problems but not for others; it will not converge exactly for an arbitrarily fine grid. It is not an ordered approximation.
A possible semantic pitfall lies in the fact that, when some authors [37] speak of "making an error estimate," they implicitly assume that one intends to use the error estimate to provide and use a corrected solution, as in using the (often) fourth-order RE corrected solution. This is not our use of the term. We can make an error estimate and simply state it, or we can use it to obtain a corrected solution (hopefully improved, but demonstrably not always so), or we can use it with an empirical correlation to give an uncertainty or error bar, as in the GCI method.

13.5.10 Theoretical Expectations for Grid-Convergence Index and Richardson Extrapolation

The original definition and concept of the GCI [15, 16] was based on placing any grid-convergence study, with any grid-refinement factor r and any order of convergence p, on the same footing as a grid doubling with a second-order method, i.e., r = 2 and p = 2. The GCI is based on a generalized RE, with many caveats and conditions discussed in [1]. Given only the information from two grid solutions, RE provides the best estimate available for the true (numerical) answer, i.e., of the grid-converged solution. If this were all we knew, we would expect RE to give E1 with 50% uncertainty, i.e., it is equally likely that the true solution be ≤ or ≥ the RE solution. Any particular problem ensemble could be ≤ or ≥. In fact, we would expect about half the problems to give ≤, and half to give ≥. So the determination of a "minimum Fs" as in [32] would as often give Fs,min ≤ as ≥. "Problem ensembles" here could include the same physical problem over various grid sequences; e.g., a transonic airfoil at M = 0.8 with turbulent boundary layer computed over 200 × 200 and 300 × 300 grids would constitute one problem, and 300 × 300 and 400 × 400 would constitute another. Examples of each bias (< or > 0) are readily available. In the excellent transonic turbulent boundary layer airfoil computations of Zingg [38, 39], RE was always conservative. In the easily replicated 1D Burgers equation calculations [1, Section 5.9], RE was seldom conservative.
Actually, there is good reason to be more pessimistic, to expect RE to give U_A = |E1|, where A < 50%. For specificity, we consider theoretical p = 2, i.e., a formally second-order method. RE would provide the exact solution if the convergence rate were exact (e.g., p = 2 exactly and uniformly). RE applies in the asymptotic range, which does not mean that p = 2 exactly, but rather that the higher-order terms in the power series are small compared to the leading p = 2 term. While it is true that some situations can produce observed p > 2 for a nominally p = 2 method (e.g., certain ranges of grid sequences, mixed-order discretizations like QUICK, wherein advection terms are third-order, or constructive interference of higher-derivative terms), it is the more common experience that higher-order terms neglected in the formal analysis produce an observed p somewhat < 2. This additional unconservativeness of RE produces U_A = |E1|, where A < 50%. Even if RE for some problem sets produces U_A = |E1| where A > 50%, this is usually unacceptable, the goal being U95.
As noted, the work of Eça and Hoekstra [20, 27-33] has shown that the greatest practical contributor to noisy observed p appears to be lack of true grid similarity. Multiple-block grid generation makes the departure significant; true geometric similarity is virtually impossible to achieve with multiblock grid generation methods. This provides additional motivation for single-grid error estimators and uncertainty calculations, considered next.

13.6 SINGLE-GRID A POSTERIORI ERROR ESTIMATORS

Grid generation can be problematical, and the multiple-grid generation required for grid-convergence studies is always troublesome. Thus, single-grid error estimators are very much of interest. In [1], it was highly recommended that all commercial codes contain some kind of single-grid estimator, hard-wired, whose value was always output (i.e., noncircumventable by the user).

13.6.1 Conservation Checks


In [1] it was noted that nonconservation of conservation variables and higher moments can be used as surrogate error indicators; e.g., a non-mass-conserving boundary layer calculation can be processed to evaluate the mass error. Since this error must tend to zero as the grid is refined, the mass balance error is an indirect indicator of the convergence of other properties of interest, e.g., integrated skin friction and heat transfer. But to be useful, an empirical correlation has to be established between the mass balance error and errors in the quantities of interest, which errors must be established by grid-convergence studies. This is not effective for a new problem, but can be useful for large parametric studies.

13.6.2 Error Transport Equation

Celik and Hu [40] have revisited the idea of a single-grid error estimator based on numerical integration of a transport equation for the truncation error, solved on the same grid as the base solution. This work is very much of interest. (Among other attributes, it holds out the possibility of ordered error estimation for Lagrangian methods, which are in a primitive condition.) At this time, the ETE methods are not sufficiently developed to be included in this handbook.

13.6.3 High- and Low-Order Methods on One Grid


The difference between (say) a second-order solution using second-order methods and a fourth-order solution (using higher-order FEM or other discretization stencils) on the same grid is clearly an ordered error estimator [1], as is RE. In fact, RE is such a procedure, but one in which the fourth-order solution is not obtained by higher-order stencils but by a second grid plus extrapolation to the limit. This is the approach used in p-refinement FEM.

These methods are not common. Compared to grid-convergence studies, they require considerable algorithmic development. If they are used, it must be recognized that they provide an ordered error estimator (as does RE) and therefore a 50% uncertainty calculation. If error bands are intended, as recommended, such error estimates also should be multiplied by a factor of safety Fs. (This is also true of conservation checks and the ETE methods.) Although a scrupulous approach would call for separate evaluation of the value of Fs, there is good reason to expect that the same values used in the GCI should be adequate for second- and fourth-order methods. For higher-order methods, the values Fs = 1.25 or 3 would likely be more conservative than necessary, i.e., they would produce more than 95% certainty.

The category of single-grid a posteriori error estimators that is most important is that of auxiliary algebraic evaluations, in particular, Zhu-Zienkiewicz estimators, considered next.

13.7 AUXILIARY ALGEBRAIC EVALUATIONS: ZHU-ZIENKIEWICZ ESTIMATORS

13.7.1 Background on Estimators

A category of single-grid a posteriori error estimators is described generically [1] as auxiliary
algebraic evaluations, which we refer to herein by the shorthand AAE. What they have in
common is their mode of application; they all involve local algebraic processing of a single-
grid solution. A recommended overview of the theoretical development of AAE is given by
Ainsworth and Oden [41], who refer to this category simply as "a posteriori estimators." The
most widely known methods of this type are the Zhu-Zienkiewicz family (ZZ).
AAE are also described as "error indicators" or "surrogate estimators" rather than error
estimators, because the energy norm metric on which they are based is not usually of any direct
engineering or scientific interest. (They are of interest in meteorological and ocean calculations.)
The AAE could be useful for engineering use only if correlated by experience with quantities of
direct engineering interest. But remarkably, Ainsworth and Oden [41] have shown how the AAE
may be extended from merely the energy norm (which has fundamental theoretical significance)
to functionals like the Nusselt number, drag coefficient, etc., which they refer to generically as
"quantities of interest." The major part of their book involves linear strongly elliptic problems,
with the last six pages covering quantities of interest for nonlinear systems and Navier-Stokes
equations. No demonstration calculations are given. The limitation of the theory to "small data"
probably is similar to existence requirements; it may restrict the theory to Galerkin methods
without stabilization (low Re) and avoidance of some pathological cases, but may not signal
practical inapplicability. (As usual, a strong theoretical foundation may be expected to lag
methods which may nevertheless work.)
The computational community will follow all these developments with interest, but a general
point is that they all basically provide error estimates, whereas ultimately for Validation exercises
we want the Calculation Verification to include error bars, i.e., 95% certainty rather than the 50%
(at best) intrinsic to error estimates. In lieu of empirical evidence from a statistically significant
number of studies specific to AAE, the same Fs should apply to convert any of these error
estimates into roughly 95% certainty error bars. (This is especially clear for the p methods,
since RE itself is a multiple-grid p estimator.) In any case, if authors persist in reporting only
some error estimator Ee (which is, after all, a tremendous improvement over historical practice
of reporting nothing but a single-grid answer!), a safety factor of 3 or 1.25 (or other judgment
call, as are all engineering safety factors) can easily be applied by reviewers, editors, readers, and
users. However, visual presentations of grid-convergence studies, and especially of Validation
and certification studies, would greatly benefit if the authors used an Fs or other approach to
present error bars instead of error estimates.

13.7.2 Implementation of Zhu-Zienkiewicz Estimators


Because of the richness of FEM formulations, it is not possible to present here a detailed imple-
mentation of ZZ estimators. Nor are such details customarily presented in journal papers; it is
necessary to read dissertations and reports specific to the FEM, e.g., [42-44].
The ZZ are a family of error estimators, each member being defined by what is projected, how it
is projected, how many properties are included, etc. The ZZ method to be cited here uses the local
least-squares projection error estimator [45, 46]. It is based on the observation that the derivatives
of the numerical solution are discontinuous at the element interfaces, while the exact derivatives

are continuous. An approximation of the exact derivative is obtained by a superconvergent
least-squares reconstruction. The estimator is computed as the norm of the difference between
the projected and finite-element derivatives. The projection is termed "local" because the least-
squares problem is constructed using only data from elements directly connected to any given
node. A quadratic basis is sufficient for the projection of the derivatives if the FEM uses quadratic
elements for velocity, temperature, and turbulence variables. The projected derivatives are of
higher degree than the FEM derivatives. They are also continuous across element faces and
smoother than their FEM counterparts.
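The following sketch is only a one-dimensional analogue (our simplification; a real implementation works on patches of multidimensional elements with a quadratic basis, as described above). Piecewise-linear "FEM" derivatives, constant per element and discontinuous at nodes, are replaced by a nodal derivative recovered from a least-squares fit over the patch of elements touching each node, and the element indicator is a norm of the difference:

    import numpy as np

    def zz_estimate_1d(x, uh):
        # Piecewise-linear data: the derivative is constant per element and
        # discontinuous at nodes; element-midpoint values are superconvergent.
        xm = 0.5 * (x[:-1] + x[1:])              # element midpoints
        du = np.diff(uh) / np.diff(x)            # FEM derivative per element
        dstar = np.empty_like(x)
        for j in range(len(x)):
            lo, hi = max(j - 1, 0), min(j + 1, len(du))  # elements touching node j
            A = np.vstack([np.ones(hi - lo), xm[lo:hi]]).T
            c = np.linalg.lstsq(A, du[lo:hi], rcond=None)[0]  # local LSQ fit
            dstar[j] = c[0] + c[1] * x[j]        # recovered, continuous derivative
        # element indicator: L2 norm of (recovered - FEM) derivative; note the
        # degenerate one-element patches at the two boundary nodes
        dmid = 0.5 * (dstar[:-1] + dstar[1:])
        return np.sqrt((dmid - du) ** 2 * np.diff(x))

    x = np.linspace(0.0, 1.0, 21)
    eta = zz_estimate_1d(x, np.sin(np.pi * x))   # indicators for a smooth field

The degenerate boundary patches in this sketch echo the reduced reliability of ZZ near boundaries discussed later in this chapter.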
As Ainsworth and Oden describe them [41], the ZZ estimators are "unsophisticated [little
mathematics involved], crude [do not make use of information from the PDE being solved],
and astonishingly effective." Even for nonlinear problems, they have been repeatedly demon-
strated to exhibit "asymptotic exactness" [35, 36, 47-56], i.e., they are actually "ordered error
estimators" [1].

13.8 COMPARISON OF GRID-CONVERGENCE INDEX AND SINGLE-GRID
A POSTERIORI ESTIMATORS

We first repeat a minor point to avoid confusion of terminology. As noted previously, the GCI
is not an error estimator but an uncertainty estimator, equal to Fs times an error estimate. Thus,
to compare likes, we must compare the GCI not to single-grid a posteriori error estimators but
to those multiplied by a similar Fs. Setting aside this fine point, what are the pros and cons of
the two approaches?
GCI (or, more generally, a grid-convergence study) is applicable to FDM and FVM as well
as FEM, and involves such simple mathematics that the description given in Eqs. (13.7) may
be regarded as complete. AAE have been developed within the theoretical framework of FEM.
Pelletier [see 1] has extended the theory for ZZ to FVM, and other extensions of AAE meth-
ods to FVM and/or FDM may be possible, but at present they are not ready for "off the
shelf" application. The detailed description changes with each variation of FEM. Although the
evaluation is local, the cost may not be insignificant when amortized over the most efficient
solvers [1].
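For orientation, a minimal sketch of that simple GCI evaluation follows (our summary of the standard three-grid, constant-r procedure; the chapter's defining relations, Eqs. (13.7) and (13.8), govern, and the sample functional values are invented for the example):

    import numpy as np

    def gci_three_grid(f1, f2, f3, r, Fs=1.25):
        # f1, f2, f3: fine, medium, coarse values of a target quantity computed
        # on systematically refined grids with constant refinement ratio r
        eps21, eps32 = f2 - f1, f3 - f2
        p = np.log(eps32 / eps21) / np.log(r)            # observed convergence rate
        gci_fine = Fs * abs(eps21 / f1) / (r**p - 1.0)   # fine-grid relative uncertainty
        return p, gci_fine

    # invented sample values for, say, a Nusselt number, with r = 2:
    p, u95 = gci_three_grid(10.02, 10.10, 10.42, 2.0)    # p ~ 2, u95 ~ 0.33%

Here Fs = 1.25 corresponds to the three-grid, observed-p practice discussed earlier in this chapter.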
GCI is the most reliable approach. While requiring no additional code, it does necessar-
ily use multiple grids. If one is taking a minimalist approach to Calculation Verification by
assuming that the base grid is in the asymptotic range, then single-grid AAE methods are much
more convenient to use (once the formulas are incorporated into the code). We still strongly
recommend their inclusion in all commercial codes. However, at present they have not been con-
clusively demonstrated for quantities of engineering interest (such as heat transfer) in nonlinear
problems. Until such theoretical approaches are demonstrated, one must establish correlations
between the energy norm tolerances and those quantities of interest for a class of problems.
This is a highly worthwhile area of research, because of the great convenience of working with
a single grid, especially for unstructured grids. By contrast, GCI applies to all quantities of
interest.
We emphasize that we do not recommend either minimalist approach, i.e., one grid for AAE
methods like ZZ, or two grids for GCI.
Also note that application of AAE to time-dependent problems is more difficult than to
steady-state problems and is an open issue (i.e., requires additional theoretical work) at this time,
whereas the GCI is straightforward. The GCI is usually applied in an approximate partitioned
way by separately calculating a GCI for the temporal error. If this is reduced by reducing Δt
(perhaps using automatic adaptive time-step selection) to a level much smaller than the more

difficult spatial errors, then the approximation is good. A more accurate way is to combine the
temporal and spatial grid convergence. If both time and space discretizations have the same
order (e.g., p = 2), the formula for GCI is unchanged. If time is p = 1 and space is p = 2,
the grid-refinement ratios are changed accordingly, e.g., spatial grid doubling and time-step
quadrupling.
AAE lose accuracy near boundaries [41, 42], precisely where we often are most interested in
the solution and the error bands. This is not a problem for GCI.
It is well recognized that singularities cause difficulties for AAE methods, through the mech-
anism of enhancing the nonlocalness of the errors, a phenomenon simply in keeping with the
behavior of continuum PDEs and referred to as "pollution errors" in the AAE literature [e.g.,
41]. (This behavior is clearly manifest in a grid-convergence study, but only if more than two
grids are used.) Strong nonlinearities are also blamed. It is perhaps less recognized that simple
advection terms, not necessarily nonlinear, nor even variable coefficient, are strong contrib-
utors to "pollution errors" simply because discretization errors themselves are advected and
diffused [1].
Also, any stabilizing methods (e.g., flux limiters, SUPG FEM) destroy the theoretical basis
for some AAE and degrade the actual performance as well [41, 42]. The ZZ estimators are
immune because they do not rely on the PDE to construct the estimation.
If one is not taking a minimalist approach, but instead requires verification that the asymptotic
range has been achieved, the advantage of the AAE is reduced. It is not possible to determine
whether the grid is adequate (e.g., whether convergence really is p = 2) by doing a single-grid calcu-
lation. Order of convergence is verifiable only by multiple-grid calculations. AAE methods still
retain some advantages, in that they require one less grid than the conventional GCI, at all levels.
To be specific: for a minimalist approach assuming a known convergence rate p, GCI requires
two grids; AAE requires one. However, each of these approaches is dangerous unless one is
working on a suite of "nearby" problems, so that one has confidence that the grids are within
the asymptotic range. To actually calculate an observed p, GCI requires three grids, while AAE
requires two. To verify that p is constant, GCI requires at least four, while AAE requires at least three.
While it is simpler to generate three grids than four, the same issues arise, i.e., the importance
of strict grid similarity, noisy p, etc. (For conservation checks, as in Section 13.6.1, the exact
answer for the conservation imbalance is known, namely zero, and an observed p for the
mass balance may be calculated from just two grids [1].)

13.9 VERIFICATION WITHIN SOLUTION ADAPTATION

The powerful application of AAE occurs when they are used to drive solution-adaptive grid
generation. Here, the error estimate can arise without additional penalty. We are very much in
favor of such methods, and have used them extensively. As a practical matter, the numerical
error can be driven to small and surely acceptable levels. However, strictly speaking, the error
estimate obtained by the adaptive AAE algorithm is not always an ordered error estimate of the
final solution. The ZZ methods in a solution-adaptive grid sequence do provide such an ordered
estimator, as discussed below. For nonordered AAE methods, a quantitative error estimator can be
obtained with systematic grid convergence (coarsening) of the final adapted grid, i.e., a separation
of adaptivity and grid convergence. Using only the nonordered AAE to guide solution adaptivity
(and the truth is, almost anything intuitive works for guiding solution adaptivity [1, 21]), it is
problematical to translate the adaptivity criterion into a reliable quantitative final error estimate,
especially for functionals like the Nusselt number and other "quantities of interest." However,
if a correlation between the ordered AAE criteria and the results of grid-convergence tests is

established for a class of problems, one can proceed with confidence without requiring grid
convergence testing separate from the solution adaptation for every problem.
As noted previously, the difference between any two solutions is at least qualitatively indica-
tive of an error estimator. However, most of these are not quantifiable (and, in fact, most are
undependable and grossly optimistic). For example, a "feature-based" adaptation (e.g., increas-
ing resolution in boundary layers or near shocks) is effective for improving accuracy but does
not provide quantifiable error estimation. A proven approach is based on ZZ estimators.
The power of ZZ (and similar) single-grid error estimators is in fact exhibited not in a
single-grid calculation, since this minimal approach can give no indication of the observed order
of convergence. The power of ZZ is most evident when combined with solution adaptation,
which indeed was its original motivation [45, 46]: this procedure can produce quantified error
estimation and therefore Verification, at least for global error norms. This approach uses ZZ but
does not depend on the accuracy of ZZ for a single grid. These error estimates can be extended
to uncertainty estimates via a factor of safety.
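A skeleton of such an adaptive cycle is sketched below (ours; the "solver" is a stand-in interpolation of a known field so that the sketch runs, and zz_estimate_1d is the one-dimensional analogue sketched in Section 13.7.2; a real FEM solve and multidimensional ZZ recovery replace both):

    import numpy as np

    def solve(x):                            # stand-in "solver": evaluation of a
        return np.sin(np.pi * x**2)          # known field; a real FEM solve goes here

    x = np.linspace(0.0, 1.0, 11)
    x_prev = u_prev = None
    for cycle in range(4):
        uh = solve(x)
        eta = zz_estimate_1d(x, uh)          # element indicators (Section 13.7.2 sketch)
        if u_prev is not None:               # the delta estimator discussed below:
            delta = np.max(np.abs(np.interp(x, x_prev, u_prev) - uh))
        x_prev, u_prev = x, uh
        # crude refinement: split the elements carrying the largest indicators,
        # aiming to reduce the estimated error each cycle as in [36]
        mids = (0.5 * (x[:-1] + x[1:]))[eta > 0.5 * eta.max()]
        x = np.sort(np.concatenate([x, mids]))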
Significantly, numerical experiments consistently show that ZZ is ordered [1], or "asymp-
totically exact" [36]. However, experience [47-56] demonstrates that the ZZ error estimator
is not dependably conservative. For example, [47] shows consistently unconservative estimates
for a turbulent shear layer; [48] shows consistently conservative estimates for turbulent flow
over a heated backstep, and, for a turbulent shear layer, estimates consistently (except for the coarsest
grid) unconservative for velocities but consistently conservative for turbulent diffusivities and
temperatures. This lack of dependable conservatism is not a criticism, only an observation; the
same is true for Richardson Extrapolation [1]. But it does suggest the need for a factor of safety
Fs applied to ZZ, whether used alone (in a single-grid calculation) or within an adaptive grid
simulation, to calculate an uncertainty. In the solution-adaptive work [35, 36, 42-56], the effi-
ciency index (or effectivity index [41], defined as the error estimate divided by the true error) tends to unity
asymptotically. (This is likewise true for the GCI.) The Fs determined by empirical correlations
is more conservative asymptotically. However, at any particular resolution, some Fs > 1 is still
necessary, no matter how accurate the calculation. This is especially obvious when the ZZ
estimator is always nonconservative in a grid sequence. Clearly, this corresponds to an uncer-
tainty worse than 50%, regardless of the accuracy. Note again that uncertainty and accuracy are
distinct concepts.
The ZZ approach also allows error estimates to be made directly for parameter uncertainty
values [53]; as might be expected, these have larger percentage errors than the primary quantities.
Also note that the ZZ estimators are not as reliable near boundaries. This does not, of course,
imply that the FEM itself is necessarily less accurate near boundaries, only that the dependability
of the error estimator is diminished near boundaries. More seriously, the ZZ inaccuracy near
boundaries might lead to inadequate mesh adaptation there, and thus to diminished accuracy.
This also occurs in hyperbolic problems of interface tracking, where the local upwinding or other
smoothing algorithms can misdirect the ZZ estimator into inadequate resolution. However, this
shortcoming would not appear to be unique to ZZ adaptation.
Evaluation of an adequate factor of safety Fs for a solution-adaptive mesh sequence appears
to be more fuzzy than the GCI experience. Examination of [47-56] shows that the particular
adaptive grid strategy employed is so effective that the finest grid resolutions always correspond
to a required Fs < 1.25. (All dependent variables contribute to the error, and adaptivity is based
on the minimum over all variables of the mesh size predicted. The examples cited here use
primarily 7-node triangular elements.) At the other extreme, when the coarse grids are also
considered for the results of [35] (which used an earlier version of the adaptive algorithm), the
required Fs is sometimes above 3. Considering all the results in [47-56], even if we disregard

mesh resolutions √N < 20 (i.e., roughly equivalent to a 20 × 20 mesh), we still find
that Fs = 1.25 gives between 5 and 10% nonconservative estimates. The small sample and the
restriction to one particularly effective solution-adaptive method make the determination of Fs, and
perhaps the simple factor-of-safety approach itself, questionable. Until something better is developed,
we still recommend Fs = 1.25, with the understanding that it is a very rough value but that some
Fs > 1 is generally required.
On the other hand, the solution-adaptive ZZ approach offers another error estimator: the
difference δ between the last two meshes in the adaptive cycle. The work of [36] for a limited
set of problems indicates that this adaptive grid-convergence error estimator is consistently con-
servative, unlike the single-grid ZZ estimators that drive the adaptive process. It is not known
how this would correlate with a U95. (Perhaps it is more conservative than 95%.) It is also
certain that this δ would not be a reliably conservative estimator for just any adaptive scheme,
e.g., feature adaptation such as is available in many software packages.
The performance of either the ZZ itself or of δ surely depends on the selection of the grid-
adaptivity level used, i.e., in [36] reducing the estimated error by a factor of two in each cycle. A
smaller adaptivity factor would slow the convergence rate on successive grids, making the error
estimator less conservative. Neither simple feature adaptation nor redistribution refinement pro-
portional to solution gradients or curvatures would dependably give an ordered error estimator.
But both theory and computational experiments indicate that the performance is not restricted
to the seven-node triangular element formulation.
We strongly recommend such an adaptive verification approach, with the addition of a factor
of safety, for steady-state problems, provided that grid convergence is monitored to establish
that the grids are in the asymptotic regime. This approach certainly avoids the difficulties of
multiple-grid generation of systematically refined grids, especially when unstructured and/or
multiblock grids are appropriate. Of course, this approach is applicable only to a specific class
of algorithms, but once implemented, the process of (global) error estimation and Calculation
Verification becomes relatively robust and painless.

13.10 FUTURE WORK: THE Fs : U95 CORRELATION FOR ANY ERROR
ESTIMATOR

Continual testing of the adequacy of the correlation of Fs = 1.25 with U95 is difficult because the
use of multiple-grid sequences requires a better estimate of the exact solution. The comparisons
for realistic problems have often used the best solution available, e.g., the RE solution on the
finest grid. This reference solution is not "exact," as acknowledged by the authors even when
they use the abusive terminology "exact solution." Comparisons of other grid solutions and
evaluation of U95 become corrupted for the finer grid sequences.
A better approach is to generate realistic exact solutions using MMS (Section 13.4). These
can be obtained from simplified theoretical solutions made into exact solutions of the full equations
(e.g., Navier-Stokes) modified by source terms. For examples, see Pelletier and Roache [11].
With a suite of such exact solutions to realistic problems, the adequacy of Fs could be evaluated.
Does Fs = 3 correspond to U95 for two-grid studies using theoretical (formal) p? Does Fs = 1.25
correspond to U95 for three (or more) grid studies using observed p?
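As an illustration of the mechanics (our example; the equation, the chosen solution, and the use of the sympy symbolic package are assumptions for the sketch, not the choices of [11]), a manufactured solution and its source term for a one-dimensional steady advection-diffusion equation can be generated as:

    import sympy as sp

    # 1-D steady advection-diffusion:  u*dT/dx - k*d2T/dx2 = Q(x)
    x, u, k = sp.symbols('x u k', positive=True)
    T = sp.exp(x) * sp.cos(3 * x)              # chosen manufactured "exact" solution
    Q = sp.simplify(u * sp.diff(T, x) - k * sp.diff(T, x, 2))
    q_func = sp.lambdify((x, u, k), Q)         # source-term callable for the solver
    # the code under test, run with source q_func and boundary values taken
    # from T, must reproduce T at its formal order on systematically refined grids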
A similar Fs approach would be applicable to estimates of non-Δ errors such as finite domain
size [1, Section 6.10]. It was shown that the effect on lift and drag for transonic airfoils of a finite
distance LB to the outflow boundary can be ordered in 1/LB, so that extrapolation to the limit

of LB → ∞ can be accomplished. However, the value of Fs that correlates with a U95 for this
error estimate would have to be ascertained in a separate study.

13.11 VALIDATION

Once Code Verification and Calculation Verification have been performed, one may proceed
to Validation by comparison with experiments. The danger of comparing directly with experi-
ments without first performing Code Verification is well demonstrated in examples given in [1,
Ch. 10, especially Sections 10.5 and 10.10]. This practice is most dangerous when only one or
a few features or functionals are observed, rather than entire fields. Another striking and simple
example is reattachment length for the turbulent backstep (see Fig. 1 of [36] or Fig. 6 of [12]).
The combination of the most inaccurate numerical method used in the study and the worst
turbulence model produced an almost exact match with experiment at a coarse grid, implying a
false Validation of the poor turbulence model. A grid-refinement study clearly showed that the
agreement was due to cancellation of errors.
Although experimental practice is such that a reliable U95 is not always achieved, it is at least
honored in the breach: it is the stated goal. Validation involves calculation of discrepancies
between experimental data, including its uncertainty, and computational data, including its
uncertainty. We are in agreement with Coleman and Stern [57] that the metric for Validation
is |UEXP + UCOMP| and with Coleman [14] and Oberkampf [6, 7] that the acceptable level of
agreement (i.e., the pass/fail decision on the difference of the means) is not to be predetermined
during Validation. This contrasts with the position taken in [1], which, as noted in [6], is the
common practice. Rather, if the difference between computation and experiment is EOBS and the
total Validation uncertainty UV is the root-sum-square of UEXP and UCOMP, then the computation
is Validated to the level max{EOBS, UV} [14]. Note that Oberkampf [6, 7] has also suggested a
class of Validation metrics that weight the experimental uncertainty UEXP to include the higher
reliability of large replication experiments. The project-specific tolerance suggested in [58, 59,
Section 10.23 of 1] is best not considered part of Validation per se, which addresses science
or engineering-science issues, but of "certification" or "qualification," which is project-oriented
and the purview of engineering management [1].
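In code form, the comparison of [14, 57] reduces to a few lines (a minimal sketch; the numerical values are invented for the example):

    import math

    def validation_level(s_comp, d_exp, u_comp, u_exp):
        e_obs = s_comp - d_exp                  # computation - experiment difference
        u_v = math.sqrt(u_exp**2 + u_comp**2)   # root-sum-square Validation uncertainty
        return abs(e_obs), u_v, max(abs(e_obs), u_v)

    # invented values: computed Nu = 102.0 vs measured Nu = 100.0, with U95's of
    # 2.0 (computation, e.g., Fs times a grid-convergence error estimate) and
    # 3.0 (experiment); Validated to the level max(|EOBS|, UV) ~ 3.6
    e_obs, u_v, level = validation_level(102.0, 100.0, 2.0, 3.0)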
Nevertheless, we agree with Celik [60] that the description "Validated" applied to a code
should have some value inherent to the code, rather than just to the QA management process
that the code has undergone. Totally unreasonable Validation levels can be dismissed outright
as failure. But project needs for accuracy can vary widely. It is well recognized that it is not
practical to set a standard tolerance for acceptable Validation that would apply to all projects.
For example, a tolerance on local velocity components in a steady boundary layer might be slack
for a chemistry or heat transfer calculation, perhaps not even sensitive to a sign error in the wall-
normal component, whereas the same steady boundary-layer computation would require high
accuracy to be used in a boundary-layer stability calculation or in an optical path calculation.
If we cannot set a universal tolerance for Validation, it follows that we cannot set a universal
tolerance for Invalidation. However, it does not make sense to base the Validation metric on
|UEXP + UCOMP| when UEXP is a U95 but UCOMP is a U50, i.e., merely an error estimate.
While this ambiguity in terminology probably will not affect reporting of results in research
journals, it will impact contract statements of work (e.g., suppose a contract calls for use of a
"validated code") and regulatory requirements. It is well to be forewarned of possible pitfalls,
and to read the fine print carefully.
A recent note by Coleman [61] presents thoughtful insights on the definitions, metrics, assess-
ments, and interplay between Verification and Validation; it is highly recommended.
Validation is considered more thoroughly in Chapter 14.

NOMENCLATURE

α            coefficient in error expansion
Δ            discretization measure
δ            difference in solutions from two grids
σ            standard deviation
AAE          auxiliary algebraic evaluations (error estimators)
B            boundary condition function
D            dimensionality
Ee           error estimator
E1           error estimator from RE
FDM          finite-difference method
FEM          finite-element method
FVM          finite-volume method
Fs           factor of safety
Fs,min       minimum Fs
f            discretized dependent variable or functional
GCI          grid-convergence index
k            grid index = 1, 2, or 3
L, L'        operators
M            manufactured solution
Ngrid        number of grids in a sequence
NG           Ngrid
N1, N2       number of elements in grids 1, 2
p            order of convergence (convergence rate)
Q            source term
q            see Eq. (13.8b)
R            ratio indicative of apparent convergence type
r            grid-refinement ratio
RE           (generalized) Richardson Extrapolation
S            experimental or computational solution value
S            least-squares functional
s            see Eq. (13.8b)
SUPG         streamline upwind Petrov-Galerkin
U            uncertainty as defined in Eq. (13.6)
UCOMP        uncertainty of computations
UEXP         uncertainty of experiments
U95, U50, ...  uncertainty value for 95%, 50%, ... confidence
U1           uncertainty for the fine grid
u            continuum dependent variable
T            experimental or computational true value
t            time
x, y, z      spatial variables
ZZ           Zhu-Zienkiewicz error estimators

Subscripts
∞            limit value for Δ → 0
1, 2, 3      grids in sequence fine, medium, coarse
21, 32       from grids 2 to 1, 3 to 2

REFERENCES

1. P. J. Roache, Verification and Validation in Computational Science and Engineering, Hermosa, Albuquerque, NM, 1998.
2. P. J. Roache, Error Bars for CFD, AIAA Paper 2003-0408, AIAA 41st Aerospace Sciences Meeting, Reno, Nevada, January 2003.
3. J. C. Helton, D. R. Anderson, ..., P. J. Roache, et al., Uncertainty and Sensitivity Analysis Results Obtained in the 1992 Performance Assessment for the Waste Isolation Pilot Plant, Reliability Engineering and System Safety, 51, 53-100 (1996).
4. D. Pelletier, E. Turgeon, D. Lacasse, and J. Borggaard, Adaptivity, Sensitivity, and Uncertainty: Toward Standards of Good Practice in Computational Fluid Dynamics, AIAA J., 41, 1925-1933 (2003).
5. AIAA, Guide for the Verification and Validation of Computational Fluid Dynamics Simulations, AIAA G-077-1998.
6. W. L. Oberkampf, T. G. Trucano, and C. Hirsch, Verification, Validation, and Predictive Capability in Computational Engineering and Physics, Session III, Verification and Validation for Modeling and Simulation in Computational Science and Engineering Applications, Foundations for Verification and Validation in the 21st Century Workshop, Johns Hopkins University/Applied Physics Laboratory, Laurel, MD, 22-23 October 2002.
7. W. L. Oberkampf and T. G. Trucano, Verification and Validation in Computational Fluid Dynamics, Progr. Aerospace Sci., 38, 209-272 (2002).
8. P. J. Roache, Quantification of Uncertainty in Computational Fluid Dynamics, Annu. Rev. Fluid Mech., 29, 123-160 (1997).
9. P. J. Roache, Code Verification by the Method of Manufactured Solutions, ASME J. Fluids Eng., 124, 4-10 (2002).
10. P. Knupp and K. Salari, Verification of Computer Codes in Computational Science and Engineering, CRC Press, Boca Raton, FL, 2003.
11. D. Pelletier and P. J. Roache, CFD Code Verification and the Method of Manufactured Solutions, CFD 2002, 10th Annual Conference of the CFD Society of Canada, Windsor, Ontario, Canada, 9-11 June 2002.
12. E. Turgeon and D. Pelletier, Verification and Validation in CFD Using an Adaptive Finite-element Method, CASI, 48, 219-231 (2002).
13. P. J. Roache, Building PDE Codes to be Verifiable and Validatable, Computing in Science and Engineering, Special Issue on Verification and Validation, Sept./Oct. 2004, 30-38.
14. H. W. Coleman, Some Observations on Uncertainties and the Verification and Validation of Simulations, ASME J. Fluids Eng., 125, 733-735 (2003).
15. P. J. Roache, A Method for Uniform Reporting of Grid Refinement Studies, in I. Celik, C. J. Chen, P. J. Roache, and G. Scheurer (eds.), ASME FED-Vol. 158, Quantification of Uncertainty in Computational Fluid Dynamics, ASME Fluids Engineering Division Summer Meeting, Washington, DC, pp. 109-120, 20-24 June 1993.
16. P. J. Roache, A Method for Uniform Reporting of Grid Refinement Studies, Proc. AIAA 11th Computational Fluid Dynamics Conference, Part 2, Orlando, FL, pp. 1057-1058, 6-9 July 1993.
17. P. J. Roache, Recent Contributions to Verification and Validation Methodology, Proc. 5th World Congress on Computational Mechanics, Vienna, 7-12 July 2002.
18. P. J. Roache, Conservatism of the GCI in Finite Volume Computations on Steady State Fluid Flow and Heat Transfer, ASME J. Fluids Eng., 125, 731-732 (2003).
19. C. J. Freitas, U. Ghia, I. Celik, H. Coleman, P. Raad, and P. J. Roache, ASME's Quest to Quantify Numerical Uncertainty, AIAA Paper 2003-0627, AIAA 41st Aerospace Sciences Meeting, Reno, Nevada, January 2003.
20. L. Eça and M. Hoekstra, An Evaluation of Verification Procedures for Computational Fluid Dynamics, IST Report D72-7, Instituto Superior Tecnico (Lisbon), June 2000.

21. P. J. Roache, Fundamentals of Computational Fluid Dynamics, Chapter 14, Hermosa, Albuquerque, NM, 1998.
22. R. V. Wilson, F. Stern, H. W. Coleman, and E. G. Paterson, Comprehensive Approach to Verification and Validation of CFD Simulations, Part 2: Application for RANS Simulation of a Cargo/Container Ship, ASME J. Fluids Eng., 123, 803-810 (2001).
23. R. V. Wilson and F. Stern, Verification and Validation for RANS Simulation of a Naval Surface Combatant, AIAA Paper 2002-0904, AIAA 40th Aerospace Sciences Meeting, Reno, Nevada, 14-17 January 2002.
24. C. J. Roy, Grid Convergence Error Analysis for Mixed-Order Numerical Schemes, AIAA Paper 2001-2606, Anaheim, June 2001.
25. F. Stern, R. V. Wilson, H. W. Coleman, and E. G. Paterson, Comprehensive Approach to Verification and Validation of CFD Simulations, Part 1: Methodology and Procedures, ASME J. Fluids Eng., 123, 793-802 (2001).
26. H. W. Coleman, F. Stern, A. Di Mascio, and E. Campana, The Problem with Oscillatory Behavior in Grid Convergence Studies, ASME J. Fluids Eng., 123, 438-439 (2001).
27. L. Eça and M. Hoekstra, Verification Procedures for Computational Fluid Dynamics on Trial, IST Report D72-14, Instituto Superior Tecnico (Lisbon), July 2002.
28. M. Hoekstra and L. Eça, An Example of Error Quantification of Ship-related CFD Results, Maritime Research Institute Netherlands, 7th Numerical Ship Hydrodynamics Conference, Nantes, July 1999.
29. M. Hoekstra and L. Eça, An Example of Error Quantification of Ship-related CFD Results, Maritime Research Institute Netherlands, 2000.
30. M. Hoekstra, L. Eça, J. Windt, and H. Raven, Viscous Flow Calculations for KVLCC2 and KCS Models using the PARNASSOS Code, Proc. Gothenburg 2000, A Workshop on Numerical Ship Hydrodynamics, Gothenburg, Sweden.
31. L. Eça and M. Hoekstra, On the Application of Verification Procedures in Computational Fluid Dynamics, 2nd MARNET Workshop, Maritime Research Institute Netherlands, 2000.
32. L. Eça and M. Hoekstra, An Evaluation of Verification Procedures for CFD Algorithms, Proc. 24th Symposium on Naval Hydrodynamics, Fukuoka, Japan, 8-13 July 2002.
33. H. C. Raven, M. Hoekstra, and L. Eça, A Discussion of Procedures for CFD Uncertainty Analysis, MARIN Report 17678-1-RD, Maritime Research Institute Netherlands, October 2002. www.marin.nl/publications/pg_resistance.html.
34. J. Cadafalch, C. D. Perez-Segarra, R. Consul, and A. Oliva, Verification of Finite Volume Computations on Steady State Fluid Flow and Heat Transfer, ASME J. Fluids Eng., 124, 11-21 (2002).
35. D. Pelletier and L. Ignat, On the Accuracy of the Grid Convergence Index and the Zhu-Zienkiewicz Error Estimator, in R. W. Johnson and E. D. Hughes (eds.), Joint JSME-ASME Fluid Mechanics Meeting, Quantification of Uncertainty in Computational Fluid Dynamics-1995, ASME FED Vol. 213, Hilton Head, South Carolina, pp. 31-36, 14-18 August 1995.
36. E. Turgeon, D. Pelletier, and L. Ignat, Effects of Adaptivity on Finite Element Schemes for Turbulent Heat Transfer and Flow Predictions, Numer. Heat Transfer, Part A, 38, 847-868 (2000).
37. L. Larsson, F. Stern, and V. Bertram (eds.), Proc. Gothenburg 2000, A Workshop on Numerical Ship Hydrodynamics, Gothenburg, Sweden, 2000.
38. D. W. Zingg, Viscous Airfoil Computations Using Richardson Extrapolation, AIAA Paper 91-1559-CP, AIAA 30th Aerospace Sciences Meeting, Reno, Nevada, 6-9 January 1992.
39. D. W. Zingg, Grid Studies for Thin-layer Navier-Stokes Computations of Airfoil Flowfields, AIAA J., 30, 2561-2564 (1993). See also AIAA Paper 92-0184.
40. I. Celik and G. Hu, Further Refinement and Benchmarking of a Single-grid Error Estimation Technique, AIAA Paper 2003-0628, 41st AIAA Aerospace Sciences Meeting, Reno, Nevada, 6-9 January 2003.
41. M. Ainsworth and J. T. Oden, A Posteriori Error Estimation in Finite Element Analysis, Wiley, New York, 2000.

42. D. Pelletier and J.-Y. Trepanier, Implementation of Error Analysis and Norms to Computational Fluid Dynamics Applications, Parts 1-5, Project C.D.T. P2223, C.D.T., Ecole Polytechnique de Montreal, July 1997.
43. F. Ilinca, Methodes d'Elements Finis Adaptatives pour les Ecoulements Turbulents, Thesis, Mechanical Engineering Department, Ecole Polytechnique de Montreal, 1996.
44. E. Turgeon, Application d'une Methode d'Elements Finis Adaptative aux Ecoulements Axisymetriques, Thesis, Mechanical Engineering Department, Ecole Polytechnique de Montreal, June 1997.
45. O. C. Zienkiewicz and J. Z. Zhu, The Superconvergent Patch Recovery and a Posteriori Error Estimates, Part 1: The Recovery Technique, Int. J. Numer. Methods Eng., 33, 1331-1364 (1992).
46. O. C. Zienkiewicz and J. Z. Zhu, The Superconvergent Patch Recovery and a Posteriori Error Estimates, Part 2: Error Estimates and Adaptivity, Int. J. Numer. Methods Eng., 33, 1365-1382 (1992).
47. F. Ilinca, D. Pelletier, and F. Arnoux-Guisse, An Adaptive Finite Element Scheme for Turbulent Free Shear Flows, Int. J. Comput. Fluid Dynamics, 8, 171-188 (1997).
48. L. Ignat, D. Pelletier, and F. Ilinca, Adaptive Computations of Turbulent Forced Convection, Numer. Heat Transfer, Part A, 34, 847-871 (1998).
49. F. Ilinca, J.-F. Hetu, and D. Pelletier, A Unified Finite Element Algorithm for Two-equation Models of Turbulence, Comput. Fluids, 27, 291-310 (1998).
50. F. Ilinca, L. Ignat, and D. Pelletier, Adaptive Finite Element Solution of Compressible Turbulent Flows, AIAA J., 36, 2187-2194 (1999).
51. L. Ignat, D. Pelletier, and F. Ilinca, A Universal Formulation of Two-equation Models for Adaptive Computation of Turbulent Flows, Comput. Methods Appl. Mech. Eng., 189, 1119-1139 (2000).
52. D. Lacasse, E. Turgeon, and D. Pelletier, Prediction of Turbulent Separated Flow in a Turnaround Duct Using Wall Functions and Adaptivity, Int. J. Comput. Fluid Dynamics, 15, 209-225 (2001).
53. E. Turgeon and D. Pelletier, A General Continuous Sensitivity Equation Formulation for Complex Flows, Numer. Heat Transfer, Part B, 42, 485-498 (2002).
54. E. Turgeon and D. Pelletier, Verification and Validation of Adaptive Finite Element Method for Impingement Heat Transfer, J. Thermophys. Heat Transfer, 15, 284-292 (2001).
55. E. Turgeon, D. Pelletier, and F. Ilinca, Compressible Heat Transfer Computations by an Adaptive Finite Element Method, Int. J. Thermal Sci., 41, 721-736 (2002).
56. D. Pelletier, E. Turgeon, and D. Tremblay, Verification and Validation of Impinging Round Jet Simulations Using an Adaptive FEM, Int. J. Numer. Methods Fluids, 44(7), 737-763 (2004).
57. H. W. Coleman and F. Stern, Uncertainties in CFD Code Validation, ASME J. Fluids Eng., 119, 795-803 (1997).
58. P. J. Roache, Discussion: Uncertainties in CFD Code Validation, ASME J. Fluids Eng., 120, 635-636 (1998).
59. H. W. Coleman and F. Stern, Authors' Closure, ASME J. Fluids Eng., 120, 635-636 (1998).
60. I. Celik, personal communication, 2003.
61. H. W. Coleman, Some Observations on Uncertainties and the Verification and Validation of a Simulation, ASME J. Fluids Eng., 125, 733-735 (2003).
