
Proceedings of the 2005 Winter Simulation Conference

M. E. Kuhl, N. M. Steiger, F. B. Armstrong, and J. A. Joines, eds.

KRIGING METAMODELING IN DISCRETE-EVENT SIMULATION: AN OVERVIEW

Wim C.M. Van Beers

Department of Information Systems and Management
Tilburg University
Postbox 90153, 5000 LE Tilburg, THE NETHERLANDS

ABSTRACT

Many simulation experiments require considerable computer time, so interpolation is needed for sensitivity analysis and optimization. The interpolating functions are 'metamodels' (or 'response surfaces') of the underlying simulation models. For sensitivity analysis and optimization, simulationists use different interpolation techniques (e.g. low-order polynomial regression or neural nets). This paper, however, focuses on Kriging interpolation. In the 1950s, D.G. Krige developed this technique for the mining industry. Currently, Kriging interpolation is frequently applied in Computer Aided Engineering. In discrete-event simulation, however, the use of Kriging has only just started. This paper discusses Kriging for sensitivity analysis in simulation, including methods to select an experimental design for Kriging interpolation.

1 INTRODUCTION

A primary goal of simulation is 'what-if' or sensitivity analysis: What happens to the outputs if inputs of the simulation model change? Therefore simulationists run a given simulation program—or computer code—for (say) n different combinations of the k simulation inputs and observe the outputs. (Most simulation models have multiple outputs, but in practice these outputs are analyzed per output type.) To analyze these input/output (I/O) data, classic analysis uses low-order regression metamodels; see Kleijnen (1998). A metamodel is an approximation of the I/O transformation implied by the underlying simulation program. (In certain disciplines, metamodels are also called response surfaces, compact models, emulators, etc.) Such a metamodel treats the simulation model as a black box; that is, the simulation model's I/O is observed, and the parameters of the metamodel are estimated. This black-box approach has the following advantages and disadvantages.

An advantage is that the metamodel can be applied to the output of all types of simulation models, either deterministic or random, either in steady state or in a transient state. A disadvantage is that it cannot benefit from the specific structure of the simulation model, so it may take more computer time compared with techniques such as perturbation analysis and score functions.

Metamodeling can also help in optimization and validation of a simulation model. This paper, however, does not discuss these two topics. Further, if the simulation model has hundreds of inputs, then special 'screening' designs are needed, discussed in Campolongo, Kleijnen, and Andres (2000). The examples in this paper, however, limit the number of inputs to one or two.

Whereas polynomial-regression metamodels have been applied extensively in discrete-event simulation (such as queueing simulation), Kriging has hardly been applied to random simulation. However, in deterministic simulation (applied in many engineering disciplines; see for example De Geest et al. 1999), Kriging has been applied frequently since the pioneering article by Sacks et al. (1989). In such simulation, Kriging is attractive because it can ensure that the metamodel's prediction has exactly the same value as the observed simulation output. In random simulation, however, this Kriging property may not be so desirable, since the observed (average) value is only an estimate of the true, expected simulation output.

Note that several types of random simulation may be distinguished:

1. Deterministic simulation with randomly sampled inputs. For example, in investment analysis the cash flow development over time can be computed through a spreadsheet such as Excel. Next, the random values of inputs—such as the cash flow growth rate—are sampled by means of either Monte Carlo or Latin Hypercube Sampling (LHS) through an add-on such as @Risk or Crystal Ball; see Van Groenendaal and Kleijnen (1997).
2. Discrete-event simulation. For example, classic queueing simulation is applied in logistics and telecommunications; see Van Beers and Kleijnen (2003).


Authorized licensed use limited to: Indian Institute of Technology- Goa. Downloaded on August 13,2021 at 05:28:10 UTC from IEEE Xplore. Restrictions apply.
3. Combined continuous/discrete-event simulation. For example, simulation of nuclear waste disposal represents the physical and chemical processes through deterministic non-linear difference equations and models the human interventions as discrete events; see Kleijnen and Helton (1999).

The remainder of this paper is organized as follows. Subsection 2.1 sketches the history of Kriging and its application in geology and in simulation. Subsection 2.2 describes the basics of Kriging and gives the formal Kriging model. Section 3 discusses classic designs for Kriging and mentions criteria for measuring their performance. Subsection 3.1 treats customized designs for Kriging in deterministic simulation, whereas subsection 3.2 treats customized designs for random simulation. Both subsections demonstrate the performance of the customized designs through two academic simulation models. Section 4 presents conclusions and topics for future research.

2 KRIGING

2.1 History of Kriging

In the 1950s, the South African mining engineer D.G. Krige (born in 1919) devised an interpolation method to determine true ore bodies, based on samples. The basic idea is that these predictions are weighted averages of the observed outputs, where the weights depend on the distances between the input location to be predicted and the input locations already observed. The weights are chosen so as to minimize the prediction variance; i.e., the weights should provide a Best Linear Unbiased Estimator (BLUE) of the output value for a given input. Therefore, Kriging is also called Optimal Interpolation.

The dependence of the interpolation weights on the distances between the inputs was mathematically formalized by the French mathematician Georges Matheron (1930-2000) in his monumental 'Traité de géostatistique appliquée' (1962). He introduced a function, which he called a variogram, to describe the variance of the difference between two observations. The variogram is the cornerstone of Kriging. Hence, accurate estimation of the variogram, based on the observed data, is essential. Journel and Huijbregts (1978, pp. 161-195) present various parametric variogram models. The values of their parameters are obtained by either Weighted Least Squares (WLS) or Maximum Likelihood Estimation (MLE); see Cressie (1993).

So Kriging originated in geostatistics to answer concrete questions in the gold mining industry: drilling for ore—deep under the ground—is expensive, so efficient prediction methods are necessary. Later on, Kriging was successfully introduced into deterministic simulation by Sacks et al. (1989). For example, Kriging is nowadays often applied in Computer Aided Engineering (CAE). Van Beers and Kleijnen (2003) introduce Kriging interpolation into the area of random simulation.

2.2 Formal Model for Kriging

A random process Z(·) can be described by {Z(s): s ∈ D}, where D is a fixed subset of R^d and Z(s) is a random function at location s ∈ D; see Cressie (1993, p. 52).

There are several types of Kriging, but this paper limits itself to Ordinary Kriging, which makes the following two assumptions:

1. The model assumption is that the random process consists of a constant μ and an error term δ(s):

   Z(s) = μ + δ(s) with s ∈ D, μ ∈ R.

2. The predictor assumption is that the predictor for the point s_0—denoted by p(Z(s_0))—is a weighted linear function of all the observed output data:

   p(Z(s_0)) = Σ_{i=1}^{n} λ_i Z(s_i) with Σ_{i=1}^{n} λ_i = 1.  (1)

To select the weights λ_i in (1), the criterion is minimal mean-squared prediction error (MSE), defined as

   σ_e² = E[(Z(s_0) − p(Z(s_0)))²].  (2)

Substituting the variogram, defined as

   2γ(h) = var[Z(s + h) − Z(s)],

into (2) gives the optimal weights λ_1, …, λ_n:

   λ′ = (γ + 1 (1 − 1′Γ⁻¹γ) / (1′Γ⁻¹1))′ Γ⁻¹,  (3)

where γ denotes the vector of (co)variances (γ(s_0 − s_1), …, γ(s_0 − s_n)), Γ denotes the n × n matrix whose (i, j)th element is γ(s_i − s_j), and 1 = (1, …, 1)′ is the vector of ones; also see Cressie (1993, p. 122).

Note that these optimal Kriging weights λ_i depend on the specific point s_0 that is to be predicted, whereas linear-regression metamodels use fixed estimated parameters (say) β̂ for each s_0 to be predicted.


However, in (3) γ(h) is unknown. The usual estimator is

   2γ̂(h) = (1 / |N(h)|) Σ_{N(h)} (Z(s_i) − Z(s_j))²

where N(h) = {(s_i, s_j): s_i − s_j = h; i, j = 1, …, n} and |N(h)| denotes the number of distinct pairs in N(h); see Matheron (1962).

3 DESIGNS FOR KRIGING

An experimental design is a set of n combinations of k factor values. These combinations are usually bounded by 'box' constraints: a_j ≤ x_j ≤ b_j with a_j, b_j ∈ R and j = 1, …, k. The set of all feasible combinations is called the experimental region (say) H. We suppose that H is a k-dimensional unit cube, after rescaling the original rectangular area.

Our goal is to find the 'best' design for Kriging predictions within H; the Kriging literature proposes several criteria (see Sacks et al. 1989, p. 414). Most of these criteria are based on the predictor's MSE (2). Most progress has been made for the IMSE (see Bates et al. 1996):

   IMSE = ∫_H MSE(Ŷ(x)) φ(x) dx  (4)

where MSE follows from minimizing (2), and φ(x) is a given weight function—usually assumed to be a constant.

To evaluate a design, Sacks et al. (1989, p. 416) compare the predictions with the known output values of a test set consisting of (say) N inputs. Assuming a constant φ(x) in (4), the IMSE can then be estimated by the Empirical IMSE (EIMSE):

   EIMSE = (1/N) Σ_{i=1}^{N} (ŷ(x_i) − y(x_i))².  (5)

Besides this EIMSE, we will also study the maximum MSE; that is, we also consider risk-averse users (also see Van Groenigen, 2000). So IMSE—defined in (4)—is replaced by

   MaxMSE = max_{x ∈ H} {MSE(Ŷ(x))}

and EIMSE in (5) by

   EMaxIMSE = max_{i ∈ {1, …, N}} {(ŷ(x_i) − y(x_i))²}.  (6)

The most popular design type for Kriging is Latin Hypercube Sampling (LHS). This type of design was introduced by McKay, Beckman, and Conover (1979) for deterministic simulation models. Those authors did not analyze the I/O data by Kriging (but they did assume I/O functions more complicated than the polynomial models in classic DOE). LHS offers flexible design sizes n (number of input combinations actually simulated) for any k (number of simulation inputs). LHS proceeds as follows; also see the example for k = 2 factors in Figure 1.

1. LHS divides each input range into n intervals of equal length, numbered from 1 to n (so the number of values per input can be much larger than in designs for low-order polynomials).
2. Next, LHS places these integers 1, …, n such that each integer appears exactly once in each row and each column of the design matrix.
3. Within each cell of the design matrix, the exact input value may be sampled uniformly. (Alternatively, these values may be placed systematically in the middle of each cell. In risk analysis, this uniform sampling may be replaced by sampling from some other distribution for the input values.)

Figure 1: A LHS Design for Two Factors and Four Scenarios

Because LHS implies randomness, its result may happen to be an outlier. For example, it might happen—with small probability—that two input factors have a correlation coefficient of −1 (all their values lie on the main diagonal of the design matrix). Therefore the LHS may be adjusted to become (nearly) orthogonal; see Ye (1998).

Classic designs simulate extreme scenarios—namely the corners of a k-dimensional square—whereas LHS has better space-filling properties; again see Figure 1. This space-filling property has inspired many statisticians to develop related designs. One type maximizes the minimum Euclidean distance between any two points in the k-dimensional experimental area. Other designs minimize the maximum distance. See Koehler and Owen (1996), Santner, Williams, and Notz (2003), and also Kleijnen et al. (2004).
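The classical variogram estimator 2γ̂(h) quoted at the start of this section is easily sketched in code. The example below uses hypothetical 1-D data on a regular grid, so that each lag h is matched by several pairs (the function name is ours):

```python
import numpy as np

def empirical_semivariogram(s, z, lags, tol=1e-9):
    """Matheron's estimator: 2*gamma_hat(h) averages (Z(s_i) - Z(s_j))^2 over
    all distinct pairs at lag h; halving gives the semivariogram gamma_hat(h)."""
    out = []
    for h in lags:
        sq = [(z[i] - z[j]) ** 2
              for i in range(len(s)) for j in range(i + 1, len(s))
              if abs(abs(s[i] - s[j]) - h) < tol]   # the pair set N(h)
        out.append(0.5 * np.mean(sq))
    return np.array(out)

# Hypothetical sample: four equidistant locations and their outputs.
s = np.array([0.0, 1.0, 2.0, 3.0])
z = np.array([0.0, 1.0, 0.0, 2.0])
g = empirical_semivariogram(s, z, lags=[1.0, 2.0])   # gamma_hat at lags 1 and 2
```

In practice, a parametric variogram model (spherical, exponential, etc.) would then be fitted to these point estimates by WLS or MLE, as noted in subsection 2.1.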

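The three LHS steps above can be sketched in a few lines (a hypothetical helper; the row/column property of step 2 follows from using one random permutation of the interval indices per input):

```python
import numpy as np

def latin_hypercube(n, k, rng, midpoint=False):
    """LHS in [0, 1]^k: cut each input range into n equal intervals (step 1),
    use each interval exactly once per input via a permutation (step 2), and
    sample uniformly within the chosen cell, or take its middle (step 3)."""
    design = np.empty((n, k))
    for j in range(k):
        cells = rng.permutation(n)                    # interval index per scenario
        offset = 0.5 if midpoint else rng.random(n)   # position inside the cell
        design[:, j] = (cells + offset) / n
    return design

rng = np.random.default_rng(1)
X = latin_hypercube(4, 2, rng)   # n = 4 scenarios for k = 2 factors, as in Figure 1
```

Projected onto any single input, the four points cover all four intervals exactly once, which is the defining property of LHS.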
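Both empirical criteria (5) and (6) are one-line computations over a test set; a sketch with hypothetical predictions:

```python
import numpy as np

def eimse(y_hat, y_true):
    """Empirical IMSE, Eq. (5): mean squared prediction error over the test set."""
    return ((np.asarray(y_hat) - np.asarray(y_true)) ** 2).mean()

def emaximse(y_hat, y_true):
    """Empirical counterpart of MaxMSE, Eq. (6): the largest squared error."""
    return ((np.asarray(y_hat) - np.asarray(y_true)) ** 2).max()

# Hypothetical predictions yh versus known test-set outputs yt.
yh = [1.0, 2.5, 4.0]
yt = [1.0, 2.0, 3.0]
avg_err, worst_err = eimse(yh, yt), emaximse(yh, yt)
```

The risk-averse user mentioned above would rank competing designs by worst_err rather than avg_err.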

3.1 Customized Sequential Designs for Deterministic Simulation

Kleijnen and Van Beers (2004) derive designs that are customized; that is, they are not generic designs (such as 2^(k−p) designs or LHS). More precisely, these customized designs account for the specific input/output function of the particular simulation model at hand. This customization is achieved through cross-validation and jackknifing. Furthermore, these designs are sequential, because sequential procedures are known to be more 'efficient'; see, for example, Ghosh and Sen (1991) and Park et al. (2002).

The procedure starts with a 'small' pilot design of size (say) n_0. To avoid extrapolation, the procedure first selects the 2^k vertices of H. Besides these vertices, the procedure selects some extra points—space-filling—to estimate the variogram. After selecting and simulating the pilot design, the procedure selects (say) c candidate inputs—again, space-filling—without actually running the simulation model for these candidates. To find the 'winning' candidate, the procedure estimates the variance of the predicted output at each candidate input. For this estimation, the procedure uses cross-validation and jackknifing. Figure 2 demonstrates the procedure for a fourth-order polynomial simulation model.

Figure 2: Fourth-Order Polynomial Example, including Four Pilot Observations and Three Candidate Inputs with Predictions Based on Cross-Validation, where (−i) Denotes that Observation i is Dropped in the Cross-Validation

After selecting and simulating the winning candidate, the procedure adds the new observation to the current design. With respect to the augmented design, the procedure selects a new set of candidates. The sequential procedure—selecting a set of candidates, estimating the variance of the predicted output, simulating the winning candidate, and augmenting the design—is stopped when a specified criterion is reached. Kleijnen and Van Beers (2004) use the Successive Relative Improvement (SRI) after n observations:

   SRI_n = (max_j {s̃_j²}_n − max_j {s̃_j²}_{n−1}) / max_j {s̃_j²}_{n−1}

where max_j {s̃_j²}_n denotes the maximum jackknife variance over the j = 1, …, c candidates after n evaluations. Note that there are several stopping rules; for example, Sasena et al. (2002) use the Generalized Expected Improvement function, which selects inputs that have high model inaccuracy. They stop their tests—rather arbitrarily—after 100 calls of this function, whereas Schonlau (1997) proposes stopping once the ratio of the expected improvement becomes sufficiently small, e.g. 0.01.

Kleijnen and Van Beers (2004) test their Customized Sequential Designs (CSD) through two academic applications:

1. the hyperbolic I/O function y = x / (1 − x) with 0 < x < 1;
2. the fourth-order polynomial I/O function y = −0.0579x⁴ + 1.11x³ − 6.845x² + 14.1071x + 2 with 0 ≤ x ≤ 10.

To quantify the CSD's performance, they use a test set consisting of 32 true test values, and compare the Kriging prediction error for the CSD with the prediction error for a LHS design of the same size. Both EIMSE and EMaxIMSE have substantially smaller values for the CSD than for the LHS designs. Moreover, both examples show that the CSD procedure simulates relatively many input combinations in those sub-areas that have interesting I/O behavior. Figure 3 shows the final design for the fourth-order polynomial example with SRI < 1% and n = 24 observations.

Figure 3: Final Design for Fourth-Order Polynomial Example with n = 24 Observations (legend: sequential design, model, initial data)

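The loop just described—candidates, cross-validation variance, winner, SRI stop—can be sketched as follows. This is only a schematic: ordinary piecewise-linear interpolation stands in for the Kriging predictor, the leave-one-out spread stands in for the jackknife variance, and all names are ours.

```python
import numpy as np

def cv_variance(x, y, c):
    """Spread of cross-validation predictions at candidate c: re-predict with
    each interior observation i dropped in turn (the (-i) predictors)."""
    preds = []
    for i in range(1, len(x) - 1):        # keep both endpoints to avoid extrapolation
        keep = np.arange(len(x)) != i
        preds.append(np.interp(c, x[keep], y[keep]))
    return np.var(preds)

def customized_sequential_design(f, a, b, n0=4, n_cand=16, sri_stop=0.01, n_max=24):
    """Start from a small space-filling pilot design, then repeatedly simulate
    the candidate whose predicted output is most uncertain, until the
    Successive Relative Improvement (SRI) falls below sri_stop."""
    x = np.linspace(a, b, n0)             # pilot design, including both vertices
    y = f(x)
    prev = None
    while len(x) < n_max:
        cand = np.linspace(a, b, n_cand + 2)[1:-1]   # space-filling candidates
        cand = cand[~np.isin(cand, x)]               # skip already-simulated inputs
        if cand.size == 0:
            break
        v = np.array([cv_variance(x, y, c) for c in cand])
        if prev is not None and prev > 0 and abs(v.max() - prev) / prev < sri_stop:
            break                                     # SRI_n is small: stop
        prev = v.max()
        x = np.sort(np.append(x, cand[v.argmax()]))   # simulate the winning candidate
        y = f(x)                                      # (deterministic simulation)
    return x, y

# The hyperbolic test function of application 1, on a hypothetical range.
x, y = customized_sequential_design(lambda t: t / (1.0 - t), 0.0, 0.9)
```

On this function the procedure tends to add points toward the steep right-hand end, mirroring the observation above that the CSD concentrates on sub-areas with interesting I/O behavior.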

3.2 Designs for Random Simulation

To select an experimental design for interpolation in random simulation, especially discrete-event simulation, Van Beers and Kleijnen (2004) propose a new method. Unlike LHS, the method accounts for the specifics of the model's I/O function. More precisely, the method is customized. To estimate the prediction uncertainty at unobserved input combinations—caused by the noise and the shape of the I/O function—the method uses bootstrapping; i.e., for each scenario already simulated, the outputs are re-sampled. (For bootstrapping in general, see the classic textbook by Efron and Tibshirani 1993; for bootstrapping in the validation of regression metamodels in simulation, see Kleijnen and Deflandre 2005.)

Similar to the CSD for deterministic simulation, the procedure starts with a small pilot design with input combinations equally spread over the experimental area. Van Beers and Kleijnen (2004) use a maximin design, which maximizes the minimum distance between any two points of the design; see Koehler and Owen (1996, p. 288).

Next, for each input value x_i of the pilot design, the procedure simulates (say) m_i IID replicates until a predefined accuracy level for the estimated output ȳ_i is reached. Then, per input x_i, the procedure bootstraps m_i outputs; i.e., the m_i observed outputs per input x_i are re-sampled with replacement. Further, the procedure computes the bootstrap averages ȳ*_i(m_i) per input x_i. (The superscript * indicates a bootstrapped value, as is traditional in the bootstrap literature.) The re-sampling per input x_i is repeated (say) B times (B is called the bootstrap sample size). Now, the B bootstrapped designs are used to compute B Kriging predictors for the expected outputs of a new set of (say) n_c candidates.

To quantify the prediction uncertainty, the procedure computes the bootstrap variance for each candidate:

   vâr(ŷ_j^c*) = (1 / (B − 1)) Σ_{b=1}^{B} (ŷ_{j;b}^c* − ȳ_j^c*)²  (7)

where ŷ_{j;b}^c* is the predicted value at candidate input x_j^c (j = 1, …, n_c), based on the bootstrapped I/O data (x_i, ȳ*_{i;b}(m_i)) (i = 1, …, n_0), and ȳ_j^c* = Σ_{b=1}^{B} ŷ_{j;b}^c* / B is the average of the B bootstrap predictions.

The candidate that has the largest bootstrap variance (7) is added to the current design and simulated until the desired accuracy is reached.

Van Beers and Kleijnen (2004) test the CSD through two classic academic simulation models, namely the M/M/1 model with one input—the traffic rate (say) ρ—and an (s, S) inventory model with two inputs—the reorder level s and the order-up-to level S. They compare the performance of the CSD with a LHS design of the same size.

For the M/M/1 example, Figure 4 displays simulation results for both the CSD and the LHS design. The stopping criterion is that n = 10 traffic rates have been simulated. The figure shows that LHS simulates fewer 'challenging' inputs, i.e., high traffic rates.

Figure 4: Two Designs for M/M/1 with 10 Traffic Rates ρ and Average Simulation Outputs ȳ (---: true I/O function; *: CSD output; O: LHS output)

Van Beers and Kleijnen (2004) use a test set with N = 32 equidistant traffic rates (Sacks et al. 1989 also use test sets to evaluate their procedure). They compare the Kriging predictions of the two designs with the 'true' outputs of the test set. Figure 5 illustrates the 32 predictions for the CSD and the LHS design.

Figure 5: 32 Predictions ŷ for the Test Set for M/M/1, for Two Designs (---: true I/O function; *: CSD prediction; O: LHS prediction)

To compare the performance of the CSD and the LHS design, they use the EIMSE criterion, defined in (5). However, the final numbers of replicates in the two designs may differ, so they calculate the corrected EIMSE:

   CEIMSE = C × (1/n_t) Σ_{i=1}^{n_t} (ŷ(x_i^t) − y(x_i^t))²,  (8)

where C is the ratio of the total number of replicates in the LHS design and in the CSD, n_t is the number of I/O combinations in the test set (so n_t = 32), and x_i^t is the ith input of the test set.
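The bootstrap selection step of this section can be sketched as follows; linear interpolation again stands in for the Kriging predictor, and the data are hypothetical (replicated outputs that grow noisier with the input, as waiting times do at high traffic rates in an M/M/1 queue):

```python
import numpy as np

def bootstrap_candidate_variances(x, reps, cand, B=200, rng=None):
    """Eq. (7): resample the m_i replicates per input x_i with replacement,
    build one predictor per bootstrapped data set, and take the variance of
    the B predictions at each candidate input (with the 1/(B-1) factor)."""
    rng = np.random.default_rng(0) if rng is None else rng
    preds = np.empty((B, len(cand)))
    for b in range(B):
        # bootstrap average y*_i(m_i) per input x_i
        ystar = np.array([rng.choice(r, size=len(r), replace=True).mean() for r in reps])
        preds[b] = np.interp(cand, x, ystar)   # stand-in for the Kriging predictor
    return preds.var(axis=0, ddof=1)

# Hypothetical replicated outputs per design point.
x = np.array([0.1, 0.4, 0.7, 0.9])
reps = [np.array([0.11, 0.12]), np.array([0.6, 0.7]),
        np.array([2.0, 2.6]), np.array([7.0, 11.0])]
cand = np.array([0.25, 0.55, 0.8])

v = bootstrap_candidate_variances(x, reps, cand)
next_input = cand[v.argmax()]   # the most uncertain candidate is simulated next
```

With these data the high-traffic candidate 0.8 wins by a wide margin, matching the observation in Figure 4 that the customized design spends its budget on the 'challenging' high traffic rates.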


It turns out that the CSD gives smaller CEIMSE values than the LHS designs.

To test their procedure for a model with two inputs, Van Beers and Kleijnen (2004) use the (s, S) features of Law and Kelton (2000)'s example 12.9. Law and Kelton use an equally spread design of 36 input combinations. They simulate five replicates for each of the 36 inputs. Based on these 180 I/O data, they fit a second-order polynomial regression model for the average monthly total costs R. They compare this model's predictions with the 'true' E(R) estimated from 10 replicates for each of 420 new and old combinations. Van Beers and Kleijnen, however, use the CSD to select 36 input combinations. They simulate each of their 36 inputs five times and fit a Kriging model to the 36 I/O data (implying 36 average outputs). They compare the Kriging predictions for the 420 inputs from the test set with the 420 'true' outputs. They find that the Kriging model gives substantially better CEIMSE and EMaxIMSE—defined in (8) and (6)—than the regression model.

Figure 6 shows a CSD for 16 input combinations for Law and Kelton's (s, S) example. Note that the procedure again selects more input combinations in the sub-area where the metamodel shows steep slopes.

Figure 6: I/O Simulation Data for (s, S) Inventory Model with 16 Scenarios

4 CONCLUSIONS AND FUTURE RESEARCH

For expensive simulation, it is important to find an efficient design for the experiments with the simulation model. Classic standard designs—such as 2^(k−p) or LHS designs—are general designs that do not account for the characteristics of the input/output (I/O) function that is implied by the simulation model at hand. As an alternative, a Customized Sequential Design (CSD) for metamodeling in simulation is derived. The CSD is sequential, because in general sequential procedures are more 'efficient' than fixed-sample procedures; tests confirmed that property. Moreover, the method generates a design that is specific for the given simulation model: it is customized (tailor-made). For deterministic simulation, this customization is achieved through cross-validation and jackknifing—two general statistical techniques. For that simulation type, the method is tested through two academic applications, namely a hyperbolic I/O function and a fourth-degree polynomial. For random simulation experiments, the customization uses bootstrapping—also a general statistical technique (related to jackknifing). The procedure is tested for this simulation type through two classic Operations Research/Management Science (OR/MS) applications, namely the M/M/1 queueing model and an (s, S) inventory management model. Tests showed that for both deterministic and random simulation, customized designs performed better than classic LHS designs with the same sample size. An interesting property of the procedure is that it simulates relatively many input combinations in those sub-areas that have interesting I/O behavior.

The main conclusions are summarized as follows:

• Kriging metamodels give more accurate predictions than low-order polynomial regression models do;
• Customized Sequential Designs for Kriging metamodels give smaller prediction errors than standard one-shot LHS designs of the same size.

REFERENCES

Bates, R.A., R.J. Buck, E. Riccomagno, and H.P. Wynn (1996), Experimental design and observation for large systems. Journal of the Royal Statistical Society 58 (1): 77-94.
Campolongo, F., J.P.C. Kleijnen, and T. Andres (2000), Screening methods. In: Sensitivity Analysis, edited by A. Saltelli, K. Chan, and E.M. Scott, 65-89. Chichester: Wiley.
Cressie, N.A.C. (1993), Statistics for Spatial Data. New York: Wiley.
De Geest, J., T. Dhaene, N. Fache, and D. De Zutter (1999), Adaptive CAD-model building algorithm for general planar microwave structures. IEEE Transactions on Microwave Theory and Techniques 47 (9): 1801-1809.
Efron, B. and R.J. Tibshirani (1993), An Introduction to the Bootstrap. New York: Chapman & Hall.
Ghosh, B.K. and P.K. Sen, editors (1991), Handbook of Sequential Analysis. New York: Marcel Dekker.
Journel, A.G. and C.J. Huijbregts (1978), Mining Geostatistics. London: Academic Press.
Kleijnen, J.P.C. (1998), Experimental design for sensitivity analysis, optimization, and validation of simulation models. In: Handbook of Simulation, edited by J. Banks, 173-223. New York: Wiley.
Kleijnen, J.P.C. and J. Helton (1999), Statistical analyses of scatter plots to identify important factors in large-scale simulations. Reliability Engineering and Systems Safety 65 (2): 147-197.
Kleijnen, J.P.C., S.M. Sanchez, T.W. Lucas, and T.M. Cioppa (2004), A user's guide to the brave new world of designing simulation experiments. Working paper. Available via http://center.kub.nl/staff/kleijnen/papers.html [accessed June 24, 2005].
Kleijnen, J.P.C. and W.C.M. van Beers (2004), Application-driven sequential designs for simulation experiments: Kriging metamodeling. Journal of the Operational Research Society 55: 876-883.
Kleijnen, J.P.C. and D. Deflandre (2005), Validation of regression metamodels in simulation: bootstrap approach. European Journal of Operational Research (in press).
Koehler, J.R. and A.B. Owen (1996), Computer experiments. In: Handbook of Statistics, vol. 13, edited by S. Ghosh and C.R. Rao, 261-308. Amsterdam: Elsevier.
Law, A.M. and W.D. Kelton (2000), Simulation Modeling and Analysis. 3rd edition. New York: McGraw-Hill.
Matheron, G. (1962), Traité de géostatistique appliquée. Mémoires du Bureau de Recherches Géologiques et Minières, 14: 57-59. Paris: Editions Technip.
McKay, M.D., R.J. Beckman, and W.J. Conover (1979), A comparison of three methods for selecting values of input variables in the analysis of output from a computer code. Technometrics 21 (2): 239-245 (reprinted in 2000: Technometrics 42 (1): 55-61).
Park, S., J.W. Fowler, G.T. Mackulak, J.B. Keats, and W.M. Carlyle (2002), D-optimal sequential experiments for generating a simulation-based cycle time-throughput curve. Operations Research 50 (6): 981-990.
Sacks, J., W.J. Welch, T.J. Mitchell, and H.P. Wynn (1989), Design and analysis of computer experiments. Statistical Science 4 (4): 409-435.
Santner, T.J., B.J. Williams, and W.I. Notz (2003), The Design and Analysis of Computer Experiments. New York: Springer-Verlag.
Sasena, M.J., P. Papalambros, and P. Goovaerts (2002), Exploration of metamodeling sampling criteria for constrained global optimization. Engineering Optimization 34 (3): 263-278.
Schonlau, M. (1997), Computer experiments and global optimization. Doctoral dissertation, Department of Statistics, University of Waterloo.
Van Beers, W.C.M. and J.P.C. Kleijnen (2003), Kriging for interpolation in random simulation. Journal of the Operational Research Society 54: 255-262.
Van Beers, W.C.M. and J.P.C. Kleijnen (2004), Customized sequential designs for random simulation experiments: Kriging metamodeling and bootstrapping. Available via http://center.kub.nl/staff/kleijnen/papers.html [accessed June 24, 2005].
Van Groenendaal, W.J.H. and J.P.C. Kleijnen (1997), On the assessment of economic risk: factorial design versus Monte Carlo methods. Reliability Engineering and Systems Safety 57: 91-102.
Van Groenigen, J.W. (2000), The influence of variogram parameters on optimal sampling schemes for mapping by Kriging. Geoderma 97: 223-236.
Ye, K.Q. (1998), Orthogonal column Latin hypercubes and their application in computer experiments. Journal of the American Statistical Association 93: 1430-1439.

AUTHOR BIOGRAPHY

WIM VAN BEERS is a researcher in the Department of Information Systems and Management of Tilburg University. His research topic is Kriging in simulation. His e-mail address is [email protected]
