
Proceedings of the 2000 Winter Simulation Conference

J. A. Joines, R. R. Barton, K. Kang, and P. A. Fishwick, eds.

EXPERIMENTAL DESIGN FOR SIMULATION

W. David Kelton

Department of Quantitative Analysis and Operations Management
College of Business Administration
University of Cincinnati
Cincinnati, OH 45221-0130, U.S.A.

ABSTRACT

This tutorial introduces some of the ideas, issues, challenges, solutions, and opportunities in deciding how to experiment with a simulation model to learn about its behavior. Careful planning, or designing, of simulation experiments is generally a great help, saving time and effort by providing efficient ways to estimate the effects of changes in the model's inputs on its outputs. Traditional experimental-design methods are discussed in the context of simulation experiments, as are the broader questions pertaining to planning computer-simulation experiments.

1 INTRODUCTION

The real meat of a simulation project is running your model(s) and trying to understand the results. To do so effectively, you need to plan ahead before doing the runs, since just trying different things to see what happens can be a very inefficient way of attempting to learn about your models' (and hopefully the systems') behaviors. Careful planning of how you're going to experiment with your model(s) will generally repay big dividends in terms of how effectively you learn about the system(s) and how you can exercise your model(s) further.

This tutorial looks at such experimental-design issues in the broad context of a simulation project. The term "experimental design" has specific connotations in its traditional interpretation, and I will mention some of these below, in Section 5. But I will also try to cover the issues of planning your simulations in a broader context, which considers the special challenges and opportunities you have when conducting a computer-based simulation experiment rather than a physical experiment. This includes questions of the overall purpose of the project, what the output performance measures should be, how you use the underlying random numbers, measuring how changes in the inputs might affect the outputs, and searching for some kind of optimal system configuration. Specific questions of this type might include:

• What model configurations should you run?
• How long should the runs be?
• How many runs should you make?
• How should you interpret and analyze the output?
• What's the most efficient way to make the runs?

These questions, among others, are what you deal with when trying to design simulation experiments.

My purpose in this tutorial is to call your attention to these issues and indicate in general terms how you can deal with them. I won't be going into great depth on a lot of technical details, but refer you instead to any of several texts on simulation that do, and to tutorials and reviews on this subject in this and recent Proceedings of the Winter Simulation Conference. General book-based references for this subject include chapter 12 of Law and Kelton (2000), chapter 11 of Kelton, Sadowski, and Sadowski (1998), Banks, Carson, and Nelson (1996), and Kleijnen (1998), all of which contain numerous references to other books and papers on this subject. Examples of application of some of these ideas can be found in Hood and Welch (1992, 1993) and Swain and Farrington (1994). This paper is an update of Kelton (1999), and parts of it are taken from Kelton (1997), which also contains further references and discussion on this and closely related subjects.

2 WHAT IS THE PURPOSE OF THE PROJECT?

Though it seems like pretty obvious advice, it might bear mentioning that you should be clear about what the ultimate purpose is of doing your simulation project in the first place. Depending on how this question is answered, you can be led to different ways of planning your experiments. Worse, failure to ask (and answer) the question of just what the point of your project is can often
leave you adrift without any organized way of carrying out your experiments.

For instance, if there is just one system of interest to analyze and understand, then there still could be questions like run length, the number of runs, allocation of random numbers, and interpretation of results, but there are no questions of which model configurations to run. Likewise, if there are just a few model configurations of interest, and they have been given to you (or are obvious), then the problem of experimental design is similar to the single-configuration situation.

However, if you are interested more generally in how changes in the inputs affect the outputs, then there clearly are questions of which configurations to run, as well as the questions mentioned in the previous paragraph. Likewise, if you're searching for a configuration of inputs that maximizes or minimizes some key output performance measure, you need to decide very carefully which configurations you'll run (and which ones you won't).

The reality is that often you can't be completely sure what your ultimate goals are until you get into it a bit. Often, your goals may change as you go along, generally becoming more ambitious as you work with your models and learn about their behavior. The good news is that as your goals become more ambitious, what you learned from your previous experiments can help you decide how to proceed with your future experiments.

3 WHAT ARE THE RELEVANT OUTPUT-PERFORMANCE MEASURES?

Most simulation software produces a lot of numerical output by default, and you can usually specify additional output that might not be automatically delivered. Much of this output measures traditional time-based quantities like time durations or counts of entities in various locations. Increasingly, though, economic-based measures like cost or value added are being made available, and are of wide interest. Planning ahead to make sure you get the output measures you need is obviously important if the runs are time-consuming to carry out.

One fundamental question relates to the time frame of your simulation runs. Sometimes there is a natural or obvious way to start the simulation, and an equally natural or obvious way to terminate it. For instance, a call center might be open from 8 A.M. to 8 P.M. but continue to operate as necessary after 8 P.M. to serve all calls on hold (in queue) at 8 P.M. In such a case, often called a terminating simulation, there is no design question about starting or stopping your simulation.

On the other hand, interest may be in the long-run (also called infinite-horizon) behavior of the system, in which case it is no longer clear how to start or stop the simulation (though it seems clear that the run length will have to be comparatively long). Continuing the call-center example, perhaps its hours are going to expand to 24 hours a day, seven days a week; in this case you would need a steady-state simulation to estimate the relevant performance measures.

Regardless of the time frame of the simulation, you have to decide what aspects of the model's outputs you want. In a stochastic simulation you'd really like to know all about the output distributions, but that's asking way too much in terms of the number and maybe length of the runs. So you usually have to settle for various summary measures of the output distributions. Traditionally, people have focused on estimating the expected value (or mean) of the output distribution, and this can be of great interest. For instance, knowing something about the average hourly production is obviously important.

But things other than means might be interesting as well, like the standard deviation of hourly production, or the probability that the machine utilization for the period of the simulation will be above 0.80. In another example you might observe the maximum length of the queue of parts in a buffer somewhere to plan the floor space; in this connection it might be more reasonable to seek a value (called a quantile) below which the maximum queue length will fall with probability, say, 0.95.

Even if you want just simple averages, the specifics can affect how your model is built. For instance, if you want just the time-average number of parts in a queue, you would need to track the length of this queue but not the times of entry of parts into the queue. However, if you want the average time parts spend in the queue, you do need to note their time of entry in order to compute their time in queue.

So think beforehand about precisely what you'd like to get out of your simulation; it's easier to ignore things you have than go back and get things you forgot.

4 HOW SHOULD YOU USE AND ALLOCATE THE UNDERLYING RANDOM NUMBERS?

Most simulations are stochastic, i.e., involve random inputs from some distribution to represent things like service times and interarrival times. Simulation software has facilities to generate observations from such distributions, which rely at root on a random-number generator churning out a sequence of values between 0 and 1 that are supposed to behave as though they are independent and uniformly distributed on the interval [0, 1]. Such generators are in fact fixed, recursive formulas that always give you the same sequence of "random" numbers in the same order (provided that you don't override the default seeds for these generators). The challenge in developing such generators is that they behave as intended, in a statistical sense, and that they have a long cycle length before they double back on themselves and repeat the same sequence over again.
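As a concrete (if dated) illustration of such a fixed recursion, here is a sketch of the classic "minimal standard" multiplicative congruential generator with a = 7^5 and m = 2^31 - 1. Production generators use different constants and far longer cycles, but the principle is the same: the same seed always replays exactly the same sequence.

```python
# Minimal-standard multiplicative LCG (a = 7**5, m = 2**31 - 1); a sketch,
# not a recommendation -- modern packages use longer-period generators.
A = 16807
M = 2**31 - 1

def lcg_stream(seed):
    """Yield uniforms on (0, 1) from the fixed recursion Z(i) = A * Z(i-1) mod M."""
    z = seed
    while True:
        z = (A * z) % M
        yield z / M

run1 = lcg_stream(12345)
run2 = lcg_stream(12345)              # same seed...
first = [next(run1) for _ in range(5)]
replay = [next(run2) for _ in range(5)]
assert first == replay                # ...so the same "random" numbers come out
assert all(0.0 < u < 1.0 for u in first)
```

It is exactly this replayability, via seeds and streams, that makes the deliberate re-use of random numbers discussed below possible.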


Obviously, it is important that a "good" random-number generator be used. And, from the experimental-design viewpoint, you can then dispense with the issue of randomizing experimental treatments to cases, which is often a thorny problem in physical experiments.

But with such controllable random-number generators, the possibility arises in computer-simulation experiments to control the basic randomness, which is a fundamentally different situation from what you encounter in physical experiments. Doing so carefully is one way of implementing what are known as variance-reduction techniques, which can sometimes sharpen the precision of your output estimators without having to do more simulating. The basic question in doing so is planning how you are going to allocate the underlying random numbers to generating the various random inputs to your models.

Perhaps the first thought along these lines that seems like a "good" idea is to ensure that all the random-number usage is independent within your models as well as between any alternative configurations you might run. This is certainly a statistically valid way to proceed, and is statistically the simplest approach. However, it might not be the most efficient approach, where "efficiency" could be interpreted in either its statistical sense (i.e., low variance) or in its computational sense (i.e., amount of computational effort to produce results of adequate precision). And at a more practical level, it might actually take specific action on your part to accomplish independence between alternative configurations, since most simulation software is set up to start a new run (e.g., for the next model) with the same random numbers as before.

But actually, that feature of simulation software can be to your advantage, provided that you plan carefully for exactly how the random numbers will be re-used. By using the same random numbers for the same purposes between different alternative configurations, you are running them under the same or similar external conditions, like exactly what the service and interarrival times are. In this way, any differences you see in performance can be attributed to differences in the model structures or parameter settings rather than to differences in what random numbers you happened to get. This idea is usually called common random numbers, and can sometimes greatly reduce the variance in your estimators of the difference in performance between alternative configurations. To implement it properly, though, you need to take deliberate steps to make sure that your use of the common random numbers is synchronized between the systems, or else the variance-reducing effect will be diluted or maybe even largely lost. Often, utilizing fixed streams of the random-number generator, which are really just particular subsequences, can facilitate maintaining proper synchronization.

There are several other variance-reduction techniques that also rely on (carefully) re-using previously used random numbers, such as antithetic variates. Most of these techniques also rely on some kind of careful planning for synchronization of their use.

5 HOW SENSITIVE ARE YOUR OUTPUTS TO CHANGES IN YOUR INPUTS?

As part of building a simulation model, you have to specify a variety of input factors. These include quantitative factors like the mean interarrival time, the number of servers, and the probabilities of different job types. Other input factors are more logical or structural in nature, like whether failure/feedback loops are present, and whether a queue is processed first-in-first-out or shortest-job-first. There can also be factors that are somewhere between being purely quantitative and purely logical/structural, like whether the service-time distribution is exponential or uniform.

Another classification dimension of input factors is whether they are (in reality) controllable or not. However, when exercising a simulation model, all input factors are controllable, whether or not they can in reality be set or changed at will. For instance, you can't just cause the arrival rate to a call center to double, but you'd have no problem doing so in your simulation model of that call center.

In any case, exactly how you specify each input factor will presumably have some effect on the output performance measures. Accordingly, it is sometimes helpful to think of the simulation as a function that transforms inputs into outputs:

Output1 = f1(Input1, Input2, ...)
Output2 = f2(Input1, Input2, ...)
...

where the functions f1, f2, ... represent the simulation model itself.

It is often of interest to estimate how a change in an input factor affects an output performance measure, i.e., how sensitive an output is to a change in an input. If you knew the form of the simulation functions f1, f2, ..., this would essentially be a question of finding the partial derivative of the output of interest with respect to the input of interest.

But of course you don't know the form of the simulation functions—otherwise you wouldn't be simulating. Accordingly, there are several different strategies for estimating the sensitivities of outputs to changes in inputs. These strategies have their own advantages, disadvantages, realms of appropriate application, and extra information they might provide you. In the remainder of this section I'll mention some of these, describe them in general terms, and give references for further details.
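The common-random-numbers idea discussed above can be illustrated with a deliberately tiny sketch, not a full queueing model: two "configurations" that differ only in an assumed mean service time (1.0 versus 0.9, both exponential, generated by inverse transform), compared using either independent streams or a shared stream. All names and distributional choices here are illustrative.

```python
import math
import random
import statistics

def exp_service(u, mean):
    # Inverse-transform exponential: feeding both configurations the same
    # uniform u gives them the same "luck," which is the essence of CRN.
    return -mean * math.log(1.0 - u)

def difference_sample(n, common):
    rng_a = random.Random(1)
    rng_b = random.Random(1) if common else random.Random(2)
    return [exp_service(rng_a.random(), 1.0) - exp_service(rng_b.random(), 0.9)
            for _ in range(n)]

independent = difference_sample(10000, common=False)
crn = difference_sample(10000, common=True)

# Both estimate the same true difference in means (1.0 - 0.9 = 0.1),
# but the CRN estimator of the difference has far smaller variance.
print(statistics.mean(independent), statistics.variance(independent))
print(statistics.mean(crn), statistics.variance(crn))
```

Synchronization is trivial here because each configuration draws exactly one uniform per observation; in a real model you would dedicate a stream to each source of randomness so that corresponding draws stay aligned across configurations.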


5.1 Classical Experimental Design

A wide variety of approaches, methods, and analysis techniques, known collectively as experimental design, has been around for many decades and is well documented in books like Box, Hunter, and Hunter (1978) or Montgomery (1997). One of the principal goals of experimental design is to estimate how changes in input factors affect the results, or responses, of the experiment.

While these methods were developed with physical experiments in mind (like agricultural or industrial applications), they can fairly easily be used in computer-simulation experiments as well, as described in more detail in chapter 12 of Law and Kelton (2000). In fact, using them in simulation presents several opportunities for improvement that are difficult or impossible to use in physical experiments.

As a basic example of such techniques, suppose that you can identify just two values, or levels, of each of your input factors. There is no general prescription on how to set these levels, but you should set them to be "opposite" in nature but not so extreme that they are unrealistic. If you have k input factors, there are thus 2^k different possible combinations of the input factors, each defining a different configuration of the model; this is called a 2^k factorial design. Referring to the two levels of each factor as the "–" and "+" level, you can form what is called a design matrix describing exactly what each of the 2^k different model configurations are in terms of their input-factor levels. For instance, if there are k = 3 factors, you would have 2^3 = 8 configurations, and the design matrix would be as in Table 1, with Ri denoting the simulation response from the ith configuration.

Table 1: Design Matrix for a 2^3 Factorial Experiment

Run (i)  Factor 1  Factor 2  Factor 3  Response
   1        –         –         –         R1
   2        +         –         –         R2
   3        –         +         –         R3
   4        +         +         –         R4
   5        –         –         +         R5
   6        +         –         +         R6
   7        –         +         +         R7
   8        +         +         +         R8

The results from such an experiment can be used in many ways. For instance, the main effect of Factor 2 in the above example is defined as the average difference in response when this factor moves from its "–" level to its "+" level; it can be computed by applying the signs in the Factor 2 column to the corresponding responses, adding, and then dividing by 2^(k-1) = 4:

(–R1 – R2 + R3 + R4 – R5 – R6 + R7 + R8)/4.

The main effects of the other factors are computed similarly.

Further, you can ask whether the effect of one factor might depend in some way on the level of one or more other factors, which would be called interaction between the factors if it seems to be present. To compute the interactions from the experimental results, you "multiply" the columns of the involved factors row by row (like signs multiply to "+," unlike signs multiply to "–"), apply the resulting signs to the corresponding responses, add, and divide by 2^(k-1) = 4. For instance, the interaction between Factors 1 and 3 would be

(+R1 – R2 + R3 – R4 – R5 + R6 – R7 + R8)/4.

If an interaction is present between two factors, then the main effect of those factors cannot be interpreted in isolation.

Which brings up the issue of limitations of these kinds of designs. There is a regression model underlying designs like these, which has an independent-variable term involving each factor on its own (linearly), and then cross-products between the factor levels, representing interactions. As suggested, significant interactions cloud the interpretation of main effects, since presence of the cross product causes the main effect no longer to be an accurate measure of the effect of moving this factor from its "–" level to its "+" level. One way around this limitation is to specify a more elaborate and more general underlying regression model, and allow for more than just two levels for each input factor. This gives rise to more complex designs, which must be set up and analyzed in more sophisticated ways.

Another difficulty with full-factorial designs is that if the number of factors becomes even moderately large, the number of runs explodes (it is, after all, literally exponential in the number of factors). A way around this is to use what are known as fractional-factorial designs, in which only a fraction (sometimes just a small fraction) of all the possible factor combinations are run. You must take care, however, to pick the subset of the runs very carefully, and there are specific prescriptions on how to do this in the references cited earlier. The downside of doing only a fraction of the runs is that you have to give up the ability to estimate at least some of the potential interactions, and the smaller the number of runs the fewer the number of interactions you can estimate.

A final limitation of these kinds of designs is that the responses are random variables, as are all outputs from stochastic simulations. Thus, your estimates of things like main effects and interactions are subject to possibly considerable variance. Unlike physical experiments, though, you have the luxury in simulation of replicating (independently repeating) the runs many times to reduce this variance, or perhaps replicating the whole design many
times to get many estimates of main effects and interactions, which could then be combined to form, say, a confidence interval on the expected main effects and interactions in the usual way. This is a good approach for determining whether a main effect or interaction is really present—if the confidence interval for it does not contain zero, then it appears that it is really present.

There are certainly many other kinds of more sophisticated factorial designs than what I have described here; see the references cited earlier for examples.

5.2 Which Inputs are Important? Which are Not?

As mentioned above, if the number of factors is even moderately large, the number of possible factor-level combinations simply explodes far beyond anything remotely practical. It is unlikely, though, that all of your input factors are really important in terms of having a major impact on the outputs. At the very least, there will be big differences among your factors in terms of their impact on your responses.

Since it is the number of factors that causes the explosion in the number of combinations, it would be most helpful to identify early in the course of experimentation which factors are important and which are not. The unimportant factors can then be fixed at some reasonable value and dropped from consideration, and further investigation can be done on the important factors, which will be fewer in number. There are several such factor-screening designs in the literature (see the references cited earlier), and they can be extremely helpful in transforming a rather hopelessly large number of runs into something that is eminently manageable.

5.3 Response-Surface Methods and Metamodels

Most experimental designs, including those mentioned above, are based on an algebraic regression-model assumption about the way the input factors affect the outputs. For instance, if there are two factors (X1 and X2, say) that are thought to affect an output response Y, you might approximate this relationship by the regression model

Y = β0 + β1X1 + β2X2 + β3X1X2 + β4X1^2 + β5X2^2 + ε

where the βj coefficients are unknown and must be estimated somehow, and ε is a random error term representing whatever inaccuracy such a model might have in approximating the actual simulation-model response Y. Since in this case the above regression model is an approximation to another model (your simulation model), the regression is a "model of a model" and so is sometimes called a metamodel. And since a plot of the above situation (with two independent input variables) would be a three-dimensional surface representing the simulation responses, this is also called a response surface.

The parameters of the model are estimated by making simulation runs at various input values for the Xj's, recording the corresponding responses, and then using standard least-squares regression to estimate the coefficients. Exactly which sets of input values are used to make the runs to generate the "data" for the regression fit is itself an experimental-design question, and there are numerous methods in the references cited above. A more comprehensive reference on this subject is Box and Draper (1987).

In simulation, an estimated response-surface metamodel can serve several different purposes. You could (literally) take partial derivatives of it to estimate the effect of small changes in the factors on the output response, and any interactions that might be present as modeled would show up naturally. You could also use the estimated metamodel as a proxy for the simulation, and very quickly explore many different input-factor-level combinations without having to run the simulation. And you could try to optimize (maximize or minimize, as appropriate) the fitted model to give you a sense of where the best input-factor combinations might be.

An obvious caution on the use of response surfaces, though, is that they are estimated from simulation-generated data, and so are themselves subject to variation. This uncertainty can then have effects on your estimates of unsimulated models, derivatives, and optimizers. The references cited above discuss these issues, which are important in terms of understanding and interpreting your results and estimates realistically.

5.4 Other Techniques

The discussion above focuses on general approaches that originated in physical, non-simulation contexts, but nevertheless can be applied in simulation experiments as well. There are a variety of other methods that are more specific to simulation, including frequency-domain methods and perturbation analysis. For discussions of these ideas, see advanced or state-of-the-art tutorials in this or recent Proceedings of the Winter Simulation Conference.

6 WHAT IS THE "BEST" COMBINATION OF INPUTS?

Sometimes you have a single output performance measure that is of overriding importance in comparison with the other outputs (different outputs can conflict with each other, like the desirability of both high machine utilization and short queues). This might be a measure of direct economic importance, like profit or cost. If you have such a measure, you would probably like to look for an input-factor combination that optimizes this measure (e.g.,
maximizes profit or minimizes cost). Mathematically, this can take the form of some kind of search through the space of possible factor combinations. For a recent review of the underlying methods, see Andradóttir (1998).

This is a tall order, from any of several perspectives. If there are a lot of input factors, the dimension of the search space is high, requiring a lot of simulations at a lot of different points. And in stochastic simulation, the responses are subject to uncertainty, which must be taken into account when deciding how best to proceed with your search.

Fortunately, several heuristic search methods have been developed that "move" you from one point to a more promising one, and make these decisions based on a host of information that is available. And we are now beginning to see some of these methods coded into commercial-grade software and even integrated in with some simulation-software products. For example, see Glover, Kelly, and Laguna (1999).

7 CONCLUSIONS

My purpose here has been to make you aware of the issues in conducting simulation experiments that deserve your close attention. An unplanned, hit-or-miss course of experimentation with a simulation model can often be frustrating, inefficient, and ultimately unhelpful. On the other hand, carefully planned simulation studies can yield valuable information without an undue amount of computational effort or (more importantly) your time.

REFERENCES

Andradóttir, S. 1998. A review of simulation optimization techniques. In Proceedings of the 1998 Winter Simulation Conference, ed. D.J. Medeiros, E.F. Watson, J.S. Carson, and M.S. Manivannan, 151–158. Institute of Electrical and Electronics Engineers, Piscataway, New Jersey.
Banks, J., J.S. Carson, and B.L. Nelson. 1996. Discrete-event system simulation. 2d ed. Upper Saddle River, N.J.: Prentice-Hall.
Barton, R.R. 1998. Simulation metamodels. In Proceedings of the 1998 Winter Simulation Conference, ed. D.J. Medeiros, E.F. Watson, J.S. Carson, and M.S. Manivannan, 167–174. Institute of Electrical and Electronics Engineers, Piscataway, New Jersey.
Box, G.E.P. and N.R. Draper. 1987. Empirical model-building and response surfaces. New York: John Wiley.
Box, G.E.P., W.G. Hunter, and J.S. Hunter. 1978. Statistics for experimenters: an introduction to design, data analysis, and model building. New York: John Wiley.
Glover, F., J.P. Kelly, and M. Laguna. 1999. New advances for wedding optimization and simulation. In Proceedings of the 1999 Winter Simulation Conference, ed. P.A. Farrington, H.B. Nembhard, D.T. Sturrock, and G.W. Evans, 255–260. Institute of Electrical and Electronics Engineers, Piscataway, New Jersey.
Hood, S.J. and P.D. Welch. 1992. Experimental design issues in simulation with examples from semiconductor manufacturing. In Proceedings of the 1992 Winter Simulation Conference, ed. J.J. Swain, D. Goldsman, R.C. Crain, and J.R. Wilson, 255–263. Institute of Electrical and Electronics Engineers, Piscataway, New Jersey.
Hood, S.J. and P.D. Welch. 1993. Response surface methodology and its application in simulation. In Proceedings of the 1993 Winter Simulation Conference, ed. G.W. Evans, M. Mollaghasemi, E.C. Russell, and W.E. Biles, 115–122. Institute of Electrical and Electronics Engineers, Piscataway, New Jersey.
Kelton, W.D. 1997. Statistical analysis of simulation output. In Proceedings of the 1997 Winter Simulation Conference, ed. S. Andradóttir, K.J. Healy, D.H. Withers, and B.L. Nelson, 23–30. Institute of Electrical and Electronics Engineers, Piscataway, New Jersey.
Kelton, W.D. 1999. Designing simulation experiments. In Proceedings of the 1999 Winter Simulation Conference, ed. P.A. Farrington, H.B. Nembhard, D.T. Sturrock, and G.W. Evans, 33–38. Institute of Electrical and Electronics Engineers, Piscataway, New Jersey.
Kelton, W.D., R.P. Sadowski, and D.A. Sadowski. 1998. Simulation with Arena. New York: McGraw-Hill.
Kleijnen, J.P.C. 1998. Experimental design for sensitivity analysis, optimization, and validation of simulation models. In Handbook of simulation, ed. J. Banks, 173–223. New York: John Wiley.
Law, A.M. and W.D. Kelton. 2000. Simulation modeling and analysis. 3d ed. New York: McGraw-Hill.
Montgomery, D.C. 1997. Design and analysis of experiments. 4th ed. New York: John Wiley.
Swain, J.J. and P.A. Farrington. 1994. Designing simulation experiments for evaluating manufacturing systems. In Proceedings of the 1994 Winter Simulation Conference, ed. J.D. Tew, M.S. Manivannan, D.A. Sadowski, and A.F. Seila, 69–76. Institute of Electrical and Electronics Engineers, Piscataway, New Jersey.


AUTHOR BIOGRAPHY

W. DAVID KELTON is a Professor in the Department of
Quantitative Analysis and Operations Management at the
University of Cincinnati. He received a B.A. in
mathematics from the University of Wisconsin-Madison,
an M.S. in mathematics from Ohio University, and M.S.
and Ph.D. degrees in industrial engineering from
Wisconsin. His research interests and publications are in
the probabilistic and statistical aspects of simulation,
applications of simulation, and stochastic models. He is
co-author of Simulation Modeling and Analysis (3d ed.,
2000, with Averill M. Law), and Simulation With Arena
(1998, with Randall P. Sadowski and Deborah A.
Sadowski), both published by McGraw-Hill. Currently, he
serves as Editor-in-Chief of the INFORMS Journal on
Computing, and has been Simulation Area Editor for
Operations Research, the INFORMS Journal on
Computing, and IIE Transactions, as well as Associate
Editor for Operations Research, the Journal of
Manufacturing Systems, and Simulation. From 1991 to
1999 he was the INFORMS co-representative to the Winter
Simulation Conference Board of Directors and was Board
Chair for 1998. In 1987 he was Program Chair for the
WSC, and in 1991 was General Chair. His email and web
addresses are <[email protected]> and
<http://www.econqa.cba.uc.edu/~keltond>.

