CHAPTER 11
SIMULATION OPTIMIZATION
“Man is a goal seeking animal. His life only has meaning if he is reaching out
and striving for his goals.”
—Aristotle
11.1 Introduction
Simulation models of systems are built for many reasons. Some models are built
to gain a better understanding of a system, to forecast the output of a system, or to
compare one system to another. If the reason for building simulation models is to
find answers to questions like “What are the optimal settings for ______ to minimize (or maximize) ______?” then optimization is the appropriate technology to
combine with simulation. Optimization is the process of trying different combina-
tions of values for the variables that can be controlled to seek the combination of
values that provides the most desirable output from the simulation model.
For convenience, let us think of the simulation model as a black box that
imitates the actual system. When inputs are presented to the black box, it produces
output that estimates how the actual system responds. In our question, the first
blank represents the inputs to the simulation model that are controllable by the
decision maker. These inputs are often called decision variables or factors. The
second blank represents the performance measures of interest that are computed
from the stochastic output of the simulation model when the decision variables are
set to specific values (Figure 11.1). In the question, “What is the optimal number
of material handling devices needed to minimize the time that workstations are
starved for material?” the decision variable is the number of material handling de-
vices and the performance measure computed from the output of the simulation
model is the amount of time that workstations are starved. The objective, then, is
to seek the optimal value for each decision variable that minimizes, or maximizes,
the expected value of the performance measure(s) of interest. The performance
Harrell, Ghosh, and Bowden, Simulation Using ProModel, Second Edition. I. Study Chapters, 11. Simulation Optimization. © The McGraw-Hill Companies, 2004.
FIGURE 11.1
Relationship between optimization algorithm and simulation model. [The optimization algorithm sends decision variables (X1, X2, ..., Xn) to the simulation model, which returns output responses.]
measure is traditionally called the objective function. Note that the expected value
of the objective function is estimated by averaging the model’s output over multi-
ple replications or batch intervals. The simulation optimization problem is more
formally stated as
Min or Max E[f(X1, X2, ..., Xn)]
Subject to
Lower Bound_i ≤ X_i ≤ Upper Bound_i    for i = 1, 2, ..., n

where E[f(X1, X2, ..., Xn)] denotes the expected value of the objective function, which is estimated from the simulation output.
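The expected value of the objective function, estimated by averaging the model's output over multiple replications, can be sketched in Python. The toy model and function names below are illustrative assumptions, not part of ProModel or SimRunner:

```python
import random
import statistics

def within_bounds(solution, lower, upper):
    """Check the box constraints Lower Bound_i <= X_i <= Upper Bound_i."""
    return all(lo <= x <= hi for x, lo, hi in zip(solution, lower, upper))

def estimate_expected_response(simulate, solution, replications=10):
    """Estimate E[f(X1, ..., Xn)] by averaging the stochastic output
    of the simulation model over several replications."""
    return statistics.mean(simulate(solution) for _ in range(replications))

# Toy stand-in for a simulation model: the underlying response is the
# sum of squares of the decision variables plus uniform noise.
def toy_model(solution):
    return sum(x * x for x in solution) + random.uniform(-1.0, 1.0)

print(within_bounds([3, 4], lower=[0, 0], upper=[10, 10]))  # True
print(estimate_expected_response(toy_model, [3, 4], replications=2000))  # near 25
```

With enough replications the noise averages out and the estimate approaches the underlying expected response.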
The search for the optimal solution can be conducted manually or automated
with algorithms specifically designed to seek the optimal solution without evalu-
ating all possible solutions. Interfacing optimization algorithms that can automat-
ically generate solutions and evaluate them in simulation models is a worthwhile
endeavor because
• It automates part of the analysis process, saving the analyst time.
• A logical method is used to efficiently explore the realm of possible
solutions, seeking the best.
• The method often finds several exemplary solutions for the analyst to
consider.
The last point is particularly important because, within the list of optimized solutions, there may be solutions that the decision maker might otherwise have overlooked.
In 1995, PROMODEL Corporation and Decision Science, Incorporated,
developed SimRunner based on the research of Bowden (1992) on the use of
modern optimization algorithms for simulation-based optimization and machine
learning. SimRunner helps those wishing to use advanced optimization concepts
to seek better solutions from their simulation models. SimRunner uses an opti-
mization method based on evolutionary algorithms. It is the first widely used,
commercially available simulation optimization package designed for major sim-
ulation packages (ProModel, MedModel, ServiceModel, and ProcessModel).
Although SimRunner is relatively easy to use, it can be more effectively used with
a basic understanding of how it seeks optimal solutions to a problem. Therefore,
the purpose of this chapter is fourfold:
• To provide an introduction to simulation optimization, focusing on the
latest developments in integrating simulation and a class of direct
optimization techniques called evolutionary algorithms.
FIGURE 11.2
SimRunner plots the output responses generated by a ProModel simulation model as it seeks the optimal solution, which occurred at the highest peak.
optimization module for the GASP IV simulation software package. Pegden and
Gately (1980) later developed another optimization module for use with the
SLAM simulation software package. Their optimization packages were based on
a variant of a direct search method developed by Hooke and Jeeves (1961). After
solving several problems, Pegden and Gately concluded that their packages
extended the capabilities of the simulation language by “providing for automatic
optimization of decision.”
The direct search algorithms available today for simulation optimization are
much better than those available in the late 1970s. Using these newer algorithms,
the SimRunner simulation optimization tool was developed in 1995. Following
SimRunner, two other simulation software vendors soon added an optimization
feature to their products. These products are OptQuest96, which was introduced
in 1996 to be used with simulation models built with Micro Saint software, and
WITNESS Optimizer, which was introduced in 1997 to be used with simulation
models built with Witness software. The optimization module in OptQuest96 is
based on scatter search, which has links to Tabu Search and the popular evolu-
tionary algorithm called the Genetic Algorithm (Glover 1994; Glover et al. 1996).
WITNESS Optimizer is based on a search algorithm called Simulated Annealing
(Markt and Mayer 1997). Today most major simulation software packages include
an optimization feature.
SimRunner has an optimization module and a module for determining the
required sample size (replications) and a model’s warm-up period (in the case of
a steady-state analysis). The optimization module can optimize integer and real
decision variables. The design of the optimization module in SimRunner was
influenced by optima-seeking techniques such as Tabu Search (Glover 1990) and
evolutionary algorithms (Fogel 1992; Goldberg 1989; Schwefel 1981), though it
most closely resembles an evolutionary algorithm (SimRunner 1996b).
and Mollaghasemi (1991) suggest that GAs and Simulated Annealing are the
algorithms of choice when dealing with a large number of decision variables.
Tompkins and Azadivar (1995) recommend using GAs when the optimization prob-
lem involves qualitative (logical) decision variables. The authors have extensively
researched the use of genetic algorithms, evolutionary programming, and evolu-
tion strategies for solving manufacturing simulation optimization problems and
simulation-based machine learning problems (Bowden 1995; Bowden, Neppalli,
and Calvert 1995; Bowden, Hall, and Usher 1996; Hall, Bowden, and Usher 1996).
Reports have also appeared in the trade journal literature on how the EA-
based optimizer in SimRunner helped to solve real-world problems. For example,
IBM; Sverdrup Facilities, Inc.; and Baystate Health Systems report benefits from
using SimRunner as a decision support tool (Akbay 1996). The simulation
group at Lockheed Martin used SimRunner to help determine the most efficient
lot sizes for parts and when the parts should be released to the system to meet
schedules (Anonymous 1996).
FIGURE 11.3
Ackley’s function with noise and the ES’s progress over eight generations. [Plot of measured response (0 to 20) against decision variable X (−10 to 10); the two best solutions (parents) from each of generations 1 through 8 are marked on the response surface.]
it by sampling from a uniform (−1, 1) distribution. This simulates the variation that
can occur in the output from a stochastic simulation model. The function is shown
with a single decision variable X that takes on real values between −10 and 10. The
response surface has a minimum expected value of zero, occurring when X is equal
to zero, and a maximum expected value of 19.37 when X is equal to −9.54 or
+9.54. Ackley’s function is multimodal and thus has several locally optimal solu-
tions (optima) that occur at each of the low points (assuming a minimization prob-
lem) on the response surface. However, the local optimum that occurs when X is
equal to zero is the global optimum (the lowest possible response). This is a useful
test function because search techniques can prematurely converge and end their
search at one of the many optima before finding the global optimum.
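The test surface just described can be reproduced with the standard one-dimensional Ackley function plus uniform(−1, 1) noise. This is a sketch based on the published function, not code taken from the chapter:

```python
import math
import random

def ackley(x):
    """One-dimensional Ackley function: global minimum of 0 at x = 0,
    many local optima, maximum near 19.37 at x = -9.54 and x = +9.54."""
    return (20.0 + math.e
            - 20.0 * math.exp(-0.2 * abs(x))
            - math.exp(math.cos(2.0 * math.pi * x)))

def noisy_ackley(x):
    """Ackley response with uniform(-1, 1) noise added, imitating the
    variation in the output of a stochastic simulation model."""
    return ackley(x) + random.uniform(-1.0, 1.0)

print(round(ackley(0.0), 4))   # 0.0 (global optimum)
print(round(ackley(9.54), 2))  # 19.37
```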
According to Step 1 of the four-step process outlined in Section 11.4, an initial
population of solutions to the problem is generated by distributing them throughout
the solution space. Using a variant of Schwefel’s (1981) Evolution Strategy (ES)
with two parent solutions and 10 offspring solutions, 10 different values for the de-
cision variable between −10 and 10 are randomly picked to represent an initial off-
spring population of 10 solutions. However, to make the search for the optimal so-
lution more challenging, the 10 solutions in the initial offspring population are
placed far from the global optimum to see if the algorithm can avoid being trapped
by one of the many local optima. Therefore, the 10 solutions for the first generation
were randomly picked between −10 and −8. So the test is to see if the population
of 10 solutions can evolve from one generation to the next to find the global opti-
mal solution without being trapped by one of the many local optima.
Figure 11.3 illustrates the progress that the ES made by following the four-
step process from Section 11.4. To avoid complicating the graph, only the
responses for the two best solutions (parents) in each generation are plotted on the
response surface. Clearly, the process of selecting the best solutions and applying
idealized genetic operators allows the algorithm to focus its search toward the
optimal solution from one generation to the next. Although the ES samples many
of the local optima, it quickly identifies the region of the global optimum and is
beginning to home in on the optimal solution by the eighth generation. Notice that
in the sixth generation, the ES has placed solutions to the right side of the search
space (X > 0) even though it was forced to start its search at the far left side of the
solution space. This ability of an evolutionary algorithm allows it to conduct a
more globally oriented search.
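A minimal (2, 10) evolution strategy in the spirit of this experiment can be sketched as follows. The Gaussian mutation operator and fixed step size are simplifying assumptions (Schwefel's ES also self-adapts its step sizes), and each solution is judged from a single noisy observation, as in the experiment above. With enough generations the parents typically migrate from the starting region [−10, −8] toward the global optimum at X = 0:

```python
import math
import random

def noisy_ackley(x):
    """One-dimensional Ackley function plus uniform(-1, 1) noise."""
    return (20.0 + math.e
            - 20.0 * math.exp(-0.2 * abs(x))
            - math.exp(math.cos(2.0 * math.pi * x))
            + random.uniform(-1.0, 1.0))

def evolution_strategy(generations=8, mu=2, lam=10, sigma=1.0):
    """(mu, lambda)-ES: each generation keeps the mu best of lam
    offspring as parents and mutates them to create lam new offspring."""
    # Initial offspring are placed far from the global optimum, in [-10, -8].
    offspring = [random.uniform(-10.0, -8.0) for _ in range(lam)]
    parents = []
    for _ in range(generations):
        # Selection based on a single noisy observation per solution.
        offspring.sort(key=noisy_ackley)
        parents = offspring[:mu]
        # Variation: Gaussian mutation, clipped to the bounds [-10, 10].
        offspring = [max(-10.0, min(10.0,
                         random.choice(parents) + random.gauss(0.0, sigma)))
                     for _ in range(lam)]
    return parents

random.seed(3)
print(evolution_strategy())
```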
When a search for the optimal solution is conducted in a noisy simulation
environment, care should be taken in measuring the response generated for a
given input (solution) to the model. This means that to get an estimate of the ex-
pected value of the response, multiple observations (replications) of a solution’s
performance should be averaged. But to test the idea that EAs can deal with noisy
response surfaces, the previous search was conducted using only one observation
of a solution’s performance. Therefore, the potential existed for the algorithm to
become confused and prematurely converge to one of the many local optima
because of the noisy response surface. Obviously, this was not the case with the
EA in this example.
The authors are not advocating that analysts can forget about determining the
number of replications needed to satisfactorily estimate the expected value of the
response. However, to effectively conduct a search for the optimal solution, an
algorithm must be able to deal with noisy response surfaces and the resulting
uncertainties that exist even when several observations (replications) are used to
estimate a solution’s true performance.
Step 1. The decision variables believed to affect the output of the simulation
model are first programmed into the model as variables whose values can be
quickly changed by the EA. Decision variables are typically the parameters whose
values can be adjusted by management, such as the number of nurses assigned to
a shift or the number of machines to be placed in a work cell.
Step 2. For each decision variable, define its numeric data type (integer or real)
and its lower bound (lowest possible value) and upper bound (highest possible
value). During the search, the EA will generate solutions by varying the values of
decision variables according to their data types, lower bounds, and upper bounds.
The number of decision variables and the range of possible values affect the size
of the search space (number of possible solutions to the problem). Increasing
the number of decision variables or their range of values increases the size of the
search space, which can make it more difficult and time-consuming to identify the
optimal solution. As a rule, include only those decision variables known to signif-
icantly affect the output of the simulation model and judiciously define the range
of possible values for each decision variable. Also, take care when defining the lower and upper bounds of the decision variables to ensure that no combination of values can produce a solution that was not envisioned when the model was built.
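For integer decision variables, the size of the search space is simply the product of the number of values each variable can take. A small helper (hypothetical, for illustration) makes the point:

```python
from math import prod

def search_space_size(bounds):
    """Number of possible solutions for integer decision variables,
    given inclusive (lower, upper) bounds for each variable."""
    return prod(hi - lo + 1 for lo, hi in bounds)

# Three buffer-capacity variables, each allowed the values 1..15:
print(search_space_size([(1, 15)] * 3))  # 3375

# Tightening each range to 1..9 shrinks the space considerably:
print(search_space_size([(1, 9)] * 3))   # 729
```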
Step 3. After selecting the decision variables, construct the objective function to
measure the utility of the solutions tested by the EA. Actually, the foundation for
the objective function would have already been established when the goals for the
simulation project were set. For example, if the goal of the modeling project is to
find ways to minimize a customer’s waiting time in a bank, then the objective
function should measure an entity’s (customer’s) waiting time in the bank. The
objective function is built using terms taken from the output report generated at
the end of the simulation run. Objective function terms can be based on entity
statistics, location statistics, resource statistics, variable statistics, and so on. The
user specifies whether a term is to be minimized or maximized as well as the
overall weighting of that term in the objective function. Some terms may be more
or less important to the user than other terms. Remember that as terms are added to the objective function, the complexity of the search space may increase, making the optimization problem more difficult. From a statistical point of view, single-term objective functions are also preferable to multiterm objective functions. Therefore, strive to keep the objective function as specific as possible.
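When multiple terms are used, SimRunner's exact weighting scheme is not detailed here, but a generic sketch shows one way minimized and maximized terms with user weights can be folded into a single objective value (the names and sign convention are assumptions):

```python
def weighted_objective(terms, stats):
    """Combine output-report terms into one objective value.
    Each term is (statistic name, weight, 'min' or 'max').
    Minimized terms enter negatively so that larger is always better."""
    total = 0.0
    for name, weight, direction in terms:
        value = stats[name]
        total += weight * (value if direction == "max" else -value)
    return total

# Hypothetical bank example: minimize customer waiting time (weighted
# more heavily) while maximizing teller utilization.
terms = [("customer_wait_min", 2.0, "min"),
         ("teller_utilization", 1.0, "max")]
stats = {"customer_wait_min": 4.5, "teller_utilization": 0.82}
print(weighted_objective(terms, stats))  # -8.18
```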
The objective function is a random variable, and a set of initial experiments
should be conducted to estimate its variability (standard deviation). Note that there
is a possibility that the objective function’s standard deviation differs from one
solution to the next. Therefore, the required number of replications necessary to es-
timate the expected value of the objective function may change from one solution
to the next. Thus the objective function’s standard deviation should be measured
for several different solutions and the highest standard deviation recorded used to
compute the number of replications necessary to estimate the expected value of the
objective function. When selecting the set of test solutions, choose solutions that
are very different from one another. For example, form solutions by setting the
decision variables to their lower bounds, middle values, or upper bounds.
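This sizing rule can be written with the usual confidence-interval half-width formula, n ≥ (z·s / hw)², applied to the largest standard deviation observed in the pilot runs. A normal z-value is used here as an approximation; a t-value is more appropriate when the pilot sample is small:

```python
import math

def required_replications(sample_std, half_width, z=1.96):
    """Replications needed so the approximate confidence-interval
    half-width around the mean response is at most half_width."""
    return math.ceil((z * sample_std / half_width) ** 2)

# Pilot runs of several very different solutions; use the largest
# standard deviation observed, as suggested above.
pilot_stds = [12.4, 18.9, 15.2]
print(required_replications(max(pilot_stds), half_width=5.0))  # 55
```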
A better approach for controlling the number of replications used to estimate
the expected value of the objective function for a given solution would be to incor-
porate a rule into the model that schedules additional replications until the estimate
reaches a desired level of precision (confidence interval half-width). Using this
technique can help to avoid running too many replications for some solutions and
too few replications for others.
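Such a sequential rule might be sketched as follows. The stopping criterion uses a normal approximation, and the toy model stands in for a real simulation run:

```python
import math
import random
import statistics

def replicate_until_precise(simulate, solution, half_width,
                            z=1.96, min_reps=5, max_reps=500):
    """Add replications one at a time until the approximate
    confidence-interval half-width of the mean response falls
    below half_width (or max_reps is reached)."""
    outputs = [simulate(solution) for _ in range(min_reps)]
    while len(outputs) < max_reps:
        s = statistics.stdev(outputs)
        hw = z * s / math.sqrt(len(outputs))
        if hw <= half_width:
            break
        outputs.append(simulate(solution))
    return statistics.mean(outputs), len(outputs)

# Toy stochastic model standing in for a simulation replication.
def toy_model(_solution):
    return 100.0 + random.gauss(0.0, 4.0)

random.seed(7)
mean, n = replicate_until_precise(toy_model, None, half_width=1.0)
print(mean, n)
```

Low-variance solutions stop early; noisier solutions automatically receive more replications.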
Step 4. Select the size of the EA’s population (number of solutions) and begin
the search. The size of the population of solutions used to conduct the search
affects both the likelihood that the algorithm will locate the optimal solution and
the time required to conduct the search. In general, as the population size is
increased, the algorithm finds better solutions. However, increasing the popula-
tion size generally increases the time required to conduct the search. Therefore, a
balance must be struck between finding the optimum and the amount of available
time to conduct the search.
Step 5. After the EA’s search has concluded (or halted due to time constraints),
the analyst should study the solutions found by the algorithm. In addition to the
best solution discovered, the algorithm usually finds many other competitive
solutions. A good practice is to rank each solution evaluated based on its utility as
measured by the objective function. Next, select the most highly competitive
solutions and, if necessary, make additional model replications of those solutions
to get better estimates of their true utility. And, if necessary, refer to Chapter 10
for background on statistical techniques that can help you make a final decision
between competing solutions. Also, keep in mind that the database of solutions
evaluated by the EA represents a rich source of information about the behavior, or
response surface, of the simulation model. Sorting and graphing the solutions can
help you interpret the “meaning” of the data and gain a better understanding of
how the system behaves.
If the general procedure presented is followed, chances are that a good course
of action will be identified. This general procedure is easily carried out using
ProModel simulation products. Analysts can use SimRunner to help
• Determine the length of time and warm-up period (if applicable) for
running a model.
• Determine the required number of replications for obtaining estimates
with a specified level of precision and confidence.
• Search for the optimal values for the important decision variables.
Even though it is easy to use SimRunner and other modern optimizers, do
not fall into the trap of letting them become the decision maker. Study the top
solutions found by the optimizers as you might study the performance records of
different cars for a possible purchase. Kick their tires, look under their hoods, and
drive them around the block before buying. Always remember that the optimizer is not the decision maker. It only suggests a possible course of action. It is the user’s responsibility to make the final decision.
that disruptions (machine failures, line imbalances, quality problems, or the like)
will shut down the production line. Several strategies have been developed for
determining the amount of buffer storage needed between workstations. However,
these strategies are often developed based on simplifying assumptions, made for
mathematical convenience, that rarely hold true for real production systems.
One way to avoid oversimplifying a problem for the sake of mathematical
convenience is to build a simulation model of the production system and use it to
help identify the amount of buffer storage space needed between workstations.
However, the number of possible solutions to the buffer allocation problem grows
rapidly as the size of the production system (number of possible buffer storage
areas and their possible sizes) increases, making it impractical to evaluate all so-
lutions. In such cases, it is helpful to use simulation optimization software like
SimRunner to identify a set of candidate solutions.
This example is loosely based on the example production system presented
in Chapter 10. It gives readers insight into how to formulate simulation optimiza-
tion problems when using SimRunner. The example is not fully solved. Its com-
pletion is left as an exercise in Lab Chapter 11.
FIGURE 11.4
Production system with four workstations and three buffer storage areas.
next machine, where it waits to be processed. However, if the buffer is full, the
part cannot move forward and remains on the machine until a space becomes
available in the buffer. Furthermore, the machine is blocked and no other parts can
move to the machine for processing. The part exits the system after being
processed by the fourth machine. Note that parts are selected from the buffers to
be processed by a machine in a first-in, first-out order. The processing time at each
machine is exponentially distributed with a mean of 1.0 minute, 1.3 minutes,
0.7 minute, and 1.0 minute for machines one, two, three, and four, respectively.
The time to move parts from one location to the next is negligible.
For this problem, three decision variables describe how buffer space is allo-
cated (one decision variable for each buffer to signify the number of parts that can
be stored in the buffer). The goal is to find the optimal value for each decision
variable to maximize the profit made from the sale of the parts. The manufacturer
collects $10 per part produced. The limitation is that each unit of space provided
for a part in a buffer costs $1,000. So the buffer storage has to be strategically
allocated to maximize the throughput of the system. Throughput will be measured
as the number of parts completed during a 30-day period.
Step 1. In this step, the decision variables for the problem are identified and
defined. Three decision variables are needed to represent the number of parts that
can be stored in each buffer (buffer capacity). Let Q1, Q2, and Q3 represent the
number of parts that can be stored in buffers 1, 2, and 3, respectively.
Step 2. The numeric data type for each Qi is integer. If it is assumed that each
buffer will hold a minimum of one part, then the lower bound for each Qi is 1. The
upper bound for each decision variable could arbitrarily be set to, say, 20. How-
ever, keep in mind that as the range of values for a decision variable is increased,
the size of the search space also increases and will likely increase the time to
conduct the search. Therefore, this is a good place to apply existing knowledge
about the performance and design of the system.
For example, physical constraints may limit the maximum capacity of each
buffer to no greater than 15 parts. If so, there is no need to conduct a search with
a range of values that produce infeasible solutions (buffer capacities greater than
15 parts). Considering that the fourth machine’s processing time is larger than the
third machine’s processing time, parts will tend to queue up at the third buffer.
Therefore, it might be a good idea to set the upper bound for Q3 to a larger value
than the other two decision variables. However, be careful not to assume too much
because it could prevent the optimization algorithm from exploring regions in the
search space that may contain good solutions.
With this information, it is decided to set the upper bound for the capacity of
each buffer to 15 parts. Therefore, the bounds for each decision variable are
1 ≤ Q1 ≤ 15    1 ≤ Q2 ≤ 15    1 ≤ Q3 ≤ 15

Given that each of the three decision variables has 15 possible values, there are 15³, or 3,375, unique solutions to the problem.
Step 3. Here the objective function is formulated. The model was built to investigate buffer allocation strategies to maximize the throughput of the system.
Given that the manufacturer collects $10 per part produced and that each unit of
space provided for a part in a buffer costs $1,000, the objective function for the
optimization could be stated as
Maximize [$10(Throughput) − $1,000(Q1 + Q2 + Q3)]
where Throughput is the total number of parts produced during a 30-day
period.
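This objective function is straightforward to encode. The throughput figure in the example call is a hypothetical value, not a result from the model:

```python
def buffer_profit(throughput, q1, q2, q3,
                  revenue_per_part=10.0, cost_per_space=1000.0):
    """Objective for the buffer allocation problem: $10 per part
    produced minus $1,000 per unit of buffer space provided."""
    return revenue_per_part * throughput - cost_per_space * (q1 + q2 + q3)

# e.g., 30,000 parts in 30 days with capacities Q1=9, Q2=7, Q3=3:
print(buffer_profit(30_000, 9, 7, 3))  # 281000.0
```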
Next, initial experiments are conducted to estimate the variability of the
objective function in order to determine the number of replications the EA-based
optimization algorithm will use to estimate the expected value of the objective
function for each solution it evaluates. While doing this, it was also noticed that
the throughput level increased very little for buffer capacities beyond a value of
nine. Therefore, it was decided to change the upper bound for each decision variable to nine. This resulted in a search space of 9³, or 729, unique solutions, a reduction of 2,646 solutions from the original formulation. This will likely reduce the search time.
Step 4. Select the size of the population that the EA-based optimization algorithm
will use to conduct its search. SimRunner allows the user to select an optimization
profile that influences the degree of thoroughness used to search for the optimal
solution. The three optimization profiles are aggressive, moderate, and cautious,
which correspond to EA population sizes of small, medium, and large. The aggres-
sive profile generally results in a quick search for locally optimal solutions and is
used when computer time is limited. The cautious profile specifies that a more
thorough search for the global optimum be conducted and is used when computer
time is plentiful. At this point, the analyst knows the amount of time required to
evaluate a solution. Only one more piece of information is needed to determine
how long it will take the algorithm to conduct its search. That is the fraction of the
729 solutions the algorithm will evaluate before converging to a final solution.
Unfortunately, there is no way of knowing this in advance. With time running out
before a recommendation for the system must be given to management, the ana-
lyst elects to use a small population size by selecting SimRunner’s aggressive op-
timization profile.
Step 5. After the search concludes, the analyst selects for further evaluation some
of the top solutions found by the optimization algorithm. Note that this does not
necessarily mean that only those solutions with the best objective function values
are chosen, because the analyst should conduct both a quantitative and a qualitative
analysis. On the quantitative side, statistical procedures presented in Chapter 10
are used to gain a better understanding of the relative differences in performance
between the candidate solutions. On the qualitative side, one solution may be
preferred over another based on factors such as ease of implementation.
Figure 11.5 illustrates the results from a SimRunner optimization of the buffer
allocation problem using an aggressive optimization profile. The warm-up time for
the simulation was set to 10 days, with each day representing a 24-hour production
period. After the warm-up time of 10 days (240 hours), the system is simulated for
an additional 30 days (720 hours) to determine throughput. The estimate for the ex-
pected value of the objective function was based on five replications of the simula-
tion. The smoother line that is always at the top of the Performance Measures Plot
in Figure 11.5 represents the value of the objective function for the best solution
identified by SimRunner during the optimization. Notice the rapid improvement in
the value of the objective function during the early part of the search as SimRunner
identifies better buffer capacities. The other, more irregular line represents the
value of the objective function for all the solutions that SimRunner tried.
SimRunner’s best solution to the problem specifies a Buffer 1 capacity of
nine, a Buffer 2 capacity of seven, and a Buffer 3 capacity of three and was the
33rd solution (experiment) evaluated by SimRunner. The best solution is located
at the top of the table in Figure 11.5. SimRunner sorts the solutions in the table
FIGURE 11.5
SimRunner results for
the buffer allocation
problem.
from best to worst. SimRunner evaluated 82 out of the possible 729 solutions.
Note that there is no guarantee that SimRunner’s best solution is in fact the opti-
mum solution to the problem. However, it is likely to be one of the better solutions
to the problem and could be the optimum one.
The last two columns in the SimRunner table shown in Figure 11.5 display
the lower and upper bounds of a 95 percent confidence interval for each solution
evaluated. Notice that there is significant overlap between the confidence inter-
vals. Although this is not a formal hypothesis-testing procedure, the overlapping
confidence intervals suggest the possibility that there is not a significant differ-
ence in the performance of the top solutions displayed in the table. Therefore, it
would be wise to run additional replications of the favorite solutions from the list
and/or use one of the hypothesis-testing procedures in Chapter 10 before selecting
a particular solution as the best. The real value here is that SimRunner automati-
cally conducted the search, without the analyst having to hover over it, and re-
ported back several good solutions to the problem for the analyst to consider.
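The overlap check described here can be reproduced from each solution's replication outputs. The sample values below are invented for illustration:

```python
import math
import statistics

def confidence_interval(outputs, z=1.96):
    """Approximate 95% confidence interval for a solution's mean response."""
    mean = statistics.mean(outputs)
    hw = z * statistics.stdev(outputs) / math.sqrt(len(outputs))
    return mean - hw, mean + hw

def intervals_overlap(a, b):
    """True when two (lower, upper) intervals overlap, suggesting the
    solutions may not differ significantly (not a formal test)."""
    return a[0] <= b[1] and b[0] <= a[1]

top = confidence_interval([281.0, 284.5, 279.2, 283.1, 282.0])
runner_up = confidence_interval([280.1, 282.7, 278.4, 281.9, 280.5])
print(top, runner_up, intervals_overlap(top, runner_up))
```

When intervals overlap like this, additional replications or a formal hypothesis test (Chapter 10) should precede any final choice.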
FIGURE 11.6
The two-stage pull production system. [Customer demand pulls subassemblies from Stage One WIP to the assembly lines; the Stage One process pulls components from Stage Two WIP; the Stage Two line replenishes from raw materials. Kanban posts with trigger values (shown as 1 and 2) control each loop, and kanban cards accompany containers of final product, subassemblies, and components.]
Figure 11.6 illustrates the relationship of the processes in the two-stage pull
production system of interest that produces several different types of parts.
Customer demand for the final product causes containers of subassemblies to be
pulled from the Stage One WIP location to the assembly lines. As each container is
withdrawn from the Stage One WIP location, a production-ordering kanban card
representing the number of subassemblies in a container is sent to the kanban post
for the Stage One processing system. When the number of kanban cards for a given
subassembly meets its trigger value, the necessary component parts are pulled from
the Stage Two WIP to create the subassemblies. Upon completing the Stage One
process, subassemblies are loaded into containers, the corresponding kanban card
is attached to the container, and both are sent to Stage One WIP. The container and
card remain in the Stage One WIP location until pulled to an assembly line.
In Stage Two, workers process raw materials to fill the Stage Two WIP
location as component parts are pulled from it by Stage One. As component parts
are withdrawn from the Stage Two WIP location and placed into the Stage One
process, a production-ordering kanban card representing the quantity of compo-
nent parts in a container is sent to the kanban post for the Stage Two line. When
the number of kanban cards for a given component part meets a trigger value,
production orders equal to the trigger value are issued to the Stage Two line. As
workers move completed orders of component parts from the Stage Two line to
WIP, the corresponding kanban cards follow the component parts to the Stage
Two WIP location.
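The card-and-trigger mechanism described above can be sketched in a few lines of Python. This is purely illustrative and not part of the original model; the class name `KanbanPost` and its methods are assumptions.

```python
# Minimal sketch of the production-ordering kanban logic described above.
# Each withdrawal of a container from a WIP location sends a card to the
# stage's kanban post; when the cards for a part type reach the trigger
# value, a production order for that many containers is released.
from collections import defaultdict

class KanbanPost:
    def __init__(self, trigger):
        self.trigger = trigger            # cards needed to release an order
        self.cards = defaultdict(int)     # part type -> cards waiting

    def receive_card(self, part):
        """Called when a container of `part` is pulled from WIP."""
        self.cards[part] += 1
        if self.cards[part] >= self.trigger:
            released = self.cards[part]
            self.cards[part] = 0
            return released               # containers to produce now
        return 0

post = KanbanPost(trigger=2)
orders = [post.receive_card("subassembly_A") for _ in range(4)]
# Every second withdrawal releases a two-container production order.
```

With a trigger of 1, every withdrawal would immediately release an order, which is the behavior the text describes for the Stage One post.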
While an overall reduction in WIP is sought, production planners want a solu-
tion (numbers of kanban cards and trigger values for each stage) that gives priority
to minimizing the number of containers in the Stage One WIP location, because
space there is limited.
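The Toyota kanban equation used by the planners is commonly written in the simplified form below; the symbols here are assumptions, and the original text's notation may differ.

```latex
N = \frac{D\,(1 + \alpha)}{S \cdot C}
```

with $N$ the number of kanban cards for a part type, $D$ the expected daily demand in units, $S$ the number of setups per day, $C$ the container capacity, and $\alpha$ the safety coefficient.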
where the safety coefficient represents the amount of WIP needed in the system.
Production planners assumed a safety coefficient that resulted in one day of WIP
for each part type. Additionally, they decided to use one setup per day for each
part type. Although this equation provides an estimate of the minimum number of
kanban cards, it does not address trigger values. Therefore, trigger values for the
Stage Two line were set at the expected number of containers consumed for each
part type in one day. The Toyota equation recommended using a total of 243
kanban cards. The details of the calculation are omitted for brevity. When evalu-
ated in the simulation model, this solution yielded a performance score of 35.140
(using the performance function defined in Section 11.7.2) based on four inde-
pendent simulation runs (replications).
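The evaluation step described here, scoring a candidate solution as the average performance over independent replications, can be sketched as follows. The function `run_simulation` is a hypothetical stand-in for the simulation model, and its toy scoring rule is invented for illustration only.

```python
# Sketch of scoring a candidate solution (kanban card counts and trigger
# values) as the mean performance over independent replications, as the
# text describes for the 35.140 score based on four runs.
import random

def run_simulation(solution, seed):
    """Placeholder for one replication of the simulation model."""
    random.seed(seed)
    # Toy surrogate: a base score perturbed by replication-to-replication noise.
    return 35.0 + random.uniform(-0.5, 0.5)

def evaluate(solution, replications=4):
    """Mean performance score over independent replications."""
    scores = [run_simulation(solution, seed=r) for r in range(replications)]
    return sum(scores) / len(scores)

score = evaluate({"stage1_cards": 100, "stage2_cards": 143, "trigger": 2})
```

Averaging over replications reduces the noise in the stochastic output, which is why the text reports scores from four independent runs rather than one.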
FIGURE 11.7
Solutions found by the optimization module are better than the solutions generated using the Toyota method. (Plot of measured response, roughly 33 to 37, against generation, 0 to 150.)
11.8 Summary
In recent years, major advances have been made in the development of user-
friendly simulation software packages. However, progress in developing sim-
ulation output analysis tools has been especially slow in the area of simulation
optimization because conducting simulation optimization with traditional tech-
niques has been as much an art as a science (Greenwood, Rees, and Crouch 1993).
The number of traditional techniques is so overwhelming that only individuals
with extensive backgrounds in statistics and optimization theory have realized
the benefits of integrating simulation and optimization concepts. Using
newer optimization techniques, it is now possible to narrow the gap with user-
friendly, yet powerful, tools that allow analysts to combine simulation and
optimization for improved decision support. SimRunner is one such tool.
Our purpose is not to argue that evolutionary algorithms are a panacea
for solving simulation optimization problems. Rather, it is to introduce
the reader to evolutionary algorithms by illustrating how they work and how to
use them for simulation optimization, and to point the reader to the wealth of
literature demonstrating that evolutionary algorithms are a viable choice for
reliable optimization.
There will always be debate about which techniques are best for simu-
lation optimization. Debate is welcome, as it deepens understanding of the
real issues and leads to the development of better solution procedures. It must be
remembered, however, that the practical issue is not whether an optimization tech-
nique guarantees to locate the optimal solution in the shortest amount of time
for all possible problems it may encounter; rather, it is whether the optimization
technique consistently finds solutions that are better than the
solutions analysts find on their own. Newer techniques such as evolution-
ary algorithms and scatter search meet this requirement because they have proved
robust in their ability to solve a wide variety of problems, and their ease of use
makes them a practical choice for simulation optimization today (Boesel et al.
2001; Brady and Bowden 2001).
References
Ackley, D. A Connectionist Machine for Genetic Hill Climbing. Boston, MA: Kluwer, 1987.
Akbay, K. “Using Simulation Optimization to Find the Best Solution.” IIE Solutions, May
1996, pp. 24–29.
Anonymous, “Lockheed Martin.” IIE Solutions, December, 1996, pp. SS48–SS49.
Azadivar, F. “A Tutorial on Simulation Optimization.” 1992 Winter Simulation Confer-
ence, Arlington, VA, ed. Swain, J., D. Goldsman, R. Crain and J. Wilson. Piscataway,
NJ: Institute of Electrical and Electronics Engineers, 1992, pp. 198–204.
Bäck, T.; T. Beielstein; B. Naujoks; and J. Heistermann. “Evolutionary Algorithms for
the Optimization of Simulation Models Using PVM.” Euro PVM 1995—Second
European PVM 1995, User’s Group Meeting. Hermes, Paris, ed. Dongarra, J., M.
Gengler, B. Tourancheau and X. Vigouroux, 1995, pp. 277–282.
Bäck, T., and H.-P. Schwefel. “An Overview of Evolutionary Algorithms for Parameter
Optimization.” Evolutionary Computation 1, no. 1 (1993), pp. 1–23.
Barton, R., and J. Ivey. “Nelder-Mead Simplex Modifications for Simulation Optimization.”
Management Science 42, no. 7 (1996), pp. 954–73.
Biethahn, J., and V. Nissen. “Combinations of Simulation and Evolutionary Algorithms in
Management Science and Economics.” Annals of Operations Research 52 (1994),
pp. 183–208.
Boesel, J.; Bowden, R. O.; Glover, F.; and J. P. Kelly. “Future of Simulation Optimization.”
Proceedings of the 2001 Winter Simulation Conference, 2001, pp. 1466–69.
Bowden, R. O. “Genetic Algorithm Based Machine Learning Applied to the Dynamic
Routing of Discrete Parts.” Ph.D. Dissertation, Department of Industrial Engineering,
Mississippi State University, 1992.
Bowden, R. “The Evolution of Manufacturing Control Knowledge Using Reinforcement
Learning.” 1995 Annual International Conference on Industry, Engineering, and
Management Systems, Cocoa Beach, FL, ed. G. Lee. 1995, pp. 410–15.
Bowden, R., and S. Bullington. “An Evolutionary Algorithm for Discovering Manufactur-
ing Control Strategies.” In Evolutionary Algorithms in Management Applications,
ed. Biethahn, J. and V. Nissen. Berlin: Springer, 1995, pp. 124–38.
Bowden, R., and S. Bullington. “Development of Manufacturing Control Strategies Using
Unsupervised Learning.” IIE Transactions 28 (1996), pp. 319–31.
Bowden, R.; J. Hall; and J. Usher. “Integration of Evolutionary Programming and Simula-
tion to Optimize a Pull Production System.” Computers and Industrial Engineering
31, no. 1/2 (1996), pp. 217–20.
Bowden, R.; R. Neppalli; and A. Calvert. “A Robust Method for Determining Good Com-
binations of Queue Priority Rules.” Fourth International Industrial Engineering Re-
search Conference, Nashville, TN, ed. Schmeiser, B. and R. Uzsoy. Norcross, GA:
IIE, 1995, pp. 874–80.
Harrell, Ghosh, and Bowden, Simulation Using ProModel, Second Edition. © The McGraw-Hill Companies, 2004.
SimRunner User’s Guide ProModel Edition. Ypsilanti, MI: Decision Science, Inc., 1996a.
SimRunner Online Software Help. Ypsilanti, MI: Decision Science, Inc., 1996b.
Stuckman, B.; G. Evans; and M. Mollaghasemi. “Comparison of Global Search Methods
for Design Optimization Using Simulation.” 1991 Winter Simulation Conference,
Phoenix, AZ, ed. Nelson, B., W. Kelton and G. Clark. Piscataway, NJ: Institute of
Electrical and Electronics Engineers, 1991, pp. 937–44.
Tompkins, G., and F. Azadivar. “Genetic Algorithms in Optimizing Simulated Systems.”
1995 Winter Simulation Conference, Arlington, VA, ed. Alexopoulos, C., K. Kang,
W. Lilegdon, and D. Goldsman. Piscataway, NJ: Institute of Electrical and Electron-
ics Engineers, 1995, pp. 757–62.
Usher, J., and R. Bowden. “The Application of Genetic Algorithms to Operation Sequenc-
ing for Use in Computer-Aided Process Planning.” Computers and Industrial Engi-
neering Journal 30, no. 4 (1996), pp. 999–1013.