Harrell–Ghosh–Bowden: Simulation Using ProModel, Second Edition. I. Study Chapters. 1. Introduction to Simulation. © The McGraw-Hill Companies.
CHAPTER 1
INTRODUCTION TO SIMULATION
1.1 Introduction
On March 19, 1999, the following story appeared in The Wall Street Journal:
Captain Chet Rivers knew that his 747-400 was loaded to the limit. The giant plane,
weighing almost 450,000 pounds by itself, was carrying a full load of passengers
and baggage, plus 400,000 pounds of fuel for the long flight from San
Francisco to Australia. As he revved his four engines for takeoff, Capt. Rivers
noticed that San Francisco’s famous fog was creeping in, obscuring the hills to the
north and west of the airport.
At full throttle the plane began to roll ponderously down the runway, slowly at
first but building up to flight speed well within normal limits. Capt. Rivers pulled
the throttle back and the airplane took to the air, heading northwest across the San
Francisco peninsula towards the ocean. It looked like the start of another routine
flight. Suddenly the plane began to shudder violently. Several loud explosions
shook the craft and smoke and flames, easily visible in the midnight sky,
illuminated the right wing. Although the plane was shaking so violently that it was
hard to read the instruments, Capt. Rivers was able to tell that the right inboard
engine was malfunctioning, backfiring violently. He immediately shut down the
engine, stopping the explosions and shaking.
However, this introduced a new problem. With two engines on the left wing at
full power and only one on the right, the plane was pushed into a right turn,
bringing it directly towards San Bruno Mountain, located a few miles northwest of
the airport. Capt. Rivers instinctively turned his control wheel to the left to bring
the plane back on course. That action extended the ailerons—control surfaces on
the trailing edges of the wings—to tilt the plane back to the left. However, it
also extended the
spoilers—panels on the tops of the wings—increasing drag and lowering lift. With
the nose still pointed up, the heavy jet began to slow. As the plane neared stall speed,
the control stick began to shake to warn the pilot to bring the nose down to gain air
speed. Capt. Rivers immediately did so, removing that danger, but now San Bruno
Mountain was directly ahead. Capt. Rivers was unable to see the mountain due to
the thick fog that had rolled in, but the plane’s ground proximity sensor sounded an
automatic warning, calling “terrain, terrain, pull up, pull up.” Rivers frantically
pulled back on the stick to clear the peak, but with the spoilers up and the plane
still in a skidding right turn, it was too late. The plane and its full load of 100
tons of fuel crashed with a sickening explosion into the hillside just above a
densely populated housing area.
“Hey Chet, that could ruin your whole day,” said Capt. Rivers’s supervisor, who
was sitting beside him watching the whole thing. “Let’s rewind the tape and see
what you did wrong.” “Sure Mel,” replied Chet as the two men stood up and
stepped outside the 747 cockpit simulator. “I think I know my mistake already. I
should have used my rudder, not my wheel, to bring the plane back on course. Say,
I need a breather after that experience. I’m just glad that this wasn’t the real thing.”
The incident above was never reported in the nation’s newspapers, even though
it would have been one of the most tragic disasters in aviation history, because it
never really happened. It took place in a cockpit simulator, a device which uses computer technology to predict and recreate an airplane’s behavior with gut-wrenching
realism.
The relief you undoubtedly felt to discover that this disastrous incident was
just a simulation gives you a sense of the impact that simulation can have in
averting real-world catastrophes. This story illustrates just one of the many ways simulation is being used to help minimize the risk of making costly and sometimes fatal mistakes in real life. Simulation technology is finding its way into an increasing number of applications ranging from training for aircraft pilots to the testing of new product prototypes. The one thing that these applications have in common is that they all provide a virtual environment that helps prepare for real-life situations, resulting in significant savings in time,
money, and even lives.
One area where simulation is finding increased application is in manufacturing and service system design and improvement. Its unique ability to accurately predict the performance of complex systems makes it ideally suited for systems planning. Just as a flight simulator reduces the risk of making costly errors in actual flight, system simulation reduces the risk of having systems that operate inefficiently or that fail to meet minimum performance requirements. While this may not be life-threatening to an individual, it certainly places a company (not to mention careers) in jeopardy.
In this chapter we introduce the topic of simulation and answer the
following questions:
• What is simulation?
• Why is simulation used?
• How is simulation performed?
• When and where should simulation be used?
FIGURE 1.1 Simulation provides animation capability.

FIGURE 1.2 Simulation provides a virtual method for doing system experimentation. (Diagram: concept, model, system.)

FIGURE 1.3 The process of simulation experimentation. (Flowchart: Start → Formulate a hypothesis → Develop a simulation model → Run simulation experiment → Hypothesis correct? If no, formulate a new hypothesis and repeat; if yes, End.)
not mean that there can be no uncertainty in the system. If random behavior can
be described using probability expressions and distributions, it can be simulated. It is only when it isn’t even possible to make reasonable assumptions of
how a system operates (because either no information is available or behavior is
totally erratic) that simulation (or any other analysis tool for that matter)
becomes useless. Likewise, one-time projects or processes that are never
repeated the same way twice are poor candidates for simulation. If the scenario
you are modeling is likely never going to happen again, it is of little benefit to
do a simulation.
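The point that described randomness is simulable can be sketched in a few lines of Python. This is a minimal illustration, not ProModel; the exponential service-time distribution and its 5-minute mean are assumed values chosen for the example.

```python
import random

random.seed(42)  # fixed seed so the experiment is repeatable

MEAN_SERVICE_TIME = 5.0  # minutes; an assumed value for illustration

# Draw 10,000 random service times from an exponential distribution
# and check that their average approaches the specified mean.
samples = [random.expovariate(1.0 / MEAN_SERVICE_TIME) for _ in range(10_000)]
average = sum(samples) / len(samples)
print(f"average sampled service time: {average:.2f} minutes")
```

Once behavior is captured this way, a model can draw as many random observations as an experiment needs, which is exactly what a simulation run does internally.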
Activities and events should be interdependent and variable. A system may have lots of activities, but if they never interfere with each other or are deterministic (that is, they have no variation), then using simulation is probably unnecessary. It isn’t the number of activities that makes a system difficult to analyze. It is the number of interdependent, random activities. The effect of simple interdependencies is easy to predict if there is no variability in the activities. Determining the flow rate for a system consisting of 10 processing activities is very straightforward if all activity times are constant and activities are never interrupted. Likewise, random activities that operate independently of each other are usually easy to analyze. For example, 10 machines operating in isolation from each other can be expected to produce at a rate that is based on the average cycle time of each machine less any anticipated downtime. It is the combination of interdependencies and random behavior that really produces the unpredictable results. Simpler analytical methods such as mathematical calculations and spreadsheet software become less adequate as the number of activities that are both interdependent and random increases. For this reason, simulation is primarily suited to systems involving both interdependencies and variability.
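The combined effect of interdependence and variability can be shown with a small Monte Carlo sketch (again Python rather than ProModel): a hypothetical two-station line with no buffer between the stations, where station 1 is blocked until station 2 takes its part. The exponential times with a mean of 1 minute are assumed for illustration. With constant times the line runs at nearly one part per minute; with variable times of the same mean, blocking and starving cut throughput by roughly a third.

```python
import random

def line_throughput(times1, times2):
    """Throughput of two stations in series with no buffer between them.

    Station 1 always has raw material, but it may release a finished part
    only when station 2 is free (blocking), and station 2 can start only
    when station 1 hands a part over (starving).
    """
    release1 = 0.0   # when station 1 last handed a part to station 2
    finish2 = 0.0    # when station 2 last finished a part
    for t1, t2 in zip(times1, times2):
        finish1 = release1 + t1          # station 1 processes the next part
        start2 = max(finish1, finish2)   # station 2 may still be busy
        release1 = start2                # station 1 is blocked until handoff
        finish2 = start2 + t2
    return len(times1) / finish2

random.seed(1)
n = 20_000
constant = line_throughput([1.0] * n, [1.0] * n)
variable = line_throughput(
    [random.expovariate(1.0) for _ in range(n)],
    [random.expovariate(1.0) for _ in range(n)],
)
print(f"constant times: {constant:.3f} parts/min, "
      f"variable times: {variable:.3f} parts/min")
```

Both lines have identical average cycle times; only the variability differs, yet the interdependent (blocking) line loses a substantial share of its capacity, which is the effect a spreadsheet average cannot reveal.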
The cost impact of the decision should be greater than the cost of doing the
simulation. Sometimes the impact of the decision itself is so insignificant that it
doesn’t warrant the time and effort to conduct a simulation. Suppose, for
example, you are trying to decide whether a worker should repair rejects as
they occur or wait until four or five accumulate before making repairs. If you are
certain that the next downstream activity is relatively insensitive to whether
repairs are done sooner rather than later, the decision becomes inconsequential
and simulation is a wasted effort.
The cost to experiment on the actual system should be greater than the cost
of simulation. While simulation avoids the time delay and cost associated with experimenting on the real system, in some situations it may actually be quicker
and more economical to experiment on the real system. For example, the
decision in a customer mailing process of whether to seal envelopes before or
after they are addressed can easily be made by simply trying each method and
comparing the results. The rule of thumb here is that if a question can be
answered through direct experimentation quickly, inexpensively, and with
minimal impact to the current operation, then don’t use simulation.
Experimenting on the actual system also eliminates some of the drawbacks
associated with simulation, such as proving model validity.
There may be other situations where simulation is appropriate independent
of the criteria just listed (see Banks and Gibson 1997). This is certainly true in
the
case of models built purely for visualization purposes. If you are trying to sell a
system design or simply communicate how a system works, a realistic animation
created using simulation can be very useful, even though nonbeneficial from an
analysis point of view.
• Systems engineering.
• Statistical analysis and design of experiments.
• Modeling principles and concepts.
• Basic programming and computer skills.
• Training on one or more simulation products.
• Familiarity with the system being investigated.
Experience has shown that some people learn simulation more rapidly and
become more adept at it than others. People who are good abstract thinkers yet
also pay close attention to detail seem to be the best suited for doing simulation.
Such individuals are able to see the forest while still keeping an eye on the trees
(these are people who tend to be good at putting together 1,000-piece puzzles).
They are able to quickly scope a project, gather the pertinent data, and get a useful model up and running without lots of starts and stops. A good modeler is somewhat of a sleuth, eager yet methodical and discriminating in piecing together all of the evidence that will help put the model pieces together.
If short on time, talent, resources, or interest, the decision maker need not
despair. Plenty of consultants who are professionally trained and experienced
can provide simulation services. A competitive bid will help get the best price,
but one should be sure that the individual assigned to the project has good
credentials. If the use of simulation is only occasional, relying on a consultant
may be the preferred approach.
FIGURE 1.4 Cost of making changes at subsequent stages of system development. (Graph: cost versus system stage, rising through the concept, design, installation, and operation stages.)

FIGURE 1.5 Comparison of cumulative system costs with and without simulation. (Graph: cost without simulation eventually exceeds cost with simulation.)
the short-term cost may be slightly higher due to the added labor and software
costs associated with simulation, the long-term costs associated with capital
investments and system operation are considerably lower due to better
efficiencies realized through simulation. Dismissing the use of simulation on the basis of sticker price is myopic and shows a lack of understanding of the long-term savings that come from having well-designed, efficiently operating systems.
Many examples can be cited to show how simulation has been used to avoid
costly errors in the start-up of a new system. Simulation prevented an
unnecessary expenditure when a Fortune 500 company was designing a facility
for producing and storing subassemblies and needed to determine the number of
containers required for holding the subassemblies. It was initially felt that
3,000 containers
were needed until a simulation study showed that throughput did not improve significantly when the number of containers was increased from 2,250 to
3,000. By purchasing 2,250 containers instead of 3,000, a savings of $528,375
was expected in the first year, with annual savings thereafter of over $200,000
due to the savings in floor space and storage resulting from having 750 fewer
containers (Law and McComas 1988).
Even if dramatic savings are not realized each time a model is built, simulation at least inspires confidence that a particular system design is capable of meeting required performance objectives and thus minimizes the risk often associated with new start-ups. The economic benefit associated with instilling confidence was evidenced when an entrepreneur, who was attempting to secure bank financing to start a blanket factory, used a simulation model to show the feasibility of the proposed factory. Based on the processing times and
equipment lists supplied by industry experts, the model showed that the
output projections in the business plan were well within the capability of the
proposed facility. Although unfamiliar with the blanket business, bank officials
felt more secure in agreeing to support the venture (Bateman et al. 1997).
Often simulation can help improve productivity by exposing ways of making better use of existing assets. By looking at a system holistically, long-standing problems such as bottlenecks, redundancies, and inefficiencies that previously went unnoticed start to become more apparent and can be eliminated.
“The trick is to find waste, or muda,” advises Shingo; “after all, the most
damaging kind of waste is the waste we don’t recognize” (Shingo 1992).
Consider the following actual examples where simulation helped uncover and
eliminate wasteful practices:
• GE Nuclear Energy was seeking ways to improve productivity without
investing large amounts of capital. Through the use of simulation, the
company was able to increase the output of highly specialized reactor
parts by 80 percent. The cycle time required for production of each part
was reduced by an average of 50 percent. These results were obtained
by running a series of models, each one solving production problems
highlighted by the previous model (Bateman et al. 1997).
• A large manufacturing company with stamping plants located throughout
the world produced stamped aluminum and brass parts on order according
to customer specifications. Each plant had from 20 to 50 stamping presses
that were utilized anywhere from 20 to 85 percent. A simulation study
was conducted to experiment with possible ways of increasing capacity
utilization. As a result of the study, machine utilization improved from an
average of 37 to 60 percent (Hancock, Dissen, and Merten 1977).
• A diagnostic radiology department in a community hospital was
modeled to evaluate patient and staff scheduling, and to assist in
expansion planning over the next five years. Analysis using the
simulation model enabled improvements to be discovered in operating
procedures that precluded the necessity for any major expansions in
department size (Perry and Baum 1976).
behavior such as the way entities arrive and their routings can be defined with little, if any, programming using the data entry tables that are provided. ProModel is used by thousands of professionals in manufacturing and service-related industries and is taught in hundreds of institutions of higher learning.
Part III contains case study assignments that can be used for student projects to apply the theory they have learned from Part I and to try out the skills they have acquired from doing the lab exercises (Part II). It is recommended that students be assigned at least one simulation project during the course. Preferably this is a project performed for a nearby company or institution so it will be meaningful. If such a project cannot be found, or as an additional practice exercise, the case studies provided should be useful. Student projects should be selected early in the course so that data gathering can get started and the project completed within the allotted time. The chapters in Part I are sequenced to parallel an actual simulation project.
1.11 Summary
Businesses today face the challenge of quickly designing and implementing complex production and service systems that are capable of meeting growing demands for quality, delivery, affordability, and service. With recent advances in computing and software technology, simulation tools are now available to help meet this challenge. Simulation is a powerful technology that is being used with increasing frequency to improve system performance by providing a way to make better design and management decisions. When used properly, simulation can reduce the risks associated with starting up a new operation or making improvements to existing operations.
Because simulation accounts for interdependencies and variability, it provides insights that cannot be obtained any other way. Where important system
decisions are being made of an operational nature, simulation is an invaluable
decision-making tool. Its usefulness increases as variability and interdependency
increase and the importance of the decision becomes greater.
Lastly, simulation actually makes designing systems fun! Not only can a designer try out new design concepts to see what works best, but the visualization gives the model a realism that is like watching an actual system in operation.
Through simulation, decision makers can play what-if games with a new system
or modified process before it actually gets implemented. This engaging process
stimulates creative thinking and results in good design decisions.
3. What are two specific questions that simulation might help answer in a
bank? In a manufacturing facility? In a dental office?
4. What are three advantages that simulation has over alternative
approaches to systems design?
5. Does simulation itself optimize a system design? Explain.
6. How does simulation follow the scientific method?
7. A restaurant gets extremely busy during lunch (11:00 A.M. to 2:00 P.M.)
and is trying to decide whether it should increase the number of waitresses
from two to three. What considerations would you look at to determine
whether simulation should be used to make this decision?
8. How would you develop an economic justification for using simulation?
9. Is a simulation exercise wasted if it exposes no problems in a system
design? Explain.
10. A simulation run was made showing that a modeled factory could produce
130 parts per hour. What information would you want to know about the
simulation study before placing any confidence in the results?
11. A PC board manufacturer has high work-in-process (WIP) inventories, yet
machines and equipment seem underutilized. How could simulation help
solve this problem?
12. How important is a statistical background for doing simulation?
13. How can a programming background be useful in doing simulation?
14. Why are good project management and communication skills important in
simulation?
15. Why should the process owner be heavily involved in a simulation
project?
16. For which of the following problems would simulation likely be useful?
a. Increasing the throughput of a production line.
b. Increasing the pace of a worker on an assembly line.
c. Decreasing the time that patrons at an amusement park spend
waiting in line.
d. Determining the percentage defective from a particular machine.
e. Determining where to place inspection points in a process.
f. Finding the most efficient way to fill out an order form.
References
Auden, Wystan Hugh, and L. Kronenberger. The Faber Book of Aphorisms. London: Faber and Faber, 1964.
Banks, J., and R. Gibson. “10 Rules for Determining When Simulation Is Not Appropriate.” IIE Solutions, September 1997, pp. 30–32.
Bateman, Robert E.; Royce O. Bowden; Thomas J. Gogg; Charles R. Harrell; and Jack
R. A. Mott. System Improvement Using Simulation. Utah: PROMODEL Corp., 1997.
Deming, W. E. Foundation for Management of Quality in the Western World. Paper read
at a meeting of the Institute of Management Sciences, Osaka, Japan, 24 July 1989.
Glenney, Neil E., and Gerald T. Mackulak. “Modeling & Simulation Provide Key to CIM
Implementation Philosophy.” Industrial Engineering, May 1985.
Hancock, Walton; R. Dissen; and A. Merten. “An Example of Simulation to Improve
Plant Productivity.” AIIE Transactions, March 1977, pp. 2–10.
Harrell, Charles R., and Donald Hicks. “Simulation Software Component Architecture
for Simulation-Based Enterprise Applications.” In Proceedings of the 1998 Winter
Simulation Conference, ed. D. J. Medeiros, E. F. Watson, J. S. Carson, and
M. S. Manivannan, pp. 1717–21. Institute of Electrical and Electronics Engineers,
Piscataway, New Jersey.
Harrington, H. James. Business Process Improvement. New York: McGraw-Hill, 1991.
Hoover, Stewart V., and Ronald F. Perry. Simulation: A Problem-Solving Approach.
Reading, MA: Addison-Wesley, 1989.
Law, A. M., and M. G. McComas. “How Simulation Pays Off.” Manufacturing Engineering, February 1988, pp. 37–39.
Mott, Jack, and Kerim Tumay. “Developing a Strategy for Justifying Simulation.” Industrial Engineering, July 1992, pp. 38–42.
Oxford American Dictionary. New York: Oxford University Press, 1980. Compiled by Eugene Ehrlich et al.
Perry, R. F., and R. F. Baum. “Resource Allocation and Scheduling for a Radiology
Department.” In Cost Control in Hospitals. Ann Arbor, MI: Health Administration
Press, 1976.
Rohrer, Matt, and Jerry Banks. “Required Skills of a Simulation Analyst.” IIE Solutions,
May 1998, pp. 7–23.
Schriber, T. J. “The Nature and Role of Simulation in the Design of Manufacturing
Systems.” Simulation in CIM and Artificial Intelligence Techniques, ed. J. Retti and
K. E. Wichmann. San Diego, CA: Society for Computer Simulation, 1987, pp. 5–8.
Shannon, Robert E. “Introduction to the Art and Science of Simulation.” In
Proceedings of the 1998 Winter Simulation Conference, ed. D. J. Medeiros,
E. F. Watson,
J. S. Carson, and M. S. Manivannan, pp. 7–14. Piscataway, NJ: Institute of Electrical
and Electronics Engineers.
Shingo, Shigeo. The Shingo Production Management System—Improving Process Functions. Trans. Andrew P. Dillon. Cambridge, MA: Productivity Press, 1992.
Solberg, James. “Design and Analysis of Integrated Manufacturing Systems.” In W. Dale
Compton. Washington, D.C.: National Academy Press, 1988, p. 4.
The Wall Street Journal, March 19, 1999. “United 747’s Near Miss Sparks a Widespread
Review of Pilot Skills,” p. A1.
Harrell–Ghosh–Bowden: Simulation Using ProModel, Second Edition. I. Study Chapters. 2. System Dynamics. © The McGraw-Hill Companies.
CHAPTER 2
SYSTEM DYNAMICS
2.1 Introduction
Knowing how to do simulation doesn’t make someone a good systems designer any more than knowing how to use a CAD system makes one a good product designer. Simulation is a tool that is useful only if one understands the nature of the problem to be solved. It is designed to help solve systemic problems that are operational in nature. Simulation exercises fail to produce useful results more often because of a lack of understanding of system dynamics than a lack of knowing how to use the simulation software. The challenge is in understanding how the system operates, knowing what you want to achieve with the system, and being able to identify key leverage points for best achieving desired objectives. To illustrate the nature of this challenge, consider the following actual scenario:
The pipe mill for the XYZ Steel Corporation was an important profit center, turning
steel slabs selling for under $200/ton into a product with virtually unlimited demand
selling for well over $450/ton. The mill took coils of steel of the proper thickness
and width through a series of machines that trimmed the edges, bent the steel
into a cylinder, welded the seam, and cut the resulting pipe into appropriate lengths,
all on a continuously running line. The line was even designed to weld the end of
one coil to the beginning of the next one “on the fly,” allowing the line to run
continually for days on end.
Unfortunately the mill was able to run only about 50 percent of its theoretical capacity over the long term, costing the company tens of millions of dollars a year in lost revenue. In an effort to improve the mill’s productivity, management studied
each step in the process. It was fairly easy to find the slowest step in the line, but
additional study showed that only a small percentage of lost production was due to
problems at this “bottleneck” operation. Sometimes a step upstream from the
bottleneck would
have a problem, causing the bottleneck to run out of work, or a downstream step would go down temporarily, causing work to back up and stop the bottleneck. Sometimes the bottleneck would get so far behind that there was no place to put incoming, newly made pipe. In this case the workers would stop the entire pipe-making process until the bottleneck was able to catch up. Often the bottleneck would then be idle waiting until the newly started line was functioning properly again and the new pipe had a chance to reach it. Sometimes problems at the bottleneck were actually caused by improper work at a previous location.
In short, there was no single cause for the poor productivity seen at this plant. Rather, several separate causes all contributed to the problem in complex ways. Management was at a loss to know which of several possible improvements (additional or faster capacity at the bottleneck operation, additional storage space between stations, better rules for when to shut down and start up the pipe-forming section of the mill, better quality control, or better training at certain critical locations) would have the most impact for the least cost. Yet the poor performance of the mill was costing enormous amounts of money. Management was under pressure to do something, but what should it be?
This example illustrates the nature and difficulty of the decisions that an
operations manager faces. Managers need to make decisions that are the “best”
in some sense. To do so, however, requires that they have clearly defined goals
and understand the system well enough to identify cause-and-effect
relationships.
While every system is different, just as every product design is different,
the basic elements and types of relationships are the same. Knowing how the
elements of a system interact and how overall performance can be improved is essential to the effective use of simulation. This chapter reviews basic system
dynamics and answers the following questions:
• What is a system?
• What are the elements of a system?
• What makes systems so complex?
• What are useful system metrics?
• What is a systems approach to systems planning?
• How do traditional systems analysis techniques compare with simulation?
2.2 System Definition
We live in a society that is composed of complex, human-made systems that
we depend on for our safety, convenience, and livelihood. Routinely we rely on
transportation, health care, production, and distribution systems to provide
needed goods and services. Furthermore, we place high demands on the quality,
convenience, timeliness, and cost of the goods and services that are provided
by these systems. Remember the last time you were caught in a traffic jam, or
waited for what seemed like an eternity in a restaurant or doctor’s office?
Contrast that experience with the satisfaction that comes when you find a
store that sells quality merchandise at discount prices or when you locate a
health care organization that
2.3 System Elements
From a simulation perspective, a system can be said to consist of entities, activities, resources, and controls (see Figure 2.1). These elements define the who, what, where, when, and how of entity processing. This model for describing a

FIGURE 2.1 (Diagram: resources and controls acting on the system.)
2.3.1 Entities
Entities are the items processed through the system such as products, customers,
and documents. Different entities may have unique characteristics such as cost,
shape, priority, quality, or condition. Entities may be further subdivided into the
following types:
• Human or animate (customers, patients, etc.).
• Inanimate (parts, documents, bins, etc.).
• Intangible (calls, electronic mail, etc.).
For most manufacturing and service systems, the entities are discrete items. This is the case for discrete part manufacturing and is certainly the case for nearly all service systems that process customers, documents, and others. For some production systems, called continuous systems, a nondiscrete substance is processed rather than discrete entities. Examples of continuous systems are oil refineries and paper mills.
2.3.2 Activities
Activities are the tasks performed in the system that are either directly or
indirectly involved in the processing of entities. Examples of activities include
servicing a customer, cutting a part on a machine, or repairing a piece of equipment. Activities usually consume time and often involve the use of resources.
Activities may be classified as
• Entity processing (check-in, treatment, inspection, fabrication, etc.).
• Entity and resource movement (forklift travel, riding in an elevator, etc.).
• Resource adjustments, maintenance, and repairs (machine setups, copy
machine repair, etc.).
2.3.3 Resources
Resources are the means by which activities are performed. They provide the
supporting facilities, equipment, and personnel for carrying out activities. While
resources facilitate entity processing, inadequate resources can constrain
processing by limiting the rate at which processing can take place.
Resources have characteristics such as capacity, speed, cycle time, and
reliability. Like entities, resources can be categorized as
• Human or animate (operators, doctors, maintenance personnel, etc.).
• Inanimate (equipment, tooling, floor space, etc.).
• Intangible (information, electrical power, etc.).
2.3.4 Controls
Controls dictate how, when, and where activities are performed. Controls impose
order on the system. At the highest level, controls consist of schedules, plans,
and policies. At the lowest level, controls take the form of written procedures
and machine control logic. At all levels, controls provide the information and
decision logic for how the system should operate. Examples of controls include
• Routing sequences.
• Production plans.
• Work schedules.
• Task prioritization.
• Control software.
• Instruction sheets.
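One way to make these four element types concrete is a toy data-structure sketch in Python. The class names and fields below are illustrative assumptions for this example, not ProModel constructs.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Entity:
    """An item processed through the system (product, customer, document)."""
    name: str
    priority: int = 0

@dataclass
class Resource:
    """A means by which activities are performed (operator, machine)."""
    name: str
    capacity: int = 1

@dataclass
class Activity:
    """A task performed on entities, consuming time and possibly a resource."""
    name: str
    minutes: float
    resource: Optional[Resource] = None

@dataclass
class Control:
    """Decision logic dictating how, when, and where activities occur."""
    name: str
    routing: list = field(default_factory=list)

# A tiny system description: a patient checked in by a clerk,
# routed by a simple control rule.
clerk = Resource("clerk")
check_in = Activity("check-in", minutes=3.0, resource=clerk)
routing = Control("front-desk routing", routing=["check-in", "treatment"])
patient = Entity("patient", priority=1)
print(patient.name, "->", " -> ".join(routing.routing))
```

Separating the four types this way mirrors the who/what/where/when/how decomposition above: entities flow, activities consume time, resources constrain capacity, and controls supply the routing logic.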
While the sheer number of elements in a system can stagger the mind (the
number of different entities, activities, resources, and controls can easily exceed
100), the interactions of these elements are what make systems so complex and
These two factors characterize virtually all human-made systems and make system behavior difficult to analyze and predict. As shown in Figure 2.2, the degree of analytical difficulty increases exponentially as the number of interdependencies and random variables increases.
2.4.1 Interdependencies
Interdependencies cause the behavior of one element to affect other elements in
the system. For example, if a machine breaks down, repair personnel are put into
action while downstream operations become idle for lack of parts. Upstream
operations may even be forced to shut down due to a logjam in the entity flow
causing a blockage of activities. Another place where this chain reaction or
domino effect manifests itself is in situations where resources are shared
between
FIGURE 2.2 Analytical difficulty as a function of the number of interdependencies and random variables. (Graph: degree of analytical difficulty versus number of interdependencies and random variables.)
Chapter 2 System Dynamics 29
two or more activities. A doctor treating one patient, for example, may be
unable to immediately respond to another patient needing his or her attention.
This delay in response may also set other forces in motion.
It should be clear that the complexity of a system has less to do with
the number of elements in the system than with the number of interdependent
relationships. Even interdependent relationships can vary in degree, causing
more or less impact on overall system behavior. System interdependency may
be either tight or loose depending on how closely elements are linked.
Elements that are tightly coupled have a greater impact on system operation
and performance than elements that are only loosely connected. When an
element such as a worker or machine is delayed in a tightly coupled system, the
impact is immediately felt by other elements in the system and the entire
process may be brought to a screeching halt.
In a loosely coupled system, activities have only a minor, and often delayed,
impact on other elements in the system. Systems guru Peter Senge (1990) notes
that for many systems, “Cause and effect are not closely related in time and
space.” Sometimes the distance in time and space between cause-and-effect
relationships becomes quite sizable. If enough reserve inventory has been
stockpiled, a truckers’ strike cutting off the delivery of raw materials to a
transmission plant in one part of the world may not affect automobile assembly
in another part of the world for weeks. Cause-and-effect relationships are like a
ripple of water that diminishes in impact as the distance in time and space
increases.
Obviously, the preferred approach to dealing with interdependencies is to
eliminate them altogether. Unfortunately, this is not entirely possible for most
situations and actually defeats the purpose of having systems in the first place.
The whole idea of a system is to achieve a synergy that otherwise would be
unattainable if every component were to function in complete isolation. Several
methods are used to decouple system elements or at least isolate their influence
so disruptions are not felt so easily. These include providing buffer inventories,
implementing redundant or backup measures, and dedicating resources to
single tasks. The downside to these mitigating techniques is that they often lead to
excessive inventories and underutilized resources. The point to be made here
is that interdependencies, though they may be minimized somewhat, are
simply a fact of life and are best dealt with through effective coordination and
management.
2.4.2 Variability
Variability is a characteristic inherent in any system involving humans and
machinery. Uncertainty in supplier deliveries, random equipment failures,
unpredictable absenteeism, and fluctuating demand all combine to create havoc in
planning system operations. Variability compounds the already unpredictable
effect of interdependencies, making systems even more complex and
unpredictable. Variability propagates in a system so that “highly variable
outputs from one workstation become highly variable inputs to another”
(Hopp and Spearman 2000).
Table 2.1 identifies the types of random variability that are typical of most
manufacturing and service systems.

TABLE 2.1  Types of random variability

Type of Variability   Examples
Activity times        Operation times, repair times, setup times, move times
Decisions             To accept or reject a part, where to direct a particular customer, which task to perform next
Quantities            Lot sizes, arrival quantities, number of workers absent
Event intervals       Time between arrivals, time between equipment failures
Attributes            Customer preference, part size, skill level
The tendency in systems planning is to ignore variability and calculate
system capacity and performance based on average values. Many commercial
scheduling packages such as MRP (material requirements planning) software
work this way. Ignoring variability distorts the true picture and leads to
inaccurate performance predictions. Designing systems based on average
requirements is like deciding whether to wear a coat based on the average
annual temperature or prescribing the same eyeglasses for everyone based on
average eyesight. Adults have been known to drown in water that was only
four feet deep—on the average! Wherever variability occurs, an attempt should
be made to describe the nature or pattern of the variability and assess the range
of the impact that variability might have on system performance.
Perhaps the most illustrative example of the impact that variability can have
on system behavior is the simple situation where entities enter into a single
queue to wait for a single server. An example of this might be customers
lining up in front of an ATM. Suppose that the time between customer arrivals
is exponentially distributed with an average time of one minute and that they
take an average time of one minute, exponentially distributed, to transact their
business. In queuing theory, this is called an M/M/1 queuing system. If we
calculate system performance based solely on average time, there will never
be any customers waiting in the queue: every minute a customer arrives, the
previous customer finishes his or her transaction. Now if we calculate the
number of customers waiting in line, taking into account the variation, we will
discover that the waiting line grows to infinity (the technical term is that the
system “explodes”). Who would guess that in a situation involving only one
interdependent relationship, variation alone would make the difference
between zero items waiting in a queue and an infinite number in the queue?
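A few lines of code make this point concrete. The sketch below is a minimal illustration (not ProModel code, and the function name is my own) that uses the Lindley recursion, Wn+1 = max(0, Wn + Sn − An+1), to track each customer's waiting time in exactly this M/M/1 situation:

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

def avg_wait_mm1(n_customers, mean_interarrival, mean_service):
    """Average waiting time via the Lindley recursion:
    W[n+1] = max(0, W[n] + S[n] - A[n+1])."""
    wait = 0.0
    total_wait = 0.0
    for _ in range(n_customers):
        total_wait += wait
        service = random.expovariate(1.0 / mean_service)
        interarrival = random.expovariate(1.0 / mean_interarrival)
        wait = max(0.0, wait + service - interarrival)
    return total_wait / n_customers

# Average-value reasoning predicts zero waiting; the simulation does not.
print(avg_wait_mm1(10_000, 1.0, 1.0))
```

With arrival and service rates equal (ρ = 1), the average wait keeps growing as more customers are simulated, which is the "explosion" the text describes; shortening the mean service time below the mean interarrival time makes the average settle to a finite value.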
By all means, variability should be reduced and even eliminated wherever
possible. System planning is much easier if you don’t have to contend with it.
Where it is inevitable, however, simulation can help predict the impact it will
have on system performance. Likewise, simulation can help identify the
degree of improvement that can be realized if variability is reduced or even
eliminated. For
example, it can tell you how much reduction in overall flow time and flow time
variation can be achieved if operation time variation can be reduced by, say,
20 percent.
• Variance—the degree of fluctuation that can and often does occur in any
of the preceding metrics. Variance introduces uncertainty, and therefore
risk, in achieving desired performance goals. Manufacturers and service
providers are often interested in reducing variance in delivery and service
times. For example, cycle times and throughput rates are going to have
some variance associated with them. Variance is reduced by controlling
activity times, improving resource reliability, and adhering to schedules.
These metrics can be given for the entire system, or they can be broken down by
individual resource, entity type, or some other characteristic. By relating these
metrics to other factors, additional meaningful metrics can be derived that are
useful for benchmarking or other comparative analysis. Typical relational
metrics include minimum theoretical flow time divided by actual flow time
(flow time efficiency), cost per unit produced (unit cost), annual inventory sold
divided by average inventory (inventory turns or turnover ratio), or units
produced per cost or labor input (productivity).
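These relational metrics are simple ratios; the short sketch below illustrates them with made-up numbers (all figures hypothetical):

```python
# All figures below are hypothetical, for illustration only.
min_theoretical_flow_time = 2.0    # hours per unit
actual_flow_time = 8.0             # hours per unit
total_cost = 50_000.0              # production cost for the period
units_produced = 1_000
annual_inventory_sold = 120_000.0  # value of inventory sold per year
average_inventory = 20_000.0       # average inventory value on hand

flow_time_efficiency = min_theoretical_flow_time / actual_flow_time
unit_cost = total_cost / units_produced
inventory_turns = annual_inventory_sold / average_inventory

print(flow_time_efficiency, unit_cost, inventory_turns)  # 0.25 50.0 6.0
```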
often based on whether the cost to implement a change produces a higher return
in performance.
FIGURE 2.3  Cost curves showing optimum number of resources to minimize total cost. (Total cost is the sum of resource costs, which rise with the number of resources, and waiting costs, which fall as resources are added; the minimum of the total cost curve marks the optimum.)
As shown in Figure 2.3, the number of resources at which the sum of the
resource costs and waiting costs is at a minimum is the optimum number of
resources to have. It also becomes the optimum acceptable waiting time.
In systems design, arriving at an optimum system design is not always
realistic, given the almost endless configurations that are sometimes possible and
limited time that is available. From a practical standpoint, the best that can be
expected is a near optimum solution that gets us close enough to our objective,
given the time constraints for making the decision.
FIGURE 2.4  Four-step iterative approach to systems improvement: (1) identify problems and opportunities; (2) develop alternative solutions; (3) evaluate the solutions; (4) select and implement the best solution.
identify possible areas of focus and leverage points for applying a solution.
Techniques such as cause-and-effect analysis and Pareto analysis are useful here.
Once a problem or opportunity has been identified and key decision variables
isolated, alternative solutions can be explored. This is where most of the design
and engineering expertise comes into play. Knowledge of best practices for
common types of processes can also be helpful. The designer should be open to all
possible alternative feasible solutions so that the best possible solutions don’t get
overlooked.
Generating alternative solutions requires creativity as well as organizational
and engineering skills. Brainstorming sessions, in which designers exhaust every
conceivably possible solution idea, are particularly useful. Designers should use
every stretch of the imagination and not be stifled by conventional solutions
alone. The best ideas come when system planners begin to think innovatively
and break from traditional ways of doing things. Simulation is particularly
helpful in this process in that it encourages thinking in radical new ways.
these techniques still can provide rough estimates but fall short in producing the
insights and accurate answers that simulation provides. Systems implemented
using these techniques usually require some adjustments after implementation to
compensate for inaccurate calculations. For example, if after implementing a
system it is discovered that the number of resources initially calculated is
insufficient to meet processing requirements, additional resources are added.
This adjustment can create extensive delays and costly modifications if special
personnel training or custom equipment is involved. As a precautionary
measure, a safety factor is sometimes added to resource and space calculations
to ensure they are adequate. Overdesigning a system, however, also can be
costly and wasteful.
As system interdependency and variability increase, not only does system
performance decrease, but the ability to accurately predict system performance
decreases as well (Lloyd and Melton 1997). Simulation enables a planner to
accurately predict the expected performance of a system design and ultimately
make better design decisions.
Systems analysis tools, in addition to simulation, include simple
calculations, spreadsheets, operations research techniques (such as linear
programming and queuing theory), and special computerized tools for
scheduling, layout, and so forth. While these tools can provide quick and
approximate solutions, they tend to make oversimplifying assumptions, perform
only static calculations, and are limited to narrow classes of problems.
Additionally, they fail to fully account for interdependencies and variability of
complex systems and therefore are not as accurate as simulation in predicting
complex system performance (see Figure 2.5). They all lack the power,
versatility, and visual appeal of simulation. They do provide quick solutions,
however, and for certain situations produce adequate results. They are
important to cover here, not only because they sometimes provide a good
alternative to simulation, but also because they can complement simulation by
providing initial design estimates for input to the simulation model. They also
can be useful to help validate the results of a simulation by comparing them with
results obtained using an analytic model.

FIGURE 2.5  Simulation improves performance predictability. (The chart contrasts system predictability with simulation, near 100 percent, and without simulation, near 50 percent.)
2.9.2 Spreadsheets
Spreadsheet software comes in handy when calculations, sometimes involving
hundreds of values, need to be made. Manipulating rows and columns of
numbers on a computer is much easier than doing it on paper, even with a
calculator handy. Spreadsheets can be used to perform rough-cut analysis
such as calculating average throughput or estimating machine requirements.
The drawback to spreadsheet software is the inability (or, at least, limited
ability) to include variability in activity times, arrival rates, and so on, and to
account for the effects of interdependencies.
What-if experiments can be run on spreadsheets based on expected
values (average customer arrivals, average activity times, mean time between
equipment failures) and simple interactions (activity A must be performed
before activity B). This type of spreadsheet simulation can be very useful for
getting rough performance estimates. For some applications with little
variability and component interaction, a spreadsheet simulation may be
adequate. However, calculations based on only average values and
oversimplified interdependencies potentially can be misleading and result in
poor decisions. As one ProModel user reported, “We just completed our final
presentation of a simulation project and successfully saved approximately
$600,000. Our management was prepared to purchase an additional
overhead crane based on spreadsheet analysis. We subsequently built a
ProModel simulation that demonstrated an additional crane will not be
necessary. The simulation also illustrated some potential problems that were not
readily apparent with spreadsheet analysis.”
Another weakness of spreadsheet modeling is the fact that all behavior is
assumed to be period-driven rather than event-driven. Perhaps you have tried to
figure out how your bank account balance fluctuated during a particular period
when all you had to go on was your monthly statements. Using ending balances
does not reflect changes as they occurred during the period. You can know the
current state of the system at any point in time only by updating the state
variables of the system each time an event or transaction occurs. When it comes
to dynamic models, spreadsheet simulation suffers from the “curse of
dimensionality” because the size of the model becomes unmanageable.
Prescriptive Techniques
Prescriptive OR techniques provide an optimum solution to a problem, such as
the optimum amount of resource capacity to minimize costs, or the optimum
product mix that will maximize profits. Examples of prescriptive OR
optimization techniques include linear programming and dynamic programming. These
techniques are generally applicable when only a single goal is desired for
minimizing or maximizing some objective function—such as maximizing profits or
minimizing costs.
Because optimization techniques are generally limited to optimizing for a
single goal, secondary goals that may also be important get sacrificed.
Additionally, these techniques do not allow random variables to be defined as input data,
thereby forcing the analyst to use average process times and arrival rates that
can produce misleading results. They also usually assume that conditions are
constant over the period of study. In contrast, simulation is capable of
analyzing much more complex relationships and time-varying circumstances.
With optimization capabilities now provided in simulation, simulation software
has even taken on a prescriptive role.
Descriptive Techniques
Descriptive techniques such as queuing theory are static analysis techniques that
provide good estimates for basic problems such as determining the expected
average number of entities in a queue or the average waiting times for entities in
a queuing system. Queuing theory is of particular interest from a simulation
perspective because it looks at many of the same system characteristics and
issues that are addressed in simulation.
Queuing theory is essentially the science of waiting lines (in the United
Kingdom, people wait in queues rather than lines). A queuing system consists
of
one or more queues and one or more servers (see Figure 2.6). Entities, referred
to in queuing theory as the calling population, enter the queuing system and
either are immediately served if a server is available or wait in a queue until a
server becomes available. Entities may be serviced using one of several
queuing disciplines: first-in, first-out (FIFO); last-in, first-out (LIFO); priority;
and others. The system capacity, or number of entities allowed in the system at
any one time, may be either finite or, as is often the case, infinite. Several
different entity queuing behaviors can be analyzed, such as balking (rejecting
entry), reneging (abandoning the queue), or jockeying (switching queues).
Different interarrival time distributions (such as constant or exponential) may
also be analyzed, coming from either a finite or infinite population. Service
times may also follow one of several distributions such as exponential or
constant.
Kendall (1953) devised a simple system for classifying queuing systems in
the form A/B/s, where A is the type of interarrival distribution, B is the type of
service time distribution, and s is the number of servers. Typical distribution
types for A and B are
M for Markovian or exponential distribution
G for a general distribution
D for a deterministic or constant value
An M/D/1 queuing system, for example, is a system in which interarrival times
are exponentially distributed, service times are constant, and there is a single
server.
The arrival rate in a queuing system is usually represented by the Greek
letter lambda (λ) and the service rate by the Greek letter mu (µ). The mean
interarrival time then becomes 1/λ and the mean service time is 1/µ. A traffic
intensity factor λ/µ is a parameter used in many of the queuing equations and
is represented by the Greek letter rho (ρ).
Common performance measures of interest in a queuing system are based
on steady-state or long-term expected values and include
L = expected number of entities in the system (number in the queue and in
service)
Lq = expected number of entities in the queue (queue length)
W = expected time each entity spends in the system, including waiting and service time
Wq = expected time each entity spends waiting in the queue
Pn = probability of exactly n entities in the system

For an M/M/1 queuing system, these steady-state measures are

L = ρ/(1 − ρ) = λ/(µ − λ)
Lq = L − ρ = ρ²/(1 − ρ)
W = 1/(µ − λ)
Wq = λ/[µ(µ − λ)]
Pn = (1 − ρ)ρⁿ for n = 0, 1, . . .
If either the expected number of entities in the system or the expected waiting
time is known, the other can be calculated easily using Little’s law (1961):
L = λW
Little’s law also can be applied to the queue length and waiting time:
L q = λWq
Example: Suppose customers arrive to use an automatic teller machine (ATM) with
an interarrival time of 3 minutes, exponentially distributed, and spend an average
of 2.4 minutes, exponentially distributed, at the machine. What is the expected number
of customers in the system and in the queue? What is the expected waiting time for
customers in the system and in the queue?

λ = 20 per hour
µ = 25 per hour
ρ = λ/µ = .8
Solving for L:

L = λ/(µ − λ) = 20/(25 − 20) = 20/5 = 4

Solving for Lq:

Lq = ρ²/(1 − ρ) = (.8)²/(1 − .8) = .64/.2 = 3.2
Solving for W using Little’s formula:

W = L/λ = 4/20 = .20 hrs = 12 minutes

Solving for Wq using Little’s formula:

Wq = Lq/λ = 3.2/20 = .16 hrs = 9.6 minutes
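The hand calculations above are easy to script. Below is a minimal sketch of the M/M/1 formulas (the function name is my own):

```python
def mm1_metrics(lam, mu):
    """Steady-state M/M/1 measures; lam is the arrival rate and
    mu the service rate, both in entities per unit time."""
    rho = lam / mu                # traffic intensity
    L = lam / (mu - lam)          # expected number in system
    Lq = rho ** 2 / (1 - rho)     # expected number in queue
    W = 1 / (mu - lam)            # expected time in system
    Wq = lam / (mu * (mu - lam))  # expected time in queue
    return L, Lq, W, Wq

# ATM example: lambda = 20 per hour, mu = 25 per hour
L, Lq, W, Wq = mm1_metrics(20, 25)
print(round(L, 2), round(Lq, 2), round(W * 60, 2), round(Wq * 60, 2))
# 4.0 3.2 12.0 9.6  (times converted to minutes)
```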
Descriptive OR techniques such as queuing theory are useful for the most
basic problems, but as systems become even moderately complex, the problems
get very complicated and quickly become mathematically intractable. In
contrast, simulation provides close estimates for even the most complex
systems (assuming the model is valid). In addition, the statistical output of
simulation is not limited to only one or two metrics but instead provides
information on all performance measures. Furthermore, while OR techniques
give only average performance measures, simulation can generate detailed
time-series data and histograms providing a complete picture of performance
over time.
2.10 Summary
An understanding of system dynamics is essential to using any tool for planning
system operations. Manufacturing and service systems consist of interrelated
elements (personnel, equipment, and so forth) that interactively function to
produce a specified outcome (an end product, a satisfied customer, and so on).
Systems are made up of entities (the objects being processed), resources (the
personnel, equipment, and facilities used to process the entities), activities
(the process steps), and controls (the rules specifying the who, what, where,
when, and how of entity processing).
The two characteristics of systems that make them so difficult to analyze are
interdependencies and variability. Interdependencies cause the behavior of one
element to affect other elements in the system either directly or indirectly.
Variability compounds the effect of interdependencies in the system, making system
behavior nearly impossible to predict without the use of simulation.
The variables of interest in systems analysis are decision, response, and
state variables. Decision variables define how a system works; response
variables indicate how a system performs; and state variables indicate system
conditions at specific points in time. System performance metrics or response
variables are generally time, utilization, inventory, quality, or cost related.
Improving system performance requires the correct manipulation of decision
variables. System optimization seeks to find the best overall setting of
decision variable values that maximizes or minimizes a particular response
variable value.
Given the complex nature of system elements and the requirement to make
good design decisions in the shortest time possible, it is evident that simulation
can play a vital role in systems planning. Traditional systems analysis techniques
are effective in providing quick but often rough solutions to dynamic systems
problems. They generally fall short in their ability to deal with the complexity
and dynamically changing conditions in manufacturing and service systems.
Simulation is capable of imitating complex systems of nearly any size and to
nearly any level of detail. It gives accurate estimates of multiple performance
metrics and leads designers toward good design decisions.
References
Blanchard, Benjamin S. System Engineering Management. New York: John Wiley & Sons, 1991.
Hopp, Wallace J., and M. Spearman. Factory Physics. New York: Irwin/McGraw-Hill, 2000, p. 282.
Kendall, D. G. “Stochastic Processes Occurring in the Theory of Queues and Their Analysis by the Method of Imbedded Markov Chains.” Annals of Mathematical Statistics 24 (1953), pp. 338–54.
Kofman, Fred, and P. Senge. Communities of Commitment: The Heart of Learning Organizations. Sarita Chawla and John Renesch (eds.). Portland, OR: Productivity Press, 1995.
Law, Averill M., and David W. Kelton. Simulation Modeling and Analysis. New York: McGraw-Hill, 2000.
Little, J. D. C. “A Proof for the Queuing Formula: L = λW.” Operations Research 9, no. 3 (1961), pp. 383–87.
Lloyd, S., and K. Melton. “Using Statistical Process Control to Obtain More Precise Distribution Fitting Using Distribution Fitting Software.” Simulators International XIV 29, no. 3 (April 1997), pp. 193–98.
Senge, Peter. The Fifth Discipline. New York: Doubleday, 1990.
Simon, Herbert A. Models of Man. New York: John Wiley & Sons, 1957, p. 198.
Harrell, Ghosh, and Bowden: Simulation Using ProModel, Second Edition. I. Study Chapters. 3. Simulation Basics. © The McGraw-Hill Companies.

C H A P T E R

3 SIMULATION BASICS
3.1 Introduction
Simulation is much more meaningful when we understand what it is actually
doing. Understanding how simulation works helps us to know whether we are
applying it correctly and what the output results mean. Many books have been
written that give thorough and detailed discussions of the science of simulation
(see Banks et al. 2001; Hoover and Perry 1989; Law and Kelton 2000; Pooch
and Wall 1993; Ross 1990; Shannon 1975; Thesen and Travis 1992; and
Widman, Loparo, and Nielsen 1989). This chapter attempts to summarize the
basic technical issues related to simulation that are essential to understand in
order to get the greatest benefit from the tool. The chapter discusses the different
types of simulation and how random behavior is simulated. A spreadsheet
simulation example is given in this chapter to illustrate how various techniques
are combined to simulate the behavior of a common system.
We will look at what the first two characteristics mean in this chapter and focus on
what the third characteristic means in Chapter 4.
FIGURE 3.1  Examples of (a) a deterministic simulation and (b) a stochastic simulation. (In (a), constant inputs produce constant outputs; in (b), random inputs produce random outputs.)
[Figure: probability distribution plots f(x) over x = 0 to 7 for (a) a discrete value and (b) a continuous value.]
FIGURE 3.3  The uniform(0,1) distribution of a random number generator.

f(x) = 1 for 0 < x < 1; 0 elsewhere
Mean = µ = 1/2
Variance = σ² = 1/12
where the constant a is called the multiplier, the constant c the increment, and
the constant m the modulus (Law and Kelton 2000). The user must provide a
seed or starting value, denoted Z0, to begin generating the sequence of integer
values. Z0, a, c, and m are all nonnegative integers. The value of Zi is computed
by dividing (aZi−1 + c) by m and setting Zi equal to the remainder part of the division, which
is the result returned by the mod function. Therefore, the Zi values are bounded
by 0 ≤ Zi ≤ m − 1 and are uniformly distributed in the discrete case.
However, we desire the continuous version of the uniform distribution with
values ranging between a low of zero and a high of one, which we will denote
as Ui for i = 1, 2, 3, . . . . Accordingly, the value of Ui is computed by dividing
Zi by m.
In a moment, we will consider some requirements for selecting the values
for a, c, and m to ensure that the random number generator produces a long
sequence of numbers before it begins to repeat them. For now, however, let’s
assign the following values a = 21, c = 3, and m = 16 and generate a few
pseudo-random numbers. Table 3.1 contains a sequence of 20 random numbers
generated from the recursive formula

Zi = (21Zi−1 + 3) mod(16)
An integer value of 13 was somewhat arbitrarily selected between 0 and
m − 1 = 16 − 1 = 15 as the seed (Z0 = 13) to begin generating the sequence of
random numbers in Table 3.1.

TABLE 3.1
i    21Zi−1 + 3    Zi    Ui = Zi/16
0 13
1 276 4 0.2500
2 87 7 0.4375
3 150 6 0.3750
4 129 1 0.0625
5 24 8 0.5000
6 171 11 0.6875
7 234 10 0.6250
8 213 5 0.3125
9 108 12 0.7500
10 255 15 0.9375
11 318 14 0.8750
12 297 9 0.5625
13 192 0 0.0000
14 3 3 0.1875
15 66 2 0.1250
16 45 13 0.8125
17 276 4 0.2500
18 87 7 0.4375
19 150 6 0.3750
20 129 1 0.0625
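The recursion behind Table 3.1 takes only a few lines to reproduce; a sketch:

```python
def lcg(z, a=21, c=3, m=16):
    """One step of the linear congruential generator Zi = (a*Z(i-1) + c) mod m."""
    return (a * z + c) % m

z = 13           # seed Z0, as in Table 3.1
sequence = []
for _ in range(20):
    z = lcg(z)
    sequence.append(z)

print(sequence[:4])                      # [4, 7, 6, 1], matching the table
print([zi / 16 for zi in sequence[:4]])  # [0.25, 0.4375, 0.375, 0.0625]
```

Note that the 17th value repeats the first: with m = 16, this small generator cycles after at most 16 numbers, which is why the table's last rows mirror its first.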
Following this guideline, the LCG can achieve a full cycle length of over 2.1
billion (2³¹ to be exact) random numbers.
Frequently, the long sequence of random numbers is subdivided into smaller
segments. These subsegments are referred to as streams. For example, Stream 1
could begin with the random number in the first position of the sequence and
continue down to the random number in the 200,000th position of the sequence.
Stream 2, then, would start with the random number in the 200,001st position of
the sequence and end at the 400,000th position, and so on. Using this approach,
each type of random event in the simulation model can be controlled by a unique
stream of random numbers. For example, Stream 1 could be used to generate the
arrival pattern of cars to a restaurant’s drive-through window and Stream 2 could
be used to generate the time required for the driver of the car to place an order.
This assumes that no more than 200,000 random numbers are needed to simulate
each type of event. The practical and statistical advantages of assigning unique
streams to each type of event in the model are described in Chapter 10.
To subdivide the generator’s sequence of random numbers into streams, you
first need to decide how many random numbers to place in each stream. Next,
you begin generating the entire sequence of random numbers (cycle length)
produced by the generator and recording the Zi values that mark the beginning of
each stream. Therefore, each stream has its own starting or seed value. When
using the random number generator to drive different events in a simulation
model, the previously generated random number from a particular stream is
used as input to the generator to generate the next random number from that
stream. For convenience, you may want to think of each stream as a separate
random number generator to be used in different places in the model. For
example, see Figure 10.5 in Chapter 10.
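Using the small generator from Table 3.1 as a stand-in, the stream mechanism can be sketched as follows: walk the full sequence once, record the Zi value at each stream boundary as that stream's seed, and then advance each stream independently from its saved state. The stream length and generator constants here are illustrative only, not ProModel's:

```python
def lcg(z, a=21, c=3, m=16):
    # One step of the Table 3.1 generator, used here only as a small example.
    return (a * z + c) % m

SEED, STREAM_LEN = 13, 8   # two streams of 8 numbers within a cycle of 16

# Pass 1: record the state that begins each stream.
z = SEED
stream_seeds = [SEED]
for i in range(1, 2 * STREAM_LEN):
    z = lcg(z)
    if i % STREAM_LEN == 0:
        stream_seeds.append(z)

# Each stream now advances independently from its own saved state.
states = {k: s for k, s in enumerate(stream_seeds)}

def next_from_stream(k):
    states[k] = lcg(states[k])
    return states[k]

stream0 = [next_from_stream(0) for _ in range(3)]
stream1 = [next_from_stream(1) for _ in range(3)]
print(stream0, stream1)  # [4, 7, 6] [12, 15, 14]
```

Stream 0 reproduces the first table entries while stream 1 picks up where position 8 of the full sequence left off, exactly the bookkeeping described above on a miniature scale.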
There are two types of linear congruential generators: the mixed
congruential generator and the multiplicative congruential generator. Mixed
congruential generators are designed by assigning c > 0. Multiplicative
congruential generators are designed by assigning c = 0. The multiplicative
generator is more efficient than the mixed generator because it does not require
the addition of c. The maximum cycle length for a multiplicative generator can
be set within one unit of the maximum
cycle length of the mixed generator by carefully selecting values for a and m.
From a practical standpoint, the difference in cycle length is insignificant
considering that both types of generators can boast cycle lengths of more than
2.1 billion.
ProModel uses the following multiplicative generator:

Zi = (630,360,016Zi−1) mod(2³¹ − 1)

Specifically, it is a prime modulus multiplicative linear congruential generator
(PMMLCG) with a = 630,360,016, c = 0, and m = 2³¹ − 1. It has been
extensively tested and is known to be a reliable random number generator for
simulation (Law and Kelton 2000). The ProModel implementation of this generator
divides the cycle length of 2³¹ − 1 = 2,147,483,647 into 100 unique streams.
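Python's arbitrary-precision integers make this generator easy to sketch. The code below reproduces the stated constants only; it is not ProModel's actual implementation or stream bookkeeping, and the seed is an arbitrary choice of mine:

```python
A = 630_360_016
M = 2**31 - 1        # 2,147,483,647, a Mersenne prime

def pmmlcg(z):
    """One step of the PMMLCG: Zi = (630,360,016 * Z(i-1)) mod (2^31 - 1)."""
    return (A * z) % M

z = 1                 # arbitrary, hypothetical seed
us = []
for _ in range(1000):
    z = pmmlcg(z)
    us.append(z / M)  # Ui strictly between 0 and 1

print(min(us) > 0.0, max(us) < 1.0)  # True True
```

Because m is prime and the seed is nonzero, the state never collapses to zero, so every Ui falls strictly inside (0, 1).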
important properties defined at the beginning of this section. The numbers
produced by the random number generator must be (1) independent and (2)
uniformly distributed between zero and one (uniform(0,1)). To verify that the
generator satisfies these properties, you first generate a sequence of random
numbers U1, U2, U3, . . . and then subject them to an appropriate test of
hypothesis.
The hypotheses for testing the independence property are
H0: Ui values from the generator are independent
H1: Ui values from the generator are not independent
Several statistical methods have been developed for testing these hypotheses at
a specified significance level α. One of the most commonly used methods is
the runs test. Banks et al. (2001) review three different versions of the runs test
for conducting this independence test. Additionally, two runs tests are
implemented in Stat::Fit—the Runs Above and Below the Median Test and the Runs
Up and Runs Down Test. Chapter 6 contains additional material on tests for
independence.
The hypotheses for testing the uniformity property are
H0: Ui values are uniform(0,1)
H1: Ui values are not uniform(0,1)
Several statistical methods have also been developed for testing these
hypotheses at a specified significance level α. The Kolmogorov-Smirnov test
and the chi-square test are perhaps the most frequently used tests. (See
Chapter 6 for a description of the chi-square test.) The objective is to
determine if the uniform(0,1) distribution fits or describes the sequence of
random numbers produced by the random number generator. These tests are
included in the Stat::Fit software and are further described in many
introductory textbooks on probability and statistics (see, for example, Johnson
1994).
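As a rough sketch of the uniformity check (not the Stat::Fit implementation), one can bin a sample of generator output and compute the chi-square statistic by hand. Here Python's own `random.random()` stands in for the generator under test, and 16.92 is the standard table critical value for 9 degrees of freedom at α = 0.05:

```python
import random

random.seed(7)
N, K = 1000, 10
us = [random.random() for _ in range(N)]   # stand-in for generator output

# Observed counts per bin versus the expected N/K under uniform(0,1).
counts = [0] * K
for u in us:
    counts[min(int(u * K), K - 1)] += 1

expected = N / K
chi_square = sum((c - expected) ** 2 / expected for c in counts)

# Fail to reject H0 (uniformity) if the statistic is below the
# critical value of 16.92 (chi-square, 9 df, alpha = 0.05).
print(chi_square)
```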
discrete and continuous distributions, is described starting first with the
continuous case. For a review of the other methods, see Law and Kelton (2000).
Continuous Distributions
The application of the inverse transformation method to generate random variates from continuous distributions is straightforward and efficient for many continuous distributions. For a given probability density function f(x), find the cumulative distribution function of X. That is, F(x) = P(X ≤ x). Next, set U = F(x), where U is uniform(0,1), and solve for x. Solving for x yields x = F−1(U). The equation x = F−1(U) transforms U into a value for x that conforms to the given distribution f(x).
As an example, suppose that we need to generate variates from the
exponential distribution with mean β. The probability density function f (x) and
corresponding cumulative distribution function F(x) are
f(x) = (1/β)e^(−x/β) for x > 0, and 0 elsewhere
F(x) = 1 − e^(−x/β) for x > 0, and 0 elsewhere
Setting U = F(x) and solving for x yields
U = 1 − e^(−x/β)
e^(−x/β) = 1 − U
ln(e^(−x/β)) = ln(1 − U)   where ln is the natural logarithm
−x/β = ln(1 − U)
x = −β ln(1 − U)
The random variate x in the above equation is exponentially distributed with mean
β provided U is uniform(0,1).
Suppose three observations of an exponentially distributed random variable
with mean β = 2 are desired. The next three numbers generated by the
random number generator are U1 = 0.27, U2 = 0.89, and U3 = 0.13. The three
numbers are transformed into variates x1, x2, and x3 from the exponential
distribution with mean β = 2 as follows:
x1 = −2 ln (1 − U1) = −2 ln (1 − 0.27) = 0.63
x2 = −2 ln (1 − U2) = −2 ln (1 − 0.89) = 4.41
x3 = −2 ln (1 − U3) = −2 ln (1 − 0.13) = 0.28
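These three computations can be reproduced with a few lines of Python, a direct sketch of the inverse transformation x = −β ln(1 − U):

```python
import math

def exp_variate(u, beta):
    """Inverse transformation for the exponential distribution:
    turns a uniform(0,1) number u into a variate with mean beta."""
    return -beta * math.log(1.0 - u)

# The three uniform(0,1) numbers from the text, with beta = 2.
for u in (0.27, 0.89, 0.13):
    print(round(exp_variate(u, 2.0), 2))  # prints 0.63, 4.41, 0.28
```

The same function with a pseudorandom u is, in effect, what a simulation package executes each time it needs an exponential sample.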
Figure 3.4 provides a graphical representation of the inverse transformation
method in the context of this example. The first step is to generate U, where U is
uniform(0,1). Next, locate U on the y axis and draw a horizontal line from that
point to the cumulative distribution function [F(x) = 1 − e−x/2]. From this point
Chapter 3 Simulation Basics 57
FIGURE 3.4
Graphical explanation of the inverse transformation method for continuous variates [F(x) = 1 − e^(−x/2), with U1 = 0.27 and U2 = 0.89 mapped to x1 and x2].
of intersection with F(x), a vertical line is dropped down to the x axis to obtain
the corresponding value of the variate. This process is illustrated in Figure
3.4 for generating variates x1 and x2 given U1 = 0.27 and U2 = 0.89.
Application of the inverse transformation method is straightforward as long
as there is a closed-form formula for the cumulative distribution function, which
is the case for many continuous distributions. However, the normal distribution is one exception: its cumulative distribution function cannot be written in closed form, so it is not possible to solve for a simple equation to generate normally distributed variates. For these cases, there are other methods
that can be used to generate the random variates. See, for example, Law and
Kelton (2000) for a description of additional methods for generating random
variates from continuous distributions.
Discrete Distributions
The application of the inverse transformation method to generate variates from discrete distributions is basically the same as for the continuous case. The difference is in how it is implemented. For example, consider the following probability mass function:
p(x) = P(X = x) = 0.10 for x = 1, 0.30 for x = 2, 0.60 for x = 3
The random variate x has three possible values. The probability that x is equal to 1 is 0.10, P(X = 1) = 0.10; P(X = 2) = 0.30; and P(X = 3) = 0.60. The cumulative distribution function F(x) is shown in Figure 3.5. The random variable x
could be used in a simulation to represent the number of defective components
on a circuit board or the number of drinks ordered from a drive-through window,
for example.
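The table-lookup form of the inverse transformation method for this mass function can be sketched in Python; the boundary convention (returning the smallest x whose cumulative probability equals or exceeds u) is one common choice:

```python
def discrete_variate(u):
    """Inverse transformation for the discrete distribution
    p(1) = 0.10, p(2) = 0.30, p(3) = 0.60: return the smallest x
    whose cumulative probability F(x) equals or exceeds u."""
    cumulative = [(0.10, 1), (0.40, 2), (1.00, 3)]  # (F(x), x) pairs
    for f, x in cumulative:
        if u <= f:
            return x

print(discrete_variate(0.27))  # 2: 0.10 < 0.27 <= 0.40
```

In effect, u is located on the vertical axis of the cumulative distribution and mapped back to the step on which it lands.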
Suppose that an observation from the above discrete distribution is desired.
The first step is to generate U, where U is uniform(0,1). Using Figure 3.5, locate U on the y axis and read across to the cumulative distribution function; the variate is the smallest value of x for which F(x) equals or exceeds U. For example, U1 = 0.27 yields x1 = 2, U2 = 0.89 yields x2 = 3, and U3 = 0.05 yields x3 = 1.
58 Part I Study Chapters
FIGURE 3.6
Descriptive drawing of the automatic teller machine (ATM) system.
Let X1i denote the interarrival time and X2i denote the service time generated for the ith customer simulated in the system. The equation for transforming a random number into an interarrival time observation from the exponential distribution with mean β = 3.0 minutes becomes
X1i = −3.0 ln(1 − U1i) for i = 1, 2, 3, . . . , 25
where U1i denotes the ith value drawn from the random number generator using Stream 1. This equation is used in the Arrivals to ATM section of Table 3.2 under the Interarrival Time (X1i) column.
The equation for transforming a random number into an ATM service time observation from the exponential distribution with mean β = 2.4 minutes becomes
X2i = −2.4 ln(1 − U2i) for i = 1, 2, 3, . . . , 25
where U2i denotes the ith value drawn from the random number generator using Stream 2. This equation is used in the ATM Processing Time section of Table 3.2 under the Service Time (X2i) column.
Let’s produce the sequence of U1i values that feeds the transformation equation (X1i) for interarrival times using a linear congruential generator (LCG) similar to the one used in Table 3.1. The equations are
Z1i = (21Z1i−1 + 3) mod(128)
U1i = Z1i/128 for i = 1, 2, 3, . . . , 25
The authors defined Stream 1’s starting or seed value to be 3. So we will use Z10 = 3 to kick off this stream of 25 random numbers. These equations are used in the Arrivals to ATM section of Table 3.2 under the Stream 1 (Z1i) and Random Number (U1i) columns.
Likewise, we will produce the sequence of U2i values that feeds the transformation equation (X2i) for service times using
Z2i = (21Z2i−1 + 3) mod(128)
U2i = Z2i/128 for i = 1, 2, 3, . . . , 25
and will specify a starting seed value of Z20 = 122, Stream 2’s seed value, to kick off the second stream of 25 random numbers. These equations are used in the ATM Processing Time section of Table 3.2 under the Stream 2 (Z2i) and Random Number (U2i) columns.
The spreadsheet presented in Table 3.2 illustrates 25 random variates for both the interarrival time, column (X1i), and service time, column (X2i). All time values are given in minutes in Table 3.2. To be sure we pull this together correctly, let’s compute a couple of interarrival times with mean β = 3.0 minutes and compare them to the values given in Table 3.2.
Given Z10 = 3
Z11 = (21Z10 + 3) mod(128) = (21(3) + 3) mod(128) = (66) mod(128) = 66
U11 = Z11/128 = 66/128 = 0.516
X11 = −β ln(1 − U11) = −3.0 ln(1 − 0.516) = 2.18 minutes
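The hand computations can be checked with a short Python sketch of the same LCG and inverse transformation; following Table 3.2, U is rounded to three decimal places before the transformation (the function name lcg_stream is ours, not ProModel's):

```python
import math

def lcg_stream(seed, count, modulus=128, a=21, c=3):
    """Generate `count` pairs (Z_i, U_i = Z_i/modulus) from the
    linear congruential generator Z_i = (a*Z_{i-1} + c) mod modulus."""
    z = seed
    out = []
    for _ in range(count):
        z = (a * z + c) % modulus
        out.append((z, z / modulus))
    return out

# Stream 1 (seed 3): interarrival times with mean beta = 3.0 minutes.
for z, u in lcg_stream(seed=3, count=2):
    u = round(u, 3)                       # Table 3.2 rounds U to 3 places
    x = round(-3.0 * math.log(1 - u), 2)  # X = -beta ln(1 - U)
    print(z, u, x)
# prints: 66 0.516 2.18
#         109 0.852 5.73
```

The two printed interarrival times match the hand computations of 2.18 and 5.73 minutes.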
FIGURE 3.7
Microsoft Excel snapshot of the ATM spreadsheet illustrating the equations for the Arrivals to ATM section.
The value of 2.18 minutes is the first value appearing under the column Interarrival Time (X1i). To compute the next interarrival time value X12, we start by using the value of Z11 to compute Z12.
Given Z11 = 66
Z12 = (21Z11 + 3) mod(128) = (21(66) + 3) mod(128) = 109
U12 = Z12/128 = 109/128 = 0.852
X12 = −3.0 ln(1 − U12) = −3.0 ln(1 − 0.852) = 5.73 minutes
Figure 3.7 illustrates how the equations were programmed in Microsoft Excel for the Arrivals to ATM section of the spreadsheet. Note that the U1i and X1i values in Table 3.2 are rounded to three and two places to the right of the decimal, respectively. The same rounding rule is used for U2i and X2i.
It would be useful for you to verify a few of the service time values with mean β = 2.4 minutes appearing in Table 3.2 using
Z20 = 122
Z2i = (21Z2i−1 + 3) mod(128)
U2i = Z2i/128
X2i = −2.4 ln(1 − U2i) for i = 1, 2, 3, . . .
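The first service time can be checked the same way. This is an illustrative sketch: Z21 = (21 · 122 + 3) mod 128 = 2565 mod 128 = 5, and the rounding follows Table 3.2.

```python
import math

z, beta = 122, 2.4                     # Stream 2 seed and service time mean
z = (21 * z + 3) % 128                 # Z21 = 2565 mod 128 = 5
u = round(z / 128, 3)                  # U21 = 5/128 = 0.039
x = round(-beta * math.log(1 - u), 2)  # X21 = -2.4 ln(0.961)
print(z, u, x)                         # prints: 5 0.039 0.1
```

The resulting 0.10-minute value is the first customer's service time discussed later in the text.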
The equations looked a little difficult to manipulate at first but turned out not to be so bad once we put some numbers in them and organized them in a spreadsheet—though it was a bit tedious. The important thing to note here is that, although it is hidden from the user, ProModel uses a very similar method to produce exponentially distributed random variates, and you now understand how it is done.
The LCG just given has a maximum cycle length of 128 random numbers (you may want to verify this), which is more than enough to generate 25 interarrival time values and 25 service time values for this simulation. However, it is a poor random number generator compared to the one used by ProModel. It was chosen because it is easy to program into a spreadsheet and to compute by hand to facilitate our understanding. The biggest difference between it and the random number generator in ProModel is that the ProModel random number generator manipulates much larger numbers to pump out a much longer stream of numbers that pass all statistical tests for randomness.
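The claimed cycle length is easy to verify by iterating the generator until the seed value reappears, and along the way we can confirm that the 25th value of Stream 1 is 122 (this sketch uses our own helper name, cycle_length):

```python
def cycle_length(seed, modulus=128, a=21, c=3):
    """Count how many steps the LCG takes to return to its seed."""
    z, steps = seed, 0
    while True:
        z = (a * z + c) % modulus
        steps += 1
        if z == seed:
            return steps

print(cycle_length(3))  # 128: the LCG has full period

# The 25th value of Stream 1 (seed 3) is Stream 2's seed value.
z = 3
for _ in range(25):
    z = (21 * z + 3) % 128
print(z)  # 122
```

A full period of 128 also follows from the Hull-Dobell conditions: c = 3 is coprime to m = 128, and a − 1 = 20 is divisible by 2 and by 4.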
Before moving on, let’s take a look at why we chose Z10 = 3 and Z20 = 122. Our goal was to make sure that we did not use the same uniform(0,1) random number to generate both an interarrival time and a service time. If you look carefully at Table 3.2, you will notice that the seed value Z20 = 122 is the 25th value, Z1,25, produced by random number Stream 1. Stream 2 was merely defined to start where Stream 1 ended. Thus our spreadsheet used a unique random number to generate each interarrival and service time. Now let’s add the necessary logic to our spreadsheet to conduct the simulation of the ATM system.
The Service Time column simply records the simulated amount of time
required for the customer to complete their transaction at the ATM. These values
are copies of the service time X2i values generated in the ATM Processing Time
section of the spreadsheet.
The Departure Time column records the moment in time at which a customer departs the system after completing their transaction at the ATM. To compute the time at which a customer departs the system, we take the time at which the customer gained access to the ATM to begin service, column (3), and add to that the length of time the service required, column (4). For example, the first customer gained access to the ATM to begin service at 2.18 minutes, column (3). The service time for the customer was determined to be 0.10 minutes in column (4). So, the customer departs 0.10 minutes later, at time 2.18 + 0.10 = 2.28 minutes. This customer’s short service time must be because they forgot their PIN and could not conduct their transaction.
The Time in Queue column records how long a customer waits in the queue before gaining access to the ATM. To compute the time spent in the queue, we take the time at which the ATM began serving the customer, column (3), and subtract from that the time at which the customer arrived to the system, column (2). The fourth customer arrives to the system at time 15.17 and begins getting service from the ATM at 18.25 minutes; thus, the fourth customer’s time in the queue is 18.25 − 15.17 = 3.08 minutes.
The Time in System column records how long a customer was in the system. To compute the time spent in the system, we subtract the customer’s arrival time, column (2), from the customer’s departure time, column (5). The fifth customer arrives to the system at 15.74 minutes and departs the system at 24.62 minutes. Therefore, this customer spent 24.62 − 15.74 = 8.88 minutes in the system.
Now let’s go back to the Begin Service Time column, which records the time
at which a customer begins to be served by the ATM. The very first customer to
arrive to the system when it opens for service advances directly to the ATM.
There is no waiting time in the queue; thus the value recorded for the time that
the first customer begins service at the ATM is the customer’s arrival time.
With the exception of the first customer to arrive to the system, we have to capture the logic that a customer cannot begin service at the ATM until the previous customer using the ATM completes his or her transaction. One way to do this is with an IF statement as follows:
IF (logical test, use this value if test is true, else use this
value if test is false)
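The same logic, beginning service at the later of a customer's arrival time and the previous customer's departure time, can be sketched in Python. The first two arrival times (2.18 and 2.18 + 5.73 = 7.91 minutes) and the first service time (0.10 minutes) come from the text; the second service time below is purely illustrative.

```python
def simulate_single_server(arrivals, service_times):
    """Single-queue, single-ATM spreadsheet logic:
    begin = max(own arrival, previous customer's departure)."""
    rows = []
    prev_departure = 0.0
    for arrive, service in zip(arrivals, service_times):
        begin = max(arrive, prev_departure)   # the IF statement's logic
        depart = begin + service
        rows.append((arrive, begin, service, depart,
                     begin - arrive,          # time in queue
                     depart - arrive))        # time in system
        prev_departure = depart
    return rows

rows = simulate_single_server([2.18, 7.91], [0.10, 1.50])
print(round(rows[0][3], 2))  # 2.28: first customer departs at 2.18 + 0.10
print(rows[1][1])            # 7.91: second customer begins at arrival
```

The max() call plays exactly the role of the spreadsheet's IF test: when the arrival time is not less than the previous departure time, service begins at arrival.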
FIGURE 3.8
Microsoft Excel snapshot of the ATM spreadsheet illustrating the IF statement for the Begin Service Time column.
The Excel spreadsheet cell L10 (column L, row 10) in Figure 3.8 is the Begin Service Time for the second customer to arrive to the system and is programmed with IF(K10<N9,N9,K10). Since the second customer’s arrival time (Excel cell K10) is not less than the first customer’s departure time (Excel cell N9), the logical test evaluates to “false” and the second customer’s time to begin service is set to his or her arrival time (Excel cell K10). The fourth customer shown in Figure 3.8 provides an example of when the logical test evaluates to “true,” which results in the fourth customer beginning service when the third customer departs the ATM.
3.6 Summary
Modeling random behavior begins with transforming the output produced by a random number generator into observations (random variates) from an appropriate statistical distribution. The values of the random variates are combined with logical operators in a computer program to compute output that mimics the performance behavior of stochastic systems. Performance estimates for stochastic systems are then computed from this output.
7. How would a random number generator be used to simulate a 12 percent
chance of rejecting a part because it is defective?
8. Reproduce the spreadsheet simulation of the ATM system presented
in Section 3.5. Set the random number seeds Z10 = 29 and Z20 =
92 to compute the average time customers spend in the queue and in
the system.
a. Verify that the average time customers spend in the queue and in the
system match the values given for the second replication in Table 3.3.
References
Banks, Jerry; John S. Carson II; Barry L. Nelson; and David M. Nicol. Discrete-Event
System Simulation. Englewood Cliffs, NJ: Prentice Hall, 2001.
Hoover, Stewart V., and Ronald F. Perry. Simulation: A Problem-Solving Approach.
Reading, MA: Addison-Wesley, 1989.
Johnson, R. A. Miller and Freund’s Probability and Statistics for Engineers. 5th ed.
Englewood Cliffs, NJ: Prentice Hall, 1994.
Law, Averill M., and David W. Kelton. Simulation Modeling and Analysis. New York:
McGraw-Hill, 2000.
L’Ecuyer, P. “Random Number Generation.” In Handbook of Simulation: Principles,
Methodology, Advances, Applications, and Practice, ed. J. Banks, pp. 93–137.
New York: John Wiley & Sons, 1998.
Pooch, Udo W., and James A. Wall. Discrete Event Simulation: A Practical Approach.
Boca Raton, FL: CRC Press, 1993.
Pritsker, A. A. B. Introduction to Simulation and SLAM II. 4th ed. New York: John Wiley
& Sons, 1995.
Ross, Sheldon M. A Course in Simulation. New York: Macmillan, 1990.
Shannon, Robert E. System Simulation: The Art and Science. Englewood Cliffs, NJ:
Prentice Hall, 1975.
Thesen, Arne, and Laurel E. Travis. Simulation for Decision Making. Minneapolis, MN:
West Publishing, 1992.
Widman, Lawrence E.; Kenneth A. Loparo; and Norman R. Nielsen. Artificial Intelligence,
Simulation, and Modeling. New York: John Wiley & Sons, 1989.
Harrell−Ghosh−Bowden: Simulation Using ProModel, Second Edition. I. Study Chapters. 4. Discrete-Event Simulation. © The McGraw-Hill Companies
C H A P T E R
4 DISCRETE-EVENT SIMULATION
“When the only tool you have is a hammer, every problem begins to resemble a
nail.”
—Abraham Maslow
4.1 Introduction
Building on the foundation provided by Chapter 3 on how random numbers and random variates are used to simulate stochastic systems, the focus of this chapter is on discrete-event simulation, which is the main topic of this book. A discrete-event simulation is one in which changes in the state of the simulation model occur at discrete points in time as triggered by events. The events in the automatic teller machine (ATM) simulation example of Chapter 3 that occur at discrete points in time are the arrivals of customers to the ATM queue and the completion of their transactions at the ATM. However, you will learn in this chapter that the spreadsheet simulation of the ATM system in Chapter 3 was not technically executed as a discrete-event simulation.
This chapter first defines what a discrete-event simulation is compared to a continuous simulation. Next the chapter summarizes the basic technical issues related to discrete-event simulation to facilitate your understanding of how to effectively use the tool. Questions that will be answered include these:
• How does discrete-event simulation work?
• What do commercial simulation software packages provide?
• What are the differences between simulation languages and simulators?
• What is the future of simulation technology?
A manual dynamic, stochastic, discrete-event simulation of the ATM example
system from Chapter 3 is given to further illustrate what goes on inside this type
of simulation.
[Figure: timeline of simulation events occurring at discrete points in time: Start Simulation, Event 1 (customer arrives), Event 2 (customer arrives), Event 3 (customer departs).]
Chapter 4 Discrete-Event Simulation 73
FIGURE 4.2
Comparison of a discrete-change state variable and a continuous-change state variable.
Batch processing in which fluids are pumped into and out of tanks can often be
modeled using difference equations.
1.2 minutes. At the start of the activity, a normal random variate is generated based on these parameters, say 4.2 minutes, and an activity completion event is scheduled for that time into the future. Scheduled events are inserted chronologically into an event calendar to await the time of their occurrence. Events that occur at predefined intervals theoretically all could be determined in advance and therefore be scheduled at the beginning of the simulation. For example, entities arriving every five minutes into the model could all be scheduled easily at the start of the simulation. Rather than preschedule all events at once that occur at a set frequency, however, they are scheduled only when the next occurrence must be determined. In the case of a periodic arrival, the next arrival would not be scheduled until the current scheduled arrival is actually pulled from the event calendar for processing. This postponement until the latest possible moment minimizes the size of the event calendar and eliminates the necessity of knowing in advance how many events to schedule when the length of the simulation may be unknown.
Conditional events are triggered by a condition being met rather than by the passage of time. An example of a conditional event might be the capturing of a resource that is predicated on the resource being available. Another example would be an order waiting for all of the individual items making up the order to be assembled. In these situations, the event time cannot be known beforehand, so the pending event is simply placed into a waiting list until the conditions can be satisfied. Often multiple pending events in a list are waiting for the same condition. For example, multiple entities might be waiting to use the same resource when it becomes available. Internally, the resource would have a waiting list for all items currently waiting to use it. While in most cases events in a waiting list are processed first-in, first-out (FIFO), items could be inserted and removed using a number of different criteria. For example, items may be inserted according to item priority but be removed according to earliest due date.
Events, whether scheduled or conditional, trigger the execution of logic that is associated with that event. For example, when an entity frees a resource, the state and statistical variables for the resource are updated, the graphical animation is updated, and the input waiting list for the resource is examined to see what activity to respond to next. Any new events resulting from the processing of the current event are inserted into either the event calendar or another appropriate waiting list.
In real life, events can occur simultaneously so that multiple entities can be doing things at the same instant in time. In computer simulation, however, especially when running on a single processor, events can be processed only one at a time even though it is the same instant in simulated time. As a consequence, a method or rule must be established for processing events that occur at the exact same simulated time. For some special cases, the order in which events are processed at the current simulation time might be significant. For example, an entity that frees a resource and then tries to immediately get the same resource might have an unfair advantage over other entities that might have been waiting for that particular resource.
In ProModel, the entity, downtime, or other item that is currently being processed is allowed to continue processing as far as it can at the current simulation time. That means it continues processing until it reaches either a conditional
FIGURE 4.3
Flow of discrete-event simulation logic: advance the clock to the next event time; if it is the termination event, update statistics, generate the output report, and stop; otherwise process the event, schedule any new events, and update statistics, state variables, and animation.
event that cannot be satisfied or a timed delay that causes a future event to be scheduled. It is also possible that the object simply finishes all of the processing defined for it and, in the case of an entity, exits the system. As an object is being processed, any resources that are freed or other entities that might have been created as byproducts are placed in an action list and are processed one at a time in a similar fashion after the current object reaches a stopping point. To deliberately
suspend the current object in order to allow items in the action list to be
processed, a zero delay time can be specified for the current object. This puts the
current item into the future events list (event calendar) for later processing,
even though it is still processed at the current simulation time.
When all scheduled and conditional events have been processed that are possible at the current simulation time, the clock advances to the next scheduled event and the process continues. When a termination event occurs, the simulation ends and statistical reports are generated. The ongoing cycle of processing scheduled and conditional events, updating state and statistical variables, and creating new events constitutes the essence of discrete-event simulation (see Figure 4.3).
FIGURE 4.4
Entity flow diagram for example automatic teller machine (ATM) system.
Simulation Clock
As the simulation transitions from one discrete event to the next, the simulation clock is fast forwarded to the time that the next event is scheduled to occur. There is no need for the clock to tick seconds away until reaching the time at which the next event in the list is scheduled to occur because nothing will happen that changes the state of the system until the next event occurs. Instead, the simulation clock advances through a series of time steps. Let ti denote the value of the simulation clock at time step i, for i = 0 to the number of discrete events to process. Assuming that the simulation starts at time zero, then the initial value of the simulation clock is denoted as t0 = 0. Using this nomenclature, t1 denotes the value of the simulation clock when the first discrete event in the list is processed, t2 denotes the value of the simulation clock when the second discrete event in the list is processed, and so on.
Entity Attributes
To capture some statistics about the entities being processed through the system, a discrete-event simulation maintains an array of entity attribute values. Entity attributes are characteristics of the entity that are maintained for that entity until the entity exits the system. For example, to compute the amount of time an entity waited in a queue location, an attribute is needed to remember when the entity entered the location. For the ATM simulation, one entity attribute is used to remember the customer’s time of arrival to the system. This entity attribute is called the Arrival Time attribute. The simulation program computes how long each customer entity waited in the queue by subtracting the time that the customer entity arrived to the queue from the value of the simulation clock when the customer entity gained access to the ATM.
State Variables
Two discrete-change state variables are needed to track how the status (state) of
the system changes as customer entities arrive in and depart from the ATM
system.
• Number of Entities in Queue at time step i, NQi.
• ATM Statusi to denote if the ATM is busy or idle at time step i.
Statistical Accumulators
The objective of the example manual simulation is to estimate the expected amount of time customers wait in the queue and the expected number of customers waiting in the queue. The average time customers wait in queue is a simple average. Computing this requires that we record how many customers passed through the queue and the amount of time each customer waited in the queue. The average number of customers in the queue is a time-weighted average, which is usually called a time average in simulation. Computing this requires that we not only observe the queue’s contents during the simulation but that we also measure the amount of time that the queue maintained each of the observed values. We record each observed value after it has been multiplied (weighted) by the amount of time it was maintained.
Here’s what the simulation needs to tally at each simulation time step i to
compute the two performance measures at the end of the simulation.
Simple-average time in queue.
• Record the number of customer entities processed through the queue,
Total Processed. Note that the simulation may end before all customer
entities in the queue get a turn at the ATM. This accumulator keeps track
of how many customers actually made it through the queue.
• For a customer processed through the queue, record the time that it waited
in the queue. This is computed by subtracting the value of the simulation
clock time when the entity enters the queue (stored in the entity attribute
array Arrival Time) from the value of the simulation clock time when the
entity leaves the queue, ti − Arrival Time.
Time-average number of customers in the queue.
• For the duration of the last time step, which is ti − ti−1, and the number of customer entities in the queue during the last time step, which is NQi−1, record the product of ti − ti−1 and NQi−1. Call the product (ti − ti−1)NQi−1 the Time-Weighted Number of Entities in the Queue.
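This accumulator can be sketched in a few lines of Python. The event times and queue counts below are illustrative values, not values from Table 4.1:

```python
def time_average_in_queue(event_times, queue_counts):
    """Time-weighted average number in queue. queue_counts[i] is NQ_i,
    the number in queue after the event at event_times[i]; each time
    step contributes (t_i - t_{i-1}) * NQ_{i-1} to the cumulative total."""
    cumulative = 0.0
    for i in range(1, len(event_times)):
        cumulative += (event_times[i] - event_times[i - 1]) * queue_counts[i - 1]
    total_time = event_times[-1] - event_times[0]
    return cumulative / total_time

# Queue holds 0 customers on [0, 2), 1 on [2, 5), and 2 on [5, 9).
print(time_average_in_queue([0, 2, 5, 9], [0, 1, 2, 0]))  # 11/9 = 1.222...
```

Note that each interval is weighted by the queue count at its start, which is exactly the (ti − ti−1)NQi−1 product described above.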
Events
There are two types of recurring scheduled events that change the state of the system: arrival events and departure events. An arrival event occurs when a customer entity arrives to the queue. A departure event occurs when a customer entity completes its transaction at the ATM. Each processing of a customer entity’s arrival to the queue includes scheduling the future arrival of the next customer entity to the ATM queue. Each time an entity gains access to the ATM, its future departure from the system is scheduled based on its expected service time at the ATM. We actually need a third event to end the simulation. This event is usually called the termination event.
To schedule the time at which the next entity arrives to the system, the simulation needs to generate an interarrival time and add it to the current simulation clock time, ti. The interarrival time is exponentially distributed with a mean of 3.0 minutes for our example ATM system. Assume that the function E(3.0) returns an exponentially distributed random variate with a mean of 3.0 minutes. The future arrival time of the next customer entity can then be scheduled by using the equation ti + E(3.0).
The customer service time at the ATM is exponentially distributed with a mean of 2.4 minutes. The future departure time of an entity gaining access to the ATM is scheduled by the equation ti + E(2.4).
Event Calendar
The event calendar maintains the list of active events (events that have been
scheduled and are waiting to be processed) in chronological order. The
simulation progresses by removing the first event listed on the event calendar,
setting the simulation clock, ti , equal to the time at which the event is scheduled
to occur, and processing the event.
compare the results of the manual simulation with those produced by the spreadsheet simulation.
Notice that Table 3.2 contains a subscript i in the leftmost column. This subscript denotes the customer entity number as opposed to the simulation time step. We wanted to point this out to avoid any confusion because of the different uses of the subscript. In fact, you can ignore the subscript in Table 3.2 as you pick values from the Service Time and Interarrival Time columns.
A discrete-event simulation logic diagram for the ATM system is shown in Figure 4.5 to help us carry out the manual simulation. Table 4.1 presents the results of the manual simulation after processing 12 events using the simulation logic diagram presented in Figure 4.5. The table tracks the creation and scheduling of events on the event calendar as well as how the state of the system changes and how the values of the statistical accumulators change as events are processed from the event calendar. Although Table 4.1 is completely filled in, it was initially blank until the instructions presented in the simulation logic diagram were executed. As you work through the simulation logic diagram, you should process the information in Table 4.1 from the first row down to the last row, one row at a time (completely filling in a row before going down to the next row). A dash (—) in a cell in Table 4.1 signifies that the simulation logic diagram does not require you to update that particular cell at the current simulation time step. An arrow (↑) in a cell in the table also signifies that the simulation logic diagram does not require you to update that cell at the current time step. However, the arrows serve as a reminder to look up one or more rows above your current position in the table to determine the state of the ATM system. Arrows appear under the Number of Entities in Queue, NQi column, and ATM Statusi column. The only exception to the use of dashes or arrows is that we keep a running total in the two Cumulative subcolumns in the table for each time step. Let’s get the manual simulation started.
i = 0, t0 = 0. As shown in Figure 4.5, the first block after the start position indicates that the model is initialized to its starting conditions. The simulation time step begins at i = 0. The initial value of the simulation clock is zero, t0 = 0. The system state variables are set to ATM Status0 = “Idle”; Number of Entities in Queue, NQ0 = 0; and the Entity Attribute Array is cleared. This reflects the initial conditions of no customer entities in the queue and an idle ATM. The statistical accumulator Total Processed is set to zero. There are two different Cumulative variables in Table 4.1: one to accumulate the time in queue values of ti − Arrival Time, and the other to accumulate the values of the time-weighted number of entities in the queue, (ti − ti−1)NQi−1. Recall that ti − Arrival Time is the amount of time that entities, which gained access to the ATM, waited in queue. Both Cumulative variables (ti − Arrival Time) and (ti − ti−1)NQi−1 are initialized to zero. Next, an initial arrival event and termination event are scheduled and placed under the Scheduled Future Events column. The listing of an event is formatted as “(Entity Number, Event, and Event Time)”. Entity Number denotes the customer number that the event pertains to (such as the first, second, or third customer). Event is the type of event: a customer arrives, a customer departs, or the simulation ends.
FIGURE 4.5
Discrete-event simulation logic diagram for ATM system.
[Flowchart:
Start. i = 0. Initialize variables and schedule initial arrival event and termination event (Scheduled Future Events).
Each iteration: i = i + 1. Update Event Calendar: insert Scheduled Future Events in chronological order. Advance Clock, ti, to the time of the first event on the calendar and process the event.
Event type?
• Arrive: Schedule arrival event for next customer entity to occur at time ti + E(3). Is ATM idle?
  Yes: Store current customer's Arrival Time in first position of Entity Attribute Array. Change ATM Statusi to Busy. Schedule departure event for current customer entity entering ATM to occur at time ti + E(2.4). Update Entities Processed through Queue statistics to reflect customer entering ATM: add 1 to Total Processed; record Time in Queue of 0 for ti − Arrival Time and update Cumulative. Update Time-Weighted Number of Entities in Queue statistic: record value of 0 for (ti − ti−1)NQi−1 and update Cumulative.
  No: Store current customer entity's Arrival Time in last position of Entity Attribute Array to reflect customer joining the queue. Add 1 to NQi, Number of Entities in Queue. Update Time-Weighted Number of Entities in Queue statistic: compute value for (ti − ti−1)NQi−1 and update Cumulative.
• Depart: Any customers in queue?
  Yes: Update Entity Attribute Array by deleting departed customer entity from first position in the array and shifting waiting customers up. Subtract 1 from NQi, Number of Entities in Queue. Schedule departure event for customer entity entering the ATM to occur at time ti + E(2.4). Update Entities Processed through Queue statistics to reflect customer entering ATM: add 1 to Total Processed; compute Time in Queue value for ti − Arrival Time and update Cumulative. Update Time-Weighted Number of Entities in Queue statistic: compute value for (ti − ti−1)NQi−1 and update Cumulative.
  No: Update Entity Attribute Array by deleting departed customer entity from first position of the array. Change ATM Statusi to Idle.
• End: Update statistics and generate output report. End.]
TABLE 4.1
Manual discrete-event simulation of the ATM system. Events are listed as (Entity Number, Event, Time). In the Entity Attribute Array, *position 1 holds the customer in the ATM and positions 2, 3, . . . hold the customers waiting in queue. Under Entities Processed through Queue, the subcolumns are Total Processed; Time in Queue, ti − Arrival Time; and its Cumulative. Under Time-Weighted Number of Entities in Queue, the subcolumns are (ti − ti−1)NQi−1 and its Cumulative.

i | Event Calendar | Clock, ti | Entity No. | Event | Entity Attribute Array | NQi | ATM Statusi | Total Processed | ti − Arrival Time | Cumulative | (ti − ti−1)NQi−1 | Cumulative | Scheduled Future Events
0 | — | 0 | — | — | *( ) | 0 | Idle | 0 | — | 0 | — | 0 | (1, Arrive, 2.18), (_, End, 22.00)
1 | (1, Arrive, 2.18), (_, End, 22.00) | 2.18 | 1 | Arrive | *(1, 2.18) | ↑ | Busy | 1 | 0 | 0 | 0 | 0 | (2, Arrive, 7.91), (1, Depart, 2.28)
2 | (1, Depart, 2.28), (2, Arrive, 7.91), (_, End, 22.00) | 2.28 | 1 | Depart | *( ) | ↑ | Idle | — | — | 0 | — | 0 | No new events
3 | (2, Arrive, 7.91), (_, End, 22.00) | 7.91 | 2 | Arrive | *(2, 7.91) | ↑ | Busy | 2 | 0 | 0 | 0 | 0 | (3, Arrive, 15.00), (2, Depart, 12.37)
4 | (2, Depart, 12.37), (3, Arrive, 15.00), (_, End, 22.00) | 12.37 | 2 | Depart | *( ) | ↑ | Idle | — | — | 0 | — | 0 | No new events
5 | (3, Arrive, 15.00), (_, End, 22.00) | 15.00 | 3 | Arrive | *(3, 15.00) | ↑ | Busy | 3 | 0 | 0 | 0 | 0 | (4, Arrive, 15.17), (3, Depart, 18.25)
6 | (4, Arrive, 15.17), (3, Depart, 18.25), (_, End, 22.00) | 15.17 | 4 | Arrive | *(3, 15.00), (4, 15.17) | 1 | ↑ | — | — | 0 | 0 | 0 | (5, Arrive, 15.74)
7 | (5, Arrive, 15.74), (3, Depart, 18.25), (_, End, 22.00) | 15.74 | 5 | Arrive | *(3, 15.00), (4, 15.17), (5, 15.74) | 2 | ↑ | — | — | 0 | 0.57 | 0.57 | (6, Arrive, 18.75)
8 | (3, Depart, 18.25), (6, Arrive, 18.75), (_, End, 22.00) | 18.25 | 3 | Depart | *(4, 15.17), (5, 15.74) | 1 | ↑ | 4 | 3.08 | 3.08 | 5.02 | 5.59 | (4, Depart, 20.50)
9 | (6, Arrive, 18.75), (4, Depart, 20.50), (_, End, 22.00) | 18.75 | 6 | Arrive | *(4, 15.17), (5, 15.74), (6, 18.75) | 2 | ↑ | — | — | 3.08 | 0.50 | 6.09 | (7, Arrive, 19.88)
10 | (7, Arrive, 19.88), (4, Depart, 20.50), (_, End, 22.00) | 19.88 | 7 | Arrive | *(4, 15.17), (5, 15.74), (6, 18.75), (7, 19.88) | 3 | ↑ | — | — | 3.08 | 2.26 | 8.35 | (8, Arrive, 22.53)
11 | (4, Depart, 20.50), (_, End, 22.00), (8, Arrive, 22.53) | 20.50 | 4 | Depart | *(5, 15.74), (6, 18.75), (7, 19.88) | 2 | ↑ | 5 | 4.76 | 7.84 | 1.86 | 10.21 | (5, Depart, 24.62)
12 | (_, End, 22.00), (8, Arrive, 22.53), (5, Depart, 24.62) | 22.00 | — | End | ↑ | ↑ | 5 | — | 7.84 | 3.00 | 13.21 | —
customer departs, or the simulation ends. Time is the future time that the
event is to occur. The event "(1, Arrive, 2.18)" under the Scheduled Future
Events column prescribes that the first customer entity is scheduled to arrive
at time 2.18 minutes. The arrival time was generated using the equation
t0 + E(3). To obtain the value returned from the function E(3), we went
to Table 3.2, read the first random variate from the Interarrival Time column
(a value of 2.18 minutes), and added it to the current value of the simulation
clock, t0 = 0. The simulation is to be terminated after 22 minutes. Note the
"(_, End, 22.00)" under the Scheduled Future Events column. For the
termination event, no value is assigned to Entity Number because it is not
relevant.
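The interarrival and service samples used throughout this example come from exponential distributions with means 3.0 and 2.4 minutes. Assuming E(mean) denotes drawing a random variate from an exponential distribution with the given mean, one common way to generate such variates is the inverse-transform method; the sketch below is our own illustration (the function name is hypothetical, not from the book or ProModel):

```python
import math
import random

def exponential_variate(mean, u=None):
    """Inverse-transform sample from an exponential distribution:
    X = -mean * ln(1 - U), where U is uniform on [0, 1)."""
    if u is None:
        u = random.random()
    return -mean * math.log(1.0 - u)

# With mean 3.0 (the E(3) interarrival distribution), a uniform draw of
# u = 0.5 maps to -3.0 * ln(0.5), about 2.08 minutes.
sample = exponential_variate(3.0, u=0.5)
```

Passing a fixed u makes the draw reproducible, which is how a table of precomputed variates such as Table 3.2 can be thought of: a fixed stream of uniform draws pushed through this transform.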
i = 1, t1 = 2.18. After the initialization step, the list of scheduled future
events is added to the event calendar in chronological order in preparation
for the next simulation time step i = 1. The simulation clock is
fast-forwarded to the time that the next event is scheduled to occur, which
is t1 = 2.18 (the arrival time of the first customer to the ATM queue), and
then the event is processed. Following the simulation logic diagram, arrival
events are processed by first scheduling the future arrival event for the
next customer entity using the equation t1 + E(3) = 2.18 + 5.73 = 7.91 minutes.
Note that the value of 5.73 returned by the function E(3) is the second
random variate listed under the Interarrival Time column of Table 3.2.
future event is placed under the Scheduled Future Events column in Table
4.1 as “(2, Arrive, 7.91)”. Checking the status of the ATM from the
previous simulation time step reveals that the ATM is idle (ATM Status0 =
“Idle”). Therefore, the arriving customer entity immediately flows through
the queue to the ATM to conduct its transaction. The future departure event
of this entity from the ATM is scheduled using the equation t1 + E(2.4) =
2.18 + 0.10 = 2.28 minutes. See "(1, Depart, 2.28)" under the Scheduled
Future Events column, denoting that the first customer entity is scheduled to
depart the ATM at time 2.28 minutes. Note that the value of 0.10 returned
by the function E(2.4) is the first random variate listed under the Service
Time column of Table 3.2. The arriving customer entity's arrival time is
then stored in the first position of the Entity Attribute Array to signify that
it is being served by the ATM. The ATM Status1 is set to “Busy,” and the
statistical accumulators for Entities Processed through Queue are updated.
Add 1 to Total Processed and since this entity entered the queue and
immediately advanced to the idle ATM for processing, record zero minutes
in the Time in Queue, t1 − Arrival Time, subcolumn and update this
statistic’s cumulative value. The statistical accumulators for Time-Weighted
Number of Entities in the Queue are updated next. Record zero for
(t1 − t0)NQ0 since there were no entities in queue during the previous
time step, NQ0 = 0, and update this statistic’s cumulative value. Note the
arrow “↑” entered under the Number of Entities in Queue, NQ1 column.
Recall that the arrow is placed there to signify that the number of entities
waiting
in the queue has not changed from its previous value.
Chapter 4 Discrete-Event Simulation 85
customers waited in the queue is 7.84 minutes. The final cumulative value for
Time-Weighted Number of Entities in the Queue is 13.21 minutes. Note that at
the end of the simulation, two customers are in the queue (customers 6 and 7)
and one is at the ATM (customer 5). A few quick observations are worth
considering before we discuss how the accumulated values are used to calculate
summary statistics for a simulation output report.
This simple and brief (while tedious) manual simulation is relatively easy to
follow. But imagine a system with dozens of processes and dozens of factors
influencing behavior, such as downtimes, mixed routings, resource contention, and
others. You can see how essential computers are for performing a simulation of
any magnitude. Computers have no difficulty tracking the many relationships
and updating the numerous statistics that are present in most simulations.
Equally important, computers are not error prone and can perform millions of
instructions per second with absolute accuracy. We also want to point out that
the simulation logic diagram (Figure 4.5) and Table 4.1 were designed to
convey the essence of what happens inside a discrete-event simulation program.
When you view a trace report of a ProModel simulation in Lab Chapter 8, you
will see the similarities between the trace report and Table 4.1. Although the
basic process presented is sound, its efficiency could be improved. For
example, there is no need to keep both a "scheduled future events" list and
an "event calendar." Instead, future events are inserted directly onto the event
calendar as they are created. We separated them to facilitate our describing the
flow of information in the discrete-event framework.
Simple-Average Statistic
A simple-average statistic is calculated by dividing the sum of all observation
values of a response variable by the number of observations:

    Simple average = [Σ_{i=1}^{n} xi] / n

The average time that customer entities waited in the queue for their turn on
the ATM during the manual simulation reported in Table 4.1 is a simple-average
statistic. Recall that the simulation processed five customers through the queue.
Let xi denote the amount of time that the ith customer processed spent in the
queue. The average waiting time in queue based on the n = 5 observations is

    Average time in queue = (0 + 0 + 0 + 3.08 + 4.76) / 5 = 7.84 / 5 ≈ 1.57 minutes
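As a quick check of the arithmetic, the five Time in Queue observations recorded in Table 4.1 reproduce the simple average directly (a Python sketch; the variable names are our own):

```python
# Time in Queue observations for the five customers processed through
# the queue in Table 4.1.
queue_times = [0.0, 0.0, 0.0, 3.08, 4.76]

# Sum of observations divided by the number of observations.
simple_average = sum(queue_times) / len(queue_times)
# The sum is 7.84 minutes and n = 5, giving about 1.57 minutes.
```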
Time-Average Statistic
A time-average statistic, sometimes called a time-weighted average, reports
the average value of a response variable weighted by the time duration for
each observed value of the variable:
    Time average = [Σ_{i=1}^{n} Ti xi] / T
where xi denotes the value of the i th observation, Ti denotes the time duration of
the i th observation (the weighting factor), and T denotes the total duration
over which the observations were collected. Example time-average statistics
include the average number of entities in a system, the average number of
entities at a location, and the average utilization of a resource. An average of
a time-weighted response variable in ProModel is computed as a time average.
The average number of customer entities waiting in the queue location for
their turn on the ATM during the manual simulation is a time-average statistic.
Figure 4.6 is a plot of the number of customer entities in the queue during the
manual simulation recorded in Table 4.1. The 12 discrete events manually
simulated in Table 4.1 are labeled t1, t2, t3, . . . , t11, t12 on the plot.
Recall that ti denotes the value of the simulation clock at time step i in
Table 4.1, and that its initial value is zero, t0 = 0.
Using the notation from the time-average equation just given, the total
simulation time illustrated in Figure 4.6 is T = 22 minutes. The Ti denotes
the duration of time step i (the distance between adjacent discrete events in
Figure 4.6). That is, Ti = ti − ti−1 for i = 1, 2, 3, . . . , 12. The xi denotes
the queue's contents (number of customer entities in the queue) during each Ti
time interval. Therefore, xi = NQi−1 for i = 1, 2, 3, . . . , 12 (recall that in
Table 4.1, NQi−1 denotes the number of customer entities in the queue
from ti−1 to ti). The time-average number of customer entities in the queue
can now be computed.
FIGURE 4.6
Number of customers in the queue during the manual simulation.
[Plot: number of customers in the queue (peaking at 3) versus simulation time from 0 to T = 22 minutes. The discrete events are marked t0 through t12 along the time axis, with interval durations such as T1 = 2.18, T3 = 7.91 − 2.28 = 5.63, T4 = 12.37 − 7.91 = 4.46, T5 = 2.63, and T8 = 2.51.]
    Average NQ = [Σ_{i=1}^{12} Ti xi] / T = [Σ_{i=1}^{12} (ti − ti−1)NQi−1] / T

    Average NQ = [(2.18)(0) + (0.1)(0) + (5.63)(0) + (4.46)(0) + (2.63)(0) + (0.17)(0) + (0.57)(1) + (2.51)(2) + · · · + (1.5)(2)] / 22

    Average NQ = 13.21 / 22 = 0.60 customers

You may recognize that the numerator of this equation, Σ_{i=1}^{12} (ti − ti−1)NQi−1,
calculates the area under the plot of the queue's contents during the simulation
(Figure 4.6). The values necessary for computing this area are accumulated
under the Time-Weighted Number of Entities in Queue column of Table 4.1
(see the Cumulative value of 13.21 in the table's last row).
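The same area-under-the-curve calculation can be reproduced from the event times and queue counts in Table 4.1 (a Python sketch with our own variable names):

```python
# Event times t0..t12 from Table 4.1 and the queue contents NQ(i-1)
# in effect during each interval [t(i-1), t(i)).
t = [0.0, 2.18, 2.28, 7.91, 12.37, 15.00, 15.17, 15.74,
     18.25, 18.75, 19.88, 20.50, 22.00]
nq = [0, 0, 0, 0, 0, 0, 1, 2, 1, 2, 3, 2]  # NQ(i-1) for i = 1..12

# Area under the queue-contents plot: sum of (t(i) - t(i-1)) * NQ(i-1).
area = sum((t[i] - t[i - 1]) * nq[i - 1] for i in range(1, len(t)))

# Dividing by the total simulated time T = 22 gives the time average.
average_nq = area / t[-1]
```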
4.4.5 Issues
Even though this example is a simple and somewhat crude simulation, it
provides a good illustration of basic simulation issues that need to be
addressed when conducting a simulation study. First, note that the simulation
start-up conditions can bias the output statistics. Since the system started out
empty, queue content statistics are slightly less than what they might be if we
began the simulation with customers already in the system. Second, note that
we ran the simulation for only 22 minutes before calculating the results. Had
we run longer, it is very likely that the long-run average time in the queue
would have been somewhat different (most likely greater) than the time from
the short run because the simulation did not have a chance to reach a steady
state.
These are the kinds of issues that should be addressed whenever running a
simulation. The modeler must carefully analyze the output and understand the
significance of the results that are given. This example also points to the
need for considering beforehand just how long a simulation should be run.
These issues are addressed in Chapters 9 and 10.
FIGURE 4.7
Typical components of simulation software.
[Diagram: the modules that make up simulation software, including the simulation processor.]
entering and editing model information. External files used in the simulation are
specified here as well as run-time options (number of replications and so on).
request snapshot reports, pan or zoom the layout, and so forth. If visual
interactive capability is provided, the user is even permitted to make changes
dynamically to model variables with immediate visual feedback of the effects of
such changes.
The animation speed can be adjusted and animation can even be disabled by
the user during the simulation. When unconstrained, a simulation is capable of
running as fast as the computer can process all of the events that occur within
the simulated time. The simulation clock advances instantly to each scheduled
event; the only central processing unit (CPU) time of the computer that is used is
what is necessary for processing the event logic. This is how simulation is able
to run in compressed time. It is also the reason why large models with millions
of events take so long to simulate. Ironically, in real life activities take time
while events take no time. In simulation, events take time while activities take
no time. To slow down a simulation, delay loops or system timers are used to
create pauses between events. These techniques give the appearance of elapsing
time in an animation. In some applications, it may even be desirable to run a
simulation at the same rate as a real clock. These real-time simulations are
achieved by synchronizing the simulation clock with the computer's internal
system clock. Human-in-the-loop (such as operator training simulators) and
hardware-in-the-loop (testing of new equipment and control systems) are
examples of real-time simulations.
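Pacing a simulation to a real clock can be sketched with a delay loop that sleeps until each event's wall-clock due time. The following Python fragment is our own illustration of the idea, not how any particular product implements it (the function and parameter names are hypothetical):

```python
import time

def run_real_time(events, speedup=1.0):
    """Process (sim_time, action) pairs, pacing the simulation clock to the
    computer's wall clock. speedup > 1 runs faster than real time."""
    start = time.monotonic()
    log = []
    for sim_time, action in sorted(events):
        # Sleep until the wall clock catches up with this event's due time.
        due = start + sim_time / speedup
        delay = due - time.monotonic()
        if delay > 0:
            time.sleep(delay)
        log.append(action)
    return log

# Example: three events paced at 100x real time, so the run takes only a
# few milliseconds while still preserving the events' relative spacing.
order = run_real_time([(0.2, "b"), (0.1, "a"), (0.3, "c")], speedup=100.0)
```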
displayed during the simulation itself, although some simulation products create
an animation file that can be played back at the end of the simulation. In addition
to animated figures, dynamic graphs and history plots can be displayed during
the simulation.
Animation and dynamically updated displays and graphs provide a visual
representation of what is happening in the model while the simulation is
running. Animation comes in varying degrees of realism, from three-dimensional
animation to simple animated flowcharts. Often, the only output from the
simulation that is of interest is what is displayed in the animation. This is
particularly true when simulation is used for facilitating conceptualization or
for communication purposes.
A lot can be learned about model behavior by watching the animation (a
picture is worth a thousand words, and an animation is worth a thousand
pictures). Animation can range from simple circles moving from box to box to
detailed, realistic graphical representations. The strategic use of graphics
should be planned in advance to make the best use of them. While insufficient
animation can weaken the message, excessive use of graphics can distract
from the central point to be made. It is always good to dress up the simulation
graphics for the final presentation; however, such embellishments should be
deferred at least until after the model has been debugged.
For most simulations where statistical analysis is required, animation is no
substitute for the postsimulation summary, which gives a quantitative overview
of the entire system performance. Basing decisions on the animation alone
reflects shallow thinking and can even result in unwarranted conclusions.
FIGURE 4.8
Sample of ProModel
graphic objects.
94 Part I Study Chapters
FIGURE 4.9
ProModel animation provides useful feedback.
takes place. This background might be a CAD layout imported into the model.
The dynamic animation objects that move around on the background during the
simulation include entities (parts, customers, and so on) and resources
(people, fork trucks, and so forth). Animation also includes dynamically
updated counters, indicators, gauges, and graphs that display count, status, and
statistical information (see Figure 4.9).
FIGURE 4.10
Summary report of simulation activity.
--------------------------------------------------------------------------------
General Report
Output from C:\ProMod4\models\demos\Mfg_cost.mod [Manufacturing Costing Optimization]
Date: Feb/27/2003 Time: 06:50:05 PM
--------------------------------------------------------------------------------
Scenario : Model Parameters
Replication : 1 of 1
Warmup Time : 5 hr
Simulation Time : 15 hr
--------------------------------------------------------------------------------
LOCATIONS
Average
Location Scheduled Total Minutes Average Maximum Current
Name Hours Capacity Entries Per Entry Contents Contents Contents
----------- --------- -------- ------- --------- -------- -------- --------
Receive 10 2 21 57.1428 2 2 2
NC Lathe 1 10 1 57 10.1164 0.961065 1 1
NC Lathe 2 10 1 57 9.8918 0.939725 1 1
Degrease 10 2 114 10.1889 1.9359 2 2
Inspect 10 1 113 4.6900 0.883293 1 1
Bearing Que 10 100 90 34.5174 5.17762 13 11
Loc1 10 5 117 25.6410 5 5 5
RESOURCES
ENTITY ACTIVITY
FIGURE 4.11
Time-series graph
showing changes in
queue size over time.
FIGURE 4.12
Histogram of queue
contents.
FIGURE 4.14
New paradigm that views ease of use and flexibility as independent characteristics.
[Plot: ease of use (hard to easy) versus flexibility (low to high). Early simulators are easy to use but offer low flexibility, early languages are flexible but hard to use, and current best-of-breed products combine ease of use with high flexibility.]
Simulation products targeted at vertical markets are on the rise. This trend is
driven by efforts to make simulation easier to use and more solution oriented.
Specific areas where dedicated simulators have been developed include call
center management, supply chain management, and high-speed processing. At the
same time that many simulation applications are becoming more narrowly focused,
others are becoming more global, looking at the entire enterprise or value chain
in a hierarchical fashion from top to bottom.
Perhaps the most dramatic change in simulation will be in the area of software
interoperability and technology integration. Historically, simulation has
been viewed as a stand-alone, project-based technology. Simulation models were
built to support an analysis project, to predict the performance of complex
systems, and to select the best alternative from a few well-defined alternatives.
Typically these projects were time-consuming and expensive, and relied heavily
on the expertise of a simulation analyst or consultant. The models produced
were generally “single use” models that were discarded after the project.
In recent years, the simulation industry has seen increasing interest in
extending the useful life of simulation models by using them on an ongoing basis
(Harrell and Hicks 1998). Front-end spreadsheets and push-button user
interfaces are making such models more accessible to decision makers. In these
flexible simulation models, controlled changes can be made to models
throughout the system life cycle. This trend is growing to include dynamic links
to databases and other data sources, enabling entire models actually to be built
and run in the background using data already available from other enterprise
applications.
The trend to integrate simulation as an embedded component in enterprise
applications is part of a larger development of software components that can be
distributed over the Internet. This movement is being fueled by three emerging
information technologies: (1) component technology that delivers true object
orientation; (2) the Internet or World Wide Web, which connects business
communities and industries; and (3) Web service technologies such as J2EE and
Microsoft's .NET ("DOTNET"). These technologies promise to enable parallel
and distributed model execution and provide a mechanism for maintaining
distributed model repositories that can be shared by many modelers (Fishwick
1997). The interest in Web-based simulation, like all other Web-based
applications, continues to grow.
4.9 Summary
Most manufacturing and service systems are modeled using dynamic, stochastic,
discrete-event simulation. Discrete-event simulation works by converting all
activities to events and consequent reactions. Events are either time-triggered
or condition-triggered, and are therefore processed either chronologically or
when a satisfying condition has been met.
Simulation models are generally defined using commercial simulation software
that provides convenient modeling constructs and analysis tools. Simulation
software consists of several modules with which the user interacts. Internally,
model data are converted to simulation data, which are processed during the
simulation. At the end of the simulation, statistics are summarized in an output
database that can be tabulated or graphed in various forms. The future of
simulation is promising and will continue to incorporate exciting new
technologies.
References
Bowden, Royce. "The Spectrum of Simulation Software." IIE Solutions, May 1998,
pp. 44–46.
Fishwick, Paul A. "Web-Based Simulation." In Proceedings of the 1997 Winter
Simulation Conference, ed. S. Andradottir, K. J. Healy, D. H. Withers, and B. L.
Nelson. Institute of Electrical and Electronics Engineers, Piscataway, NJ, 1997,
pp. 100–109.
Gottfried, Byron S. Elements of Stochastic Process Simulation. Englewood Cliffs, NJ:
Prentice Hall, 1984, p. 8.
Haider, S. W., and J. Banks. “Simulation Software Products for Analyzing Manufacturing
Systems.” Industrial Engineering, July 1986, p. 98.
Harrell, Charles R., and Don Hicks. "Simulation Software Component Architecture
for Simulation-Based Enterprise Applications." In Proceedings of the 1998 Winter
Simulation Conference, ed. D. J. Medeiros, E. F. Watson, J. S. Carson, and M. S.
Manivannan. Institute of Electrical and Electronics Engineers, Piscataway, NJ,
1998, pp. 1717–21.
C H A P T E R
5 GETTING STARTED
“For which of you, intending to build a tower, sitteth not down first, and
counteth the cost, whether he have sufficient to finish it? Lest haply, after he
hath laid the foundation, and is not able to finish it, all that behold it begin to
mock him, Saying, This man began to build, and was not able to finish.”
—Luke 14:28–30
5.1 Introduction
In this chapter we look at how to begin a simulation project. Specifically, we
discuss how to select a project and set up a plan for successfully completing it.
Simulation is not something you do simply because you have a tool and a
process to which it can be applied. Nor should you begin a simulation without
forethought and preparation. A simulation project should be carefully planned
following basic project management principles and practices. Questions to be
answered in this chapter are
• How do you prepare to do a simulation study?
• What are the steps for doing a simulation study?
• What are typical objectives for a simulation study?
• What is required to successfully complete a simulation project?
• What are some pitfalls to avoid when doing simulation?
While specific tasks may vary from project to project, the basic procedure
for doing simulation is essentially the same. Much as in building a house, you
are better off following a time-proven methodology than approaching it
haphazardly. In this chapter, we present the preliminary activities for preparing
to conduct a simulation study. We then cover the steps for successfully
completing a simulation project. Subsequent chapters elaborate on these steps.
Here we focus primarily
on the first step: defining the objective, scope, and requirements of the study.
Poor planning, ill-defined objectives, unrealistic expectations, and unanticipated
costs can turn a simulation project sour. For a simulation project to succeed,
the objectives and scope should be clearly defined, and the requirements for
conducting the project identified and quantified.
Many vendors offer guarantees on their products so they may be returned after
some trial period. This allows you to try out the software to see how well it fits
your needs.
The services provided by the software provider can be a lifesaver. If you are
working late on a project, you may urgently need immediate help with a
modeling or software problem. Basic and advanced training classes, good
documentation, and lots of example models can provide invaluable resources
for becoming proficient in the use of the software.
When selecting simulation software, it is important to assess the total cost
of ownership. There often tends to be an overemphasis on the purchase price of
the software with little regard for the cost associated with learning and using
the software. It has been recommended that simulation software be
purchased on the basis of productivity rather than price (Banks and Gibson
1997). The purchase price of the software can sometimes be only a small
fraction of the cost in time and labor that results from having a tool that is
difficult to use or inadequate for the application.
Other considerations that may come into play when selecting a product include
quality of the documentation, hardware requirements (for example, is a
graphics accelerator card required?), and available consulting services.
are refined and sometimes redefined with each iteration. The decision to push
toward further refinement should be dictated by the objectives and constraints of
the study as well as by sensitivity analysis, which determines whether additional
refinement will yield meaningful results. Even after the results are presented,
there are often requests to conduct additional experiments. Describing this
iterative process, Pritsker and Pegden (1979) observe,
The stages of simulation are rarely performed in a structured sequence beginning
with problem definition and ending with documentation. A simulation project may
involve false starts, erroneous assumptions which must later be abandoned,
reformulation of the problem objectives, and repeated evaluation and redesign of the
model. If properly done, however, this iterative process should result in a simulation
model which properly assesses alternatives and enhances the decision making
process.
Figure 5.1 illustrates this iterative process.
FIGURE 5.1
Iterative nature of simulation.
[Flowchart: Define objective, scope, and requirements → Collect and analyze system data → Build model → Validate model → Conduct experiments → Present results, with iteration back through earlier steps.]
Chapter 5 Getting Started 109
The remainder of this chapter focuses on the first step of defining the
objective, scope, and requirements of the study. The remaining steps will be
discussed
in later chapters.
Design Decisions
1. What division and sequence of processing activities provide the best
flow?
2. What is the best layout of offices, machines, equipment, and other work
areas for minimizing travel time and congestion?
3. How many operating personnel are needed to meet required production
or service levels?
4. What level of automation is the most cost-effective?
5. How many machines, tools, fixtures, or containers are needed to meet
throughput requirements?
6. What is the least-cost method of material handling or transportation that
meets processing requirements?
7. What are the appropriate number and location of pickup and drop-off
points in a material handling or transportation system that minimizes
waiting times?
8. What are the optimum number and size of waiting areas, storage areas,
queues, and buffers?
9. What is the effect of localizing rather than centralizing material storage,
resource pools, and so forth?
10. What automation control logic provides the best utilization of resources?
11. What is the optimum unit load or batch size for processing?
12. Where are the bottlenecks in the system, and how can they be eliminated?
13. How many shifts are needed to meet a specific production or service level?
14. What is the best speed to operate conveyors and other handling or
transportation equipment to meet move demands?
Operational Decisions
1. What is the best way to route material, customers, or calls through the
system?
2. What is the best way to allocate personnel for a particular set of tasks?
3. What is the best schedule for preventive maintenance?
4. How much preventive maintenance should be performed?
5. What is the best priority rule for selecting jobs and tasks?
• Data-gathering responsibilities.
• Experimentation.
• Form of results.
FIGURE 5.2
Confining the model to impacting activities.
[Diagram: Activities A through E in sequence, with a bracket marking the subset of activities included in the scope of the model.]
FIGURE 5.3
Effect of level of detail on model development time.
[Plot: model development time increases with the level of detail, from the minimum required detail up to a one-to-one correspondence with the system.]
"white box" model that is very detailed and produces a one-to-one
correspondence between the model and the system.
Determining the appropriate level of detail is an important decision. Too
much detail makes it difficult and time-consuming to develop and debug the
model. Too little detail may make the model unrealistic by oversimplifying the
process. Figure 5.3 illustrates how the time to develop a model is affected by
the level of detail. It also highlights the importance of including only enough
detail to meet the objectives of the study.
The level of detail is determined largely by the degree of accuracy required
in the results. If only a rough estimate is being sought, it may be sufficient to
model just the flow sequence and processing times. If, on the other hand, a
close answer is needed, all of the elements that drive system behavior should be
precisely modeled.
Dates should be set for completing the data-gathering phase because, left
uncontrolled, it could go on indefinitely. One lesson you learn quickly is that
good data are elusive and you can always spend more time trying to refine the
data. Nearly all models are based partially on assumptions, simply because
complete and accurate information is usually lacking. The project team, in
cooperation with stakeholders in the system, will need to agree on the
assumptions to be made in the model.
that focus attention on the area of interest. If lots of color and detail are added to
the animation, it may detract from the key issues. Usually the best approach is to
keep stationary or background graphics simple, perhaps displaying only a
schematic of the layout using neutral colors. Entities or other dynamic
graphics can then be displayed more colorfully to make them stand out.
Sometimes the most effective presentation is a realistic 3-D animation. Other
times the flow of entities along a flowchart consisting of simple boxes and
arrows may be more effective.
Another effective use of animation in presenting the results is to run two or
more scenarios side by side, displaying a scoreboard that shows how they
compare on one or two key performance measures. The scoreboard may even
include a bar graph or other chart that is dynamically updated to compare
results.
Most decision makers such as managers need to have only a few key items
of information for making the decision. It should be remembered that people,
not the model, make the final decision. With this in mind, every effort should be
made to help the decision maker clearly understand the options and their
associated consequences. The use of charts helps managers visualize and focus
their attention on key decision factors. Charts are attention grabbers and are
much more effective in making a point than plowing through a written
document or sheets of computer printout.
5.8 Summary
Simulation projects are almost certain to fail if there is little or no planning.
Doing simulation requires some preliminary work so that the appropriate
resources and personnel are in place. Beginning a simulation project requires
selecting the right application, defining objectives, acquiring the necessary tools
and resources, and planning the work to be performed. Applications should be
selected that hold the greatest promise for achieving company goals.
Simulation is most effective when it follows a logical procedure. Objectives
should be clearly stated and a plan developed for completing the project. Data
gathering should focus on defining the system and formulating a conceptual
model. A simulation should then be built that accurately yet minimally captures
the system definition. The model should be verified and validated to ensure that
the results can be relied upon. Experiments should be run that are oriented
toward meeting the original objectives. Finally, the results should be presented
in a way
that clearly represents the findings of the study. Simulation is an iterative process
that requires redefinition and fine-tuning at all stages in the process.
Objectives should be clearly defined and agreed upon to avoid wasted
efforts. They should be documented and followed to avoid “scope creep.” A
concisely stated objective and a written scope of work can help keep a
simulation study on track.
Specific items to address when planning a study include defining the model
scope, describing the level of detail required, assigning data-gathering responsibilities, specifying the types of experiments, and deciding on the form of the results. Simulation objectives together with time, resources, and budget constraints
drive the rest of the decisions that are made in completing a simulation project.
Finally, it is people, not the model, who ultimately make the decision.
The importance of involvement of process owners and stakeholders throughout the project cannot be overemphasized. Management support throughout the project is vital. Ultimately, it is only when expectations are met
throughout the project is vital. Ultimately, it is only when expectations are met
or exceeded that a simulation can be deemed a success.
CASE STUDY A
AST COMPUTES BIG BENEFITS USING SIMULATION
John Perry
Manufacturing Engineer
The Problem
AST Research Inc., founded in 1980, has become a multibillion-dollar PC manufacturer.
We assemble personal computers and servers in Fort Worth, Texas, and offshore. For a
long time, we “optimized” (planned) our assembly procedures using traditional
methods—gathering time-and-motion data via stopwatches and videotape, performing
simple arithmetic calculations to obtain information about the operation and
performance of the assembly line, and using seat-of-the-pants guesstimates to “optimize”
assembly line output and labor utilization.
The Model
In December 1994 a new vice president joined AST. Management had long been committed to increasing the plant’s efficiency and output, and our new vice president had experience in using simulation to improve production. We began using ProModel simulation as a tool for optimizing our assembly lines, and to improve our confidence that changes proposed in the assembly process would work out in practice. The results have been significant. They include
• Reduced labor input to each unit.
• Shortened production cycle times.
• Reduced reject rate.
• Increased ability to change assembly instructions quickly when changes become
necessary.
Now when we implement changes, we are confident that those changes in fact will
improve things.
The first thing we did was learn how to use the simulation software. Then we
attempted to construct a model of our current operations. This would serve two important
functions: it would tell us whether we really understood how to use the software, and it
would validate our understanding of our own assembly lines. If we could not construct
a model of our assembly line that agreed with the real one, that would mean that there
was a major flaw either in our ability to model the system or in our understanding of our
own operations.
Building a model of our own operations sounded simple enough. After all, we had an
exhaustive catalog of all the steps needed to assemble a computer, and (from previous
data collection efforts) information on how long each operation took. But building a
model that agreed with our real-world situation turned out to be a challenging yet
tremendously educational activity.
For example, one of our early models showed that we were producing several
thousand units in a few hours. Since we were not quite that good—we were off by at least
a factor of 10—we concluded that we had a major problem in the model we had built, so
we went back to study things in more detail. In every case, our early models failed
because we had overlooked or misunderstood how things actually worked on our
assembly lines.
Eventually, we built a model that worked and agreed reasonably well with our real-
world system out in the factory. To make use of the model, we generated ideas that we
thought would cut down our assembly time and then simulated them in our model.
We examined a number of changes proposed by our engineers and others, and then
simulated the ones that looked most promising. Some proposed changes were counterproductive according to our simulation results. We also did a detailed investigation of our testing stations to determine whether it was more efficient to move
computers to be tested into
the testing station via FIFO (first-in, first-out) or LIFO (last-in, first-out). Modeling
showed us that FIFO was more efficient. When we implemented that change, we realized
the gain we had predicted.
Simulation helped us avoid buying more expensive equipment. Some of our material-handling specialists predicted, based on their experience, that if we increased throughput by 30 percent, we would have to add some additional, special equipment to the assembly floor or risk some serious blockages. Simulation showed us that was not true, and in practice the simulation turned out to be correct.
We determined that we could move material faster if we gave material movers a
specific pattern to follow instead of just doing things sequentially. For example, in
moving certain items from our testing area, we determined that the most time-efficient
way would be to move shelf 1 first, followed by shelf 4, then shelf 3, and so on.
After our first round of making “serious” changes to our operation and simulating
them, our actual production was within a few percentage points of our predicted
production. Also, by combining some tasks, we were able to reduce our head count on
each assembly line significantly.
We have completed several rounds of changes, and today, encouraged by the
experience of our new investor, Samsung, we have made a significant advance that we
call Vision 5. The idea of Vision 5 is to have only five people in each cell
assembling computers. Although there was initially some skepticism about whether this
concept would work, our simulations showed that it would, so today we have
converted one of our “focused factories” to this concept and have experienced
additional benefits. Seeing the benefits from that effort has caused our management to
increase its commitment to simulation.
The Results
Simulation has proven its effectiveness at AST Research. We have achieved a number of
useful, measurable goals. For competitive reasons, specific numbers cannot be provided;
however, in order of importance, the benefits we have achieved are
• Reduced the reject rate.
• Reduced blockage by 25 percent.
• Increased operator efficiency by 20 percent.
• Increased overall output by 15 percent.
• Reduced the labor cost of each computer.
Other benefits included increased ability to explain and justify proposed changes to
management through the use of the graphic animation. Simulation helped us make fewer
missteps in terms of implementing changes that could have impaired our output. We were
able to try multiple scenarios in our efforts to improve productivity and efficiency at comparatively low cost and risk. We also learned that the best simulation efforts invite participation by more disciplines in the factory, which helps in terms of team-building. All of
these benefits were accomplished at minimal cost. These gains have also caused a cultural
shift at AST, and because we have a tool that facilitates production changes, the company
is now committed to continuous improvement of our assembly practices.
Our use of simulation has convinced us that it produces real, measurable results—and
equally important, it has helped us avoid making changes that we thought made common
sense, but when simulated turned out to be ineffective. Because of that demonstrable
payoff, simulation has become a key element of our toolkit in optimizing production.
Questions
1. What were the objectives for using simulation at AST?
2. Why was simulation better than the traditional methods they were using to achieve
these objectives?
3. What common-sense solution was disproved by using simulation?
4. What were some of the unexpected side benefits from using simulation?
5. What insights on the use of simulation did you gain from this case study?
CASE STUDY B
DURHAM REGIONAL HOSPITAL SAVES $150,000
ANNUALLY USING SIMULATION TOOLS
Bonnie Lowder
Management Engineer, Premier
Durham Regional Hospital, a 450-bed facility located in Durham, North Carolina, has
been serving Durham County for 25 years. This public, full-service, acute-care facility is
facing the same competition that is now a part of the entire health care industry. With that
in mind, Durham Regional Hospital is making a conscious effort to provide the highest
quality of care while also controlling costs.
To assist with cost control efforts, Durham Regional Hospital uses Premier’s Customer-
Based Management Engineering program. Premier’s management engineers are very
involved with the hospital’s reengineering and work redesign projects. Simulation is one
of the tools the management engineers use to assist in the redesign of hospital services
and processes. Since the hospital was preparing to add an outpatient services area that
was to open in May 1997, a MedModel simulation project was requested by Durham
Regional Hospital to see how this Express Services area would impact their other
outpatient areas. This project involved the addition of an outpatient Express Services
addition. The Express Services area is made up of two radiology rooms, four phlebotomy
lab stations, a patient interview room, and an EKG room. The model was set up to
examine which kind of patients would best be serviced in that area, what hours the
clinic would operate, and
what staffing levels would be necessary to provide optimum care.
The Model
Data were collected from each department with potential Express Services patients. The
new Express Services area would eliminate the current reception desk in the main radiology department; all radiology outpatients would have their order entry at Express Services. In fiscal year 1996, the radiology department registered 21,159 outpatients. Of those, one-third could have had their procedure performed in Express Services. An average of 18 outpatient surgery patients are seen each week for their preadmission
testing. All these patients could have their preadmission tests performed in the Express
Services area. The laboratory sees approximately 14 walk-in phlebotomy patients per
week. Of those, 10 patients are simple collections and 4 are considered complex. The
simple collections can be performed by anyone trained in phlebotomy. The complex
collections should be done by skilled lab personnel. The collections for all of these
patients could be performed in Express Services. Based on the data, 25 patients a day
from the Convenient Care Clinic will need simple X rays and could also use the
Express Services area. Procedure times for each patient were determined from previous
data collection and observation.
The model was built in two months. Durham Regional Hospital had used simulation
in the past for both Emergency Department and Ambulatory Care Unit redesign projects
and thus the management team was convinced of its efficacy. After the model was
completed, it was presented to department managers from all affected areas. The model
was presented to the assembled group in order to validate the data and assumptions. To
test for validity, the model was run for 30 replications, with each replication lasting a
period of two weeks. The results were measured against known values.
The Results
The model showed that routing all Convenient Care Clinic patients through Express
Services would create a bottleneck in the imaging rooms. This would create unacceptable
wait times for the radiology patients and the Convenient Care patients. Creating a model
scenario where Convenient Care patients were accepted only after 5:00 P.M. showed
that the anticipated problem could be eliminated. The model also showed that the
weekend volume would be very low. Even at minimum staffing levels, the radiology technicians and clerks would be underutilized. The recommendation was made to close the Express Services area on the weekends. Finally, the model showed that the
staffing levels could be lower than had been planned. For example, the workload for the
outpatient lab tech drops off after 6:00 P.M. The recommendation was to eliminate
outpatient lab techs after 6:00 P.M. Further savings could be achieved by cross-training
the radiology technicians and possibly the clerks to perform simple phlebotomy. This
would also provide for backup during busy
times. The savings for the simulation efforts were projected to be $148,762 annually.
These savings were identified from the difference in staffing levels initially requested for
Express Services and the levels that were validated after the simulation model results
were analyzed, as well as the closing of the clinic on weekends. This model would also
be used in the future to test possible changes to Express Services. Durham Regional
Hospital would be able to make minor adjustments to the area and visualize the
outcome before implementation. Since this was a new area, they would also be able to
test minor changes before the area was opened.
The results of the model allowed the hospital to avoid potential bottlenecks in the
radiology department, reduce the proposed staffing levels in the Express Services area, and validate that the clinic should be closed on weekends. As stated by Dottie Hughes, director of
director of Radiology Services: “The simulation model allowed us to see what changes
needed to be made before the area is opened. By making these changes now, we
anticipate a shorter wait time for our patients than if the simulation had not been used.”
The simulation results were able to show that an annual savings of $148,762 could be
expected by altering some of the preconceived Express Services plan.
Future Applications
Larry Suitt, senior vice president, explains, “Simulation has proved to be a valuable tool
for our hospital. It has allowed us to evaluate changes in processes before money is
spent on construction. It has also helped us to redesign existing services to better meet the
needs of our patients.” Durham Regional Hospital will continue to use simulation in new
projects to improve its health care processes. The hospital is responsible for the
ambulance service for the entire county. After the 911 call is received, the hospital’s
ambulance service picks up the patient and takes him or her to the nearest hospital.
Durham Regional Hospital is planning to use simulation to evaluate how relocating
some of the ambulances to other stations will affect the response time to the 911 calls.
Questions
1. Why was this a good application for simulation?
2. What key elements of the study made the project successful?
3. What specific decisions were made as a result of the simulation study?
4. What economic benefit was able to be shown from the project?
5. What insights did you gain from this case study about the way simulation is used?
Harrell, Ghosh, and Bowden: Simulation Using ProModel, Second Edition. © The McGraw-Hill Companies.
C H A P T E R 6
DATA COLLECTION AND ANALYSIS
6.1 Introduction
In the previous chapter, we discussed the importance of having clearly defined
objectives and a well-organized plan for conducting a simulation study. In this
chapter, we look at the data-gathering phase of a simulation project that defines
the system being modeled. The result of the data-gathering effort is a conceptual
or mental model of how the system is configured and how it operates. This conceptual model may take the form of a written description, a flow diagram, or even a simple sketch on the back of an envelope. It becomes the basis for the simulation model that will be created.
Data collection is the most challenging and time-consuming task in simulation. For new systems, information is usually very sketchy and only roughly estimated. For existing systems, there may be years of raw, unorganized data to sort through. Information is seldom available in a form that is directly usable in building a simulation model. It nearly always needs to be filtered and massaged to get it into the right format and to reflect the projected conditions under which the system is to be analyzed. Many data-gathering efforts end up with lots of data but little useful information. Data should be gathered purposefully to avoid wasting not only the modeler’s time but also the time of individuals who are supplying the data.
This chapter presents guidelines and procedures for gathering data.
Statistical techniques for analyzing data and fitting probability distributions to
data are also discussed. The following questions are answered:
• What is the best procedure to follow when gathering data?
• What types of data should be gathered?
records of repair times, for example, often lump together the time spent
waiting for repair personnel to become available and the actual time
spent performing the repair. What you would like to do is separate the
waiting time from the actual repair time because the waiting time is
a function of the availability of the repair person, which may vary
depending on the system operation.
4. Look for common groupings. When dealing with lots of variety in a
simulation such as hundreds of part types or customer profiles, it helps to
look for common groupings or patterns. If, for example, you are modeling a
process that has 300 entity types, it may be difficult to get information on
the exact mix and all of the varied routings that can occur. Having such
detailed information is usually too cumbersome to work with even if you
did have it. The solution is to reduce the data
to common behaviors and patterns. One way to group common data is to
first identify general categories into which all data can be assigned.
Then the percentage of cases that fall within each category is calculated
or estimated. It is not uncommon for beginning modelers to attempt to
use actual logged input streams such as customer arrivals or material
shipments when building a model. After struggling to define hundreds of
individual arrivals or routings, it begins to dawn on them that they can
group this information into a few categories and assign probabilities that
any given instance will fall within a particular category. This allows
dozens and sometimes hundreds of unique instances to be described in a
few brief commands. The secret to identifying common groupings is to
“think probabilistically.”
5. Focus on essence rather than substance. A system definition for modeling
purposes should capture the cause-and-effect relationships and ignore the
meaningless (to simulation) details. This is called system abstraction and
seeks to define the essence of system behavior rather than the substance.
A system should be abstracted to the highest level possible while still
preserving the essence of the system operation. Using this “black box”
approach to system definition, we are not concerned about the nature of
the activity being performed, such as milling, grinding, or inspection. We
are interested only in the impact that the activity has on the use of
resources and the delay of entity flow. A proficient modeler constantly is
thinking abstractly about the system operation and avoids getting caught
up in the mechanics of the process.
6. Separate input variables from response variables. First-time modelers often
confuse input variables that define the operation of the system with response
variables that report system performance. Input variables define how the
system works (activity times, routing sequences, and the like) and should be
the focus of the data gathering. Response variables describe how the system
responds to a given set of input variables (amount of work in process,
resource utilization, throughput times, and so on).
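The grouping approach described in point 4 can be sketched as a short input routine. The category names and percentages below are hypothetical, chosen only to illustrate collapsing many entity types into a few categories with assigned probabilities:

```python
import random

# Hypothetical entity categories and their estimated share of all cases.
# (Assumed values for illustration; a real study would estimate these
# percentages from logged arrival or routing data.)
categories = {
    "standard": 0.70,  # common routing, no extra steps
    "rework":   0.20,  # routed through a rework loop
    "custom":   0.10,  # routed through a special work center
}

def draw_entity_type(rng=random):
    """Sample one entity type according to the category probabilities."""
    names = list(categories)
    weights = [categories[name] for name in names]
    return rng.choices(names, weights=weights, k=1)[0]

# Generate 10,000 arrivals and check that the observed mix roughly
# matches the intended percentages.
random.seed(7)  # fixed seed so the run is reproducible
counts = {name: 0 for name in categories}
for _ in range(10_000):
    counts[draw_entity_type()] += 1
print(counts)
```

Hundreds of distinct part numbers or customer profiles collapse into a few statements like this, which is what “thinking probabilistically” buys the modeler.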
operational information is easy to define. If, on the other hand, the process has
evolved into an informal operation with no set rules, it can be very difficult to
define. For a system to be simulated, operating policies that are undefined
and ambiguous must be codified into defined procedures and rules. If decisions
and outcomes vary, it is important to at least define this variability statistically
using probability expressions or distributions.
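Codifying a variable outcome as a probability expression can be as simple as the sketch below; the 80/20 pass/rework split is an assumed figure for illustration, not one from the text:

```python
import random

# Codifying an informal, variable decision as a defined probabilistic rule.
# The 80 percent pass rate is an assumed value for illustration only.
PASS_PROBABILITY = 0.80

def inspection_outcome(rng=random):
    """Return 'pass' with probability 0.80, otherwise 'rework'."""
    return "pass" if rng.random() < PASS_PROBABILITY else "rework"

random.seed(1)
outcomes = [inspection_outcome() for _ in range(1000)]
# over 1,000 trials, roughly 800 passes and 200 reworks
```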
go and defines what happens to the entity, not where it happens. An entity flow diagram, on the other
hand, is more of a routing chart that shows the physical movement of entities through the system from
location to location. An entity flow diagram should depict any branching that may occur in the flow such
as routings to alternative work centers or rework loops. The purpose of the entity flow diagram is to
document the overall flow of entities in the system and to provide a visual aid for communicating the
entity flow to others. A flow diagram is easy to understand and gets everyone thinking in the same way
about the system. It can easily be expanded as additional information is gathered to show activity times,
where resources are used, and so forth.
Notice that the description of operation really provides the details of the entity flow diagram. This detail is needed for defining the simulation model. The
times associated with activities and movements can be just estimates at this
stage. The important thing to accomplish at this point is simply to describe how
entities are processed through the system.
The entity flow diagram, together with the description of operation,
provides a good data document that can be expanded as the project
progresses. At this point, it is a good idea to conduct a structured walk-through
of the operation using the entity flow diagram as the focal point. Individuals
should be involved in this review who are familiar with the operation to ensure
that the description of oper- ation is accurate and complete.
Based on this description of operation, a first cut at building the model can
begin. Using ProModel, a model for simulating Dr. Brown’s practice can be built
in a matter of just a few minutes. Little translation is needed as both the diagram
(Figure 6.2) and the data table (Table 6.1) can be entered pretty much in the
same way they are shown. The only additional modeling information required
to build a running model is the interarrival time of patients.
Getting a basic model up and running early in a simulation project helps
hold the interest of stakeholders. It also helps identify missing information and
motivates team members to try to fill in these information gaps. Additional
questions about the operation of the system are usually raised once a basic
model is running. Some of the questions that begin to be asked are Have all of
the routings been accounted for? and Have any entities been overlooked? In
essence, modeling the system actually helps define and validate system data.
At this point, any numerical values such as activity times, arrival rates, and
others should also be firmed up. Having a running model enables estimates and
other assumptions to be tested to see if it is necessary to spend additional time
getting more accurate information. For existing systems, obtaining more
accurate data is usually accomplished by conducting time studies of the activity
or event under investigation. A sample is gathered to represent all conditions
under which the activity or event occurs. Any biases that do not represent
normal operating conditions are eliminated. The sample size should be large
enough to provide an accurate picture yet not so large that it becomes costly
without really adding additional information.
For comparative studies in which two design alternatives are evaluated, the
fact that assumptions are made is less significant because we are evaluating
relative performance, not absolute performance. For example, if we are trying to
determine whether on-time deliveries can be improved by assigning tasks to a
resource by due date rather than by first-come, first-served, a simulation can provide useful information to make this decision without necessarily having completely accurate data. Because both models use the same assumptions, it may be
possible to compare relative performance. We may not know what the absolute
performance of the best option is, but we should be able to assess fairly
accurately how much better one option is than another.
Some assumptions will naturally have a greater influence on the validity of
a model than others. For example, in a system with large processing times
compared to move times, a move time that is off by 20 percent may make
little or no difference in system throughput. On the other hand, an activity time
that is off by 20 percent could make a 20 percent difference in throughput. One
way to assess the influence of an assumption on the validity of a model is
through sensitivity analysis. Sensitivity analysis, in which a range of values is
tested for potential impact on model performance, can indicate just how
accurate an assumption needs to be. A decision can then be made to firm up the
assumption or to leave it as is. If, for example, the degree of variation in a
particular activity time has little or no impact on system performance, then a
constant activity time may be used. At the other extreme, it may be found that
even the type of distribution has a noticeable impact on model behavior and
therefore needs to be selected carefully.
A simple approach to sensitivity analysis for a particular assumption is to
run three different scenarios showing (1) a “best” or most optimistic case,
(2) a “worst” or most pessimistic case, and (3) a “most likely” or best-guess
case. These runs will help determine the extent to which the assumption
influences model behavior. It will also help assess the risk of relying on the
particular assumption.
FIGURE 6.3 Descriptive statistics for a sample data set of 100 observations.
0.99 0.41 0.89 0.59 0.98 0.47 0.70 0.94 0.39 0.92
1.30 0.67 0.64 0.88 0.57 0.87 0.43 0.97 1.20 1.50
1.20 0.98 0.89 0.62 0.97 1.30 1.20 1.10 1.00 0.44
0.67 1.70 1.40 1.00 1.00 0.88 0.52 1.30 0.59 0.35
0.67 0.51 0.72 0.76 0.61 0.37 0.66 0.75 1.10 0.76
0.79 0.78 0.49 1.10 0.74 0.97 0.93 0.76 0.66 0.57
1.20 0.49 0.92 1.50 1.10 0.64 0.96 0.87 1.10 0.50
0.60 1.30 1.30 1.40 1.30 0.96 0.95 1.60 0.58 1.10
0.43 1.60 1.20 0.49 0.35 0.41 0.54 0.83 1.20 0.99
1.00 0.65 0.82 0.52 0.52 0.80 0.72 1.20 0.59 1.60
distribution), and stationarity (the distribution of the data doesn’t change with
time) should be determined. Using data analysis software such as Stat::Fit, data
sets can be automatically analyzed, tested for usefulness in a simulation, and
matched to the best-fitting underlying distribution. Stat::Fit is bundled with
ProModel and can be accessed from the Tools menu after opening ProModel. To
illustrate how data are analyzed and converted to a form for use in simulation,
let’s take a look at a data set containing 100 observations of an inspection
operation time, shown in Table 6.2. By entering these data or importing them
from a file into Stat::Fit, a descriptive analysis of the data set can be performed. The summary of this analysis is displayed in Figure 6.3. These parameters describe the entire sample collection. Because reference will be made later to some of these parameters, a brief definition of each is given below.
Mean—the average value of the data.
Median—the value of the middle observation when the data are sorted in
ascending order.
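These two statistics are easy to verify by hand. As a sketch, here they are computed for the first seven inspection times in Table 6.2 using Python's standard library:

```python
import statistics

# First seven inspection times from Table 6.2.
sample = [0.99, 0.41, 0.89, 0.59, 0.98, 0.47, 0.70]

mean = statistics.mean(sample)      # average value of the data
median = statistics.median(sample)  # middle observation after sorting

print(round(mean, 4))  # 0.7186
print(median)          # 0.7
```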
Chapter 6 Data Collection and Analysis 137
TABLE 6.3 100 Outdoor Temperature Readings from 8:00 A.M. to 8:00 P.M.
57 57 58 59 59 60 60 62 62 62
63 63 64 64 65 66 66 68 68 69
70 71 72 72 73 74 73 74 75 75
75 75 76 77 78 78 79 80 80 81
80 81 81 82 83 83 84 84 83 84
83 83 83 82 82 81 81 80 81 80
79 79 78 77 77 76 76 75 74 75
75 74 73 73 72 72 72 71 71 71
71 70 70 70 70 69 69 68 68 68
67 67 66 66 66 65 66 65 65 64
Scatter Plot
This is a plot of adjacent points in the sequence of observed values plotted
against each other. Thus each plotted point represents a pair of consecutive
observations (Xi, Xi+1) for i = 1, 2, . . . , n − 1. This procedure is repeated for
all adjacent data points so that 100 observations would result in 99 plotted
points. If the Xi’s are independent, the points will be scattered randomly. If,
however, the data are dependent on each other, a trend line will be apparent.
If the Xi’s are positively correlated, a positively sloped trend line will appear.
Negatively correlated Xi’s will produce a negatively sloped line. A scatter
plot is a simple way to detect strongly dependent behavior.
Figure 6.4 shows a plot of paired observations using the 100 inspection
times from Table 6.2. Notice that the points are scattered randomly with no
defined pattern, indicating that the observations are independent.
Figure 6.5 is a plot of the 100 temperature observations shown in Table 6.3.
Notice the strong positive correlation.
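The pairing behind a scatter plot can be sketched in a few lines of Python (an illustration, not part of the book or of Stat::Fit): build the consecutive pairs (Xi, Xi+1) and summarize their linear association with a sample correlation, which will be near zero for independent data and near ±1 for strongly dependent data.

```python
# Sketch: the (Xi, Xi+1) pairs a scatter plot displays, plus their
# sample correlation as a numeric summary of the visual trend.
def lag1_pairs(data):
    """Return the n-1 consecutive pairs (Xi, Xi+1)."""
    return list(zip(data, data[1:]))

def pearson(pairs):
    """Sample correlation of the paired values; near 0 suggests independence."""
    xs, ys = [p[0] for p in pairs], [p[1] for p in pairs]
    n = len(pairs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# A steadily rising series, like morning temperatures, is strongly dependent:
rising = [57, 58, 60, 63, 65, 66, 68, 70, 71, 74]
print(pearson(lag1_pairs(rising)))  # large positive value
```

As the text notes, 100 observations yield 99 plotted points; `lag1_pairs` on a 100-element list returns exactly 99 pairs.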
Autocorrelation Plot
If observations in a sample are independent, they are also uncorrelated.
Correlated data are dependent on each other and are said to be autocorrelated. A
measure of the autocorrelation, rho (ρ), can be calculated using the equation
    ρ = [ Σᵢ₌₁ⁿ⁻ʲ (xᵢ − x̄)(xᵢ₊ⱼ − x̄) ] / [ σ²(n − j) ]
FIGURE 6.4  Scatter plot showing uncorrelated data.
FIGURE 6.5  Scatter plot showing correlated temperature data.
where j is the lag or distance between data points; σ is the standard deviation of
the population, approximated by the standard deviation of the sample; and X¯ is
the sample mean. The calculation is carried out to 1/5 of the length of the data
set, where diminishing pairs start to make the calculation unreliable.
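The formula above can be sketched directly in Python (an illustration, not Stat::Fit's implementation). Following the text, σ² is approximated from the sample, and lags are taken out only to about 1/5 of the series length.

```python
# Sketch of the lag-j autocorrelation estimate:
#   rho(j) = sum_{i=1}^{n-j} (x_i - xbar)(x_{i+j} - xbar) / (sigma^2 (n - j))
def autocorrelation(data, j):
    n = len(data)
    xbar = sum(data) / n
    var = sum((x - xbar) ** 2 for x in data) / n  # sigma^2 estimated from sample
    num = sum((data[i] - xbar) * (data[i + j] - xbar) for i in range(n - j))
    return num / (var * (n - j))

def autocorrelations(data):
    """Values for j = 1 up to about 1/5 of the data length, as in the text."""
    return [autocorrelation(data, j) for j in range(1, len(data) // 5 + 1)]
```

A perfectly alternating series such as 1, 2, 1, 2, ... gives a lag-1 autocorrelation of −1, while a steadily increasing series gives a strongly positive value.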
This calculation of autocorrelation assumes that the data are taken from a
stationary process; that is, the data would appear to come from the same
distribution regardless of when the data were sampled (that is, the data are time
invariant). In the case of a time series, this implies that the time origin may be
shifted without
140 Part I Study Chapters
FIGURE 6.6  Autocorrelation graph showing noncorrelation.
FIGURE 6.7  Autocorrelation graph showing correlation.
affecting the statistical characteristics of the series. Thus the variance for the
whole sample can be used to represent the variance of any subset. If the
process being studied is not stationary, the calculation of autocorrelation is more
complex.
The autocorrelation value varies between 1 and −1 (that is, between positive
and negative correlation). If the autocorrelation is near either extreme, the data
are autocorrelated. Figure 6.6 shows an autocorrelation plot for the 100
inspection time observations from Table 6.2. Notice that the values are near zero,
indicating
FIGURE 6.8  Runs test based on points above and below the median and number of turning points.
little or no correlation. The numbers in parentheses below the x axis are the
maximum autocorrelation in both the positive and negative directions.
Figure 6.7 is an autocorrelation plot for the sampled temperatures in
Table 6.3. The graph shows a broad autocorrelation.
Runs Tests
The runs test looks for runs in the data that might indicate data correlation. A
run in a series of observations is the occurrence of an uninterrupted sequence of
numbers showing the same trend. For instance, a consecutive set of increasing
or decreasing numbers is said to provide runs “up” or “down,” respectively. Two
types of runs tests that can be made are the median test and the turning point test.
Both of these tests can be conducted automatically on data using Stat::Fit. The
runs test for the 100 sample inspection times in Table 6.2 is
summarized in Figure 6.8. The result of each test is either do not reject the
hypothesis that the series is random
or reject that hypothesis with the level of significance given. The level of
significance is the probability that a rejected hypothesis is actually true—that is,
that the test rejects the randomness of the series when the series is actually random.
The median test measures the number of runs—that is, sequences of numbers
above and below the median. The run can be a single number above or
below the median if the numbers adjacent to it are in the opposite direction. If
there are too many or too few runs, the randomness of the series is rejected.
This median runs test uses a normal approximation for acceptance or rejection
that requires that the number of data points above or below the median be
greater than 10.
The turning point test measures the number of times the series changes
direction (see Johnson, Kotz, and Kemp 1992). Again, if there are too many
turning points or too few, the randomness of the series is rejected. This turning
point runs test uses a normal approximation for acceptance or rejection that
requires more than 12 data points.
While there are many other runs tests for randomness, some of the most
sensitive require larger data sets, in excess of 4,000 numbers (see Knuth 1981).
The number of runs in a series of observations indicates the randomness
of those observations. A few runs indicate strong correlation, point to point.
Several runs may indicate cyclic behavior.
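A minimal sketch of the median runs test (an illustration using the normal approximation the text describes, not Stat::Fit's implementation) counts runs above and below the median and converts the count to a z score; an extreme z in either direction rejects randomness.

```python
import statistics

# Sketch of the median runs test: too many runs (large positive z) or
# too few runs (large negative z) both reject randomness.
def median_runs_test(data):
    """Return (number of runs, z score under the normal approximation)."""
    med = statistics.median(data)
    signs = [1 if x > med else -1 for x in data if x != med]  # ties dropped
    runs = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    n1, n2 = signs.count(1), signs.count(-1)
    n = n1 + n2
    mean_runs = 2 * n1 * n2 / n + 1
    var_runs = 2 * n1 * n2 * (2 * n1 * n2 - n) / (n * n * (n - 1))
    return runs, (runs - mean_runs) / var_runs ** 0.5
```

A perfectly alternating series produces far too many runs (z strongly positive); a sorted series produces only two runs (z strongly negative). Both would be rejected as nonrandom.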
bimodal, as shown in Figure 6.9. The fact that there are two clusters of data
indicates that there are at least two distinct causes of downtimes, each producing
different distributions for repair times. Perhaps after examining the cause of the
downtimes, it is discovered that some were due to part jams that were quickly
fixed while others were due to mechanical failures that took longer to repair.
One type of nonhomogeneous data occurs when the distribution changes
over time. This is different from two or more distributions manifesting themselves
over the same time period such as that caused by mixed types of downtimes. An
example of a time-changing distribution might result from an operator who works
20 percent faster during the second hour of a shift than during the first hour. Over
long periods of time, a learning curve phenomenon occurs where workers perform
at a faster rate as they gain experience on the job. Such distributions are called
nonstationary or time variant because of their time-changing nature. A common
example of a distribution that changes with time is the arrival rate of customers to
a service facility. Customer arrivals to a bank or store, for example, tend to occur
at a rate that fluctuates throughout the day. Nonstationary distributions can be
detected by plotting subgroups of data that occur within successive time intervals.
For example, sampled arrivals between 8 A.M. and 9 A.M. can be plotted separately
from arrivals between 9 A.M. and 10 A.M., and so on. If the distribution is of a
different type or is the same distribution but shifted up or down such that the mean
changes value over time, the distribution is nonstationary. This fact will need to be
taken into account when defining the model behavior. Figure 6.10 is a plot of
customer arrival rates for a department store occurring by half-hour interval
between 10 A.M. and 6 P.M.
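The subgrouping step can be sketched as follows (an illustrative helper, not from the book): bucket time-stamped arrivals into successive intervals so that per-interval counts can be compared for a drifting mean.

```python
from collections import Counter

# Sketch: bucket time-stamped arrivals into successive intervals of a
# given width so counts per interval can be compared for stationarity.
def counts_by_interval(arrival_times, width):
    """arrival_times and width share the same unit (e.g., minutes)."""
    return Counter(int(t // width) for t in arrival_times)
```

For example, arrivals at minutes 5, 10, 70, 75, and 80 with a 60-minute width fall two in the first hour and three in the second; a consistent rise or fall across intervals suggests a nonstationary arrival rate.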
FIGURE 6.9  Bimodal distribution of downtimes indicating multiple causes (part jams and mechanical failures). [x axis: repair time; y axis: frequency of occurrence]
FIGURE 6.10  Change in rate of customer arrivals between 10 A.M. and 6 P.M. [x axis: time of day, 10:00 A.M. to 6:00 P.M.; y axis: rate of arrival]
Note that while the type of distribution (Poisson) is the same for each period, the
rate (and hence the mean interarrival time) changes every half hour.
The second case we look at is where two sets of data have been gathered and
we desire to know whether they come from the same population or are identically
distributed. Situations where this type of testing is useful include the following:
• Interarrival times have been gathered for different days and you want to
know if the data collected for each day come from the same distribution.
• Activity times for two different operators were collected and you want to
know if the same distribution can be used for both operators.
• Time to failure has been gathered on four similar machines and you are
interested in knowing if they are all identically distributed.
One easy way to tell whether two sets of data have the same distribution is
to run Stat::Fit and see what distribution best fits each data set. If the same
distribution fits both sets of data, you can assume that they come from the same
population. If in doubt, they can simply be modeled as separate distributions.
Several formal tests exist for determining whether two or more data sets can
be assumed to come from identical populations. Some of them apply to specific
families of distributions such as analysis of variance (ANOVA) tests for normally
distributed data. Other tests are distribution independent and can be applied to
compare data sets having any distribution, such as the Kolmogorov-Smirnov
two-sample test and the chi-square multisample test (see Hoover and Perry 1990).
The Kruskal-Wallis test is another nonparametric test because no assumption is
made about the distribution of the data (see Law and Kelton 2000).
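The heart of the Kolmogorov-Smirnov two-sample test can be sketched in a few lines (an illustration only; it computes the D statistic but not the critical values a full test would compare it against): D is the largest vertical gap between the two empirical cumulative distribution functions.

```python
# Sketch of the Kolmogorov-Smirnov two-sample D statistic: the maximum
# gap between the two empirical CDFs, evaluated at each observed value.
def ks_two_sample(a, b):
    a, b = sorted(a), sorted(b)
    d = 0.0
    for x in a + b:  # the maximum gap occurs at an observed value
        fa = sum(1 for v in a if v <= x) / len(a)
        fb = sum(1 for v in b if v <= x) / len(b)
        d = max(d, abs(fa - fb))
    return d
```

Identical samples give D = 0; completely nonoverlapping samples give D = 1. In practice D would be compared against a tabled critical value for the two sample sizes.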
Arrivals per 5-Minute Interval    Frequency
 0                                15
 1                                11
 2                                19
 3                                16
 4                                 8
 5                                 8
 6                                 7
 7                                 4
 8                                 5
 9                                 3
10                                 3
11                                 1
FIGURE 6.11  Histogram showing arrival count per five-minute interval. [x axis: arrival count 0–11; y axis: frequency 0–20]
FIGURE 6.12  Frequency distribution for 100 observed inspection times.
FIGURE 6.13  Histogram distribution for 100 observed inspection times.
Binomial Distribution
The binomial distribution is a discrete distribution that expresses the probability
(p) that a particular condition or outcome can occur in n trials. We call an
occurrence of the outcome of interest a success and its nonoccurrence a failure.
For a binomial distribution to apply, each trial must be a Bernoulli trial: it must
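The binomial probability mass function can be sketched directly from this description (an illustration, not from the book):

```python
from math import comb

# Sketch of the binomial pmf: the probability of exactly k successes
# in n independent Bernoulli trials, each with success probability p.
def binomial_pmf(k, n, p):
    return comb(n, k) * p ** k * (1 - p) ** (n - k)
```

For example, with n = 10 trials and p = 0.5, the probability of exactly 5 successes is C(10, 5) / 2¹⁰ = 252/1024 ≈ 0.246, and the probabilities over k = 0 through 10 sum to 1.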
Uniform Distribution
A uniform or rectangular distribution is used to describe a process in which
the outcome is equally likely to fall between the values of a and b. In a uniform
distribution, the mean is (a + b)/2. The variance is expressed by (b − a)²/12.
The probability density function for the uniform distribution is shown in
Figure 6.15.
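These moment formulas are easy to verify by sampling (an illustrative check, not from the book):

```python
import random

# Sketch: compare the stated uniform moments against sampled values.
def uniform_moments(a, b):
    """Mean (a + b)/2 and variance (b - a)^2 / 12 of uniform(a, b)."""
    return (a + b) / 2, (b - a) ** 2 / 12

rng = random.Random(1)  # seeded for repeatability
samples = [rng.uniform(2, 10) for _ in range(100_000)]
mean, var = uniform_moments(2, 10)          # 6.0 and 64/12
sample_mean = sum(samples) / len(samples)   # close to 6.0
```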
FIGURE 6.14  The probability mass function of a binomial distribution. [x axis: x = 0 to 10; y axis: p(x) from 0 to 0.6]
FIGURE 6.15  The probability density function of a uniform distribution. [x axis: a to b; y axis: f(x)]
FIGURE 6.16  The probability density function of a triangular distribution. [x axis: a, m, b; y axis: f(x)]
The uniform distribution is often used in the early stages of simulation
projects because it is a convenient and well-understood source of random
variation. In the real world, it is extremely rare to find an activity time that is
uniformly distributed because nearly all activity times have a central tendency or
mode. Sometimes a uniform distribution is used to represent a worst-case test for
variation when doing sensitivity analysis.
Triangular Distribution
A triangular distribution is a good approximation to use in the absence of data,
especially if a minimum, maximum, and most likely value (mode) can be
estimated. These are the three parameters of the triangular distribution. If a, m,
and b represent the minimum, mode, and maximum values respectively of a
triangular distribution, then the mean of a triangular distribution is (a + m + b)/3.
The variance is defined by (a² + m² + b² − am − ab − mb)/18. The probability
density function for the triangular distribution is shown in Figure 6.16.
The weakness of the triangular distribution is that values in real activity
times rarely taper linearly, which means that the triangular distribution will
probably create more variation than the true distribution. Also, extreme values
that may be rare are not captured by a triangular distribution. This means that the
full range of values of the true distribution of the population may not be
represented by the triangular distribution.
Normal Distribution
The normal distribution (sometimes called the Gaussian distribution) describes
phenomena that vary symmetrically above and below the mean (hence the
bell-shaped curve). While the normal distribution is often selected for defining
activity times, in practice manual activity times are rarely ever normally
distributed. They are nearly always skewed to the right (the ending tail of the
distribution is longer than the beginning tail). This is because humans can
sometimes take significantly longer than the mean time, but usually not much
less than the mean time.
FIGURE 6.17  The probability density function for a normal distribution. [y axis: f(x)]
Exponential Distribution
Sometimes referred to as the negative exponential, this distribution is used
frequently in simulations to represent event intervals. The exponential
distribution is defined by a single parameter, the mean (µ). This distribution is
related to the Poisson distribution in that if an occurrence happens at a rate that
is Poisson distributed, the time between occurrences is exponentially distributed.
In other words, the mean of the exponential distribution is the inverse of the
Poisson rate. For example, if the rate at which customers arrive at a bank is
Poisson distributed with a rate of 12 per hour, the time between arrivals is
exponentially distributed with a mean of 5 minutes. The exponential distribution
has a memoryless or forgetfulness property that makes it well suited for modeling
certain phenomena that occur independently of one another. For example, if
arrival times are exponentially distributed with a mean of 5 minutes, then the
expected time before the next arrival is 5 minutes regardless of how much time
has elapsed since the previous arrival. It is as though there is no memory of how
much time has already elapsed when predicting the next event—hence the term
memoryless. Examples
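The bank example can be sketched numerically (an illustration, not from the book): a Poisson arrival rate of 12 per hour implies exponentially distributed interarrival times with mean 60/12 = 5 minutes.

```python
import random

# Sketch: the exponential mean is the inverse of the Poisson rate.
rate_per_hour = 12
mean_minutes = 60 / rate_per_hour  # 5.0 minutes between arrivals

# expovariate takes the rate (1/mean), so interarrival gaps in minutes are:
rng = random.Random(42)  # seeded for repeatability
gaps = [rng.expovariate(1 / mean_minutes) for _ in range(100_000)]
sample_mean = sum(gaps) / len(gaps)  # close to 5.0
```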
FIGURE 6.18  The probability density function for an exponential distribution. [y axis: f(x)]
FIGURE 6.19  Ranking distributions by goodness of fit for inspection time data set.
For a uniform distribution the probabilities for all intervals are equal, so the
remaining intervals also have a hypothesized probability of .20.
Step 3: Calculate the expected frequency for each cell (ei). The expected
frequency (ei) for each cell (i) is the expected number of observations that would
fall into each interval if the null hypothesis were true. It is calculated by
multiplying the total number of observations (n) by the probability (pi) that an
observation would fall within each cell. So for each cell, the expected frequency
(ei) equals npi.
In our example, since the hypothesized probability (p) of each cell is the
same, the expected frequency for every cell is ei = npi = 40 × .2 = 8.
Step 4: Adjust cells if necessary so that all expected frequencies are at
least 5. If the expected frequency of any cell is less than 5, the cells must be
adjusted. This “rule of five” is a conservative rule that provides satisfactory
validity of the chi-square test. When adjusting cells, the easiest approach is to
simply consolidate adjacent cells. After any consolidation, the total number of
cells should be at least 3; otherwise you no longer have a meaningful
differentiation of the data and, therefore, will need to gather additional data. If
you merge any cells as the result of this step, you will need to adjust the observed
frequency, hypothesized probability, and expected frequency of those cells
accordingly.
In our example, the expected frequency of each cell is 8, which meets the
minimum requirement of 5, so no adjustment is necessary.
Step 5: Calculate the chi-square statistic. The equation for calculating the
chi-square statistic is

    χ²calc = Σᵢ₌₁ᵏ (oᵢ − eᵢ)² / eᵢ

If the fit is good, the chi-square statistic will be small.
For our example,

    χ²calc = Σᵢ₌₁⁵ (oᵢ − eᵢ)² / eᵢ = .125 + .125 + .50 + 0 + 0 = 0.75.
Step 6: Determine the number of degrees of freedom (k − 1). A
simple way of determining a conservative number of degrees of
freedom is to take the number of cells minus 1, or k − 1.
For our example, the number of degrees of freedom is
k − 1 = 5 − 1 = 4.
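Steps 3 through 6 can be sketched as follows. The observed counts below are hypothetical (the text does not list them here); they were chosen to sum to n = 40 and reproduce the statistic of 0.75 computed above.

```python
# Sketch of the chi-square goodness-of-fit computation:
#   chi2 = sum over cells of (observed - expected)^2 / expected
def chi_square(observed, expected):
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

observed = [9, 9, 6, 8, 8]   # hypothetical counts; sum to n = 40
expected = [8] * 5           # e_i = n * p_i = 40 * .2 = 8 per cell
stat = chi_square(observed, expected)  # (1 + 1 + 4 + 0 + 0) / 8 = 0.75
dof = len(observed) - 1                # k - 1 = 4 degrees of freedom
```

The statistic of 0.75 would then be compared against the chi-square critical value for 4 degrees of freedom at the chosen level of significance.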
Note: The number of degrees of freedom is often computed to be
k − s − 1, where k is the number of cells and s is the number of
parameters estimated from the data for defining the distribution. So for a
normal distribution with two parameters (mean and standard deviation)
that are
FIGURE 6.20  Visual comparison between beta distribution and a histogram of the 100 sample inspection time values.
FIGURE 6.21  Normal distribution with mean = 1 and standard deviation = .25.
FIGURE 6.22  A triangular distribution with minimum = 2, mode = 5, and maximum = 15. [x axis: 2, 5, 15]
random variable X will have values generated for X where min ≤ X < max. Thus
values will never be equal to the maximum value (in this case 20). Because
generated values are automatically truncated when used in a context requiring an
integer, only integer values that are evenly distributed from 1 to 19 will occur
(this is effectively a discrete uniform distribution).
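The truncation behavior can be sketched outside ProModel as well (an illustrative parallel in Python, not ProModel syntax):

```python
import random

# Sketch: a continuous uniform(1, 20) value, truncated to an integer,
# yields the discrete values 1 through 19 -- never 20.
rng = random.Random(3)  # seeded for repeatability
values = [int(rng.uniform(1, 20)) for _ in range(10_000)]
```

Every generated value is strictly below 20 before truncation, so the integers 1 through 19 each occur with roughly equal frequency and 20 never occurs.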
Objective
The objective of the study is to determine station utilization and throughput of the
system.
[Layout diagram: Station 1, Station 2, and Inspection, with rejected 19" & 21" monitors routed from Inspection.]
Entities
19" monitor
21" monitor
25" monitor
Workstation Information
Station 1 5 5%
Station 2 8 8%
Station 3 5 0%
Inspection 5 0%
Processing Sequence
Arrivals
A cartload of four monitor assemblies arrives every four hours normally
distributed with a standard deviation of 0.2 hour. The probability of an arriving
monitor being of a particular size is
19" .6
21" .3
25" .1
Move Times
All movement is on an accumulation conveyor with the following times:
Station 1 Station 2 12
Station 2 Inspection 15
Inspection Station 3 12
Inspection Station 1 20
Inspection Station 2 14
Station 1 Inspection 18
Move Triggers
Entities move from one location to the next based on available capacity of the
input buffer at the next location.
Work Schedule
Stations are scheduled to operate eight hours a day.
Assumption List
• No downtimes (downtimes occur too infrequently).
• Dedicated operators at each workstation are always available during the
scheduled work time.
• Rework times are half of the normal operation times.
6.13 Summary
Data for building a model should be collected systematically with a view of how
the data are going to be used in the model. Data are of three types: structural,
operational, and numerical. Structural data consist of the physical objects that
make up the system. Operational data define how the elements behave.
Numerical data quantify attributes and behavioral parameters.
When gathering data, primary sources should be used first, such as historical
records or specifications. Developing a questionnaire is a good way to request
information when conducting personal interviews. Data gathering should start
with structural data, then operational data, and finally numerical data. The first
piece of the puzzle to be put together is the routing sequence because everything
else hinges on the entity flow.
Numerical data for random variables should be analyzed to test for
independence and homogeneity. Also, a theoretical distribution should be fit to
the data if there is an acceptable fit. Some data are best represented using an
empirical distribution. Theoretical distributions should be used wherever possible.
Data should be documented, reviewed, and approved by concerned
individuals. This data document becomes the basis for building the simulation
model and provides a baseline for later modification or for future studies.
10.7, 5.4, 7.8, 12.2, 6.4, 9.5, 6.2, 11.9, 13.1, 5.9, 9.6, 8.1, 6.3, 10.3, 11.5,
12.7, 15.4, 7.1, 10.2, 7.4, 6.5, 11.2, 12.9, 10.1, 9.9, 8.6, 7.9, 10.3, 8.3, 11.1
Type A B C D E
Observations 10 14 12 9 5
19. While doing your homework one afternoon, you notice that you are
frequently interrupted by friends. You decide to record the times between
interruptions to see if they might be exponentially distributed. Here are 30
observed times (in minutes) that you have recorded; conduct a goodness-of-fit
test to see if the data are exponentially distributed. (Hint: Use the data average
as an estimate of the mean. For the range, assume a range between 0 and
infinity. Divide the cells based on equal probabilities (pi) for each cell rather
than equal cell intervals.)
This exercise is intended to help you practice sorting through and organizing
data on a service facility. The facility is a fast-food restaurant called Harry’s
Drive-Through and is shown on the next page.
Harry’s Drive-Through caters to two types of customers, walk-in and
drive-through. During peak times, walk-in customers arrive exponentially every
4 minutes and place their order, which is sent to the kitchen. Nonpeak arrivals
occur every 8–10 minutes. Customers wait in a pickup queue while their orders
are filled in the kitchen. Two workers are assigned to take and deliver orders.
Once customers pick up their orders, 60 percent of the customers stay and eat at
a table, while the other 40 percent leave with their orders. There is seating for 30
people, and the number of people in each customer party is one
[Restaurant layout diagram: drive-through order and pickup windows, kitchen, walk-in order and pickup counters, and table area.]
40 percent of the time, two 30 percent of the time, three 18 percent of the time,
four 10 percent of the time, and five 2 percent of the time. Eating time is
normally distributed with a mean of 15 minutes and a standard deviation of
2 minutes. If a walk-in customer enters and sees that more than 15 customers are
waiting to place their orders, the customer will balk (that is, leave).
Harry’s is especially popular as a drive-through restaurant. Cars enter at a rate of 10
per hour during peak times, place their orders, and then pull forward to pick up their
orders. No more than five cars can be in the pickup queue at a time. One person is
dedicated to taking orders. If over seven cars are at the order station, arriving cars will
drive on.
The time to take orders is uniformly distributed between 0.5 minute and 1.2
minutes including payment. Orders take an average of 8.2 minutes to fill with a
standard deviation of 1.2 minutes (normal distribution). These times are the same
for both walk-in and drive-through customers.
The objective of the simulation is to analyze performance during peak periods to see
how long customers spend waiting in line, how long lines become, how often customers
balk (pass by), and what the utilization of the table area is.
Problem
Summarize these data in table form and list any assumptions that need to be made in
order to conduct a meaningful simulation study based on the data given and objectives
specified.
References
Banks, Jerry; John S. Carson II; Barry L. Nelson; and David M. Nicol. Discrete-Event System Simulation. Englewood Cliffs, NJ: Prentice Hall, 2001.
Banks, Jerry, and Randall R. Gibson. “Selecting Simulation Software.” IIE Solutions, May 1997, pp. 29–32.
———. Stat::Fit. South Kent, CT: Geer Mountain Software Corporation, 1996.
Breiman, Leo. Statistics: With a View toward Applications. New York: Houghton Mifflin, 1973.
Brunk, H. D. An Introduction to Mathematical Statistics. 2nd ed. New York: Blaisdell Publishing Co., 1965.
Carson, John S. “Convincing Users of Model’s Validity Is Challenging Aspect of Modeler’s Job.” Industrial Engineering, June 1986, p. 77.
Hoover, S. V., and R. F. Perry. Simulation: A Problem Solving Approach. Reading, MA: Addison-Wesley, 1990.
Johnson, Norman L.; Samuel Kotz; and Adrienne W. Kemp. Univariate Discrete Distributions. New York: John Wiley & Sons, 1992, p. 425.
Knuth, Donald E. Seminumerical Algorithms. Reading, MA: Addison-Wesley, 1981.
Law, Averill M., and W. David Kelton. Simulation Modeling & Analysis. New York: McGraw-Hill, 2000.
Stuart, Alan, and J. Keith Ord. Kendall’s Advanced Theory of Statistics. Vol. 2. Oxford: Oxford University Press, 1991.
Harrell, Ghosh, Bowden: Simulation Using ProModel, Second Edition. I. Study Chapters. 7. Model Building. © The McGraw-Hill Companies.
C H A P T E R
7 MODEL BUILDING
7.1 Introduction
In this chapter we look at how to translate a conceptual model of a system into a
simulation model. The focus is on elements common to both manufacturing and
service systems such as entity flow and resource allocation. Modeling issues
more specific to either manufacturing or service systems will be covered in later
chapters.
Modeling is more than knowing how to use a simulation software tool.
Learning to use modern, easy-to-use software is one of the least difficult aspects
of modeling. Indeed, current simulation software makes poor and inaccurate
models easier to create than ever before. Unfortunately, software cannot make
decisions about how the elements of a particular system operate and how they
should interact with each other. This is the role of the modeler.
Modeling is considered an art or craft as much as a science. Knowing the
theory behind simulation and understanding the statistical issues are the science
part. But knowing how to effectively and efficiently represent a system using a
simulation tool is the artistic part of simulation. It takes a special knack to be
able to look at a system in the abstract and then creatively construct a
representative logical model using a simulation tool. If three different people
were to model the same system, chances are three different modeling
approaches would be taken. Modelers tend to use the techniques with which
they are most familiar. So the best way to develop good modeling skills is to
look at lots of good examples and, most of all, practice, practice, practice!
Skilled simulation analysts are able to quickly translate a process into a
simulation model and begin conducting experiments.
terms of the same structural and operational elements that were described in
Chapter 6. This is essentially how models are defined using ProModel.
FIGURE 7.1  Relationship between model complexity and model utility. [Axes: complexity vs. utility, with the optimum level of model complexity marked.]
Chapter 7 Model Building 175
7.3.1 Entities
Entities are the objects processed in the model that represent the inputs and
outputs of the system. Entities in a system may have special characteristics such
as speed, size, condition, and so on. Entities follow one or more different routings
in a system and have processes performed on them. They may arrive from outside
the system or be created within the system. Usually, entities exit the system after
visiting a defined sequence of locations.
Simulation models often make extensive use of entity attributes. For
example, an entity may have an attribute called Condition that may have a value
of 1 for defective or 0 for nondefective. The value of this attribute may determine
where the entity gets routed in the system. Attributes are also frequently used to
gather information during the course of the simulation. For example, a modeler
may define an attribute called ValueAddedTime to track the amount of
value-added time an entity spends in the system.
The statistics of interest that are generally collected for entities include time
in the system (flow time), quantity processed (output), value-added time, time
spent waiting to be serviced, and the average number of entities in the system.
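The attribute idea described above can be sketched outside ProModel as well. This is an illustrative analogy in Python, not ProModel syntax; the attribute names come from the text, while the `route` helper and its location names are hypothetical.

```python
from dataclasses import dataclass

# Illustrative sketch: an entity carrying the two attributes named in the
# text, with routing driven by the Condition attribute.
@dataclass
class Entity:
    condition: int = 0            # 1 = defective, 0 = nondefective
    value_added_time: float = 0.0 # accumulated value-added time

def route(entity):
    """Hypothetical routing rule: defective entities go to rework."""
    return "rework" if entity.condition == 1 else "next_station"
```

A defective entity (`condition = 1`) is routed to rework; any other entity continues to the next station, and `value_added_time` can be incremented at each processing step.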
Entities to Include
When deciding what entities to include in a model, it is best to look at every
kind of entity that has a bearing on the problem being addressed. For example, if
a component part is assembled to a base item at an assembly station, and the
station is always stocked with the component part, it is probably unnecessary to
model the component part. In this case, what is essential to simulate is just the
time delay to perform the assembly. If, however, the component part may not
always be available due to delays, then it might be necessary to simulate the flow
of component parts as well as the base items. The rule is that if you can
adequately capture the dynamics of the system without including the entity,
don’t include it.
Entity Aggregating
It is not uncommon for some manufacturing systems to have hundreds of part
types or for a service system to have hundreds of different customer types.
Modeling each one of these entity types individually would be a painstaking task
that would yield little, if any, benefit. A better approach is to treat entity types in
the aggregate whenever possible (see Figure 7.2). This works especially well
when all entities have the same processing sequence. Even if a slight difference
in processing exists, it often can be handled through use of attributes or by using
probabilities. If statistics by entity type are not required and differences in
treatment can be defined using attributes or probabilities, it makes sense to
aggregate entity types into a single generic entity and perhaps call it part or
customer.
Entity Resolution
Each individual item or person in the system need not always be represented by
a corresponding model entity. Sometimes a group of items or people can be
represented by a single entity (see Figure 7.3). For example, a single entity might
be used to represent a batch of parts processed as a single unit or a party of
people eating together in a restaurant. If a group of entities is processed as a
group and moved as a group, there is no need to model them individually.
Activity times or statistics that are a function of the size of the group can be
handled using an attribute that keeps track of the items represented by the single
entity.
[FIGURE 7.2 and FIGURE 7.3: illustrations of entity aggregating and entity resolution, showing entity types A, B, ..., X.]
7.3.2 Locations
Locations are places in the system that entities visit for processing, waiting, or
decision making. A location might be a treatment room, workstation, check-in
point, queue, or storage area. Locations have a holding capacity and may have
certain times that they are available. They may also have special input and output
such as input based on highest priority or output based on first-in, first-out
(FIFO).
In simulation, we are often interested in the average contents of a location
such as the average number of customers in a queue or the average number of
parts in a storage rack. We might also be interested in how much time entities
spend at a particular location for processing. There are also location state
statistics that are of interest such as utilization, downtime, or idle time.
Locations to Include
Deciding what to model as a route location depends largely on what happens at
the location. If an entity merely passes through a location en route to another
without spending any time, it probably isn’t necessary to include the location.
For example, a water spray station through which parts pass without pausing
probably doesn’t need to be included in a model. In considering what to define as
a location, any point in the flow of an entity where one or more of the following
actions take place may be a candidate:
• Place where an entity is detained for a specified period of time while
undergoing an activity (such as fabrication, inspection, or cleaning).
• Place where an entity waits until some condition is satisfied (like the
availability of a resource or the accumulation of multiple entities).
• Place or point where some action takes place or logic gets executed, even
though no time is required (splitting or destroying an entity, sending a
signal, incrementing an attribute or variable).
Location Resolution
Depending on the level of resolution needed for the model, a location may be an
entire factory or service facility at one extreme, or individual positions on a desk
or workbench at the other. Locations are combined into a single location differently depending on whether they are parallel or serial.
When combining parallel locations having identical processing times, the
resulting location should have a capacity equal to the combined capacities of the
individual locations; however, the activity time should equal that of only one of
the locations (see Figure 7.4). A situation where multiple locations might be
combined in this way is a parallel workstation. An example of a parallel workstation is a work area where three machines perform the same operation. All three machines could be modeled as a single location with a capacity equal to three. Combining locations can significantly reduce model size, especially when the number of parallel units gets very large, such as a 20-station checkout area in a large shopping center or a 100-seat dining area in a restaurant.
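The parallel-combination rule can be checked with a small calculation outside ProModel. This pure-Python sketch (the deterministic 1-minute activity time and the arrival times are illustrative) compares one capacity-3 location against three capacity-1 machines fed in rotation:

```python
import heapq

def completion_times(arrivals, capacity, service_time):
    """FIFO completion times for one location with a holding capacity
    and a fixed (deterministic) activity time per entity."""
    free_at = [0.0] * capacity            # when each parallel slot frees up
    heapq.heapify(free_at)
    finishes = []
    for arrival in sorted(arrivals):
        slot_free = heapq.heappop(free_at)
        finish = max(arrival, slot_free) + service_time
        heapq.heappush(free_at, finish)
        finishes.append(finish)
    return finishes

arrivals = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]

# One combined location: capacity 3, activity time of a single machine.
combined = completion_times(arrivals, capacity=3, service_time=1.0)

# Three capacity-1 machines fed in rotation ("by turn").
fed = [arrivals[i::3] for i in range(3)]
individual = sorted(
    f for machine in fed for f in completion_times(machine, 1, 1.0)
)
print(combined == individual)
```

The two representations produce identical completion times, which is why the combined location only needs the activity time of a single machine.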
When combining serial locations, the resultant location should have a
capacity equal to the sum of the individual capacities and an activity time
equal to the sum of the activity times. An example of a combined serial
sequence of locations
FIGURE 7.4 Example of combining three parallel stations into a single station. (The figure shows Operations 10, 20, and 30; the parallel stations have Capacity = 1 and take 1 min. each.)
FIGURE 7.5 Example of combining three serial stations into a single station. (The figure shows serial Capacity = 1 stations combined into one location.)
might be a synchronous transfer line that has multiple serial stations. All of
them could be represented as a single location with a capacity equal to the
number of stations (see Figure 7.5). Parts enter the location, spend an amount of
time equal to the sum of all the station times, and then exit the location. The
behavior may not be exactly the same as having individual stations, such as
when the location becomes blocked and up to three parts may be finished and
waiting to move to station 5. The modeler must decide if the representation is a
good enough approximation for the intended purpose of the simulation.
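As a rough check of this approximation, the lockstep behavior of a synchronous transfer line can be sketched in a few lines of Python (not ProModel syntax; the station count and arrival times are illustrative). When parts arrive no faster than the one-minute cycle, each part exits three minutes after entry, which is exactly what the single combined location (capacity 3, 3-minute activity time) would produce:

```python
def synchronous_line(arrival_minutes, stations=3):
    """Lockstep transfer line: every minute all parts advance one
    station, and a waiting part is admitted when station 1 is empty.
    Returns {arrival_minute: exit_minute} for each part."""
    line = [None] * stations
    queue = sorted(arrival_minutes)
    exits, t = {}, 0
    while queue or any(s is not None for s in line):
        done = line[-1]                    # part leaving the last station
        line = [None] + line[:-1]          # everyone advances one station
        if done is not None:
            exits[done] = t
        if queue and queue[0] <= t and line[0] is None:
            line[0] = queue.pop(0)         # admit the next waiting part
        t += 1
    return exits

result = synchronous_line([0, 1, 2])
print(result)
```

When arrivals outpace the cycle time, the exact line and the combined approximation diverge, which is the blocking caveat noted above.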
7.3.3 Resources
Resources are the agents used to process entities in the system. Resources may
be either static or dynamic depending on whether they are stationary (like a copy
machine) or move about in the system (like an operator). Dynamic resources behave much like entities in that they both move about in the system. Like entities, resources may be either animate (living beings) or inanimate (a tool or
machine). The primary difference between entities and resources is that entities
enter the system, have a defined processing sequence, and, in most cases, finally
leave the system. Resources, however, usually don’t have a defined flow
sequence and remain in the system (except for off-duty times). Resources often
respond to requests for their use, whereas entities are usually the objects
requiring the use of resources.
In simulation, we are interested in how resources are utilized, how many resources are needed, and how entity processing is affected by resource availability. The response time for acquiring a resource may also be of interest.
Resources to Include
The decision as to whether a resource should be included in a model depends
largely on what impact it has on the behavior of the system. If the resource is
dedicated to a particular workstation, for example, there may be little benefit in
including it in the model since entities never have to wait for the resource to
become available before using it. You simply assign the processing time to the
workstation. If, on the other hand, the resource may not always be available
(it experiences downtime) or is a shared resource (multiple activities
compete for the same resource), it should probably be included. Once again,
the consideration is how much the resource is likely to affect system behavior.
Consumable Resources
Depending on the purpose of the simulation and degree of influence on system
behavior, it may be desirable to model consumable resources. Consumable
resources are used up during the simulation and may include
• Services such as electricity or compressed air.
• Supplies such as staples or tooling.
Consumable resources are usually modeled either as a function of time or as
a step function associated with some event such as the completion of an
operation. This can be done by defining a variable or attribute that changes
value with time or by event. A variable representing the consumption of
packaging materials, for example, might be based on the number of entities
processed at a packaging station.
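A consumable tracked as an event-driven step function can be sketched as follows (pure Python, not ProModel; the wrap-per-entity and on-hand figures are invented for illustration):

```python
class PackagingStation:
    """Consumption tracked as a step function of packaging events,
    mirroring a ProModel variable updated by event (figures invented)."""
    def __init__(self, wrap_per_entity=2.5, wrap_on_hand=100.0):
        self.wrap_per_entity = wrap_per_entity
        self.wrap_on_hand = wrap_on_hand
        self.wrap_used = 0.0

    def package_one(self):
        # Step change at each completion event, not a smooth function of time.
        self.wrap_used += self.wrap_per_entity
        self.wrap_on_hand -= self.wrap_per_entity

station = PackagingStation()
for _ in range(12):
    station.package_one()
print(station.wrap_used, station.wrap_on_hand)
```

After 12 packaging events the variable has stepped up by 12 times the per-entity consumption, just as a ProModel variable incremented in operation logic would.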
Transport Resources
Transport resources are resources used to move entities within the system.
Examples of transport resources are lift trucks, elevators, cranes, buses, and airplanes. These resources are dynamic and often are capable of carrying multiple
entities. Sometimes there are multiple pickup and drop-off points to deal with.
The transporter may even have a prescribed route it follows, similar to an
entity routing. A common example of this is a bus route.
In advanced manufacturing systems, the most complex element to model is
often the transport or material handling system. This is because of the complex
operation that is associated with these computer-controlled systems such as
Chapter 7 Model Building 181
7.3.4 Paths
Paths define the course of travel for entities and resources. Paths may be
isolated, or they may be connected to other paths to create a path network. In
ProModel simple paths are automatically created when a routing path is
defined. A routing path connecting two locations becomes the default path of
travel if no explicitly defined path or path network connects the locations.
Paths linked together to form path networks are common in manufacturing
and service systems. In manufacturing, aisles are connected to create travel
ways for lift trucks and other material handlers. An AGVS sometimes has
complex path networks that allow controlled traffic flow of the vehicles in the
system. In service systems, office complexes have hallways connecting other
hallways that connect to offices. Transportation systems use roadways, tracks,
and so on that are often interconnected.
When using path networks, there can sometimes be hundreds of routes to
take to get from one location to another. ProModel is able to automatically
navigate entities and resources along the shortest path sequence between two locations. Optionally, you can explicitly define the path sequence to take to get from one point to any other point in the network.
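The automatic navigation described here is, in essence, a shortest-path search over the network. A generic Dijkstra sketch (the aisle network and distances are invented for illustration; this is not a claim about ProModel's internal algorithm) shows the idea:

```python
import heapq

def shortest_path(edges, start, goal):
    """Dijkstra sketch of path-network navigation; edges maps
    node -> {neighbor: distance} (network names illustrative)."""
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:                   # rebuild the path walked
            path = [node]
            while node in prev:
                node = prev[node]
                path.append(node)
            return d, path[::-1]
        if d > dist.get(node, float("inf")):
            continue                       # stale queue entry
        for nbr, w in edges.get(node, {}).items():
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(pq, (nd, nbr))
    return float("inf"), []

aisles = {"dock": {"aisle1": 30, "aisle2": 45},
          "aisle1": {"mill": 25}, "aisle2": {"mill": 5}}
print(shortest_path(aisles, "dock", "mill"))
```

The longer first leg through aisle2 still wins overall (45 + 5 versus 30 + 25), which is why greedy hop-by-hop choices are not enough and a full network search is needed.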
7.4.1 Routings
Routings define the sequence of flow for entities from location to location. When
entities complete their activity at a location, the routing defines where the entity
goes next and specifies the criterion for selecting from among multiple possible
locations.
Frequently entities may be routed to more than one possible location. When
choosing from among multiple alternative locations, a rule or criterion must be
defined for making the selection. A few typical rules that might be used for
selecting the next location in a routing decision include
• Probabilistic—entities are routed to one of several locations according to
a frequency distribution.
• First available—entities go to the first available location in the order they
are listed.
• By turn—the selection rotates through the locations in the list.
• Most available capacity—entities select the location that has the most
available capacity.
• Until full—entities continue to go to a single location until it is full and
then switch to another location, where they continue to go until it is full,
and so on.
• Random—entities choose randomly from among a list of locations.
• User condition—entities choose from among a list of locations based on a
condition defined by the user.
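A few of these rules can be sketched as plain functions (Python, not ProModel syntax; the location names, weights, and capacities are illustrative):

```python
import random

def next_location(rule, contents, capacities, rng=random):
    """Select the next location under a few of the rules listed above.
    Location names, weights, and capacities are illustrative."""
    locs = list(capacities)
    if rule == "probabilistic":            # fixed frequency distribution
        return rng.choices(locs, weights=[0.5, 0.3, 0.2])[0]
    if rule == "first available":          # first one with spare capacity
        for loc in locs:
            if contents[loc] < capacities[loc]:
                return loc
        return None
    if rule == "most available capacity":
        return max(locs, key=lambda loc: capacities[loc] - contents[loc])
    raise ValueError("unknown rule: " + rule)

contents = {"A": 2, "B": 1, "C": 0}
capacities = {"A": 2, "B": 3, "C": 1}
print(next_location("first available", contents, capacities))
print(next_location("most available capacity", contents, capacities))
```

With location A full, both deterministic rules happen to pick B here: "first available" because A has no spare capacity, and "most available capacity" because B has two free slots.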
Recirculation
Sometimes entities revisit or pass through the same location multiple times. The
best approach to modeling this situation is to use an entity attribute to keep track
of the number of passes through the location and determine the operation or
routing accordingly. When using an entity attribute, the attribute is incremented either on entry to or on exit from a location and tested before making the particular operation or routing decision to see which pass the entity is currently on. Based on the value of the attribute, a different operation or routing may be executed.
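The pass-count technique can be sketched as follows (Python, not ProModel; the attribute name and three-pass limit are illustrative):

```python
def route_after_operation(entity, max_passes=3):
    """Increment the pass-count attribute on entry, then use it to pick
    the routing (attribute name and pass limit are illustrative)."""
    entity["passes"] = entity.get("passes", 0) + 1   # increment on entry
    if entity["passes"] < max_passes:
        return "rework_loop"      # send the entity back through again
    return "exit"

part = {}
history = [route_after_operation(part) for _ in range(3)]
print(history)
```

The same attribute test can just as well select a different operation time on each pass rather than a different routing.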
Unordered Routings
Certain systems may not require a specific sequence for visiting a set of
locations but allow activities to be performed in any order as long as they all
eventually get performed. An example is a document requiring signatures from
several departments. The sequence in which the signatures are obtained may be
unimportant as long as all signatures are obtained.
In unordered routing situations, it is important to keep track of which
locations have or haven’t been visited. Entity attributes are usually the most
practical way of tracking this information. An attribute may be defined for
each possi- ble location and then set to 1 whenever that location is visited. The
routing is then based on which of the defined attributes are still set to zero.
in terms of the time required, the resources used, and any other logic that
impacts system performance. For operations requiring more than a time and
resource designation, detailed logic may need to be defined using if–then
statements, variable assignment statements, or some other type of statement
(see Section 7.4.8, “Use of Programming Logic”).
An entity operation is one of several different types of activities that take
place in a system. As with any other activity in the system, the decision to
include an entity operation in a model should be based on whether the operation
impacts entity flow in some way. For example, if a labeling activity is performed
on entities in motion on a conveyor, the activity need not be modeled unless there are situations where the labeler experiences frequent interruptions.
Consolidation of Entities
Entities often undergo operations where they are consolidated or become either
physically or logically connected with other entities. Examples of entity
consolidation include batching and stacking. In such situations, entities are allowed to simply accumulate until a specified quantity has been gathered, and then they are grouped together into a single unit. Entity consolidation may be temporary, allowing them to later be separated, or permanent, in which case
the consolidated entities no longer retain their individual identities. Figure 7.6
illustrates these two types of consolidation.
Examples of consolidating multiple entities to a single entity include
• Accumulating multiple items to fill a container.
• Gathering people together into groups of five for a ride at an amusement
park.
• Grouping items to load them into an oven for heating.
In ProModel, entities are consolidated permanently using the COMBINE command. Entities may be consolidated temporarily using the GROUP command.
Attachment of Entities
In addition to consolidating accumulated entities at a location, entities can also
be attached to a specific entity at a location. Examples of attaching entities might
be
FIGURE 7.6 Consolidation of entities into a single entity. In (a) permanent consolidation, batched entities get destroyed. In (b) temporary consolidation, batched entities are preserved for later unbatching.
FIGURE 7.7 Attachment of one or more entities to another entity. In (a) permanent attachment, the attached entities get destroyed. In (b) temporary attachment, the attached entities are preserved for later detachment.
Dividing Entities
In some entity processes, a single entity is converted into two or more new entities. An example of entity splitting might be an item that is cut into smaller
pieces or a purchase order that has carbon copies removed for filing or
sending to accounting. Entities are divided in one of two ways: either the
entity is split up into two or more new entities and the original entity no longer
exists; or additional entities are merely created (cloned) from the original
entity, which continues to exist. These two methods are shown in Figure 7.8.
FIGURE 7.8 Multiple entities created from a single entity. Either (a) the entity splits into multiple entities (the original entity is destroyed) or (b) the entity creates one or more entities (the original entity continues).
Examples of entities being split or creating new entities from a single entity
include
• A container or pallet load being broken down into the individual
items comprising the load.
• Driving in and leaving a car at an automotive service center.
• Separating a form from a multiform document.
• A customer placing an order that is processed while the customer waits.
• A length of bar stock being cut into smaller pieces.
In ProModel, entities are split using a SPLIT statement. New entities are
created from an existing entity using a CREATE statement. Alternatively, entities
can be conveniently split or created using the routing options provided in ProModel.
Periodic Arrivals
Periodic arrivals occur more or less at the same interval each time. They may
occur in varying quantities, and the interval is often defined as a random
variable. Periodic arrivals are often used to model the output of an upstream
process that feeds into the system being simulated. For example, computer
monitors might arrive from an assembly line to be packaged at an interval
that is normally
distributed with a mean of 1.6 minutes and a standard deviation of 0.2 minute.
Examples of periodic arrivals include
• Parts arriving from an upstream operation that is not included in the model.
• Customers arriving to use a copy machine.
• Phone calls for customer service during a particular part of the day.
Periodic arrivals are defined in ProModel by using the arrivals table.
Scheduled Arrivals
Scheduled arrivals occur when entities arrive at specified times with possibly
some defined variation (that is, a percentage will arrive early or late). Scheduled
arrivals may occur in quantities greater than one such as a shuttle bus
transporting guests at a scheduled time. It is often desirable to be able to read in
a schedule from an external file, especially when the number of scheduled arrivals is large and the schedule may change from run to run. Examples of
scheduled arrivals include
• Customer appointments to receive a professional service such as counseling.
• Patients scheduled for lab work.
• Production release times created through an MRP (material requirements
planning) system.
Scheduled arrivals sometime occur at intervals, such as appointments that occur
at 15-minute intervals with some variation. This may sound like a periodic
arrival; however, periodic arrivals are autocorrelated in that the absolute time of
each arrival is dependent on the time of the previous arrival. In scheduled arrival intervals, each arrival occurs independently of the previous arrival. If
one appointment arrives early or late, it will not affect when the next
appointment arrives.
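The distinction can be seen in two small generators (Python sketch; the 15-minute interval and 2-minute deviation are illustrative). In the periodic case a long interval delays every subsequent arrival, while in the scheduled case each deviation applies only to its own appointment:

```python
import random

def periodic_arrivals(n, mean=15.0, sd=2.0, rng=random):
    """Each interval is measured from the PREVIOUS arrival, so one late
    arrival pushes every later arrival back (autocorrelated)."""
    t, times = 0.0, []
    for _ in range(n):
        t += rng.gauss(mean, sd)
        times.append(t)
    return times

def scheduled_arrivals(n, interval=15.0, sd=2.0, rng=random):
    """Each arrival deviates from its own fixed appointment time,
    independently of whether its neighbors ran early or late."""
    return [i * interval + rng.gauss(0.0, sd) for i in range(1, n + 1)]

random.seed(7)
print(periodic_arrivals(4))
print(scheduled_arrivals(4))
```

Over a long run, scheduled arrivals stay anchored to the appointment grid, while the periodic sequence drifts as its interval errors accumulate.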
ProModel provides a straightforward way for defining scheduled arrivals
using the arrivals table. A variation may be assigned to a scheduled arrival to
simulate early or late arrivals for appointments.
Fluctuating Arrivals
Sometimes entities arrive at a rate that fluctuates with time. For example, the
rate at which customers arrive at a bank usually varies throughout the day with
peak and lull times. This pattern may be repeated each day (see Figure 7.9).
Examples of fluctuating arrivals include
• Customers arriving at a restaurant.
• Arriving flights at an international airport.
• Arriving phone calls for customer service.
In ProModel, fluctuating arrivals are specified by defining an arrival cycle
pattern for a time period that may be repeated as often as desired.
Event-Triggered Arrivals
In many situations, entities are introduced to the system by some internal trigger
such as the completion of an operation or the lowering of an inventory level to a
FIGURE 7.9 A daily cycle pattern of arrivals. (The figure plots arrival quantity, 20 to 120, against hour of day from 9 A.M. to 6 P.M.)
Use of Priorities
Locations and resources may be requested with a particular priority in
ProModel. Priorities range from 0 to 999, with higher values having higher
priority. If no priority is specified, it is assumed to be 0. For simple prioritizing, you should use priorities from 0 to 99. Priorities greater than 99 are for preempting entities and downtimes currently in control of a location or resource. The command Get Operator, 10 will attempt to get the resource called Operator with a priority of 10. If the Operator is available when requested, the priority has no significance.
If, however, the Operator is currently busy, this request having a priority of 10
will get the Operator before any other waiting request with a lower priority.
Preemption
Sometimes it is desirable to have a resource or location respond immediately to
a task, interrupting the current activity it is doing. This ability to bump
another activity or entity that is using a location or resource is referred to as
preemption. For example, an emergency patient requesting a particular doctor
may preempt a patient receiving routine treatment by the doctor. Manufacturing
and service organizations frequently have “hot” jobs that take priority over
any routine job being done. Preemption is achieved by specifying a
preemptive priority (100 to 999) for entering the location or acquiring the
resource.
Priority values in ProModel are divided into 10 levels (0 to 99, 100 to
199, . . . , 900 to 999). Levels higher than 99 are used to preempt entities or
downtimes of a lower level. Multiple preemptive levels make it possible to
preempt entities or downtimes that are themselves preemptive.
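The level rule can be reduced to a one-line comparison (a Python sketch of the rule as described above, not ProModel internals):

```python
def preempts(request_priority, holder_priority):
    """A request bumps the current holder only when its priority falls
    in a strictly higher hundreds level (0-99, 100-199, ..., 900-999)."""
    return request_priority // 100 > holder_priority // 100

print(preempts(150, 80))     # level 1 request bumps a level 0 holder
print(preempts(99, 10))      # same level: waits its turn, never preempts
print(preempts(910, 150))    # preemptive requests can themselves be bumped
```

Within a level, a higher number only moves a request ahead in the waiting line; crossing into a higher hundreds level is what triggers preemption.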
times, one solution is to try to synchronize the arrivals with the work schedule.
This usually complicates the way arrivals are defined. Another solution, and
usually an easier one, is to have the arrivals enter a preliminary location where
they test whether the facility is closed and, if so, exit the system. In
ProModel, if a location where entities are scheduled to arrive is unavailable
at the time of an arrival, the arriving entities are simply discarded.
mean of 2 minutes, the time between failures should be defined as xlast + E(10) where xlast is the last repair time generated using E(2) minutes.
Downtime Resolution
Unfortunately, data are rarely available on equipment downtime. When they are
available, they are often recorded as overall downtime and seldom broken down
into number of times down and time between failures. Depending on the nature
of the downtime information and degree of resolution required for the
simulation, downtimes can be treated in the following ways:
• Ignore the downtime.
• Simply increase processing times to adjust for downtime.
• Use average values for mean time between failures (MTBF) and
mean time to repair (MTTR).
• Use statistical distributions for time between failures and time to repair.
Ignoring Downtime. There are several situations where it might make sense to
ignore downtimes in building a simulation model. Obviously, one situation is
where absolutely no data are available on downtimes. If there is no knowledge
time. It also implies that during periods of high equipment utilization, the same
amount of downtime occurs as during low utilization periods. Equipment
failures should generally be based on operating time and not on elapsed time
because elapsed time includes operating time, idle time, and downtime. It should
be left to the simulation to determine how idle time and downtime affect the
overall elapsed time between failures.
To illustrate the difference this can make, let’s assume that the following
times were logged for a given operation:
In use 20
Down 5
Idle 15
Total time 40
A similar situation occurs in activity times that have more than one distribution. For example, when a machine goes down, 30 percent of the time it takes Triangular(0.2, 1.5, 3) minutes to repair and 70 percent of the time it takes Triangular(3, 7.5, 15) minutes to repair. The logic for the downtime definition might be
if rand() <= .30
then wait T(.2, 1.5, 3) min
else wait T(3, 7.5, 15) min
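The same two-way branch can be sampled outside ProModel, for example to sanity-check the mixture's mean repair time (Python sketch; note that random.triangular takes its arguments as low, high, mode, unlike the T(min, mode, max) notation above):

```python
import random

def repair_time(rng=random):
    """Draw from the 30/70 mixture of triangular repair times above."""
    if rng.random() <= 0.30:
        return rng.triangular(0.2, 3.0, 1.5)   # T(.2, 1.5, 3) minutes
    return rng.triangular(3.0, 15.0, 7.5)      # T(3, 7.5, 15) minutes

random.seed(3)
samples = [repair_time() for _ in range(10_000)]
mean = sum(samples) / len(samples)
# Analytic mixture mean: 0.3*(0.2+1.5+3)/3 + 0.7*(3+7.5+15)/3 = 6.42 min.
print(round(mean, 2))
```

A sample mean near 6.42 minutes confirms the branch probabilities and parameters were entered the right way around.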
A .20 E(5)
B .50 E(8)
C .30 E(12)
module. To do this within operation logic (or in any other logic), you would enter
something like the following, where Count is defined as a local variable:
int Count = 1
while Count < 11 do
{
    NumOfBins[Count] = 4
    Inc Count
}
The braces “{” and “}” are the ProModel notation (also used in C++ and
Java) for starting and ending a block of logic. In this case it is the block of
statements to be executed repeatedly by an object as long as the local variable
Count is less than 11.
model. When faced with building a supermodel, it is always a good idea to partition the model into several submodels and tackle the problem on a smaller scale
first. Once each of the submodels has been built and validated, they can be
merged into a larger composite model. This composite model can be structured
either as a single monolithic model or as a hierarchical model in which the
details of each submodel are hidden unless explicitly opened for viewing.
Several ways have been described for merging individual submodels into a
composite model (Jayaraman and Agarwal 1996). Three of the most common
ways that might be considered for integrating submodels are
• Option 1: Integrate all of the submodels just as they have been built. This
approach preserves all of the detail and therefore accuracy of the
individual submodels. However, the resulting composite model may be
enormous and cause lengthy execution times. The composite model may
be structured as a flat model or, to reduce complexity, as a hierarchical
model.
• Option 2: Use only the recorded output from one or more of the
submodels. By simulating and recording the time at which each entity
exits the model for a single submodel, these exit times can be used in
place of the submodel for determining the arrival times for the larger
model. This eliminates the need to include the overhead of the individual
submodel in the composite model. This technique, while drastically
reducing the complexity of the composite model, may not be possible if
the interaction with the submodel is two-way. For submodels representing
subsystems that simply feed into a larger system (in which case the
subsystem operates fairly independently of downstream activities), this
technique is valid. An example is an assembly facility in which fabricated
components or even subassemblies feed into a final assembly line.
Basically, each feeder line is viewed as a “black box” whose output is
read from a file.
• Option 3: Represent the output of one or more of the submodels as
statistical distributions. This approach is the same as option 2, but
instead of using the recorded output times from the submodel in the
composite model, a statistical distribution is fit to the output times and
used to generate the input to the composite model. This technique
eliminates the need for using data files that, depending on the submodel,
may be quite large. Theoretically, it should also be more accurate because
the true underlying distribution is used instead of just a sample unless
there are discontinuities in the output. Multiple sample streams can also
be generated for running multiple replications.
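Option 2 can be sketched as a record-and-replay pair (Python; the JSON file layout and file name are assumptions for illustration, not anything prescribed by ProModel):

```python
import json
import os
import tempfile

def record_exits(exit_times, path):
    """Save a feeder submodel's entity exit times so the composite model
    can replay them as arrivals (JSON layout is an assumption)."""
    with open(path, "w") as f:
        json.dump(sorted(exit_times), f)

def replay_arrivals(path):
    """Read the recorded exit times back as the arrival stream."""
    with open(path) as f:
        return json.load(f)

path = os.path.join(tempfile.gettempdir(), "feeder_exits.json")
record_exits([4.2, 1.1, 9.7], path)
print(replay_arrivals(path))
```

The feeder line is thus treated as a black box: only its output times cross into the composite model, which is valid only while the interaction is one-way.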
One is to include cost factors in the model itself and dynamically update cost collection variables during the simulation. ProModel includes a cost module for assigning costs to different factors in the simulation such as entity cost, waiting cost, and operation cost. The alternative approach is to run a cost analysis after the simulation, applying cost factors to collected cost drivers such as resource utilization or time spent in storage. The first method is best when it is difficult to summarize cost drivers. For example, the cost per unit of production may be based on the types of resources used and the time for using each type. This may be a lot of information for each entity to carry using attributes. It is much easier to simply update the entity’s cost attribute dynamically whenever a particular resource has been used. Dynamic cost tracking suffers, however, from requiring cost factors to be considered during the modeling stage rather than the analysis stage. For some models, it may be difficult to dynamically track costs during a simulation, especially when relationships become very complex.
The preferred way to analyze costs, whenever possible, is to do a
postsimulation analysis and to treat cost modeling as a follow-on activity to system modeling rather than as a concurrent activity (see Lenz and Neitzel
1995). There are several advantages to separating the logic model from the
cost model. First, the model is not encumbered with tracking information that
does not directly affect how the model operates. Second, and perhaps more
importantly, post analysis of costs gives more flexibility for doing “what-if”
scenarios with the cost model. For example, different cost scenarios can be run
based on varying labor rates in a matter of seconds when applied to
simulation output data that are immediately available. If modeled during the
simulation, a separate simulation would have to be run applying each labor rate.
7.6 Summary
18. What is the problem with modeling downtimes in terms of mean time
between failures (MTBF) and mean time to repair (MTTR)?
19. Why should unplanned downtimes or failures be defined as a function
of usage time rather than total elapsed time on the clock?
20. In modeling repair times, how should the time spent waiting for a
repairperson be modeled?
21. What is preemption? What activities or events might preempt
other activities in a simulation?
22. A boring machine experiences downtimes every five hours
(exponentially distributed). It also requires routine preventive
maintenance (PM) after every eight hours (fixed) of operation. If a
downtime occurs within two hours of the next scheduled PM, the PM is
performed as part of the repair time (no added time is needed) and, after
completing the repair coupled with the PM, the next PM is set for eight
hours away. Conceptually, how would you model this situation?
23. A real estate agent schedules six customers (potential buyers) each day,
one every 1.5 hours, starting at 8 A.M. Customers are expected to arrive for
their appointments at the scheduled times. However, past experience shows
that customer arrival times are normally distributed with a mean equal to
the scheduled time and a standard deviation of five minutes. The time the
agent spends with each customer is normally distributed with a mean of 1.4
hours and a standard deviation of .2 hours. Develop a simulation model to
calculate the expected waiting time for customers.
References
Jayaraman, Arun, and Arun Agarwal. “Simulating an Engine Plant.” Manufacturing Engineering, November 1996, pp. 60–68.
Law, A. M. “Introduction to Simulation: A Powerful Tool for Analyzing Complex Manufacturing Systems.” Industrial Engineering, 1986, 18(5):57–58.
Lenz, John, and Ray Neitzel. “Cost Modeling: An Effective Means to Compare Alternatives.” Industrial Engineering, January 1995, pp. 18–20.
Shannon, Robert E. “Introduction to the Art and Science of Simulation.” In Proceedings of the 1998 Winter Simulation Conference, ed. D. J. Medeiros, E. F. Watson, J. S. Carson, and M. S. Manivannan. Piscataway, NJ: Institute of Electrical and Electronics Engineers, 1998.
Thompson, Michael B. “Expanding Simulation beyond Planning and Design.” Industrial Engineering, October 1994, pp. 64–66.
Harrell, Ghosh, and Bowden: Simulation Using ProModel, Second Edition. I. Study Chapters. 8. Model Verification and Validation. © The McGraw-Hill Companies.
C H A P T E R
8 MODEL VERIFICATION AND VALIDATION
8.1 Introduction
Building a simulation model is much like developing an architectural plan for a
house. A good architect will review the plan and specifications with the client
or owner of the house to ensure that the design meets the client’s expectations.
The architect will also carefully check all dimensions and other specifications
shown on the plan for accuracy. Once the architect is reasonably satisfied that
the right information has been accurately represented, a contractor can be given
the plan to begin building the house. In a similar way the simulation analyst
should examine the validity and correctness of the model before using it to
make implementation decisions.
In this chapter we cover the importance and challenges associated with
model verification and validation. We also present techniques for verifying and
validating models. Balci (1997) provides a taxonomy of more than 77
techniques for model verification and validation. In this chapter we give only a
few of the more common and practical methods used. The greatest problem
with verification and validation is not one of failing to use the right techniques
but failing to use any technique. Questions addressed in this chapter include the
following:
• What are model verification and validation?
• What are obstacles to model verification and validation?
• What techniques are used to verify and validate a model?
• How are model verification and validation maintained?
Two case studies are presented at the end of the chapter showing how verification and validation techniques have been used in actual simulation projects.
Chapter 8 Model Verification and Validation 205
the tangled mess and figure out what the model creator had in mind become
almost futile. It is especially discouraging when attempting to use a poorly constructed model for future experimentation. Trying to figure out what changes
need to be made to model new scenarios becomes difficult if not impossible.
The solution to creating models that ease the difficulty of verification and
validation is first to reduce the complexity of the model.
Frequently the most complex models are built by amateurs who do not have
sense enough to know how to abstract system information. They code way too
much detail into the model. Once a model has been simplified as much as
possible, it needs to be coded so it is easily readable and understandable. Using
object-oriented techniques such as encapsulation can help organize model data.
The right simulation software can also help keep model data organized and
readable by providing table entries and intelligent, parameterized constructs
rather than requiring lots of low-level programming. Finally, model data and
logic code should be thoroughly and clearly documented. This means that
every subroutine used should have an explanation of what it does, where it is
invoked, what the parameters represent, and how to change the subroutine for
modifications that may be anticipated.
In the top-down approach, the verification testing begins with the main
module and moves down gradually to lower modules. At the top level, you are more interested in whether the outputs of modules are as expected given the inputs. If discrepancies arise, lower-level code analysis is conducted.
In both approaches sample test data are used to test the program code. The
bottom-up approach typically requires a smaller set of data to begin with. After
exercising the model using test data, the model is stress tested using extreme
input values. With careful selection, the results of the simulation under extreme
conditions can be predicted fairly well and compared with the test results.
FIGURE 8.2 Fragment of a trace listing.
Chapter 8 Model Verification and Validation 211
FIGURE 8.3 ProModel debugger window.
state values as they dynamically change. Like trace messaging, debugging can
be turned on either interactively or programmatically. An example of a
debugger window is shown in Figure 8.3.
Experienced modelers make extensive use of trace and debugging capabilities. Animation and output reports are good for detecting problems in a simulation, but trace and debug messages help uncover why problems occur.
Using trace and debugging features, event occurrences and state variables
can be examined and compared with hand calculations to see if the program is
operating as it should. For example, a trace list might show when a particular operation began and ended. This can be compared against the input operation
time to see if they match.
One type of error that a trace or debugger is useful in diagnosing is gridlock.
This situation is caused when there is a circular dependency where one action
depends on another action, which, in turn, is dependent on the first action. You
may have experienced this situation at a busy traffic intersection or when
trying to leave a crowded parking lot after a football game. It leaves you with
a sense of utter helplessness and frustration. An example of gridlock in
simulation sometimes occurs when a resource attempts to move a part from
one station to another, but the second station is unavailable because an entity
there is waiting for the same resource. The usual symptom of this situation is
that the model appears to freeze up and no more activity takes place. Meanwhile
the simulation clock races rapidly forward. A trace of events can help detect
why the entity at the second station is stuck waiting for the resource.
system. The modeler should have at least an intuitive idea of how the
model will react to a given change. It should be obvious, for example, that
doubling the number of resources for a bottleneck operation should
increase, though not necessarily double, throughput.
• Running traces. An entity or sequence of events can be traced through
the model processing logic to see if it follows the behavior that would
occur in the actual system.
• Conducting Turing tests. People who are knowledgeable about the
operations of a system are asked if they can discriminate between
system and model outputs. If they are unable to detect which outputs are
the model outputs and which are the actual system outputs, this is
another piece of evidence to use in favor of the model being valid.
A common method of validating a model of an existing system is to
compare the model performance with that of the actual system. This approach
requires that an as-is simulation be built that corresponds to the current
system. This helps “calibrate” the model so that it can be used for simulating
variations of the same model. After running the as-is simulation, the
performance data are compared to those of the real-world system. If sufficient
data are available on a performance measure of the real system, a statistical test called the Student’s t test can be applied to determine whether the sampled data sets from both the model and the actual system come from the same distribution. An F test can be performed to test the equality of variances of
the real system and the simulation model. Some of the problems associated with
statistical comparisons include these:
• Simulation model performance is based on very long periods of time. On
the other hand, real system performances are often based on much
shorter periods and therefore may not be representative of the long-term
statistical average.
• The initial conditions of the real system are usually unknown and are
therefore difficult to replicate in the model.
• The performance of the real system is affected by many factors that may
be excluded from the simulation model, such as abnormal downtimes or
defective materials.
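The model-versus-system comparison described above can be sketched in a few lines. This is a minimal illustration, not the book's procedure: the daily throughput figures are invented for the example, and Welch's form of the t statistic is used here because it does not assume equal variances (the F test mentioned above would check that assumption).

```python
from statistics import mean, stdev

def welch_t(sample_a, sample_b):
    """Welch's t statistic for comparing two sample means
    (does not assume equal variances)."""
    na, nb = len(sample_a), len(sample_b)
    ma, mb = mean(sample_a), mean(sample_b)
    va, vb = stdev(sample_a) ** 2, stdev(sample_b) ** 2
    return (ma - mb) / (va / na + vb / nb) ** 0.5

# Hypothetical daily throughput figures from the real system and the model.
real_system = [102, 98, 95, 104, 99, 101, 97, 103]
model_output = [100, 96, 99, 105, 98, 102, 100, 97]

t = welch_t(real_system, model_output)
# Compare |t| against a critical value from a Student's t table; if |t| is
# smaller, the data give no evidence that the two means differ.
print(round(t, 3))
```

A small |t| here would be one piece of evidence (not proof) that the model and system outputs come from the same distribution.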
The problem of validation becomes more challenging when the real system
doesn’t exist yet. In this case, the simulation analyst works with one or more
experts who have a good understanding of how the real system should operate.
They team up to see whether the simulation model behaves reasonably in a variety of situations. Animation is frequently used to watch the behavior of the simulation and spot any nonsensical or unrealistic behavior.
The purpose of validation is to mitigate the risk associated with making decisions based on the model. As in any testing effort, there is an optimum amount
of validation beyond which the return may be less valuable than the investment
(see Figure 8.4). The optimum effort to expend on validation should be based on
minimizing the total cost of the validation effort plus the risk cost associated
FIGURE 8.4 Optimum level of validation looks at the trade-off between validation cost and risk cost. [Plot: cost versus validation effort; the validation cost curve rises, the risk cost curve falls, and the total cost is minimized at the optimum effort.]
informal and intuitive approach to validation, while the HP case study relies on
more formal validation techniques.
modeled). The animation that showed the actual movement of patients and use
of resources was a valuable tool to use during the validation process and
increased the credibility of the model.
The use of a team approach throughout the data-gathering and model
development stages proved invaluable. It both gave the modeler immediate feedback regarding incorrect assumptions and boosted the confidence of the team
members in the simulation results. The frequent reviews also allowed the group
to bounce ideas off one another. As the result of conducting a sound and
convincing simulation study, the hospital’s administration was persuaded by
the simulation results and implemented the recommendations in the new
construction.
output of the real process. The initial model predicted 25 shifts to complete a
specific sequence of panels, while the process actually took 35 shifts. This
led to further process investigations of interruptions or other processing
delays that weren’t being accounted for in the model. It was discovered that
feeder replacement times were underestimated in the model. A discrepancy
between the model and the actual system was also discovered in the utilization
of the pick-and-place machines. After tracking down the causes of these
discrepancies and making appropriate adjustments to the model, the simulation
results were closer to the real process. The challenge at this stage was not to yield to the temptation of making arbitrary changes to the model just to get the desired results. Otherwise, the model would lose its integrity and become nothing more than a sham for the real system.
The final step was to conduct sensitivity analysis to determine how model
performance was affected in response to changes in model assumptions. By
changing the input in such a way that the impact was somewhat predictable, the
change in simulation results could be compared with intuitive guesses. Any
bizarre results such as a decrease in work in process when increasing the arrival
rate would raise an immediate flag. Input parameters were systematically
changed based on a knowledge of both the process behavior and the model
operation. Knowing just how to stress the model in the right places to test its
robustness was an art in itself. After everyone was satisfied that the model
accurately reflected the actual operation and that it seemed to respond as would
be expected to specific changes in input, the model was ready for experimental
use.
8.5 Summary
she trusts the software or the modeler, but because he or she trusts the input data
and knows how the model was built.
References
Balci, Osman. “Verification, Validation and Accreditation of Simulation Models.” In Proceedings of the 1997 Winter Simulation Conference, ed. S. Andradottir, K. J. Healy, D. H. Withers, and B. L. Nelson, 1997, pp. 135–41.
Banks, Jerry. “Simulation Evolution.” IIE Solutions, November 1998, pp. 26–29.
Banks, Jerry, John Carson, Barry Nelson, and David Nicol. Discrete-Event Simulation,
3rd ed. Englewood Cliffs, NJ: Prentice-Hall, 1995.
Baxter, Lori K., and Eric Johnson. “Don’t Implement before You Validate.” Industrial
Engineering, February 1993, pp. 60–62.
Hoover, Stewart, and Ronald Perry. Simulation: A Problem Solving Approach. Reading,
MA: Addison-Wesley, 1990.
Neelamkavil, F. Computer Simulation and Modeling. New York: John Wiley & Sons, 1987.
O’Conner, Kathleen. “The Use of a Computer Simulation Model to Plan an Obstetrical Renovation Project.” The Fifth Annual PROMODEL Users Conference, Park City, UT, 1994.
Sargent, Robert G. “Verifying and Validating Simulation Models.” In Proceedings of the
1998 Winter Simulation Conference, ed. D. J. Medeiros, E. F. Watson, J. S. Carson,
and M. S. Manivannan, 1998, pp. 121–30.
C H A P T E R
9 SIMULATION OUTPUT ANALYSIS
9.1 Introduction
In analyzing the output from a simulation model, there is room both for rough analysis using judgmental procedures and for more detailed, statistically based analysis. The appropriateness of using judgmental procedures or statistically based procedures to analyze a simulation’s output depends largely on the nature of the problem, the importance of the decision, and the validity of the input data. If you are doing a go/no-go type of analysis in which you are trying to find out whether a system is capable of meeting a minimum performance level, then a simple judgmental approach may be adequate. Finding out whether a single machine or a single service agent is adequate to handle a given workload may be easily determined by a few runs
unless it looks like a close call. Even if it is a close call, if the decision is not
that important (perhaps there is a backup worker who can easily fill in during
peak periods), then more detailed analysis may not be needed. In cases where
the model relies heavily on assumptions, it is of little value to be extremely
precise in estimating model performance. It does little good to get six decimal
places of precision for an output response if the input data warrant only
precision to the nearest tens. Suppose, for example, that the arrival rate of
customers to a bank is roughly estimated to be 30 plus or minus 10 per hour. In
this situation, it is probably meaningless to try to obtain a precise estimate of
teller utilization in the facility. The precision of the input data simply doesn’t
warrant any more than a rough estimate for the output.
These examples of rough estimates are in no way intended to minimize the
importance of conducting statistically responsible experiments, but rather to emphasize the fact that the average analyst or manager can gainfully use simulation
FIGURE 9.1 Example of a cycling pseudo-random number stream produced by a random number generator with a very short cycle length. [Diagram: seed Z0 = 17 feeds a random number generator that cycles repeatedly through eight values: .52, .80, .31, .07, .95, .25, .60, .66.]
experiment but does not give us an independent replication. On the other hand, if a different seed value is appropriately selected to initialize the random number generator, the simulation will produce different results because it will be driven by a different segment of numbers from the random number stream. This is how the simulation experiment is replicated to collect statistically independent observations of the simulation model’s output response. Recall that the random number generator in ProModel can produce over 2.1 billion different values before it cycles.
To replicate a simulation experiment, then, the simulation model is initialized to its starting conditions, all statistical variables for the output measures are reset, and a new seed value is appropriately selected to start the random number generator. Each time an appropriate seed value is used to start the random number generator, the simulation produces a unique output response. Repeating the process with several appropriate seed values produces a set of statistically independent observations of a model’s output response. With most simulation software, users need only specify how many times they wish to replicate the experiment; the software handles the details of initializing variables and ensuring that each simulation run is driven by nonoverlapping segments of the random number cycle.
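The seed-based replication scheme described here can be sketched in a few lines. The `simulate` function below is a hypothetical stand-in for a real model (exponential service times are assumed purely for illustration); the point is that the seed fully determines the pseudo-random stream, so the same seed repeats a run while different seeds yield independent replications.

```python
import random

def simulate(seed, n_customers=1000):
    """A tiny stand-in for a simulation run: the output response is the
    average of n_customers random service times. The seed determines
    which segment of the pseudo-random stream drives the run."""
    rng = random.Random(seed)  # dedicated generator for this replication
    times = [rng.expovariate(1 / 4.0) for _ in range(n_customers)]
    return sum(times) / len(times)

# Same seed -> identical output (a repeat, not a replication).
assert simulate(17) == simulate(17)

# Different seeds -> statistically independent replications.
replications = [simulate(seed) for seed in (11, 23, 42, 57, 101)]
print(replications)
```

Simulation packages such as ProModel perform this seed management automatically when you request multiple replications.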
Point Estimates
A point estimate is a single value estimate of a parameter of interest. Point estimates are calculated for the mean and standard deviation of the population. To estimate the mean of the population (denoted as µ), we simply calculate the average of the sample values (denoted as x¯):
x¯ = (∑ᵢ₌₁ⁿ xi)/n
where n is the sample size (number of observations) and xi is the value of the ith observation. The sample mean x¯ estimates the population mean µ.
The standard deviation for the population (denoted as σ), which is a measure of the spread of data values in the population, is similarly estimated by calculating a standard deviation of the sample of values (denoted as s):
s = √( ∑ᵢ₌₁ⁿ [xi − x¯]² / (n − 1) )
The sample variance s², used to estimate the variance of the population σ², is obtained by squaring the sample standard deviation.
Let’s suppose, for example, that we are interested in determining the mean or
average number of customers getting a haircut at Buddy’s Style Shop on
Saturday morning. Buddy opens his barbershop at 8:00 A.M. and closes at noon
on Saturday. In order to determine the exact value for the true average number of
customers getting a haircut on Saturday morning (µ), we would have to
compute the average based on the number of haircuts given on all Saturday
mornings that Buddy’s Style Shop has been and will be open (that is, the
complete population of observations). Not wanting to work that hard, we
decide to get an estimate of the true mean µ by spending the next 12 Saturday
mornings watching TV at Buddy’s and recording the number of customers that
get a haircut between 8:00 A.M. and 12:00 noon (Table 9.1).
Saturday    Number of Haircuts
 1                  21
 2                  16
 3                   8
 4                  11
 5                  17
 6                  16
 7                   6
 8                  14
 9                  15
10                  16
11                  14
12                  10
Sample mean x¯                 13.67
Sample standard deviation s     4.21
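For instance, the point estimates above can be reproduced from the 12 recorded observations with a few lines; the sample statistics here use the n − 1 denominator, matching the formulas in this section.

```python
from statistics import mean, stdev  # stdev uses the n - 1 denominator

haircuts = [21, 16, 8, 11, 17, 16, 6, 14, 15, 16, 14, 10]  # Table 9.1

x_bar = mean(haircuts)   # point estimate of the population mean mu
s = stdev(haircuts)      # point estimate of the population std dev sigma

print(round(x_bar, 2), round(s, 2))  # 13.67 4.21
```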
Interval Estimates
A point estimate, by itself, gives little information about how accurately it estimates the true value of the unknown parameter. Interval estimates constructed
using x¯ and s, on the other hand, provide information about how far off the
point estimate x¯ might be from the true mean µ. The method used to determine
this is referred to as confidence interval estimation.
A confidence interval is a range within which we can have a certain level
of confidence that the true mean falls. The interval is symmetric about x¯ , and
the distance that each endpoint is from x¯ is called the half-width (hw). A
confidence interval, then, is expressed as the probability P that the unknown true
mean µ lies within the interval x¯ ± hw. The probability P is called the
confidence level.
If the sample observations used to compute x¯ and s are independent and normally distributed, the following equation can be used to calculate the half-width of a confidence interval for a given level of confidence:
hw = (tn−1,α/2)s/√n
where tn−1,α/2 is a factor that can be obtained from the Student’s t table in Appendix B. The values are identified in the Student’s t table according to the value
of α/2 and the degrees of freedom (n − 1). The term α is the complement of P.
That is, α = 1 − P and is referred to as the significance level. The significance
level may be thought of as the “risk level” or probability that µ will fall outside
the confidence interval. Therefore, the probability that µ will fall within the
confidence interval is 1 − α. Thus confidence intervals are often stated as
P(x¯ − hw ≤ µ ≤ x¯ + hw) = 1 − α and are read as the probability that the
true but unknown mean µ falls between the interval (x¯ − hw) to (x¯ + hw) is
equal to 1 − α. The confidence interval is traditionally referred to as a 100(1 −
α) percent confidence interval.
Assuming the data from the barbershop example are independent and normally distributed (this assumption is discussed in Section 9.3), a 95 percent confidence interval is constructed as follows:
Given: P = confidence level = 0.95
α = significance level = 1 − P = 1 − 0.95 = 0.05
n = sample size = 12
x¯ = 13.67 haircuts
s = 4.21 haircuts
From the Student’s t table in Appendix B, we find tn−1,α/2 = t11,0.025 = 2.201.
The half-width is computed as follows:
hw = (t11,0.025)s/√n = (2.201 × 4.21)/√12 = 2.67 haircuts
The lower and upper limits of the 95 percent confidence interval are calculated as
follows:
Lower limit = x¯ − hw = 13.67 − 2.67 = 11.00 haircuts
Upper limit = x¯ + hw = 13.67 + 2.67 = 16.34 haircuts
It can now be asserted with 95 percent confidence that the true but unknown
mean falls between 11.00 haircuts and 16.34 haircuts (11.00 ≤ µhaircuts ≤
16.34). A risk level of 0.05 (α = 0.05) means that if we went through this
process for estimating the number of haircuts given on a Saturday morning
100 times and computed 100 confidence intervals, we would expect 5 of the
confidence intervals (5 percent of 100) to exclude the true but unknown mean
number of haircuts given on a Saturday morning.
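As a numerical check, the half-width arithmetic above can be reproduced directly; the critical value 2.201 is the t11,0.025 entry quoted from the text.

```python
from math import sqrt

n = 12          # sample size (12 Saturday mornings)
x_bar = 13.67   # sample mean (haircuts)
s = 4.21        # sample standard deviation (haircuts)
t_crit = 2.201  # t(11, 0.025) read from a Student's t table

# Half-width and 95 percent confidence interval limits.
hw = t_crit * s / sqrt(n)
lower, upper = x_bar - hw, x_bar + hw
print(round(hw, 2), round(lower, 2), round(upper, 2))  # 2.67 11.0 16.34
```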
The width of the interval indicates the accuracy of the point estimate. It is
desirable to have a small interval with high confidence (usually 90 percent or
greater). The width of the confidence interval is affected by the variability in the
output of the system and the number of observations collected (sample size). It
can be seen from the half-width equation that, for a given confidence level, the
half-width will shrink if (1) the sample size n is increased or (2) the variability in
the output of the system (standard deviation s) is reduced. Given that we have
little control over the variability of the system, we resign ourselves to
increasing the sample size (run more replications) to increase the accuracy of
our estimates.
Actually, when dealing with the output from a simulation model, techniques can sometimes reduce the variability of the output from the model without changing the expected value of the output. These are called variance reduction techniques and are covered in Chapter 10.
Saturday    Number of Haircuts
 1                  21
 2                  16
 3                   8
 4                  11
 5                  17
 6                  16
 7                   6
 8                  14
 9                  15
10                  16
11                  14
12                  10
13                   7
14                   9
15                  18
16                  13
17                  16
18                   8
Sample mean x¯                 13.06
Sample standard deviation s     4.28
hw = (t17,0.025)s/√n = (2.11 × 4.28)/√18 = 2.13 haircuts
The lower and upper limits for the new 95 percent confidence interval are calculated as follows:
Lower limit = x¯ − hw = 13.06 − 2.13 = 10.93 haircuts
Upper limit = x¯ + hw = 13.06 + 2.13 = 15.19 haircuts
It can now be asserted with 95 percent confidence that the true but unknown
mean falls between 10.93 haircuts and 15.19 haircuts (10.93 ≤ µhaircuts ≤
15.19).
Note that with the additional observations, the half-width of the confidence
interval has indeed decreased. However, the half-width is larger than the absolute
error value planned for (e = 2.0). This is just luck, or bad luck, because the new
half-width could just as easily have been smaller. Why? First, 18 replications was
only a rough estimate of the number of observations needed. Second, the number
of haircuts given on a Saturday morning at the barbershop is a random variable. Each collection of observations of the random variable will likely differ from previous collections. Therefore, the values computed for x¯, s, and hw will also differ. This is the nature of statistics and why we deal only in estimates.
We have been expressing our target amount of error e in our point estimate x¯ as an absolute value (hw = e). In the barbershop example, we selected an absolute value of e = 2.00 haircuts as our target value. However, it is sometimes more convenient to work in terms of a relative error (re) value (hw = re|µ|). This allows us to talk about the percentage error in our point estimate in place of the absolute error. Percentage error is the relative error multiplied by 100 (that is, 100re percent). To approximate the number of replications needed to obtain a point estimate x¯ with a certain percentage error, we need only change the denominator of the nt equation used earlier. The relative error version of the equation becomes
nt = [ (zα/2)s / ( (re/(1 + re)) x¯ ) ]²
where re denotes the relative error. The re/(1 + re) part of the denominator is an
adjustment needed to realize the desired re value because we use x¯ to estimate
µ (see Chapter 9 of Law and Kelton 2000 for details). The appeal of this
approach is that we can select a desired percentage error without prior
knowledge of the magnitude of the value of µ.
Thus nt ≈ 18 observations. This is the same result computed earlier on page 229
for an absolute error of e = 2.00 haircuts. This occurred because we purposely
selected re = 0.1714 to produce a value of 2.00 for the equation’s denominator in
order to demonstrate the equivalency of the different methods for approximating
the number of replications needed to achieve a desired level of precision in the
point estimate x¯ . The SimRunner software uses the relative error methodology
to provide estimates for the number of replications needed to satisfy the
specified level of precision for a given significance level α.
In general, the accuracy of the estimates improves as the number of replications increases. However, after a point, only modest improvements in accuracy (reductions in the half-width of the confidence interval) are made through conducting additional replications of the experiment. Therefore, it is sometimes necessary to compromise on a desired level of accuracy because of the time required to run a large number of replications of the model.
The ProModel simulation software automatically computes confidence intervals. Therefore, there is really no need to estimate the sample size required for a desired half-width using the method given in this section. Instead, the experiment could be replicated, say, 10 times, and the half-width of the resulting confidence interval checked. If the desired half-width is not achieved, additional replications of the experiment are made until it is. The only real advantage of estimating the number of replications in advance is that it may save time over the trial-and-error approach of repeatedly checking the half-width and running additional replications until the desired confidence interval is achieved.
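The trial-and-error approach just described can be sketched as a loop. Everything here is illustrative: the `replicate` function is a hypothetical stand-in for one simulation run, and the abbreviated t table holds only the entries the loop can reach.

```python
import random
from math import sqrt
from statistics import mean, stdev

# Abbreviated t(n-1, 0.025) values; a real study would read these from a
# full Student's t table.
T_TABLE = {9: 2.262, 10: 2.228, 11: 2.201, 12: 2.179, 13: 2.160,
           14: 2.145, 15: 2.131, 16: 2.120, 17: 2.110}

def replicate(seed):
    """Hypothetical stand-in for one simulation run's output response."""
    rng = random.Random(seed)
    return mean(rng.gauss(13.5, 4.0) for _ in range(20))

def run_until_precise(target_hw, max_reps=18):
    """Start with 10 replications; add one at a time until the confidence
    interval half-width meets the target (or a replication budget runs out)."""
    xs = [replicate(seed) for seed in range(10)]
    while len(xs) < max_reps:
        t = T_TABLE[len(xs) - 1]
        hw = t * stdev(xs) / sqrt(len(xs))
        if hw <= target_hw:
            break
        xs.append(replicate(len(xs)))  # new seed -> independent replication
    return xs
```

Calling `run_until_precise(0.5)` would return however many replications were needed to bring the half-width down to 0.5, subject to the budget.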
simulated to estimate the mean time that entities wait in queues in that system.
The experiment is replicated several times to collect n independent sample observations, as was done in the barbershop example. As an entity exits the system, the time that the entity spent waiting in queues is recorded. The waiting time is denoted by yij in Table 9.3, where the subscript i denotes the replication from which
replication from which the observation came and the subscript j denotes the
value of the counter used to count entities as they exit the system. For example,
y32 is the waiting time for the second entity that was processed through the
system in the third replication. These values are recorded during a particular
run (replication) of the model and are listed under the column labeled “Within
Run Observations” in Table 9.3.
Note that the within run observations for a particular replication, say the ith
replication, are not usually independent because of the correlation between
consecutive observations. For example, when the simulation starts and the first
entity begins processing, there is no waiting time in the queue. Obviously, the
more congested the system becomes at various times throughout the simulation,
the longer entities will wait in queues. If the waiting time observed for one entity
is long, it is highly likely that the waiting time for the next entity observed is
going to be long and vice versa. Observations exhibiting this correlation
between consecutive observations are said to be autocorrelated. Furthermore,
the within run observations for a particular replication are often nonstationary
in that they do not follow an identical distribution throughout the simulation
run. Therefore, they cannot be directly used as observations for statistical
methods that require independent and identically distributed observations
such as those used in this chapter.
At this point, it seems that we are a long way from getting a usable set of
observations. However, let’s focus our attention on the last column in Table 9.3,
labeled “Average,” which contains the x1 through xn values. xi denotes the
average waiting time of the entities processed during the ith simulation run
(replication) and is computed as follows:
xi = (∑ⱼ₌₁ᵐ yij)/m
where m is the number of entities processed through the system and yij is the
time that the jth entity processed through the system waited in queues during
the ith replication. Although not as formally stated, you used this equation in
the last row of the ATM spreadsheet simulation of Chapter 3 to compute the
average waiting time of the m = 25 customers (Table 3.2). An xi value for a
particular replication represents only one possible value for the mean time
an entity waits in queues, and it would be risky to make a statement about
the true waiting time from a single observation. However, Table 9.3 contains
an xi value for each of the n independent replications of the simulation.
These xi values are statistically independent if different seed values are used
to initialize the random number generator for each replication (as discussed
in Section 9.2.1). The xi values are often identically distributed as well.
Therefore, the sample of xi values can be used for statistical methods requiring
independent and identically distributed observations. Thus we can use the xi
values to estimate the true but unknown average time
s = √( ∑ᵢ₌₁ⁿ [xi − x¯]² / (n − 1) )
The third assumption for many standard statistical methods is that the
sample observations (x1 through xn) are normally distributed. For example, the
normality assumption is implied at the point for which a tn−1,α/2 value from the
Student’s t distribution is used in computing a confidence interval’s half-width.
The central limit theorem of statistics provides a basis for making this assumption. If a variable is defined as the sum of several independent and identically distributed random values, the central limit theorem states that the variable representing the sum tends to be normally distributed. This is helpful because in computing xi for a particular replication, the (yij) waiting times for the m entities processed in the ith replication are summed together. However, the yij observations are autocorrelated and, therefore, not independent. But there are also central limit theorems for certain types of correlated data that suggest that xi will be approximately normally distributed if the sample size used to compute it (the number of entities processed through the system, m from Table 9.3) is not too small (Law and Kelton 2000). Therefore, we can compute a confidence interval using
x¯ ± (tn−1,α/2)s/√n
where tn−1,α/2 is a value from the Student’s t table in Appendix B for an α level of
significance.
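The replication method can be sketched end to end: compute one xi per replication, then build the confidence interval from those per-replication averages. The within-run waiting times below are invented for illustration, and 2.776 is the t4,0.025 entry from a Student's t table.

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical within-run waiting times (minutes) from n = 5 replications;
# each inner list holds the (autocorrelated) yij values for one replication.
runs = [
    [2.1, 3.4, 5.0, 4.2, 3.8],
    [1.0, 2.2, 2.9, 3.5, 2.4],
    [4.1, 5.3, 6.0, 5.5, 4.9],
    [2.8, 3.1, 3.9, 4.4, 3.6],
    [1.9, 2.5, 3.3, 3.0, 2.7],
]

x = [mean(run) for run in runs]  # xi: one independent average per replication
n = len(x)
t_crit = 2.776                   # t(4, 0.025) from a Student's t table

# Confidence interval built from the replication averages, not the raw yij.
hw = t_crit * stdev(x) / sqrt(n)
print(round(mean(x), 2), round(mean(x) - hw, 2), round(mean(x) + hw, 2))
```

The key step is that only the xi averages enter the statistics; the raw yij values stay inside their replications, sidestepping their autocorrelation.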
ProModel automatically computes point estimates and confidence intervals
using this method. Figure 9.2 presents the output produced from running the
ProModel version of the ATM simulation of Lab Chapter 3 for five replications.
The column under the Locations section of the output report labeled “Average
Minutes per Entry” displays the average amount of time that customers waited
in the queue during each of the five replications. These values correspond to the
xi values in Table 9.3. Note that in addition to the sample mean and standard
deviation, ProModel also provides a 95 percent confidence interval.
Sometimes the output measure being evaluated is not based on the mean, or
sum, of a collection of random values. For example, the output measure may
be the maximum number of entities that simultaneously wait in a queue. (See
the “Maximum Contents” column in Figure 9.2.) In such cases, the output
measure may not be normally distributed, and because it is not a sum, the
central limit
FIGURE 9.2
Replication technique used on ATM simulation of Lab Chapter 3.
-------------------------------------------------------------------
General Report
Output from C:\Bowden\2nd Edition\ATM System Ch9.MOD [ATM System]
Date: Nov/26/2002 Time: 04:24:27 PM
-------------------------------------------------------------------
Scenario : Normal Run
Replication : All
Period : Final Report (1000 hr to 1500 hr Elapsed: 500 hr)
Warmup Time : 1000
Simulation Time : 1500 hr
-------------------------------------------------------------------
LOCATIONS
Average
Location Scheduled Total Minutes Average Maximum Current
Name Hours Capacity Entries Per Entry Contents Contents Contents % Util
--------- ----- -------- ------- --------- -------- -------- -------- ------
ATM Queue 500 999999 9903 9.457 3.122 26 18 0.0 (Rep 1)
ATM Queue 500 999999 9866 8.912 2.930 24 0 0.0 (Rep 2)
ATM Queue 500 999999 9977 11.195 3.723 39 4 0.0 (Rep 3)
ATM Queue 500 999999 10006 8.697 2.900 26 3 0.0 (Rep 4)
ATM Queue 500 999999 10187 10.841 3.681 32 0 0.0 (Rep 5)
ATM Queue 500 999999 9987.8 9.820 3.271 29.4 5 0.0 (Average)
ATM Queue 0 0 124.654 1.134 0.402 6.148 7.483 0.0 (Std. Dev.)
ATM Queue 500 999999 9833.05 8.412 2.772 21.767 -4.290 0.0 (95% C.I. Low)
ATM Queue 500 999999 10142.6 11.229 3.771 37.032 14.290 0.0 (95% C.I. High)
runs two shifts with an hour break during each shift in which everything momentarily stops. Break and third-shift times are excluded from the model because work always continues exactly as it left off before the break or end of shift. The length of the simulation is determined by how long it takes to get a representative steady-state reading of the model behavior.
Nonterminating simulations can, and often do, change operating characteristics after a period of time, but usually only after enough time has elapsed to establish a steady-state condition. Take, for example, a production system that runs 10,000 units per week for 5 weeks and then increases to 15,000 units per week for the next 10 weeks. The system would have two different steady-state periods. Oil and gas refineries and distribution centers are additional examples of nonterminating systems.
Contrary to what one might think, a steady-state condition is not one in which the observations are all the same, or even one for which the variation in observations is any less than during a transient condition. It means only that all observations throughout the steady-state period will have approximately the same distribution. Once in a steady state, if the operating rules change or the rate at which entities arrive changes, the system reverts again to a transient state until the system has had time to start reflecting the long-term behavior of the new operating circumstances. Nonterminating systems begin with a warm-up (or transient) state and gradually move to a steady state. Once the initial transient phase has diminished to the point where the impact of the initial condition on the system’s response is negligible, we consider it to have reached steady state.
the system, the utilization of resources) over the period simulated, as discussed
in Section 9.3.
The answer to the question of how many replications are necessary is
usually based on the analyst's desired half-width of a confidence interval. As a
general guideline, begin by making 10 independent replications of the
simulation and add more replications until the desired confidence interval
half-width is reached. For the barbershop example, 18 independent replications of
the simulation were required to achieve the desired confidence interval for the
expected number of haircuts given on Saturday mornings.
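This add-replications-until-precise guideline can be sketched as a loop. The sketch below is illustrative only: `run_replication` is a hypothetical stand-in for one independent simulation run returning the output measure of interest, and the Student's t quantile is approximated with the normal quantile so the example needs only the standard library.

```python
import math
import statistics
from statistics import NormalDist

def replicate_until(run_replication, target_hw, alpha=0.05, start=10, max_reps=200):
    """Add replications until the confidence-interval half-width is small enough."""
    xs = [run_replication() for _ in range(start)]   # initial 10 replications
    z = NormalDist().inv_cdf(1 - alpha / 2)          # normal approximation to t
    while True:
        hw = z * statistics.stdev(xs) / math.sqrt(len(xs))
        if hw <= target_hw or len(xs) >= max_reps:
            return statistics.mean(xs), hw, len(xs)
        xs.append(run_replication())                 # one more replication
```

In practice one would use the t value for the current number of replications rather than the normal approximation, especially for small sample sizes.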
FIGURE 9.3
Behavior of model's output response as it reaches steady state: the averaged output response measure (y) passes through a transient (warm-up) state over simulation time and levels off into a steady state after the warm-up ends.
from the beginning of the run and use the remaining observations to estimate the
true mean response of the model.
While several methods have been developed for estimating the warm-up
time, the easiest and most straightforward approach is to run a preliminary
simulation of the system, preferably with several (5 to 10) replications, average the
output values at each time step across replications, and observe at what time the
system reaches statistical stability. This usually occurs when the averaged output
response begins to flatten out or a repeating pattern emerges. Plotting each data
point (averaged output response) and connecting them with a line usually helps
to identify when the averaged output response begins to flatten out or begins a
repeating pattern. Sometimes, however, the variation of the output response
is so large that it makes the plot erratic, and it becomes difficult to visually
identify the end of the warm-up period. Such a case is illustrated in Figure 9.4.
The raw data plot in Figure 9.4 was produced by recording a model's output
response for a queue's average contents during 50-hour time periods (time
slices) and averaging the output values from each time period across five
replications. In this case, the model was initialized with several entities in the
queue. Therefore, we need to eliminate this apparent upward bias before
recording observations of the queue's average contents. Table 9.4 shows this
model's output response for the 20 time periods (50 hours each) for each of the
five replications. The raw data plot in Figure 9.4 was constructed using the 20
values under the y¯i column in Table 9.4.
When the model's output response is erratic, as in Figure 9.4, it is useful
to "smooth" it with a moving average. A moving average is constructed by
calculating the arithmetic average of the w most recent data points (averaged
output responses) in the data set. You have to select the value of w, which is
called the moving-average window. As you increase the value of w, you
increase the "smoothness" of the moving average plot. An indicator for the end
of the warm-up time is when the moving average plot appears to flatten out.
Thus the routine is to begin with a small value of w and increase it until the
resulting moving average plot begins to flatten out.
Chapter 9 Simulation Output Analysis 241
FIGURE 9.4
SimRunner uses the Welch moving-average method to help identify the end of the warm-up period,
which occurs around the third or fourth period (150 to 200 hours) for this model.
The moving-average plot in Figure 9.4 with a window of six (w = 6)
helps to identify the end of the warm-up period: the moving average plot
appears to flatten out at around the third or fourth period (150 to 200
hours). Therefore, we ignore the observations up to the 200th hour and
record only those after the 200th hour when we run the simulation. The
moving-average plot in Figure 9.4 was constructed using the 14 values under
the y¯i(6) column in Table 9.4 that were computed using
$$
\bar{y}_i(w) =
\begin{cases}
\dfrac{\sum_{s=-w}^{w} \bar{y}_{i+s}}{2w+1} & \text{if } i = w+1, \ldots, m-w \\[1.5ex]
\dfrac{\sum_{s=-(i-1)}^{i-1} \bar{y}_{i+s}}{2i-1} & \text{if } i = 1, \ldots, w
\end{cases}
$$
where m denotes the total number of periods and w denotes the window of the
moving average (m = 20 and w = 6 in this example).
242 Part I Study Chapters
TABLE 9.4 Welch Moving Average Based on Five Replications and 20 Periods
Notice the pattern that as i increases, we average more data points together,
with the ith data point appearing in the middle of the sum in the numerator (an
equal number of data points are on each side of the center data point). This
continues until we reach the (w + 1)th moving average, when we switch to the
top part of the y¯i(w) equation. For our example, the switch occurs at the 7th
moving average because w = 6. From this point forward, we average the 2w +
1 closest data points. For our example, we average the 2(6) + 1 = 13 closest
data points (the ith data point plus the w = 6 closest data points above it and the
w = 6 closest data points below it in Table 9.4) as follows:
$$\bar{y}_7(6) = \frac{\bar{y}_1 + \bar{y}_2 + \bar{y}_3 + \cdots + \bar{y}_{13}}{13} = 3.86$$

$$\bar{y}_8(6) = \frac{\bar{y}_2 + \bar{y}_3 + \bar{y}_4 + \cdots + \bar{y}_{14}}{13} = 3.83$$

$$\bar{y}_9(6) = \frac{\bar{y}_3 + \bar{y}_4 + \bar{y}_5 + \cdots + \bar{y}_{15}}{13} = 3.84$$

$$\vdots$$

$$\bar{y}_{14}(6) = \frac{\bar{y}_8 + \bar{y}_9 + \bar{y}_{10} + \cdots + \bar{y}_{20}}{13} = 3.96$$
Eventually we run out of data and have to stop. The stopping point occurs
when i = m − w. In our case with m = 20 periods and w = 6, we stopped
when i = 20 − 6 = 14.
The development of this graphical method for estimating the end of the
warm-up time is attributed to Welch (Law and Kelton 2000). This method is
sometimes referred to as the Welch moving-average method and is implemented
in SimRunner. Note that when applying the method, the length of each
replication should be relatively long and the replications should allow even rarely
occurring events, such as infrequent downtimes, to occur many times. Law and
Kelton (2000) recommend that w not exceed about m/4.
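The two-part definition of y¯i(w) above transcribes directly into code. The sketch below is for illustration only (it is not SimRunner's implementation); it returns the m − w moving averages y¯1(w), ..., y¯(m−w)(w):

```python
def welch_moving_average(ybar, w):
    """Welch moving averages of the period averages ybar (ybar[0] holds ybar_1)."""
    m = len(ybar)
    out = []
    for i in range(1, m - w + 1):              # i runs 1..m-w, as in the text
        if i <= w:                             # growing window: ybar_1..ybar_{2i-1}
            out.append(sum(ybar[:2 * i - 1]) / (2 * i - 1))
        else:                                  # full window: ybar_{i-w}..ybar_{i+w}
            out.append(sum(ybar[i - 1 - w:i + w]) / (2 * w + 1))
    return out
```

With m = 20 and w = 6, this returns 14 values, matching the 14 moving averages reported in Table 9.4.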
To determine a satisfactory warm-up time using Welch’s method, one or more
key output response variables, such as the average number of entities in a queue
or the average utilization of a resource, should be monitored for successive
periods. Once these variables begin to exhibit steady state, a good practice to
follow would be to extend the warm-up period by 20 to 30 percent. This
approach is simple, conservative, and usually satisfactory. The danger is in
underestimating the warm-up period, not overestimating it.
FIGURE 9.5
Individual statistics on the output response measure (y) are computed for each batch interval after the warm-up ends.
should be based on time. If the output measure is based on observations (like the
waiting time of entities in a queue), then the batch interval is typically based on
the number of observations.
Table 9.5 details how a single, long simulation run (one replication), like the
one shown in Figure 9.5, is partitioned into batch intervals for the purpose of
obtaining (approximately) independent and identically distributed observations of a
simulation model's output response. Note the similarity between Table 9.3 of
Section 9.3 and Table 9.5. As in Table 9.3, the observations represent the time an
entity waited in queues during the simulation. The waiting time for an entity is
denoted by yij, where the subscript i denotes the interval of time (batch
interval) from which the observation came and the subscript j denotes the
value of the counter used to count entities as they exit the system during a
particular batch interval. For example, y23 is the waiting time for the third
entity that was processed through the system during the second batch interval of
time.
Because these observations are all from a single run (replication), they are
not statistically independent. For example, the waiting time of the mth entity
exiting the system during the first batch interval, denoted y1m, is correlated
with the
waiting time of the first entity to exit the system during the second batch
interval, denoted y21. This is because if the waiting time observed for one entity
is long, it is likely that the waiting time for the next entity observed is going to
be long, and vice versa. Therefore, adjacent observations in Table 9.5 will
usually be autocorrelated. However, the value of y14 can be somewhat
uncorrelated with the value of y24 if they are spaced far enough apart such that
the conditions that resulted in the waiting time value of y14 occurred so long
ago that they have little or no influence on the waiting time value of y24.
Therefore, most of the observations within the interior of one batch interval can
become relatively uncorrelated with most of the observations in the interior of
other batch intervals, provided they are spaced sufficiently far apart. Thus the
goal is to extend the batch interval length until there is very little correlation
(you cannot totally eliminate it) between the observations appearing in different
batch intervals. When this occurs, it is reasonable to assume that observations
within one batch interval are independent of the observations within other batch
intervals.
With the independence assumption in hand, the observations within a batch
interval can be treated in the same manner as we treated observations within a
replication in Table 9.3. Therefore, the values in the Average column in Table
9.5, which represent the average amount of time the entities processed
during the ith batch interval waited in queues, are computed as follows:
$$x_i = \frac{\sum_{j=1}^{m} y_{ij}}{m}$$
where m is the number of entities processed through the system during the batch
interval of time and yij is the time that the jth entity processed through the
system waited in queues during the ith batch interval. The xi values in the
Average column are approximately independent and identically distributed and
can be used to compute a sample mean for estimating the true average time an
entity waits in queues:
$$\bar{x} = \frac{\sum_{i=1}^{n} x_i}{n}$$
The sample standard deviation is also computed as before:

$$s = \sqrt{\frac{\sum_{i=1}^{n} [x_i - \bar{x}]^2}{n-1}}$$
And, if we assume that the observations (x1 through xn) are normally distributed
(or at least approximately normally distributed), a confidence interval is
computed by

$$\bar{x} \pm \frac{t_{n-1,\alpha/2}\, s}{\sqrt{n}}$$
where tn−1,α/2 is a value from the Student’s t table in Appendix B for an α level of
significance.
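The batch-means calculation just described is mechanical enough to sketch in a few lines. This is an illustration, not ProModel's internal code; the t value is supplied from a table such as Appendix B:

```python
import math
import statistics

def batch_means_ci(batch_averages, t_value):
    """Point estimate and confidence interval from batch-interval averages."""
    n = len(batch_averages)
    xbar = statistics.mean(batch_averages)    # sample mean of x_1..x_n
    s = statistics.stdev(batch_averages)      # sample standard deviation
    hw = t_value * s / math.sqrt(n)           # half-width: t_{n-1,alpha/2} * s / sqrt(n)
    return xbar, (xbar - hw, xbar + hw)
```

Feeding in the five batch averages from Figure 9.6 (9.457, 9.789, 8.630, 8.894, 12.615) with t(4, 0.025) = 2.776 reproduces the report's mean of about 9.88 minutes and a 95 percent interval of roughly (7.9, 11.9).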
ProModel automatically computes point estimates and confidence intervals
using the above method. Figure 9.6 is a ProModel output report from a single,
FIGURE 9.6
Batch means technique used on ATM simulation in Lab Chapter 3. Warm-up period is from 0 to 1,000 hours. Each
batch interval is 500 hours in length.
-----------------------------------------------------------------
General Report
Output from C:\Bowden\2nd Edition\ATM System Ch9.MOD [ATM
System] Date: Nov/26/2002 Time: 04:44:24 PM
-----------------------------------------------------------------
Scenario : Normal Run
Replication : 1 of 1
Period : All
Warmup Time     : 1000 hr
Simulation Time : 3500 hr
-----------------------------------------------------------------
LOCATIONS
Average
Location Scheduled Total Minutes Average Maximum Current
Name Hours Capacity Entries Per Entry Contents Contents Contents % Util
--------- --------- -------- ------- --------- -------- -------- -------- ------
ATM Queue 500 999999 9903 9.457 3.122 26 18 0.0 (Batch 1)
ATM Queue 500 999999 10065 9.789 3.284 24 0 0.0 (Batch 2)
ATM Queue 500 999999 9815 8.630 2.823 23 16 0.0 (Batch 3)
ATM Queue 500 999999 9868 8.894 2.925 24 1 0.0 (Batch 4)
ATM Queue 500 999999 10090 12.615 4.242 34 0 0.0 (Batch 5)
ATM Queue 500 999999 9948.2 9.877 3.279 26.2 7 0.0 (Average)
ATM Queue 0 0 122.441 1.596 0.567 4.494 9.165 0.0 (Std. Dev.)
ATM Queue 500 999999 9796.19 7.895 2.575 20.620 -4.378 0.0 (95% C.I. Low)
ATM Queue 500 999999 10100.2 11.860 3.983 31.779 18.378 0.0 (95% C.I. High)
long run of the ATM simulation in Lab Chapter 3 with the output divided into
five batch intervals of 500 hours each after a warm-up time of 1,000 hours. The
column under the Locations section of the output report labeled "Average
Minutes Per Entry" displays the average amount of time that customer entities
waited in the queue during each of the five batch intervals. These values
correspond to the xi values of Table 9.5. Note that the 95 percent confidence
interval automatically computed by ProModel in Figure 9.6 is comparable,
though not identical, to the 95 percent confidence interval in Figure 9.2 for the
same output statistic.
Establishing the batch interval length such that the observations x1,
x2, . . . , xn of Table 9.5 (assembled after the simulation reaches steady
state) are approximately independent is difficult and time-consuming. There is
no foolproof method for doing this. However, if you can generate a large
number of observations, say n ≥ 100, you can gain some insight about the
independence of the observations by estimating the autocorrelation between
adjacent observations (lag-1 autocorrelation). See Chapter 6 for an
introduction to tests for independence and autocorrelation plots. The
observations are treated as being independent if their lag-1 autocorrelation is
zero. Unfortunately, current methods available for estimating the lag-1
autocorrelation are not very accurate. Thus we may be persuaded that our
observations are almost independent if the estimated lag-1 autocorrelation
computed from a large number of observations falls between −0.20 and +0.20.
The word almost is emphasized because there is really no such thing as almost
independent
(the observations are independent or they are not). What we are indicating here
is that we are leaning toward calling the observations independent when the
estimate of the lag-1 autocorrelation is between −0.20 and +0.20. Recall that
autocorrelation values fall between ±1. Before we elaborate on this idea for
determining if an acceptable batch interval length has been used to derive the
observations, let's talk about positive autocorrelation versus negative
autocorrelation.
Positive autocorrelation is a bigger enemy to us than negative
autocorrelation because our sample standard deviation s will be biased low if
the observations from which it is computed are positively correlated. This would
result in a falsely narrow confidence interval (smaller half-width), leading us to
believe that we have a better estimate of the mean µ than we actually have.
Negatively correlated observations have the opposite effect, producing a
falsely wide confidence interval (larger half-width), leading us to believe that
we have a worse estimate of the mean µ than we actually have. A negative
autocorrelation may lead us to waste time collecting additional observations to
get a more precise (narrow) confidence interval but will not result in a hasty
decision. Therefore, an emphasis is placed on guarding against positive
autocorrelation. We present the following procedure adapted from Banks et al.
(2001) for estimating an acceptable batch interval length.
For the observations x1, x2, . . . , xn derived from n batches of data
assembled after the simulation has reached steady state as outlined in
Table 9.5, compute an estimate of the lag-1 autocorrelation ρˆ1 using
$$\hat{\rho}_1 = \frac{\sum_{i=1}^{n-1} (x_i - \bar{x})(x_{i+1} - \bar{x})}{s^2 (n-1)}$$
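This estimator transcribes directly into code. The sketch below is for illustration (in Lab Chapter 9 the Stat::Fit software performs this calculation for you):

```python
import statistics

def lag1_autocorrelation(x):
    """Estimate rho_1: sum_{i=1}^{n-1} (x_i - xbar)(x_{i+1} - xbar) / [s^2 (n-1)]."""
    n = len(x)
    xbar = statistics.mean(x)
    s2 = statistics.variance(x)               # sample variance s^2
    num = sum((x[i] - xbar) * (x[i + 1] - xbar) for i in range(n - 1))
    return num / (s2 * (n - 1))
```

An alternating series such as 1, −1, 1, −1, ... yields an estimate near −1, as expected for strongly negatively correlated observations.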
you might combine four contiguous batches of data into one batch with a
length of 500 hours (125 hours × 4 = 500 hours). “Rebatching” the data
with the 500-hour batch interval length would leave you 25 observations
(x1 to x25) from which to construct the confidence interval. We
recommend that you “rebatch” the data into 10 to 30 batch intervals.
Not Achieving −0.20 ≤ ρˆ1 ≤ 0.20: If obtaining an estimated lag-1
autocorrelation within the desired range becomes impossible because you
cannot continue extending the length of the simulation run, then rebatch the
data from your last run into no more than about 10 batch intervals and
construct the confidence interval. Interpret the confidence interval with the
apprehension that the observations may be significantly correlated.
Lab Chapter 9 provides an opportunity for you to gain experience applying
these criteria to a ProModel simulation experiment. However, remember that
there is no universally accepted method for determining an appropriate batch
interval length. See Banks et al. (2001) for additional details and variations on this
approach, such as a concluding hypothesis test for independence on the final set
of observations. The danger is in setting the batch interval length too short, not
too long. This is the reason for extending the length of the batch interval in the
last step. A starting point for setting the initial batch interval length from
which to begin the process of evaluating the lag-1 autocorrelation estimates is
provided in Section 9.6.3.
In summary, the statistical methods in this chapter are applied to the data
compiled from batch intervals in the same manner they were applied to the data
compiled from replications. And, as before, the accuracy of point and interval
estimates generally improves as the sample size (number of batch intervals)
increases. In this case we increase the number of batch intervals by extending
the length of the simulation run. As a general guideline, the simulation should be
run long enough to create around 10 batch intervals and possibly more,
depending on the desired confidence interval half-width. If a trade-off must be
made between the number of batch intervals and the batch interval length, then
err on the side of increasing the batch interval length. It is better to have a few
independent observations than to have several autocorrelated observations
when using the statistical methods presented in this text.
[Figure: a single replication (Replication 1) partitioned into batch intervals 1, 2, 3, . . . along the simulation time line.]
9.7 Summary
Simulation experiments can range from a few replications (runs) to a large
number of replications. While "ballpark" decisions require little analysis, more
precise decision making requires more careful analysis and extensive
experimentation.
15. Given the five batch means for the Average Minutes Per Entry
of customer entities at the ATM queue location in Figure 9.6,
estimate the lag-1 autocorrelation ρˆ1 . Note that five observations
are woefully inadequate for obtaining an accurate estimate of the
lag-1 autocorrelation. Normally you will want to base the estimate
on at least 100 observations. The question is designed to give you
some experience using the ρˆ1 equation so that you will understand
what the Stat::Fit software is doing when you use it in Lab Chapter
9 to crunch through an example with 100 observations.
16. Construct a Welch moving average with a window of 2
(w = 2) using the data in Table 9.4 and compare it to the Welch
moving average with a window of 6 (w = 6) presented in
Table 9.4.
References
Banks, Jerry; John S. Carson, II; Barry L. Nelson; and David M. Nicol. Discrete-Event
System Simulation. New Jersey: Prentice-Hall, 2001.
Bateman, Robert E.; Royce O. Bowden; Thomas J. Gogg; Charles R. Harrell; and Jack
R. A. Mott. System Improvement Using Simulation. Orem, UT: PROMODEL Corp.,
1997.
Hines, William W., and Douglas C. Montgomery. Probability and Statistics in Engineering
and Management Science. New York: John Wiley and Sons, 1990.
Law, Averill M., and W. David Kelton. Simulation Modeling and Analysis. New York:
McGraw-Hill, 2000.
Montgomery, Douglas C. Design and Analysis of Experiments. New York: John Wiley &
Sons, 1991.
Petersen, Roger G. Design and Analysis of Experiments. New York: Marcel Dekker, 1985.
C H A P T E R
10 COMPARING SYSTEMS
“The method that proceeds without analysis is like the groping of a blind man.”
—Socrates
10.1 Introduction
In many cases, simulations are conducted to compare two or more alternative
de- signs of a system with the goal of identifying the superior system relative to
some performance measure. Comparing alternative system designs requires
careful analysis to ensure that differences being observed are attributable to
actual differ- ences in performance and not to statistical variation. This is where
running either multiple replications or batches is required. Suppose, for
example, that method A for deploying resources yields a throughput of 100
entities for a given time period while method B results in 110 entities for the
same period. Is it valid to conclude that method B is better than method A, or
might additional replications actually lead to the opposite conclusion?
You can evaluate alternative configurations or operating policies by
performing several replications of each alternative and comparing the
average results from the replications. Statistical methods for making these types
of comparisons are called hypothesis tests. For these tests, a hypothesis is first
formulated (for example, that methods A and B both result in the same
throughput) and then a test is made to see whether the results of the simulation
lead us to reject the hypothesis. The outcome of the simulation runs may cause
us to reject the hypothesis that methods A and B both result in equal throughput
capabilities and conclude that the throughput does indeed depend on which
method is used.
This chapter extends the material presented in Chapter 9 by providing
statistical methods that can be used to compare the output of different simulation
models that represent competing designs of a system. The concepts behind
hypothesis testing are introduced in Section 10.2. Section 10.3 addresses the
case when two
alternative system designs are to be compared, and Section 10.4 considers the
case when more than two alternative system designs are to be compared.
Additionally, a technique called common random numbers is described in Section
10.5 that can sometimes improve the accuracy of the comparisons.
FIGURE 10.1
Production system with four workstations and three buffer storage areas.
Chapter 10 Comparing Systems 255
Suppose that Strategy 1 and Strategy 2 are the two buffer allocation
strategies proposed by the production control staff. We wish to identify the
strategy that maximizes the throughput of the production system (number of
parts completed per hour). Of course, the possibility exists that there is no
significant difference in the performance of the two candidate strategies. That is
to say, the mean throughput of the two proposed strategies is equal. A starting
point for our problem is to formulate our hypotheses concerning the mean
throughput for the production system under the two buffer allocation strategies.
Next we work out the details of setting up our experiments with the simulation
models built to evaluate each strategy. For example, we may decide to estimate
the true mean performance of each strategy (µ1 and µ2) by simulating each
strategy for 16 days (24 hours per day) past the warm-up period and
replicating the simulation 10 times. After we run experiments, we would use
the simulation output to evaluate the hypotheses concerning the mean
throughput for the production system under the two buffer allocation strategies.
In general, a null hypothesis, denoted H0, is drafted to state that the value of
µ1 is not significantly different than the value of µ2 at the α level of
significance. An alternate hypothesis, denoted H1, is drafted to oppose the null
hypothesis H0. For example, H1 could state that µ1 and µ2 are different at the α
level of significance. Stated more formally:

H0: µ1 = µ2   or equivalently   H0: µ1 − µ2 = 0
H1: µ1 ≠ µ2   or equivalently   H1: µ1 − µ2 ≠ 0
In the context of the example problem, the null hypothesis H0 states that the
mean throughputs of the system due to Strategy 1 and Strategy 2 do not differ.
The alternate hypothesis H1 states that the mean throughputs of the system
due to Strategy 1 and Strategy 2 do differ. Hypothesis testing methods are
designed such that the burden of proof is on us to demonstrate that H0 is not
true. Therefore, if our analysis of the data from our experiments leads us to
reject H0, we can be confident that there is a significant difference between the
two population means. In our example problem, the output from the simulation
model for Strategy 1 represents possible throughput observations from one
population, and the output from the simulation model for Strategy 2 represents
possible throughput observations from another population.
The α level of significance in these hypotheses refers to the probability of
making a Type I error. A Type I error occurs when we reject H0 in favor of H1
when in fact H0 is true. Typically α is set at a value of 0.05 or 0.01. However,
the choice is yours, and it depends on how small you want the probability of
making a Type I error to be. A Type II error occurs when we fail to reject H0 in
favor of H1 when in fact H1 is true. The probability of making a Type II error is
denoted as β. Hypothesis testing methods are designed such that the probability
of making a Type II error, β, is as small as possible for a given value of α. The
relationship between α and β is that β increases as α decreases. Therefore,
we should be careful not to make α too small.
We will test these hypotheses using a confidence interval approach to
determine if we should reject or fail to reject the null hypothesis in favor of the
256 Part I Study Chapters
alternative hypothesis. The reason for using the confidence interval method
is that it is equivalent to conducting a two-tailed test of hypothesis with the
added benefit of indicating the magnitude of the difference between µ1 and
µ2 if they are in fact significantly different. The first step of this procedure is
to construct a confidence interval to estimate the difference between the
two means (µ1 − µ2). This can be done in different ways depending on how
the simulation experiments are conducted (we will discuss this later). For
now, let's express the confidence interval on the difference between the two
means as

$$P[(\bar{x}_1 - \bar{x}_2) - hw \leq \mu_1 - \mu_2 \leq (\bar{x}_1 - \bar{x}_2) + hw] = 1 - \alpha$$

where hw denotes the half-width of the confidence interval. Notice the
similarities between this confidence interval expression and the one given on page 227 in
Chapter 9. Here we have replaced x¯ with x¯1 − x¯2 and µ with µ1 − µ2.
If the two population means are the same, then µ1 − µ2 = 0, which is our
null hypothesis H0. If H0 is true, our confidence interval should include zero
with a probability of 1 − α. This leads to the following rule for deciding
whether to reject or fail to reject H0. If the confidence interval includes
zero, we fail to reject H0 and conclude that the value of µ1 is not
significantly different than the value of µ2 at the α level of significance (the
mean throughput of Strategy 1 is not significantly different than the mean
throughput of Strategy 2). However, if the confidence interval does not
include zero, we reject H0 and conclude that the value of
µ1 is significantly different than the value of µ2 at the α level of significance
(throughput values for Strategy 1 and Strategy 2 are significantly different).
Figure 10.2(a) illustrates the case when the confidence interval contains
zero, leading us to fail to reject the null hypothesis H0 and conclude that there is
no significant difference between µ1 and µ2. The failure to obtain sufficient
evidence to pick one alternative over another may be due to the fact that there
really is no difference, or it may be a result of the variance in the observed
outcomes being too high to be conclusive. At this point, either additional
replications may be run or one of several variance reduction techniques might
be employed (see Section 10.5). Figure 10.2(b) illustrates the case when the
confidence interval is completely to the left of zero, leading us to reject H0.
This case suggests that µ1 − µ2 < 0 or, equivalently, µ1 < µ2. Figure 10.2(c)
illustrates the case when the confidence interval is completely to the right of
zero, leading us to also reject H0. This case suggests that µ1 − µ2 > 0 or,
equivalently, µ1 > µ2. These rules are commonly used in practice to make
statements about how the population means differ (µ1 > µ2 or µ1 < µ2) when
the confidence interval does not include zero (Banks et al. 2001; Hoover and
Perry 1989).
FIGURE 10.2
Position of the confidence interval relative to µ1 − µ2 = 0: (a) interval contains zero (fail to reject H0), (b) interval entirely to the left of zero (reject H0), (c) interval entirely to the right of zero (reject H0).
Replication                                  Strategy 1 Throughput   Strategy 2 Throughput
1                                            54.48                   56.01
2                                            57.36                   54.08
3                                            54.81                   52.14
4                                            56.20                   53.49
5                                            54.83                   55.49
6                                            57.69                   55.00
7                                            58.33                   54.88
8                                            57.19                   54.47
9                                            56.84                   54.93
10                                           55.29                   55.84
Sample mean x¯i, for i = 1, 2                56.30                   54.63
Sample standard deviation si, for i = 1, 2   1.37                    1.17
Sample variance si², for i = 1, 2            1.89                    1.36
because a unique segment (stream) of random numbers from the random number
generator was used for each replication. The same is true for the 10 observations
in column C (Strategy 2 Throughput). The use of random number streams is
discussed in Chapters 3 and 9 and later in this chapter. At this point we are
assuming that the observations are also normally distributed. The
reasonableness of assuming that the output produced by our simulation
models is normally distributed is discussed at length in Chapter 9. For this data
set, we should also point out that two different sets of random numbers were
used to simulate the 10 replications of each strategy. Therefore, the observations
in column B are independent of the observations in column C. Stated another
way, the two columns of observations are not correlated. Therefore, the
observations are independent within a population (strategy) and between
populations (strategies). This is an important distinction and will be employed
later to help us choose between different methods for computing the confidence
intervals used to compare the two strategies.
From the observations in Table 10.1 of the throughput produced by each
strategy, it is not obvious which strategy yields the higher throughput. Inspection of the
summary statistics indicates that Strategy 1 produced a higher mean throughput
for the system; however, the sample variance for Strategy 1 was higher than for
Strategy 2. Recall that the variance provides a measure of the variability of the data
and is obtained by squaring the standard deviation. Equations for computing the
sample mean x¯, sample variance s², and sample standard deviation s are given
in Chapter 9. Because of this variation, we should be careful when making
conclusions about the population of throughput values (µ1 and µ2) by only
inspecting the point estimates (x¯1 and x¯2). We will avoid the temptation and use
the output from the 10 replications of each simulation model along with a
confidence interval to make a more informed decision.
We will use an α = 0.05 level of significance to compare the two
candidate strategies using the following hypotheses:

H0: µ1 − µ2 = 0
H1: µ1 − µ2 ≠ 0
where the subscripts 1 and 2 denote Strategy 1 and Strategy 2, respectively. As
stated earlier, there are two common methods for constructing a confidence
interval for evaluating hypotheses. The first method is referred to as the Welch
confidence interval (Law and Kelton 2000; Miller 1986) and is a modified
two-sample-t confidence interval. The second method is the paired-t confidence
interval (Miller et al. 1990). We've chosen to present these two methods
because their statistical assumptions are more easily satisfied than are the
assumptions for other confidence interval methods.
Table 10.1 are independent and are assumed normal. However, the Welch
confidence interval method does not require that the number of samples
drawn from one population (n1) equal the number of samples drawn from the
other population (n2) as we did in the buffer allocation example. Therefore, if
you have more observations for one candidate system than for the other
candidate system, then by all means use them. Additionally, this approach
does not require that the two populations have equal variances (σ1² = σ2² = σ²)
as do other approaches. This is useful because we seldom know the true value
of the variance of a population. Thus we are not required to judge the equality
of the variances based on the sample variances we compute for each population
(s1² and s2²) before using the Welch confidence interval method.
The Welch confidence interval for an α level of significance is

P[(x̄1 − x̄2) − hw ≤ µ1 − µ2 ≤ (x̄1 − x̄2) + hw] = 1 − α

where x̄1 and x̄2 represent the sample means used to estimate the population means µ1 and µ2; hw denotes the half-width of the confidence interval and is computed by

hw = t(df,α/2) √(s1²/n1 + s2²/n2)
where df (degrees of freedom) is estimated by

df ≈ [s1²/n1 + s2²/n2]² / { [s1²/n1]²/(n1 − 1) + [s2²/n2]²/(n2 − 1) }

and t(df,α/2) is a factor obtained from the Student’s t table in Appendix B based on the value of α/2 and the estimated degrees of freedom. Note that the degrees of freedom term in the Student’s t table is an integer value. Given that the estimated degrees of freedom will seldom be an integer value, you will have to use interpolation to compute the t(df,α/2) value.
For the example buffer allocation problem with an α = 0.05 level of significance, we use these equations and data from Table 10.1 to compute

df ≈ [1.89/10 + 1.36/10]² / { [1.89/10]²/(10 − 1) + [1.36/10]²/(10 − 1) } ≈ 17.5

and

hw = t(17.5,0.025) √(1.89/10 + 1.36/10) = 2.106 √0.325 = 1.20 parts per hour
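These calculations are easy to script. The following Python sketch (not from the text) computes the Welch degrees of freedom and half-width from summary statistics; because the standard library has no t distribution, the t(df,α/2) value is looked up from a t table and passed in:

```python
import math

def welch_df(var1, n1, var2, n2):
    """Estimate the Welch degrees of freedom from the sample variances."""
    a, b = var1 / n1, var2 / n2
    return (a + b) ** 2 / (a ** 2 / (n1 - 1) + b ** 2 / (n2 - 1))

def welch_halfwidth(var1, n1, var2, n2, t_value):
    """Half-width hw = t * sqrt(s1^2/n1 + s2^2/n2); t_value comes from a t table."""
    return t_value * math.sqrt(var1 / n1 + var2 / n2)

# Buffer allocation example: s1^2 = 1.89, s2^2 = 1.36, n1 = n2 = 10
df = welch_df(1.89, 10, 1.36, 10)                 # about 17.5
hw = welch_halfwidth(1.89, 10, 1.36, 10, 2.106)   # about 1.20 parts per hour
```

If the resulting interval (x̄1 − x̄2) ± hw excludes zero, H0 is rejected at the α level of significance.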
Sample mean = x̄(1−2) = [Σ(j=1 to n) x(1−2)j] / n

Sample standard deviation = s(1−2) = √{ Σ(j=1 to n) [x(1−2)j − x̄(1−2)]² / (n − 1) }
P(all m confidence interval statements are correct) ≥ 1 − Σ(i=1 to m) αi = 1 − α

where α = Σ(i=1 to m) αi is the overall level of significance and m = K(K − 1)/2 is the number of confidence interval statements.
our conclusions leaves much to be desired. To combat this, we simply lower the values of the individual significance levels (α1 = α2 = α3 = · · · = αm) so their sum is not so large. However, this does not come without a price, as we shall see later. One way to assign values to the individual significance levels is to first establish an overall level of significance α and then divide it by the number of pairwise comparisons. That is,

αi = α / [K(K − 1)/2]   for i = 1, 2, 3, . . . , K(K − 1)/2

Note, however, that it is not required that the individual significance levels be assigned the same value. This is useful in cases where the decision maker wants to place different levels of significance on certain comparisons.
Practically speaking, the Bonferroni inequality limits the number of system designs that can be reasonably compared to about five designs or less. This is because controlling the overall significance level α for the test requires the assignment of small values to the individual significance levels (α1 = α2 = α3 = · · · = αm) if more than five designs are compared. This presents a problem because the width of a confidence interval quickly increases as the level of significance is reduced. Recall that the width of a confidence interval provides a measure of the accuracy of the estimate. Therefore, we pay for gains in the overall confidence of our test by reducing the accuracy of our individual estimates (wide confidence intervals). When accurate estimates (tight confidence intervals) are desired, we recommend not using the Bonferroni approach when comparing more than five system designs. For comparing more than five system designs, we recommend that the analysis of variance technique be used in conjunction with perhaps Fisher’s least significant difference test. These methods are presented in Section 10.4.2.
Let’s return to the buffer allocation example from the previous section and apply the Bonferroni approach using paired-t confidence intervals. In this case, the production control staff has devised three buffer allocation strategies to compare. And, as before, we wish to determine if there are significant differences between the throughput levels (number of parts completed per hour) achieved by the strategies. Although we will be working with individual confidence intervals, the hypotheses for the overall α level of significance are

H0: µ1 = µ2 = µ3 = µ
H1: µ1 ≠ µ2 or µ1 ≠ µ3 or µ2 ≠ µ3

where the subscripts 1, 2, and 3 denote Strategy 1, Strategy 2, and Strategy 3, respectively.
To evaluate these hypotheses, we estimated the performance of the three
strategies by simulating the use of each strategy for 16 days (24 hours per
day) past the warm-up period. And, as before, the simulation was replicated 10
times for each strategy. The average hourly throughput achieved by each
strategy is shown in Table 10.3.
The evaluation of the three buffer allocation strategies (K = 3) requires that three [3(3 − 1)/2] pairwise comparisons be made. The three pairwise
266 Part I Study Chapters
comparisons are shown in columns E, F, and G of Table 10.3. Also shown in Table 10.3 are the sample means x̄(i−i′) and sample standard deviations s(i−i′) for each pairwise comparison.
Let’s say that we wish to use an overall significance level of α = 0.06 to evaluate our hypotheses. For the individual levels of significance, let’s set α1 = α2 = α3 = 0.02 by using the equation

αi = α/3 = 0.06/3 = 0.02   for i = 1, 2, 3

The computation of the three paired-t confidence intervals using the method outlined in Section 10.3.2 and data from Table 10.3 follows:
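The paired-t computation can be sketched in Python using only the standard library. The sample differences below are illustrative rather than the Table 10.3 data, and the t value for the Bonferroni-adjusted αi must still be read from a t table:

```python
import math
import statistics

def paired_t_halfwidth(differences, t_value):
    """hw = t * s / sqrt(n), computed from the paired observations' differences."""
    n = len(differences)
    s = statistics.stdev(differences)  # sample standard deviation of the differences
    return t_value * s / math.sqrt(n)

# Illustrative pairwise differences x(1-2)j for one comparison (10 replications)
diffs = [1.2, -0.4, 0.8, 1.5, 0.3, 0.9, -0.1, 1.1, 0.6, 0.7]
mean_diff = statistics.mean(diffs)
# For alpha_i = 0.02 and df = n - 1 = 9, t(9, 0.01) = 2.821 from a t table
hw = paired_t_halfwidth(diffs, t_value=2.821)
ci = (mean_diff - hw, mean_diff + hw)
# If the interval excludes zero, the difference is significant at alpha_i
```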
The alternative hypothesis states that the mean throughputs due to the application of treatments (Strategies 1, 2, and 3) differ among at least one pair of strategies.
We will use a balanced CR design to help us conduct this test of hypothesis. In a balanced design, the same number of observations are collected for each factor level. Therefore, we executed 10 simulation runs to produce 10 observations of throughput for each strategy. Table 10.4 presents the experimental results and summary statistics for this problem. The response variable (xij) is the observed throughput for the treatment (strategy). The subscript i refers to the factor level (Strategy 1, 2, or 3) and j refers to an observation (output from replication j) for that factor level. For example, the mean throughput response of the simulation model for the seventh replication of Strategy 2 is 54.88 in Table 10.4. Parameters for this balanced CR design are

Number of factor levels = number of alternative system designs = K = 3
Number of observations for each factor level = n = 10
Total number of observations = N = nK = (10)3 = 30
Inspection of the summary statistics presented in Table 10.4 indicates that Strategy 3 produced the highest mean throughput and Strategy 2 the lowest. Again, we should not jump to conclusions without a careful analysis of the experimental data. Therefore, we will use analysis of variance (ANOVA) in conjunction with a multiple comparison test to guide our decision.
Analysis of Variance
Analysis of variance (ANOVA) allows us to partition the total variation in the output response from the simulated system into two components—variation due to the effect of the treatments and variation due to experimental error (the inherent variability in the simulated system). For this problem case, we are interested in knowing if the variation due to the treatment is sufficient to conclude that the performance of one strategy is significantly different than the other with respect to mean throughput of the system. We assume that the observations are drawn from normally distributed populations and that they are independent within a strategy and between strategies. Therefore, the variance reduction technique based on common random numbers (CRN) presented in Section 10.5 cannot be used with this method.
The fixed-effects model is the underlying linear statistical model used for the analysis because the levels of the factor are fixed and we will consider each possible factor level. The fixed-effects model is written as

xij = µ + τi + εij   for i = 1, 2, 3, . . . , K and j = 1, 2, 3, . . . , n
where τi is the effect of the ith treatment (ith strategy in our example) as a deviation from the overall (common to all treatments) population mean µ and εij is the error associated with this observation. In the context of simulation, the εij term represents the random variation of the response xij that occurred during the jth replication of the ith treatment. Assumptions for the fixed-effects model are that the sum of all τi equals zero and that the error terms εij are independent and normally distributed with a mean of zero and common variance. There are methods for testing the reasonableness of the normality and common variance assumptions. However, the procedure presented in this section is reported to be somewhat insensitive to small violations of these assumptions (Miller et al. 1990). Specifically, for the buffer allocation example, we are testing the equality of three treatment effects (Strategies 1, 2, and 3) to determine if there are statistically significant differences among them. Therefore, our hypotheses are written as

H0: τ1 = τ2 = τ3 = 0
H1: τi ≠ 0 for at least one i, for i = 1, 2, 3
Basically, the previous null hypothesis that the K population means are all equal (µ1 = µ2 = µ3 = · · · = µK = µ) is replaced by the null hypothesis τ1 = τ2 = τ3 = · · · = τK = 0 for the fixed-effects model. Likewise, the alternative hypothesis that at least two of the population means are unequal is replaced by τi ≠ 0 for at least one i. Because only one factor is considered in this problem, a simple one-way analysis of variance is used to determine FCALC, the test statistic that will be used for the hypothesis test. If the computed FCALC value exceeds a threshold value called the critical value, denoted FCRITICAL, we shall reject the null hypothesis that states that the treatment effects do not differ and conclude that there are statistically significant differences among the treatments (strategies in our example problem).
To help us with the hypothesis test, let’s summarize the experimental results shown in Table 10.4 for the example problem. The first summary statistic that we will compute is called the sum of squares (SSi) and is calculated for the ANOVA for each factor level (Strategies 1, 2, and 3 in this case). In a balanced design
Chapter 10 Comparing Systems 271
where the number of observations n for each factor level is a constant, the sum of squares is calculated using the formula

SSi = Σ(j=1 to n) xij² − [Σ(j=1 to n) xij]²/n   for i = 1, 2, 3, . . . , K

For Strategy 1,

SS1 = Σ(j=1 to 10) x1j² − [Σ(j=1 to 10) x1j]²/10
SS1 = [(54.48)² + (57.36)² + · · · + (55.29)²] − (563.02)²/10 = 16.98

Similarly,

SS2 = 12.23
SS3 = 3.90
The grand total of the N observations (N = nK) collected from the output response of the simulated system is computed by

Grand total = x.. = Σ(i=1 to K) Σ(j=1 to n) xij = Σ(i=1 to K) xi.

The overall mean of the N observations collected from the output response of the simulated system is computed by

Overall mean = x̄.. = [Σ(i=1 to K) Σ(j=1 to n) xij]/N = x../N

Using the data in Table 10.4 for the buffer allocation example, these statistics are

Grand total = x.. = Σ(i=1 to 3) xi. = 563.02 + 546.33 + 573.92 = 1,683.27
and

Sum of squares error = SSE = Σ(i=1 to K) SSi = 16.98 + 12.23 + 3.90 = 33.11

Sum of squares treatment = SST = (1/n) Σ(i=1 to K) xi.² − x..²/N

SST = (1/10)[(563.02)² + (546.33)² + (573.92)²] − (1,683.27)²/30 = 38.62

Mean square treatment = MST = SST/df(treatment) = 38.62/2 = 19.31

Mean square error = MSE = SSE/df(error) = 33.11/27 = 1.23

and finally

Calculated F statistic = FCALC = MST/MSE = 19.31/1.23 = 15.70
Table 10.5 presents the ANOVA table for this problem. We will compare the value of FCALC with a value from the F table in Appendix C to determine whether to reject or fail to reject the null hypothesis H0: τ1 = τ2 = τ3 = 0. The values obtained from the F table in Appendix C are referred to as critical values and are determined by F(df(treatment), df(error); α). For this problem, F(2,27; 0.05) = 3.35 = FCRITICAL, using a significance level (α) of 0.05. Therefore, we will reject H0 since FCALC > FCRITICAL at the α = 0.05 level of significance. If we believe the data in Table 10.4 satisfy the assumptions of the fixed-effects model, then we would conclude that the buffer allocation strategy (treatment) significantly affects the mean throughput of the system. We now have evidence that at least one strategy produces better results than the other strategies. Next, a multiple comparison test will be conducted to determine which strategy (or strategies) causes the significance.
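The ANOVA quantities above can be reproduced from the per-strategy totals and within-group sums of squares. A Python sketch using the summary values quoted in the text (small rounding differences from the hand calculation are expected):

```python
def one_way_anova(group_totals, group_ss, n):
    """One-way ANOVA for a balanced design, from summary statistics.

    group_totals: sum of the n observations in each group (x_i.)
    group_ss:     within-group sum of squares SS_i for each group
    n:            observations per group
    """
    k = len(group_totals)          # number of factor levels
    big_n = n * k                  # total number of observations N
    grand_total = sum(group_totals)
    sse = sum(group_ss)            # sum of squares error
    sst = sum(t ** 2 for t in group_totals) / n - grand_total ** 2 / big_n
    mst = sst / (k - 1)            # mean square treatment
    mse = sse / (big_n - k)        # mean square error
    return sst, sse, mst, mse, mst / mse

sst, sse, mst, mse, f_calc = one_way_anova(
    group_totals=[563.02, 546.33, 573.92],
    group_ss=[16.98, 12.23, 3.90],
    n=10,
)
# f_calc comes out near 15.7, to be compared with F(2, 27; 0.05) = 3.35
```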
                          Strategy 2 (x̄2 = 54.63)     Strategy 1 (x̄1 = 56.30)
Strategy 3 (x̄3 = 57.39)   |x̄2 − x̄3| = 2.76            |x̄1 − x̄3| = 1.09
                          Significant (2.76 > 1.02)   Significant (1.09 > 1.02)
explanation is that the LSD test is considered to be more liberal in that it will indicate a difference before the more conservative Bonferroni approach. Perhaps if the paired-t confidence intervals had been used in conjunction with common random numbers (which is perfectly acceptable because the paired-t method does not require that observations be independent between populations), then the Bonferroni approach would have also indicated a difference. We are not suggesting here that the Bonferroni approach is in error (or that the LSD test is in error). It could be that there really is no difference between the performances of Strategy 1 and Strategy 3 or that we have not collected enough observations to be conclusive.
There are several multiple comparison tests from which to choose. Other tests include Tukey’s honestly significant difference (HSD) test, the Bayes LSD (BLSD) test, and a test by Scheffé. The LSD and BLSD tests are considered to be liberal in that they will indicate a difference between µi and µi′ before the more conservative Scheffé test. A book by Petersen (1985) provides more information on multiple comparison tests.
FIGURE 10.3
Relationship between factors (decision variables) and output responses: the factors (X1, X2, . . . , Xn) are fed into the simulation model, which produces the output responses.
The natural inclination when experimenting with multiple factors is to test the impact that each individual factor has on system response. This is a simple and straightforward approach, but it gives the experimenter no knowledge of how factors interact with each other. It should be obvious that experimenting with two or more factors together can affect system response differently than experimenting with only one factor at a time and keeping all other factors the same.
One type of experiment that looks at the combined effect of multiple factors on system response is referred to as a two-level, full-factorial design. In this type of experiment, we simply define a high and a low setting for each factor and, since it is a full-factorial experiment, we try every combination of factor settings. This means that if there are five factors and we are testing two different levels for each factor, we would test each of the 2⁵ = 32 possible combinations of high and low factor levels. For factors that have no range of values from which a high and a low can be chosen, the high and low levels are arbitrarily selected. For example, if one of the factors being investigated is an operating policy (like first come, first served or last come, first served), we arbitrarily select one of the alternative policies as the high-level setting and a different one as the low-level setting.
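Enumerating a two-level, full-factorial design is mechanical. A Python sketch (the factor names and levels here are illustrative, not from the text):

```python
from itertools import product

# Each factor gets a (low, high) pair; names and levels are illustrative.
factors = {
    "buffer_size":   (1, 5),
    "num_machines":  (2, 4),
    "arrival_rate":  (10.0, 20.0),
    "repair_staff":  (1, 3),
    "queue_policy":  ("FCFS", "LCFS"),  # qualitative factor: levels chosen arbitrarily
}

# Full factorial: every combination of low/high settings, 2^5 = 32 runs
design = [dict(zip(factors, combo)) for combo in product(*factors.values())]
print(len(design))  # 32
```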
For experiments in which a large number of factors are considered, a two-level, full-factorial design would result in an extremely large number of combinations to test. In this type of situation, a fractional-factorial design is used to strategically select a subset of combinations to test in order to “screen out” factors with little or no impact on system performance. With the remaining reduced number of factors, more detailed experimentation such as a full-factorial experiment can be conducted in a more manageable fashion.
After fractional-factorial experiments and even two-level, full-factorial experiments have been performed to identify the most significant factor level combinations, it is often desirable to conduct more detailed experiments, perhaps over the entire range of values, for those factors that have been identified as being the most significant. This provides more precise information for making decisions regarding the best, or optimal, factor values or variable settings for the system. For a more detailed treatment of factorial design in simulation experimentation, see Law and Kelton (2000).
In many cases, the number of factors of interest prohibits the use of even fractional-factorial designs because of the many combinations to test. If this is the case and you are seeking the best, or optimal, factor values for a system, an alternative is to employ an optimization technique to search for the best combination of values. Several optimization techniques are useful for searching for the combination that produces the most desirable response from the simulation model without evaluating all possible combinations. This is the subject of simulation optimization and is discussed in Chapter 11.
FIGURE 10.4
Unique seed value assigned for each replication. (The figure shows one random number stream in which replication 1 starts at seed 9, replication 2 at seed 5, and replication 3 at seed 3, each seed marking the start of a different segment of the stream.)
The goal is to use the exact same random number from the stream for the exact same purpose in each simulated system. To help achieve this goal, the random number stream can be seeded at the beginning of each independent replication to keep it synchronized across simulations of each system. For example, in Figure 10.4, the first replication starts with a seed value of 9, the second replication starts with a seed value of 5, the third with 3, and so on. If the same seed values for each replication are used to simulate each alternative system, then the same stream of random numbers will drive each of the systems. This seems simple enough. However, care has to be taken not to pick a seed value that places us in a location on the stream that has already been used to drive the simulation in a previous replication. If this were to happen, the results from replicating the simulation of a system would not be independent because segments of the random number stream would have been shared between replications, and this cannot be tolerated. Therefore, some simulation software provides a CRN option that, when selected,
If you do not specify an initial seed value for a stream that is used, ProModel will use the
same seed number as the stream number (stream 3 uses the third seed). A
detailed explanation of how random number generators work and how they
produce unique streams of random numbers is provided in Chapter 3.
Complete synchronization of the random numbers across different models is sometimes difficult to achieve. Therefore, we often settle for partial synchronization. At the very least, it is a good idea to set up two streams with one stream of random numbers used to generate an entity’s arrival pattern and the other stream of random numbers used to generate all other activities in the model. That way, activities added to the model will not inadvertently alter the arrival pattern because they do not affect the sample values generated from the arrival distribution.
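The two-stream idea can be sketched with Python's random module: each purpose gets its own stream, and both streams are reseeded identically for every replication of every alternative. The stream seeds and distributions below are arbitrary stand-ins:

```python
import random

def run_replication(rep_seed, service_mean):
    """One replication: separate streams for arrivals and for everything else."""
    arrival_stream = random.Random(rep_seed)         # stream 1: arrival pattern only
    other_stream = random.Random(rep_seed + 1000)    # stream 2: all other activities
    arrivals = [arrival_stream.expovariate(1 / 5.0) for _ in range(100)]
    services = [other_stream.expovariate(1 / service_mean) for _ in range(100)]
    return arrivals, services

# The same replication seed drives both alternative designs...
arr_a, _ = run_replication(rep_seed=9, service_mean=4.0)
arr_b, _ = run_replication(rep_seed=9, service_mean=3.0)
assert arr_a == arr_b  # ...so the arrival pattern is identical across alternatives
```

Because the service-time change draws only from the second stream, adding or modifying activities never disturbs the arrival pattern.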
and

hw = (t9,0.025) s(1−2)/√n = (2.262)(1.16)/√10 = 0.83 parts per hour
10.6 Summary
An important point to make here is that simulation, by itself, does not solve a problem. Simulation merely provides a means to evaluate proposed solutions by estimating how they behave. The user of the simulation model has the responsibility to generate candidate solutions either manually or by use of automatic optimization techniques and to correctly measure the utility of the solutions based on the output from the simulation. This chapter presented several statistical methods for comparing the output produced by simulation models representing candidate solutions or designs.
When comparing two candidate system designs, we recommend using either
the Welch confidence interval method or the paired-t confidence interval. Also, a
References
Banks, Jerry; John S. Carson; Barry L. Nelson; and David M. Nicol. Discrete-Event System Simulation. Englewood Cliffs, NJ: Prentice Hall, 2001.
Bateman, Robert E.; Royce O. Bowden; Thomas J. Gogg; Charles R. Harrell; and Jack R. A. Mott. System Improvement Using Simulation. Orem, UT: PROMODEL Corp., 1997.
Goldsman, David, and Barry L. Nelson. “Comparing Systems via Simulation.” Chapter 8 in Handbook of Simulation. New York: John Wiley & Sons, 1998.
Hines, William W., and Douglas C. Montgomery. Probability and Statistics in Engineering and Management Science. New York: John Wiley & Sons, 1990.
Hoover, Stewart V., and Ronald F. Perry. Simulation: A Problem-Solving Approach. Reading, MA: Addison-Wesley, 1989.
Law, Averill M., and W. David Kelton. Simulation Modeling and Analysis. New York: McGraw-Hill, 2000.
Miller, Irwin R.; John E. Freund; and Richard Johnson. Probability and Statistics for Engineers. Englewood Cliffs, NJ: Prentice Hall, 1990.
Miller, Rupert G. Beyond ANOVA: Basics of Applied Statistics. New York: Wiley, 1986.
Montgomery, Douglas C. Design and Analysis of Experiments. New York: John Wiley & Sons, 1991.
Petersen, Roger G. Design and Analysis of Experiments. New York: Marcel Dekker, 1985.
Harrell−Ghosh−Bowden: Simulation Using ProModel, Second Edition. I. Study Chapters. 11. Simulation Optimization. © The McGraw−Hill Companies

C H A P T E R
11 SIMULATION OPTIMIZATION
“Man is a goal seeking animal. His life only has meaning if he is reaching out
and striving for his goals.”
—Aristotle
11.1 Introduction
Simulation models of systems are built for many reasons. Some models are built
to gain a better understanding of a system, to forecast the output of a system, or
to compare one system to another. If the reason for building simulation models is
to find answers to questions like “What are the optimal settings for to mini-
mize (or maximize) ?” then optimization is the appropriate technology to
combine with simulation. Optimization is the process of trying different
combina- tions of values for the variables that can be controlled to seek the
combination of values that provides the most desirable output from the
simulation model.
For convenience, let us think of the simulation model as a black box that imitates the actual system. When inputs are presented to the black box, it produces output that estimates how the actual system responds. In our question, the first blank represents the inputs to the simulation model that are controllable by the decision maker. These inputs are often called decision variables or factors. The second blank represents the performance measures of interest that are computed from the stochastic output of the simulation model when the decision variables are set to specific values (Figure 11.1). In the question, “What is the optimal number of material handling devices needed to minimize the time that workstations are starved for material?” the decision variable is the number of material handling devices and the performance measure computed from the output of the simulation model is the amount of time that workstations are starved. The objective, then, is to seek the optimal value for each decision variable that minimizes, or maximizes, the expected value of the performance measure(s) of interest. The performance
measure is traditionally called the objective function. Note that the expected value of the objective function is estimated by averaging the model’s output over multiple replications or batch intervals. The simulation optimization problem is more formally stated as

Min or Max E[f(X1, X2, . . . , Xn)]
Subject to
Lower Boundi ≤ Xi ≤ Upper Boundi   for i = 1, 2, . . . , n

where E[f(X1, X2, . . . , Xn)] denotes the expected value of the objective function, which is estimated.
The search for the optimal solution can be conducted manually or automated with algorithms specifically designed to seek the optimal solution without evaluating all possible solutions. Interfacing optimization algorithms that can automatically generate solutions and evaluate them in simulation models is a worthwhile endeavor because
• It automates part of the analysis process, saving the analyst time.
• A logical method is used to efficiently explore the realm of possible solutions, seeking the best.
• The method often finds several exemplary solutions for the analyst to consider.
The latter is particularly important because, within the list of optimized solutions,
there may be solutions that the decision maker may have otherwise overlooked.
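The idea of automatically generating and evaluating solutions can be illustrated with a simple random search over a noisy black box. The "simulation" below is just a noisy quadratic standing in for a real model, and the search method is plain random sampling rather than any vendor's algorithm:

```python
import random
import statistics

def simulate(x, rng):
    """Stand-in simulation model: noisy response whose true peak is at x = 3."""
    return -(x - 3.0) ** 2 + rng.gauss(0.0, 0.1)

def random_search(n_candidates=200, replications=5, seed=42):
    """Evaluate random candidate solutions, averaging replications per candidate."""
    rng = random.Random(seed)
    best_x, best_mean = None, float("-inf")
    for _ in range(n_candidates):
        x = rng.uniform(0.0, 10.0)  # decision variable within its bounds
        mean_response = statistics.mean(simulate(x, rng) for _ in range(replications))
        if mean_response > best_mean:
            best_x, best_mean = x, mean_response
    return best_x, best_mean

best_x, best_mean = random_search()  # best_x lands near the true optimum of 3
```

Real optimizers replace the uniform sampling with a guided search, but the structure—generate a solution, replicate the model, compare averaged responses—is the same.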
In 1995, PROMODEL Corporation and Decision Science, Incorporated, developed SimRunner based on the research of Bowden (1992) on the use of modern optimization algorithms for simulation-based optimization and machine learning. SimRunner helps those wishing to use advanced optimization concepts to seek better solutions from their simulation models. SimRunner uses an optimization method based on evolutionary algorithms. It is the first widely used, commercially available simulation optimization package designed for major simulation packages (ProModel, MedModel, ServiceModel, and ProcessModel). Although SimRunner is relatively easy to use, it can be more effectively used with a basic understanding of how it seeks optimal solutions to a problem. Therefore, the purpose of this chapter is fourfold:
• To provide an introduction to simulation optimization, focusing on the latest developments in integrating simulation and a class of direct optimization techniques called evolutionary algorithms.
Chapter 11 Simulation Optimization 287
FIGURE 11.2
SimRunner plots the
output responses
generated by a
ProModel simulation
model as it seeks the
optimal solution,
which occurred at
the highest peak.
optimization module for the GASP IV simulation software package. Pegden and
Gately (1980) later developed another optimization module for use with the
SLAM simulation software package. Their optimization packages were based on
a variant of a direct search method developed by Hooke and Jeeves (1961).
After solving several problems, Pegden and Gately concluded that their
packages extended the capabilities of the simulation language by “providing for
automatic optimization of decision.”
The direct search algorithms available today for simulation optimization are much better than those available in the late 1970s. Using these newer algorithms, the SimRunner simulation optimization tool was developed in 1995. Following SimRunner, two other simulation software vendors soon added an optimization feature to their products. These products are OptQuest96, which was introduced in 1996 to be used with simulation models built with Micro Saint software, and WITNESS Optimizer, which was introduced in 1997 to be used with simulation models built with Witness software. The optimization module in OptQuest96 is based on scatter search, which has links to Tabu Search and the popular evolutionary algorithm called the Genetic Algorithm (Glover 1994; Glover et al. 1996). WITNESS Optimizer is based on a search algorithm called Simulated Annealing (Markt and Mayer 1997). Today most major simulation software packages include an optimization feature.
SimRunner has an optimization module and a module for determining the
required sample size (replications) and a model’s warm-up period (in the case of
a steady-state analysis). The optimization module can optimize integer and real
decision variables. The design of the optimization module in SimRunner was
influenced by optima-seeking techniques such as Tabu Search (Glover 1990) and
evolutionary algorithms (Fogel 1992; Goldberg 1989; Schwefel 1981), though it
most closely resembles an evolutionary algorithm (SimRunner 1996b).
and Mollaghasemi (1991) suggest that GAs and Simulated Annealing are the algorithms of choice when dealing with a large number of decision variables. Tompkins and Azadivar (1995) recommend using GAs when the optimization problem involves qualitative (logical) decision variables. The authors have extensively researched the use of genetic algorithms, evolutionary programming, and evolution strategies for solving manufacturing simulation optimization problems and simulation-based machine learning problems (Bowden 1995; Bowden, Neppalli, and Calvert 1995; Bowden, Hall, and Usher 1996; Hall, Bowden, and Usher 1996). Reports have also appeared in the trade journal literature on how the EA-based optimizer in SimRunner helped to solve real-world problems. For example, IBM; Sverdrup Facilities, Inc.; and Baystate Health Systems report benefits from using SimRunner as a decision support tool (Akbay 1996). The simulation group at Lockheed Martin used SimRunner to help determine the most efficient lot sizes for parts and when the parts should be released to the system to meet schedules (Anonymous 1996).
FIGURE 11.3
Ackley’s function with noise and the ES’s progress over eight generations. (The plot shows measured response versus decision variable X, with the population’s location plotted for generations 1 through 8.)
The authors are not advocating that analysts can forget about determining
the number of replications needed to satisfactorily estimate the expected value
of the response. However, to effectively conduct a search for the optimal
solution, an algorithm must be able to deal with noisy response surfaces and
the resulting uncertainties that exist even when several observations
(replications) are used to estimate a solution’s true performance.
Step 1. The decision variables believed to affect the output of the simulation
model are first programmed into the model as variables whose values can be
quickly changed by the EA. Decision variables are typically the parameters
whose values can be adjusted by management, such as the number of nurses
assigned to a shift or the number of machines to be placed in a work cell.
Step 2. For each decision variable, define its numeric data type (integer or real)
and its lower bound (lowest possible value) and upper bound (highest
possible value). During the search, the EA will generate solutions by varying the
values of decision variables according to their data types, lower bounds, and
upper bounds. The number of decision variables and the range of possible
values affect the size of the search space (number of possible solutions to the
problem). Increasing the number of decision variables or their range of values increases the size of the search space, which can make it more difficult and time-consuming to identify the optimal solution. As a rule, include only those decision variables known to significantly affect the output of the simulation model and judiciously define the range of possible values for each decision variable. Also, care should be taken when defining the lower and upper bounds of the decision variables to ensure that a combination of values will not be created that leads to a solution not envisioned when the model was built.
Step 3. After selecting the decision variables, construct the objective function to
measure the utility of the solutions tested by the EA. Actually, the foundation for
the objective function would have already been established when the goals for
the simulation project were set. For example, if the goal of the modeling project
is to find ways to minimize a customer’s waiting time in a bank, then the
objective function should measure an entity’s (customer’s) waiting time in the
bank. The objective function is built using terms taken from the output report
generated at the end of the simulation run. Objective function terms can be
based on entity statistics, location statistics, resource statistics, variable
statistics, and so on. The user specifies whether a term is to be minimized or
maximized as well as the overall weighting of that term in the objective
function. Some terms may be more or less important to the user than other
terms. Remember that as terms are added to the objective function, the complexity of the search space may increase, making the optimization problem more difficult. From a statistical point of view, single-term objective functions are also preferable to multiterm objective functions. Therefore, strive to keep the objective function as specific as possible.
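A weighted, multiterm objective of the kind described above might be sketched like this (the statistic names and weights are hypothetical; in SimRunner the terms come from the simulation output report):

```python
def objective(stats, terms):
    """terms: (statistic name, weight, sense); sense is +1 to maximize
    the term, -1 to minimize it."""
    return sum(sense * weight * stats[name] for name, weight, sense in terms)

# Hypothetical output-report statistics and term weights:
stats = {"throughput": 4200, "avg_wait_min": 3.5}
terms = [("throughput", 10.0, +1),     # reward parts produced
         ("avg_wait_min", 500.0, -1)]  # penalize customer waiting
print(objective(stats, terms))  # 42000.0 - 1750.0 = 40250.0
```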
The objective function is a random variable, and a set of initial experiments
should be conducted to estimate its variability (standard deviation). Note that
there is a possibility that the objective function’s standard deviation differs
from one solution to the next. Therefore, the number of replications necessary to estimate the expected value of the objective function may change from one solution to the next. Thus the objective function's standard deviation should be measured for several different solutions, and the highest standard deviation recorded should be used to compute the number of replications necessary to estimate the expected value of the objective function. When selecting the set of
test solutions, choose solutions that are very different from one another. For
example, form solutions by setting the decision variables to their lower bounds,
middle values, or upper bounds.
A better approach for controlling the number of replications used to estimate the expected value of the objective function for a given solution is to incorporate a rule into the model that schedules additional replications until the estimate reaches a desired level of precision (confidence interval half-width). This technique helps avoid running too many replications for some solutions and too few replications for others.
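One way to sketch such a precision-based stopping rule (a normal-approximation sketch assuming independent replications; `run_replication` is a hypothetical stand-in for one simulation run):

```python
import math
import random
import statistics

def run_replication():
    # Hypothetical stand-in: one simulation run returning an objective value.
    return random.gauss(35.0, 2.0)

def replicate_until_precise(target_hw, n0=10, n_max=1000, z=1.96):
    """Add replications until the ~95% CI half-width falls below target_hw."""
    obs = [run_replication() for _ in range(n0)]
    while len(obs) < n_max:
        hw = z * statistics.stdev(obs) / math.sqrt(len(obs))
        if hw <= target_hw:
            break
        obs.append(run_replication())
    return statistics.mean(obs), len(obs)

random.seed(7)
estimate, n = replicate_until_precise(target_hw=0.5)
print(f"estimate {estimate:.2f} after {n} replications")
```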
Step 4. Select the size of the EA’s population (number of solutions) and begin
the search.
296 Part I Study Chapters
The size of the population of solutions used to conduct the search affects both the likelihood that the algorithm will locate the optimal solution and
the time required to conduct the search. In general, as the population size is
increased, the algorithm finds better solutions. However, increasing the population size generally increases the time required to conduct the search. Therefore,
a balance must be struck between finding the optimum and the amount of
available time to conduct the search.
Step 5. After the EA’s search has concluded (or halted due to time constraints),
the analyst should study the solutions found by the algorithm. In addition to the
best solution discovered, the algorithm usually finds many other competitive
solutions. A good practice is to rank each solution evaluated based on its utility
as measured by the objective function. Next, select the most highly
competitive solutions and, if necessary, make additional model replications of
those solutions to get better estimates of their true utility. And, if necessary,
refer to Chapter 10 for background on statistical techniques that can help you
make a final decision between competing solutions. Also, keep in mind that the
database of solutions evaluated by the EA represents a rich source of
information about the behavior, or response surface, of the simulation model.
Sorting and graphing the solutions can help you interpret the “meaning” of the
data and gain a better understanding of how the system behaves.
that disruptions (machine failures, line imbalances, quality problems, or the like)
will shut down the production line. Several strategies have been developed for
determining the amount of buffer storage needed between workstations.
However, these strategies are often developed based on simplifying
assumptions, made for mathematical convenience, that rarely hold true for real
production systems.
One way to avoid oversimplifying a problem for the sake of mathematical
convenience is to build a simulation model of the production system and use it
to help identify the amount of buffer storage space needed between
workstations. However, the number of possible solutions to the buffer allocation
problem grows rapidly as the size of the production system (number of
possible buffer storage areas and their possible sizes) increases, making it
impractical to evaluate all solutions. In such cases, it is helpful to use simulation optimization software like SimRunner to identify a set of candidate
solutions.
This example is loosely based on the example production system
presented in Chapter 10. It gives readers insight into how to formulate simulation optimization problems when using SimRunner. The example is not fully solved; its completion is left as an exercise in Lab Chapter 11.
FIGURE 11.4
Production system with four workstations and three buffer storage areas.
next machine, where it waits to be processed. However, if the buffer is full, the
part cannot move forward and remains on the machine until a space becomes
available in the buffer. Furthermore, the machine is blocked and no other parts
can move to the machine for processing. The part exits the system after being
processed by the fourth machine. Note that parts are selected from the buffers to
be processed by a machine in a first-in, first-out order. The processing time at
each machine is exponentially distributed with a mean of 1.0 minute, 1.3
minutes,
0.7 minute, and 1.0 minute for machines one, two, three, and four, respectively.
The time to move parts from one location to the next is negligible.
For this problem, three decision variables describe how buffer space is allocated (one decision variable for each buffer, signifying the number of parts that can be stored in the buffer). The goal is to find the optimal value for each decision variable to maximize the profit made from the sale of the parts. The manufacturer collects $10 per part produced, but each unit of space provided for a part in a buffer costs $1,000. So buffer storage has to be strategically allocated to maximize the throughput of the system. Throughput will be measured as the number of parts completed during a 30-day period.
Step 1. In this step, the decision variables for the problem are identified and
defined. Three decision variables are needed to represent the number of parts
that can be stored in each buffer (buffer capacity). Let Q1, Q2, and Q3
represent the number of parts that can be stored in buffers 1, 2, and 3,
respectively.
Step 2. The numeric data type for each Qi is integer. If it is assumed that each
buffer will hold a minimum of one part, then the lower bound for each Qi is 1.
The upper bound for each decision variable could arbitrarily be set to, say, 20. However, keep in mind that as the range of values for a decision variable is increased, the size of the search space also increases, which will likely increase the time needed to conduct the search. Therefore, this is a good place to apply existing knowledge about the performance and design of the system.
For example, physical constraints may limit the maximum capacity of each
buffer to no greater than 15 parts. If so, there is no need to conduct a search with
a range of values that produce infeasible solutions (buffer capacities greater than
15 parts). Considering that the fourth machine’s processing time is larger than
the third machine’s processing time, parts will tend to queue up at the third
buffer. Therefore, it might be a good idea to set the upper bound for Q3 to a
larger value than the other two decision variables. However, be careful not to assume too much.
Step 3. Here the objective function is formulated. The model was built to investigate buffer allocation strategies to maximize the throughput of the system.
Given that the manufacturer collects $10 per part produced and that each unit of
space provided for a part in a buffer costs $1,000, the objective function for the
optimization could be stated as
Maximize [$10(Throughput) − $1,000(Q1 + Q2 + Q3)]
where Throughput is the total number of parts produced during a 30-day
period.
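For a single candidate solution, evaluating this objective is simple arithmetic (the throughput figure below is hypothetical; in practice it comes from the 30-day simulation run):

```python
def profit(throughput, q1, q2, q3):
    """$10 per part produced minus $1,000 per unit of buffer space."""
    return 10 * throughput - 1000 * (q1 + q2 + q3)

# Hypothetical solution (Q1, Q2, Q3) = (4, 6, 5) producing 38,000 parts
# over the 30-day run:
print(profit(38000, 4, 6, 5))  # 380000 - 15000 = 365000
```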
Next, initial experiments are conducted to estimate the variability of the
objective function in order to determine the number of replications the EA-based
optimization algorithm will use to estimate the expected value of the objective
function for each solution it evaluates. While doing this, it was also noticed that
the throughput level increased very little for buffer capacities beyond a value of nine. Therefore, it was decided to change the upper bound for each decision variable to nine. This resulted in a search space of 9³, or 729, unique solutions, a reduction of 2,646 solutions from the original formulation's 15³ = 3,375. This will likely reduce the search time.
Step 4. Select the size of the population that the EA-based optimization algorithm
will use to conduct its search. SimRunner allows the user to select an optimization
profile that influences the degree of thoroughness used to search for the optimal
solution. The three optimization profiles are aggressive, moderate, and cautious,
which correspond to EA population sizes of small, medium, and large. The aggressive profile generally results in a quick search for locally optimal solutions and is used when computer time is limited. The cautious profile specifies that a more thorough search for the global optimum be conducted and is used when computer time is plentiful. At this point, the analyst knows the amount of time required to evaluate a solution. Only one more piece of information is needed to determine how long it will take the algorithm to conduct its search: the fraction of the 729 solutions the algorithm will evaluate before converging to a final solution. Unfortunately, there is no way of knowing this in advance. With time running out before a recommendation for the system must be given to management, the analyst elects to use a small population size by selecting SimRunner's aggressive optimization profile.
Step 5. After the search concludes, the analyst selects for further evaluation
some of the top solutions found by the optimization algorithm. Note that this
does not necessarily mean that only those solutions with the best objective
function values are chosen, because the analyst should conduct both a
quantitative and a qualitative analysis. On the quantitative side, statistical
procedures presented in Chapter 10 are used to gain a better understanding of
the relative differences in performance between the candidate solutions. On
the qualitative side, one solution may be preferred over another based on
factors such as ease of implementation.
FIGURE 11.5
SimRunner results for the buffer allocation problem.
from best to worst. SimRunner evaluated 82 out of the possible 729 solutions.
Note that there is no guarantee that SimRunner's best solution is in fact the optimum solution to the problem. However, it is likely to be one of the better solutions to the problem and could be the optimum one.
The last two columns in the SimRunner table shown in Figure 11.5 display
the lower and upper bounds of a 95 percent confidence interval for each solution
evaluated. Notice that there is significant overlap between the confidence intervals. Although this is not a formal hypothesis-testing procedure, the overlapping confidence intervals suggest that there may be no significant difference in the performance of the top solutions displayed in the table. Therefore, it would be wise to run additional replications of the favorite solutions from the list and/or use one of the hypothesis-testing procedures in Chapter 10 before selecting a particular solution as the best. The real value here is that SimRunner automatically conducted the search, without the analyst having to hover over it, and reported back several good solutions to the problem for the analyst to consider.
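A quick way to see whether two solutions' 95 percent confidence intervals overlap (an informal screen, not one of the formal Chapter 10 procedures; the replication data are hypothetical):

```python
import math
import statistics

def ci95(samples, z=1.96):
    """Approximate 95% confidence interval for the mean."""
    hw = z * statistics.stdev(samples) / math.sqrt(len(samples))
    m = statistics.mean(samples)
    return m - hw, m + hw

sol_a = [96, 101, 99, 104, 98]   # hypothetical replication results
sol_b = [97, 95, 100, 93, 96]
lo_a, hi_a = ci95(sol_a)
lo_b, hi_b = ci95(sol_b)
overlap = lo_a <= hi_b and lo_b <= hi_a
print(overlap)  # True: the intervals overlap, so run more replications
```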
FIGURE 11.6
The two-stage pull production system. (Figure labels: customer demand, assembly lines, kanban posts, kanban cards, trigger values 1 and 2, raw material, component, subassembly, final product.)
Figure 11.6 illustrates the relationship of the processes in the two-stage pull
production system of interest that produces several different types of parts.
Customer demand for the final product causes containers of subassemblies to be
pulled from the Stage One WIP location to the assembly lines. As each container
is withdrawn from the Stage One WIP location, a production-ordering kanban
card representing the number of subassemblies in a container is sent to the
kanban post for the Stage One processing system. When the number of kanban
cards for a given subassembly meets its trigger value, the necessary component
parts are pulled from the Stage Two WIP to create the subassemblies. Upon
completing the Stage One process, subassemblies are loaded into containers, the
corresponding kanban card is attached to the container, and both are sent to
Stage One WIP. The container and card remain in the Stage One WIP location
until pulled to an assembly line.
In Stage Two, workers process raw materials to fill the Stage Two WIP
location as component parts are pulled from it by Stage One. As component
parts are withdrawn from the Stage Two WIP location and placed into the
Stage One process, a production-ordering kanban card representing the
quantity of component parts in a container is sent to the kanban post for the
Stage Two line. When the number of kanban cards for a given component part
meets a trigger value, production orders equal to the trigger value are issued to
the Stage Two line. As workers move completed orders of component parts
from the Stage Two line to WIP, the corresponding kanban cards follow the
component parts to the Stage Two WIP location.
While an overall reduction in WIP is sought, production planners desire a solution (kanban cards and trigger values for each stage) that gives preference to minimizing the containers in the Stage One WIP location. This requirement is due to space limitations.
where the safety coefficient represents the amount of WIP needed in the system.
Production planners assumed a safety coefficient that resulted in one day of WIP
for each part type. Additionally, they decided to use one setup per day for each
part type. Although this equation provides an estimate of the minimum number
of kanban cards, it does not address trigger values. Therefore, trigger values for the Stage Two line were set at the expected number of containers consumed for each part type in one day. The Toyota equation recommended using a total of 243 kanban cards; the details of the calculation are omitted for brevity. When evaluated in the simulation model, this solution yielded a performance score of 35.140 (using the performance function defined in Section 11.7.2) based on four independent simulation runs (replications).
FIGURE 11.7
After the second generation, the solutions found by the optimization module are better than the solutions generated using the Toyota method. (Measured response versus generation, 0 to 150, for the optimization module and the Toyota solution.)
11.8 Summary
In recent years, major advances have been made in the development of user-friendly simulation software packages. However, progress in developing simulation output analysis tools has been especially slow in the area of simulation optimization because conducting simulation optimization with traditional techniques has been as much an art as a science (Greenwood, Rees, and Crouch 1993). There is such an overwhelming number of traditional techniques that only individuals with extensive backgrounds in statistics and optimization theory have realized the benefits of integrating simulation and optimization concepts. Using newer optimization techniques, it is now possible to narrow the gap with user-friendly, yet powerful, tools that allow analysts to combine simulation and optimization for improved decision support. SimRunner is one such tool.
Our purpose is not to argue that evolutionary algorithms are the panacea
for solving simulation optimization problems. Rather, our purpose is to introduce
the reader to evolutionary algorithms by illustrating how they work and how to
use them for simulation optimization and to expose the reader to the wealth of
literature that demonstrates that evolutionary algorithms are a viable choice for
reliable optimization.
There will always be debate about the best techniques to use for simulation optimization. Debate is welcome, as it results in a better understanding of the real issues and leads to the development of better solution procedures. It must be remembered, however, that the practical issue is not whether the optimization technique guarantees that it locates the optimal solution in the shortest amount of time for all possible problems it may encounter; rather, it is whether the optimization technique consistently finds good solutions that are better than those analysts are finding on their own. Newer techniques such as evolutionary algorithms and scatter search meet this requirement because they have proved robust in their ability to solve a wide variety of problems, and their ease of use makes them a practical choice for simulation optimization today (Boesel et al. 2001; Brady and Bowden 2001).
References
Ackley, D. A Connectionist Machine for Genetic Hill Climbing. Boston, MA: Kluwer, 1987.
Akbay, K. “Using Simulation Optimization to Find the Best Solution.” IIE Solutions, May 1996, pp. 24–29.
Anonymous. “Lockheed Martin.” IIE Solutions, December 1996, pp. SS48–SS49.
Azadivar, F. “A Tutorial on Simulation Optimization.” 1992 Winter Simulation Conference, Arlington, Virginia, ed. Swain, J., D. Goldsman, R. Crain, and J. Wilson. Piscataway, NJ: Institute of Electrical and Electronics Engineers, 1992, pp. 198–204.
Bäck, T.; T. Beielstein; B. Naujoks; and J. Heistermann. “Evolutionary Algorithms for the Optimization of Simulation Models Using PVM.” EuroPVM ’95—Second European PVM Users’ Group Meeting, ed. Dongarra, J., M. Gengler, B. Tourancheau, and X. Vigouroux. Paris: Hermes, 1995, pp. 277–82.
Bäck, T., and H.-P. Schwefel. “An Overview of Evolutionary Algorithms for Parameter
Optimization.” Evolutionary Computation 1, no. 1 (1993), pp. 1–23.
Barton, R., and J. Ivey. “Nelder-Mead Simplex Modifications for Simulation
Optimization.”
Management Science 42, no. 7 (1996), pp. 954–73.
Biethahn, J., and V. Nissen. “Combinations of Simulation and Evolutionary Algorithms in
Management Science and Economics.” Annals of Operations Research 52 (1994),
pp. 183–208.
Boesel, J.; Bowden, R. O.; Glover, F.; and J. P. Kelly. “Future of Simulation Optimization.”
Proceedings of the 2001 Winter Simulation Conference, 2001, pp. 1466–69.
Bowden, R. O. “Genetic Algorithm Based Machine Learning Applied to the Dynamic
Routing of Discrete Parts.” Ph.D. Dissertation, Department of Industrial
Engineering, Mississippi State University, 1992.
Bowden, R. “The Evolution of Manufacturing Control Knowledge Using Reinforcement
Learning.” 1995 Annual International Conference on Industry, Engineering, and
Management Systems, Cocoa Beach, FL, ed. G. Lee. 1995, pp. 410–15.
Bowden, R., and S. Bullington. “An Evolutionary Algorithm for Discovering Manufacturing Control Strategies.” In Evolutionary Algorithms in Management Applications, ed. Biethahn, J. and V. Nissen. Berlin: Springer, 1995, pp. 124–38.
Bowden, R., and S. Bullington. “Development of Manufacturing Control Strategies
Using Unsupervised Learning.” IIE Transactions 28 (1996), pp. 319–31.
Bowden, R.; J. Hall; and J. Usher. “Integration of Evolutionary Programming and Simulation to Optimize a Pull Production System.” Computers and Industrial Engineering 31, no. 1/2 (1996), pp. 217–20.
Bowden, R.; R. Neppalli; and A. Calvert. “A Robust Method for Determining Good Combinations of Queue Priority Rules.” Fourth International Industrial Engineering Research Conference, Nashville, TN, ed. Schmeiser, B. and R. Uzsoy. Norcross, GA: IIE, 1995, pp. 874–80.
SimRunner Online Software Help. Ypsilanti, MI: Decision Science, Inc., 1996b.
SimRunner User’s Guide ProModel Edition. Ypsilanti, MI: Decision Science, Inc., 1996a.
Stuckman, B.; G. Evans; and M. Mollaghasemi. “Comparison of Global Search Methods
for Design Optimization Using Simulation.” 1991 Winter Simulation Conference,
Phoenix, AZ, ed. Nelson, B., W. Kelton and G. Clark. Piscataway, NJ: Institute of
Electrical and Electronics Engineers, 1991, pp. 937–44.
Tompkins, G., and F. Azadivar. “Genetic Algorithms in Optimizing Simulated Systems.” 1995 Winter Simulation Conference, Arlington, VA, ed. Alexopoulos, C., K. Kang, W. Lilegdon, and D. Goldsman. Piscataway, NJ: Institute of Electrical and Electronics Engineers, 1995, pp. 757–62.
Usher, J., and R. Bowden. “The Application of Genetic Algorithms to Operation Sequencing for Use in Computer-Aided Process Planning.” Computers and Industrial Engineering Journal 30, no. 4 (1996), pp. 999–1013.
Harrell−Ghosh−Bowden: Simulation Using ProModel, Second Edition. I. Study Chapters. 12. Modeling Manufacturing Systems. © The McGraw−Hill Companies
C H A P T E R
12 MODELING MANUFACTURING SYSTEMS
“We no longer have the luxury of time to tune and debug new manufacturing
systems on the floor, since the expected economic life of a new system, before
major revision will be required, has become frighteningly short.”
—Conway and Maxwell
12.1 Introduction
In Chapter 7 we discussed general procedures for modeling the basic operation of manufacturing and service systems. In this chapter we discuss design and operating issues that are more specific to manufacturing systems. Different applications of simulation in manufacturing are presented, together with how specific manufacturing issues are addressed in a simulation model. Most manufacturing systems have material handling systems that, in some instances, have a major impact on overall system performance. We touch on a few general issues related to material handling systems in this chapter; however, a more thorough treatment of material handling systems is given in Chapter 13.
Manufacturing systems are processing systems in which raw materials are
transformed into finished products through a series of operations performed at
workstations. In the rush to get new manufacturing systems on line, engineers and planners often become overly preoccupied with the processes and methods without fully considering the overall coordination of system components.
Many layout and improvement decisions in manufacturing are left to chance or are driven by the latest management fad, with little knowledge of how much improvement will result or whether a decision will result in any improvement at all. For example, the work-in-process (WIP) reduction espoused by just-in-time (JIT) often disrupts operations because it merely uncovers the rocks (variability, long setups, or the like) hidden beneath the inventory water level that necessitated the WIP in the first place. To accurately predict the effect of lowering WIP levels
requires sonar capability. Ideally you would like to identify and remove production rocks before arbitrarily lowering inventory levels and exposing production
to these hidden problems. “Unfortunately,” note Hopp and Spearman (2001),
“JIT, as described in the American literature, offers neither sonar (models that
predict the effects of system changes) nor a sense of the relative economics
of level reduction versus rock removal.”
Another popular management technique is the theory of constraints. In this
approach, a constraint or bottleneck is identified and a best-guess solution is
implemented, aimed at either eliminating that particular constraint or at least mitigating its effects. The implemented solution is then evaluated and, if the impact was underestimated, another solution is attempted. As one manufacturing manager expressed, “Constraint-based management can’t quantify investment justification or develop a remedial action plan” (Berdine 1993). It is merely a trial-and-error technique in which a best-guess solution is implemented with the hope that enough improvement is realized to justify the cost of the solution. Even Deming’s plan–do–check–act (PDCA) cycle of process improvement implicitly prescribes checking performance after implementation. What the PDCA cycle lacks is an evaluation step to test or simulate the plan before it is implemented. While eventually leading to a better solution, this trial-and-error approach ends up being costly, time-consuming, and disruptive to implement.
In this chapter we address the following topics:
• What are the special characteristics of manufacturing systems?
• What terminology is used to describe manufacturing operations?
• How is simulation applied to manufacturing systems?
• What techniques are used to model manufacturing systems?
As companies move toward greater vertical and horizontal integration and look
for ways to improve the entire value chain, simulation will continue to be an essential tool for effectively planning the production and delivery processes.
Simulation in manufacturing covers the range from real-time cell control
to long-range technology assessment, where it is used to assess the feasibility
of new technologies prior to committing capital funds and corporate resources.
Figure 12.1 illustrates this broad range of planning horizons.
Simulations used to make short-term decisions usually require more detailed
models with a closer resemblance to current operations than what would be
found in long-term decision-making models. Sometimes the model is an exact
replica of the current system and even captures the current state of the system. This is true in real-time control and detailed scheduling applications. As simulation is used for more long-term decisions, the models may have little or no resemblance to current operations. The model resolution becomes coarser, usually because higher-level decisions are being made and data are too fuzzy and unreliable that far out into the future.
Simulation helps evaluate the performance of alternative designs and the effectiveness of alternative operating policies. A list of typical design and operational decisions for which simulation might be used in manufacturing includes the following:
Design Decisions
1. What type and quantity of machines, equipment, and tooling should
be used?
2. How many operating personnel are needed?
3. What is the production capability (throughput rate) for a given
configuration?
FIGURE 12.1
Decision range in manufacturing simulation: cell management, job sequencing, resource loading, production scheduling, process change studies, system configuration, technology assessment.
include
• Methods analysis.
• Plant layout.
• Batch sizing.
• Production control.
• Inventory control.
• Supply chain planning.
• Production scheduling.
• Real-time control.
• Emulation.
Each of these application areas is discussed here.
(Figure source: Operations Transactions, September 1984, p. 218.)
FIGURE 12.3
Material flow system: receiving, manufacturing, shipping.
FIGURE 12.4
Comparison between (a) process layout, (b) product layout, and (c) cell layout.
product layout is best. Often a combination of topologies is used within the same
facility to accommodate mixed requirements. For example, a facility might
be predominately a job shop with a few manufacturing cells. In general, the
more the topology resembles a product layout where the flow can be
streamlined, the greater the efficiencies that can be achieved.
Perhaps the greatest impact simulation can have on plant layout comes from
designing the process itself, before the layout is even planned. Because the
process plan and method selection provide the basis for designing the layout,
having a well-designed process means that the layout will likely not require
major changes once the best layout is found for the optimized process.
FIGURE 12.5
Illustration of (a) production batch, (b) move batch, and (c) process batch.
and moved together from one workstation to another. A production batch is often
broken down into smaller move batches. This practice is called lot splitting. The
move batch need not be constant from location to location. In some batch manufacturing systems, for example, a technique called overlapped production is used to minimize machine idle time and reduce work in process. In overlapped production, a move batch arrives at a workstation where parts are individually processed. Then, instead of accumulating the entire batch before moving on, parts are sent on individually or in smaller quantities to prevent the next workstation from being idle while waiting for the entire batch. The process batch is the quantity of parts that are processed simultaneously at a particular operation and usually consists of a single part. The relationship between these batch types is illustrated in Figure 12.5.
Deciding which size to use for each particular batch type is usually based on economic trade-offs between in-process inventory costs and economies of scale associated with larger batch sizes. Larger batch sizes usually result in lower setup costs, handling costs, and processing costs. Several commands are provided in ProModel for modeling batching operations, such as GROUP, COMBINE, JOIN, and LOAD.
Push Control
A push system is one in which production is driven by workstation capacity and material availability. Each workstation seeks to produce as much as it can, pushing finished work forward to the next workstation. In cases where the system is unconstrained by demand (the demand exceeds system capacity), material can be pushed without restraint. Usually, however, there is some synchronization of push with demand. In make-to-stock production, the triggering mechanism is a drop in finished goods inventory. In make-to-order or assemble-to-order production, a master production schedule drives production by scheduling through a material requirements planning (MRP) or other backward or forward scheduling system.
MRP systems determine how much each station should produce for a given
period. Unfortunately, once planned, MRP is not designed to respond to disruptions and breakdowns that occur during that period. Consequently, stations continue
to produce inventory as planned, regardless of whether downstream stations can
absorb the inventory. Push systems such as those resulting from MRP tend to
build up work in process (WIP), creating high inventory-carrying costs and long
flow times.
Pull Control
At the other extreme from a push system is a pull system, in which downstream demand triggers each preceding station to produce a part, with no more than one or two parts at a station at any given time (see Figure 12.6). Pull systems are often
associated with the just-in-time (JIT) or lean manufacturing philosophy, which
advocates the reduction of inventories to a minimum. The basic principle of JIT
is to continuously reduce scrap, lot sizes, inventory, and throughput time as well
as eliminate the waste associated with non–value added activities such as
material handling, storage, machine setup, and rework.
FIGURE 12.6
Push versus pull system.
and so forth). The modeler can experiment with different replenishment strategies
and usage policies until the best plan is found that meets the established criteria.
Using simulation modeling over traditional, analytic modeling for inventory
planning provides several benefits (Browne 1994):
• Greater accuracy because actual, observed demand patterns and irregular
inventory replenishments can be modeled.
• Greater flexibility because the model can be tailored to fit the situation
rather than forcing the situation to fit the limitations of an existing model.
• Easier to model because complex formulas that attempt to capture the
entire problem are replaced with simple arithmetic expressions describing
basic cause-and-effect relationships.
• Easier to understand because demand patterns and usage conditions
are more descriptive of how the inventory control system actually
works. Results reflect information similar to what would be observed
from operating the actual inventory control system.
• More informative output that shows the dynamics of inventory conditions
over time and provides a summary of supply, demand, and shortages.
• More suitable for management because it provides “what if ” capability
so alternative scenarios can be evaluated and compared. Charts and
graphs are provided that management can readily understand.
Rule                            Definition
Shortest processing time        Select the job having the least processing time.
Earliest due date               Select the job that is due the soonest.
First-come, first-served        Select the job that has been waiting the longest for this workstation.
First-in-system, first-served   Select the job that has been in the shop the longest.
Slack per remaining operations  Select the job with the smallest ratio of slack to operations remaining to be performed.
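Each of these dispatching rules can be expressed as a sort key over the attributes of the queued jobs. The following is a hypothetical Python sketch, not ProModel syntax; the job field names are assumptions made for illustration:

```python
# Hypothetical sketch of the dispatching rules as sort keys over queued jobs.
# Field names are assumed for illustration; ProModel expresses these rules
# differently (through routing rules and entity attributes).
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    process_time: float        # time required at this workstation
    due_date: float            # when the job is due
    arrival_at_station: float  # when it joined this queue
    arrival_in_system: float   # when it entered the shop
    slack: float               # due date minus remaining work content
    ops_remaining: int         # operations left to be performed

RULES = {
    "SPT":   lambda j: j.process_time,                    # shortest processing time
    "EDD":   lambda j: j.due_date,                        # earliest due date
    "FCFS":  lambda j: j.arrival_at_station,              # longest wait here
    "FISFS": lambda j: j.arrival_in_system,               # longest time in shop
    "S/RO":  lambda j: j.slack / max(j.ops_remaining, 1), # slack per remaining op
}

def select_job(queue, rule):
    """Pick the next job from the queue under the named dispatching rule."""
    return min(queue, key=RULES[rule])
```

With two jobs queued, the chosen job changes with the rule: a short job wins under SPT even if it arrived last, while FCFS favors the longest-waiting job.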
Chapter 12 Modeling Manufacturing Systems 327
12.5.9 Emulation
A special use of simulation in manufacturing, particularly in automated systems,
has been in the area of hardware emulation. As an emulator, simulation takes
328 Part I Study Chapters
inputs from the actual control system (such as programmable controllers or microcomputers), mimics the behavior that would take place in the actual system, and then provides feedback signals to the control system. The feedback signals are synthetically created rather than coming from the actual hardware devices.
In using simulation for hardware emulation, the control system is essentially plugged into the model instead of the actual system. The hardware devices are then emulated in the simulation model. In this way simulation is used to test, debug, and even refine the actual control system before any or all of the hardware has been installed. Emulation can significantly reduce the time to start up new systems and implement changes to automation.
Emulation has the same real-time requirements as when simulation is used
for real-time control. The simulation clock must be synchronized with the real
clock to mimic actual machining and move times.
unload times can be tacked onto the operation time. A precaution applies to semiautomatic machines, where an operator is required to load and/or unload the machine but is not required to operate it. If the operator is nearly always available, or if the load and unload activities are automated, this may not be a problem.
• Model them as a movement or handling activity. If, as just described, an
operator is required to load and unload the machine but the operator is
not always available, the load and unload activities should be modeled as
a separate move activity to and from the location. To be accurate, the part
should not enter or leave the machine until the operator is available. In
ProModel, this would be defined as a routing from the input buffer to the
machine and then from the machine to the output buffer using the
operator as the movement resource.
FIGURE 12.7 Transfer line system (machines, buffers, and stations in line).
placed directly on the station fixture. In-line transfer machines have a load
station at one end and an unload station at the other end. Some in-line transfer
machines are coupled together to form a transfer line (see Figure 12.7) in
which parts are placed on system pallets. This provides buffering
(nonsynchronization) and even recirculation of pallets if the line is a closed
loop.
Another issue is finding the optimum number of pallets in a closed, nonsynchronous pallet system. Such a system is characterized by a fixed number of pallets that continually recirculate through the system. Obviously, the system should have enough pallets to at least fill every workstation in the system, but not so many that they fill every position in the line (this would result in gridlock). Generally, productivity increases as the number of pallets increases up to a certain point, beyond which productivity levels off and then actually begins to decrease. Studies have shown that the optimal point tends to be close to the sum of all of the workstation positions plus one-half of the buffer pallet positions.
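The rule of thumb above is simple arithmetic. A worked illustration, with assumed position counts (not values from the text):

```python
# Worked illustration of the pallet-count rule of thumb, using assumed counts:
# a closed loop with 8 workstation positions and 12 buffer positions.
workstations = 8
buffer_positions = 12
total_positions = workstations + buffer_positions   # 20 pallet positions total

# Rule of thumb: near-optimal pallet count is approximately the number of
# workstation positions plus one-half of the buffer positions.
suggested_pallets = workstations + buffer_positions // 2
print(suggested_pallets)  # 14: enough to fill every workstation,
                          # well short of gridlock at 20
```

Simulation would then be used to test pallet counts around this value rather than trusting the rule of thumb alone.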
A typical analysis might be to find the necessary buffer sizes to ensure that
the system is unaffected by individual failures at least 95 percent of the time. A
similar study might find the necessary buffer sizes to protect the operation
against the longest tool change time of a downstream operation.
Stations may be modeled individually or collectively, depending on the level of detail required in the model. Often a series of stations can be modeled as a single location. Operations in a transfer machine can be modeled as a simple operation time if an entire machine or block of synchronous stations is modeled as a single location. Otherwise, the operation time specification is a bit tricky because it depends on all stations finishing their operation at the same time. One might initially be inclined simply to assign the time of the slowest operation to every station. Unfortunately, this does not account for the synchronization of operations. Usually, synchronization requires a timer to be set up for each station that represents the operation for all stations. In ProModel this is done by defining an activated subroutine that increments a global variable representing the cycle completion after waiting for the cycle time. Entities at each station wait for the variable to be incremented using a WAIT UNTIL statement.
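The timer-and-shared-variable pattern described above can be mimicked in ordinary code with a barrier: a single timer releases all stations at once when the cycle elapses, just as entities blocked on a WAIT UNTIL are released when the global variable increments. This is a plain Python sketch, not ProModel code, and the cycle time and station count are assumptions:

```python
# Sketch of synchronized transfer-machine cycles (not ProModel code): every
# station blocks until the shared cycle barrier trips, which a single timer
# thread does once per cycle -- analogous to an activated subroutine
# incrementing a global variable that entities test with WAIT UNTIL.
import threading
import time

CYCLE_TIME = 0.01   # assumed cycle time in seconds (illustrative)
N_STATIONS = 3
cycle_done = threading.Barrier(N_STATIONS + 1)   # all stations + the timer
completions = []
lock = threading.Lock()

def timer(cycles):
    for _ in range(cycles):
        time.sleep(CYCLE_TIME)   # one synchronized machine cycle elapses
        cycle_done.wait()        # release every waiting station at once

def station(station_id, cycles):
    for cycle in range(cycles):
        cycle_done.wait()        # WAIT UNTIL the cycle completes
        with lock:
            completions.append((cycle, station_id))

threads = [threading.Thread(target=timer, args=(2,))]
threads += [threading.Thread(target=station, args=(i, 2)) for i in range(N_STATIONS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because the barrier releases all parties together, every station finishes cycle 0 before any station begins cycle 1, which is exactly the synchronization the slowest-operation-time shortcut fails to capture.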
FIGURE 12.8 Continuous flow system.
FIGURE 12.9 Quantity of flow when (a) the rate is constant and (b) the rate is changing. For a constant rate of flow F, the quantity of flow between times t1 and t2 is Q = F × (t2 − t1); for a changing rate of flow f, it is Q = ∫ f dt evaluated from t1 to t2.
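A quick numeric illustration of the two flow-quantity formulas in Figure 12.9. The rate values and time interval are assumptions, not values from the text:

```python
# Numeric illustration of Q = F * (t2 - t1) versus Q = integral of f dt.
# Constant rate: F = 5 units/min flowing from t1 = 2 to t2 = 10 minutes.
F, t1, t2 = 5.0, 2.0, 10.0
q_constant = F * (t2 - t1)   # 40.0 units

# Changing rate: assume f(t) = 0.5 * t and integrate numerically by the
# midpoint rule over the same interval (exact integral is 24.0 units).
def f(t):
    return 0.5 * t

n = 10_000
dt = (t2 - t1) / n
q_changing = sum(f(t1 + (i + 0.5) * dt) for i in range(n)) * dt
```

The midpoint rule is exact for a linear rate, so the numeric result matches the analytic integral 0.25 × (10² − 2²) = 24 up to floating-point error.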
12.7 Summary
In this chapter we focused on the issues and techniques for modeling manufacturing systems. Terminology common to manufacturing systems was presented and design issues were discussed. Different applications of simulation in manufacturing were described with examples of each. Issues related to modeling each type of system were explained and suggestions offered on how system elements for each system type might be represented in a model.
References
Askin, Ronald G., and Charles R. Standridge. Modeling and Analysis of
Manufacturing Systems. New York: John Wiley & Sons, 1993.
Berdine, Robert A. “FMS: Fumbled Manufacturing Startups?” Manufacturing Engineering, July 1993, p. 104.
Bevans, J. P. “First, Choose an FMS Simulator.” American Machinist, May 1982,
pp. 144–45.
Blackburn, J., and R. Millen. “Perspectives on Flexibility in Manufacturing: Hardware
versus Software.” In Modelling and Design of Flexible Manufacturing Systems, ed.
Andrew Kusiak, pp. 99–109. Amsterdam: Elsevier, 1986.
C H A P T E R
13 MODELING
MATERIAL HANDLING
SYSTEMS
“Small changes can produce big results, but the areas of highest leverage are
often the least obvious.”
—Peter Senge
13.1 Introduction
Material handling systems utilize resources to move entities from one location to
another. While material handling systems are not uncommon in service systems,
they are found mainly in manufacturing systems. Apple (1977) notes that
material handling can account for up to 80 percent of production activity. On
average, 50 percent of a company’s operation costs are material handling costs
(Meyers 1993). Given the impact of material handling on productivity and
operation costs, the importance of making the right material handling decisions
should be clear.
This chapter examines simulation techniques for modeling material handling systems. The material handling system is often one of the most complicated, yet in many instances the most important, elements of a simulation model. Conveyor systems and automatic guided vehicle systems often constitute the backbone of the material flow process. Because a basic knowledge of material handling technologies and decision variables is essential to modeling material handling systems, we will briefly describe the operating characteristics of each type of material handling system.
336 Part I Study Chapters
are traditionally classified into one of the following categories (Tompkins et al.
1996):
1. Conveyors.
2. Industrial vehicles.
3. Automated storage/retrieval systems.
4. Carousels.
5. Automatic guided vehicle systems.
6. Cranes and hoists.
7. Robots.
Missing from this list is “hand carrying,” which still is practiced widely if for no
other purpose than to load and unload machines.
13.4 Conveyors
A conveyor is a track, rail, chain, belt, or some other device that provides continuous movement of loads over a fixed path. Conveyors are generally used for high-volume movement over short to medium distances. Some overhead or towline systems move material over longer distances. Overhead systems most often move parts individually on carriers, especially if the parts are large. Floor-mounted conveyors usually move multiple items at a time in a box or container. Conveyor speeds generally range from 20 to 80 feet per minute, with high-speed sortation conveyors reaching speeds of up to 500 fpm in general merchandising operations.
Conveyors may be either gravity or powered conveyors. Gravity conveyors
are easily modeled as queues because that is their principal function. The more
challenging conveyors to model are powered conveyors, which come in a
variety of types.
Belt Conveyors. Belt conveyors have a circulating belt on which loads are carried. Because loads move on a common span of fabric, if the conveyor stops, all loads must stop. Belt conveyors are often used for simple load transport.
Roller Conveyors. Some roller conveyors utilize unpowered (“dead”) rollers: the bed is placed on a grade so that gravity acting on the load turns the rollers. Many roller conveyors, however, utilize “live” or powered rollers to convey loads. Roller conveyors are often used for simple load accumulation.
Chain Conveyors. Chain conveyors are much like belt conveyors in that one or more circulating chains convey loads. Chain conveyors are sometimes used in combination with roller conveyors to provide right-angle transfers for heavy loads. A pop-up section allows the chains running in one direction to “comb” through the rollers running in a perpendicular direction.
suited for applications where precise handling is unimportant. Tow carts are relatively inexpensive compared to powered vehicles; consequently, many of them can be added to a system to increase throughput and be used for accumulation.
An underfloor towline uses a tow chain in a trough under the floor. The chain moves continuously, and cart movement is controlled by extending a drive pin from the cart down into the chain. At specific points along the guideway, computer-operated stopping mechanisms raise the drive pins to halt cart movement. One advantage of this system is that it can provide some automatic buffering with stationary carts along the track.
Towline systems operate much like power-and-free systems, and, in fact,
some towline systems are simply power-and-free systems that have been
inverted. By using the floor to support the weight, heavier loads can be
transported.
Load Transport. A conveyor spans the distance of travel along which loads move continuously, while other systems consist of mobile devices that move one or more loads as a batch.
Capacity. The capacity of a conveyor is determined by its speed and load spacing rather than by a stated capacity. Specifically, it is a function of the minimum allowable interload spacing (which is the length of a queue position in the case of accumulation conveyors) and the length of the conveyor. In practice, however, this capacity may never be reached because the conveyor speed may be so fast that loads are unable to transfer onto the conveyor at the minimum allowable spacing. Furthermore, intentional control may be imposed on the number of loads that are permitted on a particular conveyor section.
Entity Pickup and Delivery. Conveyors usually don’t pick up and drop off
loads, as in the case of lift trucks, pallet jacks, or hand carrying, but rather loads
are typically placed onto and removed from conveyors. Putting a load onto an
input spur of a branching conveyor and identifying the load to the system so it
can enter the conveyor system is commonly referred to as load induction (the
spur is called an induction conveyor).
Random Load Spacing. Random load spacing permits parts to be placed at any distance from another load on the same conveyor. A powered roller conveyor is a common example of an accumulation conveyor with random load spacing. Powered conveyors typically provide noncontact accumulation, while gravity conveyors provide contact accumulation (that is, the loads accumulate against each other).
locations, making the cycle time extremely difficult to calculate. The problem is further compounded by mixed single (pickup and store, or retrieve and deposit) and dual (pickup, storage, retrieval, and deposit) cycles. Activity zoning, in which items are stored in assigned zones based on frequency of use, also complicates cycle time calculations. The easiest way to accurately determine the cycle time for an AS/RS is by using a computer to enumerate the possible movements of the S/R machine from the pickup and delivery (P&D) stand to every rack or bin location. This produces an empirical distribution for the single cycle time. For dual cycle times, an intermediate cycle time—the time to go from any location to any other location—must be determined. For a rack 10 tiers high and 40 bays long, this can mean 400 × 400, or 160,000, calculations! Because of the number of calculations involved, a large random sample is sometimes used instead to develop the distribution.
Most suppliers of automated storage/retrieval systems have computer programs that can calculate cycle times for a defined configuration in a matter of minutes.
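The enumeration approach described above is straightforward to sketch. The following Python fragment assumes (hypothetically) that the S/R machine travels in both axes simultaneously, so one-way travel time is the larger of the two axis times; the axis speeds are also assumptions:

```python
# Sketch of enumerating single-cycle times for a 10-tier x 40-bay rack.
# Assumption: simultaneous horizontal and vertical travel, so one-way time
# is the maximum (Chebyshev) of the two axis times. Speeds are illustrative.
TIERS, BAYS = 10, 40
V_HORIZ = 4.0   # bays traversed per second (assumed)
V_VERT = 1.0    # tiers traversed per second (assumed)

def travel_time(bay, tier):
    """One-way time from the P&D stand (bay 0, tier 0) to a location."""
    return max(bay / V_HORIZ, tier / V_VERT)

# Empirical distribution of single-cycle (out-and-back) times, one entry
# per storage location:
single_cycles = [2 * travel_time(b, t)
                 for b in range(BAYS) for t in range(TIERS)]
avg_single = sum(single_cycles) / len(single_cycles)

# Dual cycles require location-to-location times: 400 x 400 = 160,000 pairs,
# which is why sampling is sometimes used instead of full enumeration.
n_pairs = (TIERS * BAYS) ** 2
```

The 400 single-cycle values form the empirical distribution the text refers to; a simulation model can then draw cycle times from it rather than recomputing travel for every transaction.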
Where picking takes place is an important issue in achieving the highest productivity from both the AS/RS and the picker. Both are expensive resources, and it is undesirable to have either one waiting on the other.
13.7 Carousels
One class of storage and retrieval systems is a carousel. Carousels are essentially
moving racks that bring the material to the retriever (operator or robot) rather
than sending a retriever to the rack location.
In zone blocking, the guide path is divided into various zones (segments)
that allow no more than one vehicle at a time. Zones can be set up using a
variety of different sensing and communication techniques. When one vehicle
enters a zone, other vehicles are prevented from entering until the current
vehicle occupying the zone leaves. Once the vehicle leaves, any vehicle waiting
for access to the freed zone can resume travel.
AGV Selection Rules. When a load is ready to be moved by an AGV and more
than one AGV is available for executing the move, a decision must be made as
to which vehicle to use. The most common selection rule is a “closest rule” in
which the nearest available vehicle is selected. This rule is intended to minimize
empty vehicle travel time as well as to minimize the part waiting time.
Other, less frequently used vehicle selection rules include longest idle vehicle and least utilized vehicle.
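The “closest rule” is a simple minimization over the idle fleet. A hedged sketch in Python (not ProModel logic); for illustration, guide-path distance is simplified to the absolute difference between positions on a line:

```python
# Sketch of the "closest rule" for choosing among idle AGVs. Positions are
# simplified to points on a line; a real system would use guide-path
# (shortest-route) distance instead of a plain absolute difference.

def closest_vehicle(idle_vehicles, pickup_pos):
    """Return the idle vehicle nearest the pickup point."""
    return min(idle_vehicles, key=lambda v: abs(v["pos"] - pickup_pos))

idle = [{"id": "AGV1", "pos": 10.0},
        {"id": "AGV2", "pos": 42.0},
        {"id": "AGV3", "pos": 25.0}]
chosen = closest_vehicle(idle, 30.0)   # AGV3 at 25.0 is nearest to 30.0
```

The longest-idle and least-utilized rules mentioned above would simply substitute a different key function (idle time or cumulative busy time) in the same minimization.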
Work Search Rules. If a vehicle becomes available and two or more parts are
waiting to be moved, a decision must be made as to which part should be moved
first. A number of rules are used to make this decision, each of which can be
effective depending on the production objective. Common rules used for dispatching an available vehicle include
• Longest-waiting load.
• Closest waiting load.
• Highest-priority load.
• Most loads waiting at a location.
Vehicle Parking Rules. If a transporter delivers a part and no other parts are
waiting for pickup, a decision must be made relative to the deployment of the
transporter. For example, the vehicle can remain where it is, or it can be sent to a
more strategic location where it is likely to be needed next. If several transport
Chapter 13 Modeling Material Handling Systems 351
vehicles are idle, it may be desirable to have a prioritized list for a vehicle to follow for alternative parking preferences.
• Response time.
• Percentage of time blocked by another crane.
Decision variables that become the basis for experimentation include
• Work search rules.
• Park search rules.
• Multiple-crane priority rules.
Typical questions addressed in a simulation model involving cranes include
• Which task assignment rules maximize crane utilization?
• What idle crane parking strategy minimizes response time?
• How much time are cranes blocked in a multicrane system?
13.10 Robots
Robots are programmable, multifunctional manipulators used for handling material or manipulating a tool such as a welder to process material. Robots are often
classified by the type of coordinate system on which they are based: cylindrical, Cartesian, or revolute. The choice of robot coordinate system depends on the application. Cylindrical or polar coordinate robots are generally more appropriate for machine loading. Cartesian coordinate robots are easier to equip with tactile sensors for assembly work. Revolute or anthropomorphic coordinate robots have the most degrees of freedom (freedom of movement) and are especially suited for use as a processing tool, such as for welding or painting. Because Cartesian or gantry robots can be modeled easily as cranes, our discussion will focus on cylindrical and revolute robots. When used for handling, cylindrical or revolute robots generally handle a medium level of movement activity over very short distances, usually to perform pick-and-place or load/unload functions. Robots generally move parts individually rather than in a consolidated load.
One of the applications of simulation is in designing the cell control logic for a robotic work cell. A robotic work cell may be a machining, assembly, inspection, or combination cell. Robotic cells are characterized by a robot with 3 to 5 degrees of freedom surrounded by workstations. The cell is fed parts by an input conveyor or other accumulation device, and parts exit from the cell on a similar device. Each workstation usually has one or more buffer positions to which parts are brought if the workstation is busy. Like all cellular manufacturing, a robotic cell usually handles more than one part type, and each part type may not have to be routed through the same sequence of workstations. In addition to part handling, the robot may be required to handle tooling or fixtures.
13.11 Summary
Material handling systems can be one of the most difficult elements to model using simulation simply because of their sheer complexity. At the same time, the material handling system is often the critical element in the smooth operation of a manufacturing or warehousing system. The material handling system should be designed first using estimates of resource requirements and operating parameters (speed, move capacity, and so forth). Simulation can then help verify design decisions and fine-tune the design.
In modeling material handling systems, it is advisable to simplify wherever possible so models don’t become too complex. A major challenge in modeling conveyor systems comes when multiple merging and branching occur. A challenging issue in modeling discrete part movement devices, such as AGVs, is how to manage their deployment in order to get maximum utilization and meet production goals.
References
Apple, J. M. Plant Layout and Material Handling. 3rd ed. New York: Ronald Press, 1977.
Askin, Ronald G., and C. R. Standridge. Modeling and Analysis of Manufacturing
Systems.
New York: John Wiley & Sons, 1993.
“Automatic Monorail Systems.” Material Handling Engineering, May 1988, p. 95.
Bozer, Y. A., and J. A. White. “Travel-Time Models for Automated Storage/Retrieval
Systems.” IIE Transactions 16, no. 4 (1984), pp. 329–38.
Fitzgerald, K. R. “How to Estimate the Number of AGVs You Need.” Modern
Materials Handling, October 1985, p. 79.
Henriksen, J., and T. Schriber. “Simplified Approaches to Modeling Accumulating and Non-Accumulating Conveyor Systems.” In Proceedings of the 1986 Winter Simulation Conference, ed. J. Wilson, J. Henriksen, and S. Roberts. Piscataway, NJ: Institute of Electrical and Electronics Engineers, 1986.
Meyers, Fred E. Plant Layout and Material Handling. Englewood Cliffs, NJ: Regents/
Prentice Hall, 1993.
Pritsker, A. A. B. Introduction to Simulation and SLAM II. West Lafayette, IN: Systems Publishing Corporation, 1986, p. 600.
Tompkins, J. A.; J. A. White; Y. A. Bozer; E. H. Frazelle; J. M. A. Tanchoco; and
J. Trevino. Facilities Planning. 2nd ed. New York: John Wiley & Sons, 1996.
Zisk, B. “Flexibility Is Key to Automated Material Transport System for Manufacturing
Cells.” Industrial Engineering, November 1983, p. 60.
C H A P T E R
14 MODELING SERVICE
SYSTEMS
“No matter which line you move to, the other line always moves faster.”
—Unknown
14.1 Introduction
A service system is a processing system in which one or more services are provided to customers. Entities (customers, patients, paperwork) are routed through a series of processing areas (check-in, order, service, payment) where resources (service agents, doctors, cashiers) provide some service. Service systems exhibit unique characteristics that are not found in manufacturing systems. Sasser, Olsen, and Wyckoff (1978) identify four distinct characteristics of services that distinguish them from products that are manufactured:
1. Services are intangible; they are not things.
2. Services are perishable; they cannot be inventoried.
3. Services provide heterogeneous output; output is varied.
4. Services involve simultaneous production and consumption; the service
is produced and used at the same time.
These characteristics pose great challenges for service system design and management, particularly in the areas of process design and staffing. Having discussed general modeling procedures common to both manufacturing and service system simulation in Chapter 7, and specific modeling procedures unique to manufacturing systems in Chapter 12, in this chapter we discuss design and operating considerations that are more specific to service systems. A description is given of major classes of service systems. To provide an idea of how simulation might be performed in a service industry, a call center simulation example is presented.
358 Part I Study Chapters
powerful strategic and competitive weapon. Here are some typical internal performance measures that can be evaluated using simulation:
• Service time.
• Waiting time.
• Queue lengths.
• Resource utilization.
• Service level (the percentage of customers who can be promptly serviced,
without any waiting).
• Abandonment rate (the percentage of impatient customers who leave the
system).
Just taking a customer order, moving it through the plant, distributing these requirements out to the manufacturing floor—that activity alone has thirty sub-process
steps to it. Accounts receivable has over twenty process steps. Information
processing is a whole discipline in itself, with many challenging processes
integrated into a single total activity. Obviously, we do manage some very complex
processes separate from the manufacturing floor itself.
This entire realm of support processes presents a major area of potential application for simulation. Similar to the problem of dealing with excess inventory in
manufacturing systems, customers, paperwork, and information often sit idle in
service systems while waiting to be processed. In fact, the total waiting time for
entities in service processes often exceeds 90 percent of the total flow time.
The types of questions that simulation helps answer in service systems can
be categorized as being either design related or management related. Here are
some
Chapter 14 Modeling Service Systems 361
service force, or the servicing policies and procedures can be modified to run
additional experiments.
usually have both front-room and back-room activities, with total service being provided in a matter of minutes. Customization is done by selecting from a menu of options previously defined by the provider. Waiting time and service time are two primary factors in selecting the provider. Convenience of location is another important consideration. Customer commitment to the provider is low because there are usually alternative providers just as conveniently located.
Examples include banks (branch operations), restaurants, copy centers, barbers, check-in counters of airlines, hotels, and car rental agencies.
process. For large items such as furniture or appliances, the customer may have
to order and pay for the merchandise first. The delivery of the product may
take place later.
Examples include department stores, grocery stores, hardware stores, and
convenience stores.
process, then the service time is short. If the service is a technical support
process, then the service time may be long or the call may require a callback
after some research.
Examples include technical support services (hotlines) for software or hardware, mail-order services, and airline and hotel reservations.
14.7.1 Background
Society Bank’s Information Technology and Operations (ITO) group offers a “help desk” service to customers of the ITO function. This service is offered to both internal and external customers, handling over 12,000 calls per month. The client services help desk provides technical support and information on a variety of technical topics including resetting passwords, ordering PC equipment, requesting phone installation, ordering extra copies of internal reports, and reporting mainframe and network problems. The help desk acts as the primary source of communication between ITO and its customers. It interacts with authority groups within ITO by providing work and support when requested by a customer.
The old client services help desk process consisted of (1) a mainframe help
desk, (2) a phone/local area network help desk, and (3) a PC help desk. Each of
the three operated separately with separate phone numbers, operators, and facilities. All calls were received by their respective help desk operators, who
manually logged all information about the problem and the customer, and then
proceeded to pass the problem on to an authority group or expert for resolution.
Because of acquisitions, the increased use of information technologies, and
the passing of time, Society’s help desk process had become fragmented and
layered with bureaucracy. This made the help desk a good candidate for a
process redesign. It was determined that the current operation did not have a set
of clearly defined goals, other than to provide a help desk service. The
organizational boundaries of the current process were often obscured by the fact
that much of the different help desks’ work overlapped and was consistently
being handed off. There were no process per- formance measures in the old
process, only measures of call volume. A proposal was made to consolidate the
help desk functions. The proposal also called for the intro- duction of automation
to enhance the speed and accuracy of the services.
call level. Level 1 calls are resolved immediately by the help desk, Level 1A calls are resolved later by the help desk, and Level 2 calls are handed off to an authority group for resolution.
Historically, calls averaged 2.5 minutes, lasting anywhere from 30 seconds to 25 minutes. Periodically, follow-up work is required after calls, ranging from 1 to 10 minutes. Overall, the help desk service abandonment rate was 4 to 12 percent (as measured by the percentage of calls abandoned), depending on staffing levels.
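A help desk of this kind can be caricatured as a multi-server queue with impatient callers. The following is a deliberately simplified Python sketch, not the model built in the study: only the 2.5-minute average call time comes from the text, while the daily call volume, patience limit, shift length, and exponential distributions are all assumptions for illustration:

```python
# Highly simplified single-queue help desk sketch (not the study's model):
# exponential interarrivals and service times, and a random patience limit
# after which a waiting caller abandons. Only the 2.5-minute mean call time
# is from the text; every other parameter is an assumption.
import random

def simulate_day(n_operators, calls_per_day=650, mean_call_min=2.5,
                 patience_min=3.0, day_min=480.0, seed=1):
    rng = random.Random(seed)
    t = 0.0
    free_at = [0.0] * n_operators   # when each operator next becomes free
    abandoned = served = 0
    while t < day_min:
        t += rng.expovariate(calls_per_day / day_min)   # next call arrives
        op = min(range(n_operators), key=lambda i: free_at[i])
        wait = max(0.0, free_at[op] - t)
        if wait > rng.expovariate(1.0 / patience_min):
            abandoned += 1          # caller hangs up before being served
        else:
            served += 1
            free_at[op] = t + wait + rng.expovariate(1.0 / mean_call_min)
    return abandoned / (abandoned + served)   # abandonment rate
```

Running this for a range of operator counts reproduces the qualitative behavior discussed below: the abandonment rate falls steeply at first as operators are added, then flattens out.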
The help desk process was broken down into its individual work steps and
owners of each work step were identified. Then a flowchart that described the
process was developed (Figure 14.1). From the flowchart, a computer simulation
model was developed of the old operation, which was validated by comparing
actual performance of the help desk with that of the simulation’s output. During
the 10-day test period, the simulation model produced results consistent with
those of the actual performance. The user of the model was able to define such
model parameters as daily call volume and staffing levels through the use of the
model’s Interact Box, which provided sensitivity analysis.
Joint requirements planning (JRP) sessions allowed the project team to collect information about likes, dislikes, needs, and improvement suggestions from users, customers, and executives. This information clarified the target goals of the process along with its operational scope. Suggestions for improving the help desk process were collected and prioritized from the JRP sessions. Internal benchmarking was also performed using Society’s customer service help desk as a reference for performance and operational ideas.
FIGURE 14.1 Flow diagram of client services (customer problem, query, or change; client notification and escalation; customer self-service; automated resolution).
will change when the average daily call volume or the number of operators varies. The importance of this graph is realized when one notices that it becomes increasingly harder to lower the abandonment rate once the number of operators increases above seven. Above this point, the help desk can easily handle substantial increases in average daily call volume while maintaining approximately the same abandonment rate.
After modeling and analyzing the current process, the project team
evaluated the following operational alternatives using the simulation model:
• The option to select from a number of different shift schedules so that
staffing can easily be varied from current levels.
• The introduction of the automated voice response unit and its ability to
both receive and place calls automatically.
• The ability of the automated voice response unit to handle device resets,
password resets, and system inquiries.
• The incorporation of the PC and LAN help desks so that clients with
PC-related problems can have their calls routed directly to an available
expert via the automated voice response unit.
• The ability to change the response time of the ASIM problem-logging system.
Additionally, two alternative staffing schedules were proposed. The alternative schedules attempt to better match the times at which operators are available for answering calls to the times the calls are arriving. The two alternative schedules reduce effort hours by up to 8 percent while maintaining current service levels.
Additional results related to the Alternative Operations simulation model were
• The automated voice response unit will permit approximately 75 percent
of PC-related calls to be answered immediately by a PC expert directly.
• Using Figure 14.2, the automated voice response unit’s ability to aid in
reducing the abandonment rate can be ascertained simply by estimating
the reduction in the number of calls routed to help desk operators and
finding the appropriate point on the chart for a given number of operators.
• Improving the response time of ASIM will noticeably affect the operation
when staffing levels are low and call volumes are high. For example, with
five operators on staff and average call volume of 650 calls per day, a
25 percent improvement in the response time of ASIM resulted in a
reduction in the abandonment rate of approximately 2 percent.
14.7.3 Results
The nonlinear relationship between the abandonment rate and the number of
operators on duty (see Figure 14.2) indicates the difficulty in greatly improving
performance once the abandonment rate drops below 5 percent. Results
generated from the validated simulation model compare the impact of the
proposed staffing changes with that of the current staffing levels. In addition,
the effect of the automated voice response unit can be predicted before
implementation so that the best alternative can be identified.
372 Part I Study Chapters
The introduction of simulation to help desk operations has shown that it can
be a powerful and effective management tool that should be utilized to better
achieve operational goals and to understand the impact of changes. As the
automation project continues to be implemented, the simulation model can greatly
aid management and the project team members by allowing them to intelligently
predict how each new phase will affect the help desk.
14.8 Summary
Service systems provide a unique challenge in simulation modeling, largely due
to the human element involved. Service systems have a high human content in
the process. The customer is often involved in the process and, in many cases, is
the actual entity being processed. In this chapter we discussed the aspects that
should be considered when modeling service systems and suggested ways
in which different situations might be modeled. We also discussed the
different types of service systems and addressed the modeling issues associated
with each. The example case study showed how fluid service systems can be.
References
Aran, M. M., and K. Kang. “Design of a Fast Food Restaurant Simulation Model.”
Simulation. Norcross, GA: Industrial Engineering and Management Press, 1987.
Collier, D. A. The Service/Quality Solution. Milwaukee: ASQC Quality Press, 1994.
Chapter 14 Modeling Service Systems 373
L A B
1 INTRODUCTION TO
PROMODEL 6.0
Imagination is the beginning of creation. You imagine what you desire, you will
what you imagine and at last you create what you will.
—George Bernard Shaw
378 Part II Labs
ProModel’s opening screen (student package) is shown in Figure L1.1. There are
six items (buttons) in the opening menu:
1. Open a model: Allows models created earlier to be opened.
2. Install model package: Copies to the specified destination directory all of
the files contained in a model package file.
3. Run demo model: Allows one of several example models packed
with the software to be run.
4. www.promodel.com: Allows the user to connect to the PROMODEL
Corporation home page on the World Wide Web.
5. SimRunner: This new addition to the ProModel product line evaluates
your existing simulation models and performs tests to find better ways
to achieve desired results. A design of experiment methodology is
used in SimRunner. For a detailed description of SimRunner, please
refer to Lab 11.
6. Stat::Fit: This module allows continuous and/or discrete distributions
to be fitted to a set of input data automatically. For a detailed discussion
on the modeling of input data distribution, please refer to Lab 6.
Problem Statement
At a call center for California Cellular, customer service associates are
employed to respond to customer calls and complaints. On average, 10
customers call per hour. The time between two calls is exponentially
distributed with a mean of six minutes. Responding to each call takes a time
that varies from a low of 2 minutes to a high of 10 minutes, with a mean of
6 minutes. If the company had a policy that
a. The average time to respond to a customer call should not be any more
than six minutes, how many customer service associates should be
employed by the company?
b. The maximum number of calls waiting should be no more than five,
how many customer service associates should be employed by the
company?
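Before building the ProModel version, a rough Python sketch can bracket the answer to both questions. This is only an illustration, not ProModel: the triangular shape used for service times is an assumption (the problem gives just the 2-minute low, 10-minute high, and 6-minute mean), and the numbers it produces are estimates for one random-number stream.

```python
import random

def simulate_call_center(servers, n_calls=10000, seed=42):
    """Rough multi-server queue simulation of the California Cellular desk.
    Interarrival times: exponential, mean 6 minutes. Service times:
    triangular(low=2, high=10, mode=6), an assumed shape whose mean is
    6 minutes as the problem states. Returns the average wait in queue."""
    rng = random.Random(seed)
    free_at = [0.0] * servers              # time each associate becomes free
    clock = 0.0
    total_wait = 0.0
    for _ in range(n_calls):
        clock += rng.expovariate(1 / 6)    # next call arrives
        server = min(range(servers), key=free_at.__getitem__)
        start = max(clock, free_at[server])  # call waits if all are busy
        total_wait += start - clock
        free_at[server] = start + rng.triangular(2, 10, 6)
    return total_wait / n_calls

for c in (1, 2, 3):
    print(c, "associates: avg wait", round(simulate_call_center(c), 2), "min")
```

Running it shows that a single associate cannot keep up (the arrival rate equals the service rate, so utilization is 100 percent and the queue grows without bound), which is why the interesting comparison in the exercise is between two and three associates.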
FIGURE L1.2
California Cellular with one customer service agent.
FIGURE L1.3
Customer waiting time statistics with one customer service agent.
FIGURE L1.4
California Cellular with two customer service agents.
Lab 1 Introduction to ProModel 6.0 381
FIGURE L1.5
Customer waiting time statistics with two customer service agents.
FIGURE L1.6
Number of calls waiting with one customer service agent.
FIGURE L1.7
Number of calls waiting with two customer service agents.
FIGURE L1.8
Graph of number of customers waiting versus simulation run time.
L1.3 Exercises
1. How do you open an existing simulation model?
2. What is SimRunner? How can you use it in your simulation analysis?
3. What does the Stat::Fit package do? Do you need it when building a
simulation model?
4. At the most, how many locations, entities, and types of resources can be
modeled using the student version of ProModel?
5. Open the Manufacturing Cost model from the Demos subdirectory and
run the model three different times to find out whether one, two, or three
operators are optimal for minimizing the cost per part (the cost per part is
displayed on the scoreboard during the simulation). Selecting Model
Parameters, you can change the number of operators from the Simulation
menu by double-clicking on the first parameter (number of operators)
and entering 1, 2, or 3. Then select Run from the Model Parameters
dialog. Each simulation will run for 15 hours.
6. Without knowing how the model was constructed, can you give a
rational explanation for the number of operators that resulted in the least
cost?
7. Go to the ProModel website on the Internet (www.promodel.com). What
are some of the successful real-world applications of the ProModel
software? Is ProModel applied only to manufacturing problems?
Harrell−Ghosh−Bowden: Simulation Using ProModel, Second Edition. II. Labs. 2. ProModel World View, Menu, and Tutorial. © The McGraw−Hill Companies
L A B
2 PROMODEL WORLD
VIEW, MENU,
AND TUTORIAL
I only wish that ordinary people had an unlimited capacity for doing harm; then
they might have an unlimited power for doing good.
—Socrates (469–399 B.C.)
In this lab, Section L2.1 introduces you to various commands in the ProModel
menu. In Section L2.2 we discuss the basic modeling elements in a ProModel
model file. Section L2.3 discusses some of the innovative features of ProModel.
Section L2.4 refers to a short tutorial on ProModel in a PowerPoint presentation
format. Some of the material describing the use and features of ProModel
has been taken from the ProModel User Guide as well as ProModel’s online help
system.
FIGURE L2.1
The title and the
menu bars.
FIGURE L2.2
The File
menu.
The menu bar, just below the title bar (Figure L2.1), is used to call up
menus, or lists of tasks. The menu bar of the ProModel screen displays the
commands you use to work with ProModel. Some of the items in the menu, like
File, Edit, View, Tools, Window, and Help, are common to most Windows
applications. Others such as Build, Simulation, and Output provide commands
specific to programming in ProModel. In the following sections we describe
all the menu commands and the tasks within each menu.
vary according to the currently selected window. The Edit menu is active only
when a model file is open.
Basic Modules
• Locations
• Entities
• Processing
• Arrivals
Optional Modules
• Path Networks
• Resources
• Shifts
• Cost
• Attributes
• Variables
• Arrays
• Macros
• Subroutines
• Arrival Cycles
• Table Functions
• User Distributions
• External Files
• Streams
FIGURE L2.5
General Information dialog box.
In addition, two more modules are available in the Build menu: General
Information and Background Graphics.
General Information. This dialog box (Figure L2.5) allows the user to specify
the name of the model, the default time unit, the distance unit, and the graphic
library to be used. The model’s initialization and termination logic can also be
specified using this dialog box. A Notes window allows the user to save
information such as the analyst’s name, the revision date, any assumptions
made about the model, and so forth. These notes can also be displayed at the
beginning of a simulation run.
FIGURE L2.6
Background Graphics dialog box.
• Functions.
• Customer support—telephone, pager, fax, e-mail, online file transfer,
and so on.
To quickly learn what is new in ProModel Version 6.0, go to the Help →
Index menu and type “new features” for a description of the latest features of the
product.
L2.2.1 Locations
Locations represent fixed places in the system where entities are routed for
processing, delay, storage, decision making, or some other activity. We need some
type of receiving locations to hold incoming entities. We also need processing
locations where entities have value added to them. To build locations:
a. Left-click on the desired location icon in the Graphics toolbox. Left-click
in the layout window where you want the location to appear.
b. A record is automatically created for the location in the Locations
edit table (Figure L2.16).
c. The name, units, capacity, and so on can now be changed by clicking in
the appropriate box and typing in the desired changes. Note that in Lab 3
we will actually fill in this information for an example model.
L2.2.2 Entities
Anything that a model can process is called an entity. Some examples are parts
or widgets in a factory, patients in a hospital, customers in a bank or a grocery
store, and travelers calling in for airline reservations.
Lab 2 ProModel World View, Menu, and Tutorial 391
FIGURE L2.16
The Locations edit screen.
To build entities:
a. Left-click on the desired entity graphic in the Entity Graphics toolbox.
b. A record will automatically be created in the Entities edit
table (Figure L2.17).
c. The name can then be changed by moving the slide bar in the toolbox. Note
that in Lab 3 we will actually fill in this information for an example model.
L2.2.3 Arrivals
The mechanism for defining how entities enter the system is called arrivals.
Entities can arrive singly or in batches. The number of entities arriving at a
time is called the batch size (Qty each). The time between the arrivals of
successive entities is called interarrival time (Frequency). The total number of
batches of arrivals is termed Occurrences. The batch size, time between
successive arrivals, and total number of batches can be either constants or
random (statistical distributions). Also, the first time that the arrival pattern is to
begin is termed First Time.
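The four arrival fields map naturally onto a small generator. The sketch below is plain Python rather than ProModel syntax, and it uses a fixed frequency for simplicity (ProModel also allows the batch size, frequency, and number of occurrences to be distributions):

```python
def arrivals(first_time, frequency, qty_each, occurrences):
    """Yield (time, batch_size) pairs the way a ProModel arrival record
    schedules them: the first batch at First Time, then one batch every
    Frequency time units, for Occurrences batches in total."""
    t = first_time
    for _ in range(occurrences):
        yield (t, qty_each)
        t += frequency

# A record with First Time 0, Frequency 5, Qty each 2, Occurrences 3:
print(list(arrivals(0, 5, 2, 3)))   # → [(0, 2), (5, 2), (10, 2)]
```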
To create arrivals:
a. Left-click on the entity name in the toolbox and left-click on the location
where you would like the entities to arrive (Figure L2.18).
b. Enter the various required data about the arrival process. Note that in
Lab 3 we will actually fill in this information for an example model.
FIGURE L2.17
The Entities edit table.
FIGURE L2.18
The Arrivals edit table.
L2.2.4 Processing
Processing describes the operations that take place at a location, such as the
amount of time an entity spends there, the resources it needs to complete
processing, and anything else that happens at the location, including selecting
an entity’s next destination.
FIGURE L2.19
The Process edit
table.
FIGURE L2.20
The Logic Builder tool
menu.
When the Logic Builder is open from a logic window, it remains on the
screen until you click the Close button or close the logic window or table from
which it was invoked. This allows you to enter multiple statements in a logic
window and even move around to other logic windows without having to
constantly close and reopen the Logic Builder. However, the Logic Builder
closes automatically after pasting to a field in a dialog box or edit table or to an
expression field because you must right-click anyway to use the Logic Builder in
another field.
You can move to another logic window or field while the Logic Builder is
still up by right clicking in that field or logic window. The Logic Builder is then
reset with only valid statements and elements for that field or window, and it
will paste the logic you build into that field or window. Some of the commonly
used logic statements available in ProModel are as follows:
• WAIT: Used for delaying an entity for a specified duration at a location,
possibly for processing it.
• STOP: Terminates the current replication and optionally displays a message.
• GROUP: Temporarily consolidates a specified quantity of similar entities
together.
• LOAD: Temporarily attaches a specified quantity of entities to the current
entity.
Button Controls
• Save: Saves a copy of all model data—from the time you start the graphic
display to when you click save—to an Excel spreadsheet.
• Snapshot: Saves a copy of currently displayed, graphed model data to
an Excel spreadsheet.
• Grid: Turns the main panel grid lines on and off.
• Multi-line: Displays a combined graph of panels 1, 2, and 3.
• Refresh: Redraws the graph.
FIGURE L2.21
Dynamic Plots menu.
FIGURE L2.22
Dynamic Plot edit
table.
FIGURE L2.23
Dynamic Plot of the
current value of WIP.
Right-Click Menus
The right-click menu for the graphic display is available for panels 1, 2, and 3,
and the main panel. When you right-click in any of these panels, the right-click
menu appears.
Panels 1, 2, and 3
• Move Up: Places the graph in the main panel.
• Clear Data: Removes the factor and its graph from panel 1, 2, or 3 and
the main panel. If you created a multi-line graph, Clear Data removes
the selected line from the graph and does not disturb the remaining graph
lines.
• Line Color: Allows you to assign a specific line color to the graph.
• Background Color: Allows you to define a specific background color
for panels 1, 2, and 3.
Main Panel
• Clear All Data: Removes all factors and graphs from panels 1, 2, 3,
and the main panel.
• Remove Line 1, 2, 3: Deletes a specific line from the main panel.
• Line Color: Allows you to assign a specific line color to the graph.
• Background Color: Allows you to define a specific background color
for panels 1, 2, and 3.
• Grid Color: Allows you to assign a specific line color to the grid.
L2.3.3 Customize
You can add direct links to applications and files right on your ProModel toolbar.
Create a link to open a spreadsheet, a text document, or your favorite calculator
(Figure L2.24).
To create or modify your Custom Tools menu, select Tools → Customize
from your ProModel menu bar. This will pull up the Custom Tools dialog
window. The Custom Tools dialog window allows you to add, delete, edit, or
rearrange the menu items that appear on the Tools drop-down menu in
ProModel.
FIGURE L2.24
Adding Calculator to the Customized Tools menu.
FIGURE L2.25
The QuickBar task bar.
System Button
Selecting the System button brings up a menu with the following options:
• Auto-Hide: The Auto-Hide feature is used when QuickBar is in the
docked state. If docked, and the Auto-Hide is on, QuickBar will shrink, so
that only a couple of pixels show. When your pointer touches these visible
pixels, the QuickBar will appear again.
• Docking Position: The Docking Position prompt allows you to “dock” the
QuickBar along one side of your screen, or leave it floating.
• Customize: Access the customize screen by clicking the System button,
then choose “Customize. . . .” Once in the Customize dialog, you can add
or remove toolbars, add or remove buttons from the toolbars, show or
hide toolbars, or move buttons up or down on the toolbar.
FIGURE L2.26
Tutorial on ProModel.
6. Resources
7. Processing
8. Arrivals
9. Run simulation
10. View output
L2.5 Exercises
1. Identify the ProModel menu where you will
find the following items:
a. Save As
b. Delete
c. View Trace
d. Shifts
e. Index
f. General Information
g. Options
h. Printer Setup
i. Processing
j. Scenarios
k. Tile
l. Zoom
2. Which of the following is not a valid
ProModel menu or submenu item?
a. AutoBuild
b. What’s This?
c. Merge
d. Merge Documents
e. Snap to Grid
f. Normal
g. Paste
h. Print Preview
i. View Text
3. Some of the following are not valid ProModel
element names. Which ones?
a. Activities
b. Locations
c. Conveyors
d. Queues
e. Shifts
f. Station
g. Server
h. Schedules
i. Arrivals
j. Expressions
k. Variables
l. Create
4. What are some of the valid logic statements used in ProModel?
5. What are some of the differences between the following logic
statements:
a. Wait versus Wait Until.
b. Move versus Move For.
c. Pause versus Stop.
d. View versus Graphic.
e. Split versus Ungroup.
6. Describe the functions of the following items in the ProModel Edit menu:
a. Delete
b. Insert
c. Append
d. Move
e. Move to
f. Copy Record
g. Paste Record
7. Describe the differences between the following items in the ProModel
View menu:
a. Zoom vs. Zoom to Fit Layout
b. Show Grid vs. Snap to Grid
L A B
3 RUNNING A PROMODEL
SIMULATION
As far as the laws of mathematics refer to reality, they are not certain; and as far
as they are certain, they do not refer to reality.
—Albert Einstein
The objective is to simulate the system to determine the expected waiting time
for customers in the queue (the average time customers wait in line for the ATM)
and the expected time in the system (the average time customers wait in the
queue plus the average time it takes them to complete their transaction at the
ATM).
This is the same ATM system simulated by spreadsheet in Chapter 3 but
with a different objective. The Chapter 3 objective was to simulate only the first
25 customers arriving to the system. Now no such restriction has been applied.
This new
objective provides an opportunity for comparing the simulated results with those
computed using queuing theory, which was presented in Section 2.9.3.
Queuing theory allows us to compute the exact values for the expected time
that customers wait in the queue and in the system. Given that queuing theory
can be used to get exact answers, why are we using simulation to estimate the
two expected values? There are two parts to the answer. First, it gives us an
opportunity to measure the accuracy of simulation by comparing the
simulation output with the exact results produced using queuing theory. Second,
most systems of interest are too complex to be modeled with the mathematical
equations of queuing theory. In those cases, good estimates from simulation
are valuable commodities when faced with expensive decisions.
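For this ATM model the exact M/M/1 values are easy to compute. The sketch below uses the lab's mean service time of 2.4 minutes and a mean interarrival time of 3 minutes; the interarrival value is inferred here from the roughly 80 percent utilization reported later in the lab, so treat it as an assumption rather than a stated input.

```python
def mm1_waits(mean_interarrival, mean_service):
    """Exact expected waits for an M/M/1 queue.
    Returns (Wq, W): expected time in queue and expected time in system."""
    lam = 1.0 / mean_interarrival     # arrival rate (customers per minute)
    mu = 1.0 / mean_service           # service rate (customers per minute)
    rho = lam / mu                    # utilization; must be < 1 for stability
    if rho >= 1:
        raise ValueError("unstable queue: utilization >= 1")
    wq = rho / (mu - lam)             # expected waiting time in queue
    return wq, wq + mean_service      # W = Wq + expected service time

wq, w = mm1_waits(mean_interarrival=3.0, mean_service=2.4)
print(round(wq, 2), round(w, 2))     # → 9.6 12.0
```

These exact values of 9.6 and 12.0 minutes line up closely with the 9.62 and 12.02 minutes the simulation reports later in Lab 4, which is precisely the kind of accuracy check this section describes.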
FIGURE L3.1
ATM simulation in
progress. System
events are animated
and key performance
measures are
dynamically updated.
FIGURE L3.2
The simulation
clock shows 1 hour
and 20 minutes (80
minutes) to process
the first 25 customers
arriving to the system.
The simulation takes
only a second of your
time to process 25
customers.
FIGURE L3.3
ATM simulation on its
15,000th customer and
approaching steady
state.
FIGURE L3.4
ATM simulation after
reaching steady
state.
FIGURE L3.5
ATM simulation at
its end.
collects a multitude of statistics over the course of the simulation. You will learn
this feature in Lab 4.
The simulation required only a minute of your time to process 19,496
customers. In Lab 4, you will begin to see how easy it is to build models using
ProModel as compared to building them with spreadsheets.
L3.2 Exercises
1. The values obtained for average time in queue and average time in
system from the ATM ProModel simulation of the first 25 customers
processed represent a third set of observations that can be combined
with the observations for the same performance measures presented in
Table 3.3 of Section 3.5.3 in Chapter 3 that were derived from two
Lab 3 Running a ProModel Simulation 407
L A B
4 BUILDING YOUR
FIRST MODEL
Knowing is not enough; we must apply. Willing is not enough; we must do.
—Johann von Goethe
In this lab we build our first simulation model using ProModel. In Section L4.1
we describe some of the basic concepts of building your first ProModel
simulation model. Section L4.2 introduces the concept of queue in ProModel.
Section L4.3 lets us build a model with multiple locations and multiple entities.
In Section L4.4 we show how to modify an existing model and add more
locations to it. Finally, in Section L4.5 we show how variability in arrival time
and customer service time affect the performance of the system.
FIGURE L4.1
General Information
for the Fantastic
Dan simulation
model.
FIGURE L4.2
Defining locations Waiting_for_Barber and BarberDan.
FIGURE L4.3
The Graphics panel.
FIGURE L4.4
Define the entity—Customer.
FIGURE L4.5
Process and Routing tables for Fantastic Dan model.
FIGURE L4.6
Process and Routing tables for Fantastic Dan model in text format.
To define the haircut time, click Operation in the Process table. Click the
button with the hammer symbol. A new window named Logic Builder
opens up. Select the command Wait. The ProModel expression Wait causes the
customer (entity) to be delayed for a specified amount of time. This is how
processing times are modeled.
Click Build Expression. In the Logic window, select Distribution Function
(Figure L4.7). In the Distribution Function window, select Uniform distribution.
Click Mean and select 9. Click Half-Range and select 1. Click Return. Click
Paste. Close the Logic Builder window. Close the Operation window.
Finally, the customers leave the barbershop. They are routed to a default
location called EXIT in ProModel. When entities (or customers) are routed to the
EXIT location, they are in effect disposed from the system. All the information
associated with the disposed entity is deleted from the computer’s memory to
conserve space.
Lab 4 Building Your First Model 413
FIGURE L4.7
The Logic
Builder menu.
The distribution functions are built into ProModel and generate random
values based on the specified distribution. Some of the commonly used distribution
functions are shown in Table 4.1.
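Note that ProModel's Uniform takes a mean and a half-range rather than low and high bounds, so the U(9, 1) haircut time samples uniformly on [8, 10] minutes. The equivalence is easy to sketch outside ProModel (standard libraries such as Python's `random` take low/high instead):

```python
import random

def promodel_uniform(rng, mean, half_range):
    """Sample the way ProModel's U(mean, half_range) does: uniformly
    on the interval [mean - half_range, mean + half_range]."""
    return rng.uniform(mean - half_range, mean + half_range)

rng = random.Random(1)
samples = [promodel_uniform(rng, 9, 1) for _ in range(10000)]
print(min(samples) >= 8 and max(samples) <= 10)   # → True
print(round(sum(samples) / len(samples), 1))      # → 9.0
```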
Now we will define the entity arrival process, as in Figure L4.8.
Next we will define some of the simulation options—that is, run time, number
of replications, warm-up time, unit of time, and clock precision (Figure
L4.9). The run time is the number of hours the simulation model will be run.
The number of replications refers to the number of times the simulation model
will be run (each time the model will run for an amount of time specified by run
hours). The
FIGURE L4.8
Customer arrival
table.
FIGURE L4.9
Definition of
simulation run
options.
warm-up time refers to the amount of time to let the simulation model run to
achieve steady-state behavior. Statistics are usually collected after the warm-up
period is over. The run time begins at the end of the warm-up period. The unit of
time used in the model can be seconds, minutes, or hours. The clock precision
refers to the precision in the time unit used to measure all simulation event
timings.
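The bookkeeping implied by these options can be sketched as a loop: each replication runs for the warm-up time plus the run time, statistics observed during warm-up are discarded, and the per-replication averages are then summarized. The sketch below is an illustration of that bookkeeping only, not ProModel's internal implementation; the event times and the observed statistic are stand-in random values.

```python
import random

def run_replications(n_reps, warmup, run_time, seed=7):
    """Collect a statistic only over the post-warm-up window in each
    replication, then average the replication results."""
    results = []
    for rep in range(n_reps):
        rng = random.Random(seed + rep)   # independent stream per replication
        t, total, obs = 0.0, 0.0, 0
        while t < warmup + run_time:
            t += rng.expovariate(1.0)     # stand-in for simulation event times
            if t >= warmup:               # statistics reset at end of warm-up
                total += rng.random()     # stand-in for an observed statistic
                obs += 1
        results.append(total / obs)
    return sum(results) / len(results)

print(round(run_replications(n_reps=5, warmup=100, run_time=480), 2))
```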
Let us select the Run option from the Simulation Options menu (or click
F10). Figure L4.10 shows a screen shot during run time. The button in the
middle of the scroll bar at the top controls the speed of the simulation run. Pull it
right to increase the speed and left to decrease the speed.
After the simulation runs to its completion, the user is prompted, “Do you
want to see the results?” (Figure L4.11). Click Yes. Figures L4.12 and L4.13 are
part of the results that are automatically generated by ProModel in the 3DR
(three-dimensional report) Output Viewer.
FIGURE L4.10
Screen shot at run
time.
FIGURE L4.11
Simulation complete
prompt.
FIGURE L4.12
The 3DR Output
Viewer for the
Fantastic Dan model.
Note that the average time a customer spends waiting for Barber
Dan is 22.95 minutes. The average time spent by a customer in the
barbershop is
32.28 minutes. The utilization of the Barber is 89.15 percent. The number of
customers served in 480 minutes (or 8 hours) is 47. On average 5.875 customers
are served per hour. The maximum number of customers waiting for a haircut is
8, although the average number of customers waiting is only 2.3.
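These outputs are internally consistent with Little's law (L = λW): with 47 customers served in 480 minutes, the arrival rate is about 0.098 customers per minute, and multiplying by the 22.95-minute average wait implies roughly the 2.3 customers reported waiting on average. A quick check:

```python
def littles_law_queue_length(served, run_minutes, avg_wait):
    """L = lambda * W: the average number waiting implied by the
    throughput (served / run_minutes) and the average wait."""
    arrival_rate = served / run_minutes   # customers per minute
    return arrival_rate * avg_wait

L = littles_law_queue_length(served=47, run_minutes=480, avg_wait=22.95)
print(round(L, 2))   # → 2.25
```

The small gap between 2.25 and the reported 2.3 is expected, since the report rounds its statistics and the finite run is only an estimate.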
FIGURE L4.13
Results of the Fantastic Dan simulation model.
FIGURE L4.14
General Information
for the Bank of USA
ATM simulation
model.
FIGURE L4.15
Defining locations ATM_Queue and ATM.
FIGURE L4.16
Click on the Queue
option in the
Conveyor/Queue
options window.
Uncheck the New button on the graphics panel (Figure L4.15) and click
the button marked Aa (fourth icon from the top). Click on the location icon in the
layout panel. The name of the location (ATM) appears on the location icon. Do
the same for the ATM_Queue location.
Define the entity (Figure L4.17) and change its name to ATM_Customer.
Define the processes and the routings (Figures L4.18 and L4.19) the customers
go through at the ATM system. All customers arrive and wait at the location
ATM_Queue. Then they are routed to the location ATM. At this location the
customers deposit or withdraw money or check their balances, which takes an
average of 2.4 minutes, exponentially distributed. Use the step-by-step procedure
detailed in Section L2.2.4 to create the process and routing tables graphically.
To define the service time at the ATM, click Operation in the Process table.
Click the button with the hammer symbol. A new window named Logic
Builder opens up. Select the command Wait. The ProModel expression Wait
causes the ATM customer (entity) to be delayed for a specified amount of time.
This is how processing times are modeled.
FIGURE L4.17
Define the entity—ATM_Customer.
FIGURE L4.18
Process and Routing tables for Bank of USA ATM model.
FIGURE L4.19
Process and Routing tables for Bank of USA ATM model in text format.
FIGURE L4.20
The Logic Builder
menu.
FIGURE L4.21
Customer arrival table.
FIGURE L4.22
Definition of
simulation run
options.
we are going to model 980 hours of operation of the ATM system. The number of
replications refers to the number of times the simulation model will be run (each
time the model will run for an amount of time specified by run hours). The
warm-up time refers to the amount of time to let the simulation model run to
achieve steady-state behavior. Statistics are usually collected after the warm-up
period is over. The run time begins at the end of the warm-up period. For a more
detailed discussion on warm-up time, please refer to Chapter 9, Section 9.6.1 and
Lab 9. The unit of time used in the model can be seconds, minutes, or hours. The
clock precision refers to the precision in the time unit used to measure all
simulation event timings.
Let us select the Run option from the Simulation Options menu (or click
F10). Figure L4.23 shows a screen shot during run time. The button in the
middle of the scroll bar at the top controls the speed of the simulation run. Pull it
right to increase the speed and left to decrease the simulation execution speed.
After the simulation runs to its completion, the user is prompted, “Do you
want to see the results?” (Figure L4.24). Click Yes. Figures L4.25 and L4.26 are
part of the results that are automatically generated by ProModel in the Output
Viewer.
Note that the average time a customer spends waiting in the ATM Queue
is 9.62 minutes. The average time spent by a customer in the ATM system is
12.02 minutes. The utilization of the ATM is 79.52 percent. Also, 20,000
customers are served in 60,265.64 minutes or 19.91 customers per hour. The
maximum number of customers waiting in the ATM Queue is 31, although the
average number of
FIGURE L4.23
Screen shot at run time.
FIGURE L4.24
Simulation complete
prompt.
FIGURE L4.25
The output viewer for
the Bank of USA
ATM model.
FIGURE L4.26
Results of the Bank of USA ATM simulation model.
FIGURE L4.27
General information
for the Poly Furniture
Factory simulation.
FIGURE L4.28
Locations in the Poly Furniture Factory.
FIGURE L4.29
Entities in the Poly Furniture Factory.
FIGURE L4.30
Entity arrivals in the Poly Furniture Factory.
FIGURE L4.31
Processes and routings in the Poly Furniture Factory.
FIGURE L4.32
Simulation options in
the Poly Furniture
Factory.
The time to move material between processes is modeled in the Move Logic
field of the Routing table. Four choices of constructs are available in the Move
Logic field:
• MOVE—to move the entity to the end of a queue or conveyor.
• MOVE FOR—to move the entity to the next location in a specific time.
• MOVE ON—to move the entity to the next location using a specific path
network.
• MOVE WITH—to move the entity to the next location using a specific
resource (forklift, crane).
Define some of the simulation options: the simulation run time (in hours),
the number of replications, the warm-up time (in hours), and the clock
precision (Figure L4.32).
Now we go on to the Simulation menu. Select Save & Run. This will save
the model we have built so far, compile it, and also run it. When the simulation
model finishes running, we will be asked if we would like to view the results.
Select Yes. A sample of the results is shown in Figure L4.33.
FIGURE L4.33
Sample of the results of the simulation run for the Poly Furniture Factory.
FIGURE L4.34
Locations at the Poly Furniture Factory with oven.
FIGURE L4.35
Simulation model layout of the Poly Furniture Factory.
FIGURE L4.36
Processes and routings at the Poly Furniture Factory.
The contents of a location can be displayed in one of the following two
alternative ways:
a. To show the contents of a location as a counter, first deselect the New
option from the Graphics toolbar. Left-click on the command button 00
in the Graphics toolbar (left column, top). Finally, left-click on the
location selected (Oven). The location counter will appear in the
Layout window next to the location Oven (Figure L4.35).
b. To show the contents of a location (Paint Booth) as a gauge, first deselect
the New option from the Graphics toolbar. Left-click on the second
command button from the top in the left column in the Graphics toolbar.
The gauge icon will appear in the Layout window next to the location
Paint Booth (Figure L4.35). The fill color and fill direction of the gauge
can now be changed if needed.
FIGURE L4.37
Customer arrival for haircut.
FIGURE L4.38
Processing of customers at the barbershop.
L4.6 Blocking
With respect to the way statistics are gathered, here are the rules that are used in
ProModel (see the ProModel Users Manual, p. 636):
1. Average <time> in system: The average total time the entity spends
in the system, from the time it arrives till it exits the system.
2. Average <time> in operation: The average time the entity spends
in processing at a location (due to a WAIT statement) or traveling on
a conveyor or queue.
3. Average <time> in transit: The average time the entity spends
traveling to the next location, either in or out of a queue or with a
resource. The move time in a queue is decided by the length of the
queue (defined in the queue dialog, Figure L4.16) and the speed of the
entity (defined in the entity dialog, Figure L4.4 or L4.17).
4. Average < time > wait for resource, etc.: The average time the entity
spends waiting for a resource or another entity to join, combine, or the
like.
5. Average < time > blocked: The average time the entity spends waiting
for a destination location to become available. Any entities held up
behind another blocked entity are actually waiting on the blocked entity,
so they are reported as “time waiting for resource, etc.”
Example
At the SoCal Machine Shop (Figure L4.39), gear blanks arriving at the shop wait in a queue (Incoming_Q) for processing on a turning center and a mill, in that
FIGURE L4.39
The Layout of the SoCal Machine Shop.
order. A total of 100 gear blanks arrive at the rate of one every eight minutes. The processing times on the turning center and the mill are eight minutes and nine minutes, respectively. Develop a simulation model and run it.
To figure out the time the “gear blanks” are blocked in the machine shop,
waiting for a processing location, we have entered “Move for 0” in the operation
logic (Figure L4.40) of the Incoming_Q. Also, the decision rule for the queue
has been changed to “No Queuing” in place of FIFO (Figure L4.41). This way
all the entities waiting in the queue for the turning center to be freed up are
reported as blocked. When you specify FIFO as the queuing rule for a location,
only the lead entity is ever blocked (other entities in the location are waiting for
the lead entity and are reported as “wait for resource, etc.”).
FIGURE L4.40
Process and Routing tables for SoCal Machine Shop.
FIGURE L4.41
Decision rules for Incoming_Q.
FIGURE L4.42
Entity activity statistics at the SoCal Machine Shop.
FIGURE L4.43
Entity activity statistics at the SoCal Machine Shop with two mills.
From the entity activity statistics (Figure L4.42) in the output report we can see that the entities spend an average of 66.5 minutes in the system, of which 49.5 minutes are spent blocked (waiting for another processing location) and 17 minutes are spent in operation (eight minutes at the turning center and nine minutes at the mill). Blocking as a percentage of the average time in system is 74.44 percent. The utilizations of the turning center and the mill are 98.14 percent and 98.25 percent, respectively.
In general, the blocking time as a percentage of the time in system increases as the utilization of the processing locations increases. To reduce blocking in the machine shop, let us install a second mill. In the location table, change the number of units of the mill to 2. The entity activity statistics from the resulting output report are shown in Figure L4.43. As expected, the blocking time has been reduced to zero.
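These figures can be checked outside ProModel. Because every time in this example is deterministic, the short Python sketch below (our own illustration, not ProModel code) replays the 100 gear blanks through the turning center and the mill and recovers the same averages reported in Figure L4.42.

```python
# Deterministic replay of the SoCal Machine Shop example:
# 100 gear blanks, one arrival every 8 min, turning = 8 min, mill = 9 min.
# With the "No Queuing" rule, both queue wait and turning-center hold-up
# are reported as blocked time.
turn_free = mill_free = 0.0
blocked_total = system_total = 0.0
N = 100
for i in range(N):
    arrive = 8.0 * i
    turn_start = max(arrive, turn_free)       # wait for the turning center
    turn_done = turn_start + 8.0              # turning operation
    mill_start = max(turn_done, mill_free)    # blocked until the mill frees up
    mill_done = mill_start + 9.0              # milling operation
    turn_free = mill_start                    # part holds the turning center until the mill accepts it
    mill_free = mill_done
    blocked_total += (turn_start - arrive) + (mill_start - turn_done)
    system_total += mill_done - arrive

avg_blocked = blocked_total / N   # 49.5 minutes
avg_system = system_total / N     # 66.5 minutes
```

Working through the recurrences shows that part i is blocked exactly i minutes and spends 17 + i minutes in the system, which averages to the 49.5 and 66.5 minutes in the report.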
L4.7 Exercises
1. Run the Tube Distribution Supply Chain example model (logistics.mod)
from the demos subdirectory for 40 hours. What are the various entities
modeled in this example? What are the various operations and
processes modeled in this example? Look at the results and find
a. The percentage utilization of the locations “Mill” and the “Process
Grades Threads.”
b. The capacities of Inventory and Inventory 2–6; the maximum contents of Inventory and Inventory 2–6.
L A B
5 PROMODEL’S
OUTPUT MODULE
FIGURE L5.1
File menu in the
3DR Output Viewer.
FIGURE L5.2
Output menu options
in ProModel.
FIGURE L5.3
View menu in the
3DR Output Viewer.
The Output menu in ProModel (Figure L5.2) has the following options:
• View Statistics: Allows the user to view the statistics generated from
running a simulation model. Selecting this option loads the Output Viewer
3DR.
• View Trace: Allows the user to view the trace file generated from running
a simulation model. Sending a trace listing to a text file during runtime
generates a trace. Please refer to Lab 8, section L8.2, for a more complete
discussion of tracing a simulation model.
The View menu (Figure L5.3) allows the user to select the way in which output data, charts, and graphs can be displayed. The View menu has the following options:
a. Report
b. Category Chart
c. State Chart
d. Histogram
e. Time Plot
f. Sheet Properties
Lab 5 ProModel’s Output Module 439
FIGURE L5.4
3DR Report view of the results of the ATM System in Lab 3.
FIGURE L5.5
Categories of charts
available in the
Category Chart
Selection menu.
FIGURE L5.6
An example of a category chart presenting entity average time in system.
FIGURE L5.7
Categories of charts
available in the State
Chart Selection
menu.
FIGURE L5.8
State chart representation of location utilization.
FIGURE L5.9
A state chart
representation of all
the locations’ states.
FIGURE L5.10
A pie chart
representing the states
of the location
Inspect.
FIGURE L5.11
State chart for the Cell Operator resource states.
FIGURE L5.12
State chart representation of entity states.
FIGURE L5.13
Dialog box for plotting a histogram.
FIGURE L5.14
A time-weighted histogram of the contents of the Bearing Queue.
FIGURE L5.15
A time series plot of the Bearing Queue contents over time.
FIGURE L5.16
A time series plot of WIP.
FIGURE L5.17
Sheet Properties
menu in the Output
Viewer.
FIGURE L5.18
The results of Poly
Furniture Factory
(with Oven) in
Classic view.
FIGURE L5.19
View menu in the
Classic output viewer.
FIGURE L5.20
Time series plot of customers waiting for Barber Dan.
FIGURE L5.21
Time series histogram
of customers waiting
for Barber Dan.
states are Operation, Setup, Idle, Waiting, Blocked, and Down. The location
states for the Splitter Saw are shown in Figure L5.23 as a pie graph. The
utilization of multiple capacity locations at Poly Furniture Factory is shown in
Figure L5.24. All the states the entity (Painted_Logs) is in are shown in Figure
L5.25 as a state graph and in Figure L5.26 as a pie chart. The different states are move, wait for resource, and operation.
FIGURE L5.22
State graphs for
the utilization of
single capacity
locations.
FIGURE L5.23
Pie chart for
the utilization
of the Splitter
Saw.
FIGURE L5.24
State graphs for the
utilization of multiple
capacity locations.
FIGURE L5.25
Graph of the
states of the entity
Painted_Logs.
FIGURE L5.26
Pie graph of the
states of the entity
Painted_Logs.
L5.3 Exercises
1. Customers arrive at the Lake Gardens post office to buy stamps, mail letters and packages, and so forth. The interarrival time is
exponentially distributed with a mean of 2 minutes. The time to
process each customer is normally distributed with a mean of 10
minutes and a standard deviation of 2 minutes.
a. Make a time series plot of the number of customers waiting in line at
the post office in a typical eight-hour day.
b. How many postal clerks are needed at the counter so that there are
no more than 15 customers waiting in line at the post office at any
time? There is only one line serving all the postal clerks. Change the
number of postal clerks until you find the optimum number.
2. The Lake Gardens postmaster in Exercise 1 wants to serve her customers well. She would like to see that the average time spent by a postal customer at the post office is no more than 15 minutes. How many postal clerks should she hire?
3. For the Poly Furniture Factory example in Lab 4, Section L4.3,
a. Make a state graph and a pie graph for the splitter and the lathe.
b. Find the percentage of time the splitter and the lathe are idle.
4. For the Poly Furniture Factory example in Lab 4, Section L4.4,
a. Make histograms of the contents of the oven and the paint booth.
Make sure the bar width is set equal to one. What information
can you gather from these histograms?
b. Plot a pie chart for the various states of the entity Painted_Logs. What percentage of the time are the Painted_Logs in operation?
c. Make a time series plot of the oven and the paint booth contents. How
would you explain these plots?
L A B
6 FITTING STATISTICAL
DISTRIBUTIONS
TO INPUT DATA
There are three kinds of lies: lies, damned lies, and statistics.
—Benjamin Disraeli
Input data drive our simulation models. Input data can be for interarrival times, material handling times, setup and process times, demand rates, loading and unloading times, and so forth. Determining what data to use and where to get the appropriate data is a complicated and time-consuming task. The quality of the data is also very important. We have all heard the cliché "garbage in, garbage out." In Chapter 6 we discussed various issues about input data collection and analysis. We also described various empirical discrete and continuous distributions and their characteristics. In this lab we describe how ProModel helps in fitting empirical statistical distributions to user input data.
FIGURE L6.1
Stat::Fit opening
screen.
FIGURE L6.2
Stat::Fit opening menu.
small, the goodness-of-fit tests are of little use in selecting one distribution over another, because it is inappropriate to prefer one distribution over another in such a situation. Also, when conventional techniques have failed to fit a distribution, the empirical distribution is used directly as a user distribution (Chapter 6, Section 6.9).
The opening menu of Stat::Fit is shown in Figure L6.2. Various options are
available in the opening menu:
1. File: File opens a new Stat::Fit project or an existing project or data file.
The File menu is also used to save a project.
2. Edit:
3. Input:
4. Statistics:
5. Fit: The Fit menu provides a Fit Setup dialog and a Distribution Graph
dialog. Other options are also available when a Stat::Fit project is
opened. The Fit Setup dialog lists all the distributions supported by
Stat::Fit and the relevant choices for goodness-of-fit tests. At least one
distribution must be chosen before the estimate, test, and graphing
commands become available. The Distribution Graph command uses
the distribution and parameters provided in the Distribution Graph
dialog to create a graph of any analytical distribution supported by
Stat::Fit. This graph is not connected to any input data or document.
Lab 6 Fitting Statistical Distributions to Input Data 457
FIGURE L6.4
Stat::Fit data
input options.
here. The Input Options command can be accessed from the Input menu as well as
the Input Options button on the Speed Bar.
TABLE L6.1 Times between Arrival of Cars at San Dimas Gas Station
1 12.36
2 5.71
3 16.79
4 18.01
5 5.12
6 7.69
7 19.41
8 8.58
9 13.42
10 15.56
11 10.
12 18.
13 16.75
14 14.13
15 17.46
16 10.72
17 11.53
18 18.03
19 13.45
20 10.54
21 12.53
22 8.91
23 6.78
24 8.54
25 11.23
26 10.1
27 9.34
28 6.53
29 14.35
30 18.45
FIGURE L6.5
Times between arrival
of cars at San Dimas
Gas Station.
FIGURE L6.6
Histogram of the times
between arrival data.
FIGURE L6.7
Descriptive statistics
for the input data.
FIGURE L6.8
The Auto::Fit
submenu.
FIGURE L6.9
Various distributions fitted to the input data.
rank in terms of the amount of fit. Both the Kolmogorov–Smirnov and the
Anderson–Darling goodness-of-fit tests will be performed on the input data as
shown in Figure L6.10. The Maximum Likelihood Estimates will be used with
an accuracy of at least 0.00003. The actual data and the fitted uniform
distribution are compared and shown in Figure L6.11.
FIGURE L6.10
Goodness-of-fit tests
performed on the input
data.
FIGURE L6.11
Comparison of actual data and fitted uniform distribution.
Because the Auto::Fit function requires a specific setup, the Auto::Fit view can be printed only as the active window or part of the active document, not as part of a report. The Auto::Fit function will not fit discrete distributions; the manual method, previously described, should be used instead.
L6.4 Exercises
11 11 12 8 15 14 15 13
9 13 14 9 14 9 13 7
12 12 7 13 12 16 7 10
8 8 17 15 10 7 16 11
11 10 16 10 11 12 14 15
2. The servers at the restaurant in Question 1 took the following times (in minutes) to serve food to these 40 customers. Use Stat::Fit to analyze the data and fit an appropriate continuous distribution. What are the parameters of this distribution?
11 11 12 8 15 14 15 13
9 13 14 10 14 9 13 12
12 12 11 13 12 16 11 10
10 8 17 12 10 7 13 11
11 10 13 10 11 12 14 15
3. The following are the numbers of incoming calls (each hour for 80 successive hours) to a call center
set up for serving customers of a certain Internet service provider. Use Stat::Fit to analyze the data
and fit an appropriate discrete distribution. What are the parameters of this distribution?
12 12 11 13 12 16 11 10
9 13 14 10 14 9 13 12
12 12 11 13 12 16 11 10
10 8 17 12 10 7 13 11
11 11 12 8 15 14 15 13
9 13 14 10 14 9 13 12
12 12 11 13 12 16 11 10
10 8 17 12 10 7 13 11
11 10 13 10 11 12 14 15
10 8 17 12 10 7 13 11
4. Observations were taken on the times to serve online customers at a stockbroker’s site
(STOCK.com) on the Internet. The times (in seconds) are shown here, sorted in ascending order.
Use Stat::Fit and fit an appropriate distribution to the data. What are the parameters of this
distribution?
5. Forty observations for a bagging operation were shown in Chapter 6, Table 6.5. Use Stat::Fit to find out which distribution best fits the data. Compare your results with the results obtained in Chapter 6. (Suggestion: In Chapter 6, five equal-probability cells were used for the chi-square goodness-of-fit test. Use the same number of cells in Stat::Fit to match the results.)
L A B
7 BASIC MODELING
CONCEPTS
FIGURE L7.1
The three processing locations and the receiving dock.
FIGURE L7.2
Layout of Pomona
Electronics.
1 10 2 5 3 12
2 12 1 6 2 14
3 15 3 8 1 15
arrive (Figure L7.1). Assume each of the assembly areas has infinite capacity.
The layout of Pomona Electronics is shown in Figure L7.2. Note that we used
Background Graphics → Behind Grid, from the Build menu, to add the Pomona
Electronics logo on the simulation model layout. Add the robot graphics (or something appropriate). Define three entities as PCB1, PCB2, and PCB3 (Figure L7.3).
Lab 7 Basic Modeling Concepts 467
FIGURE L7.3
The three types of circuit boards.
FIGURE L7.4
The arrival process for all circuit boards.
FIGURE L7.5
Processes and routings for Pomona Electronics.
Define the arrival process (Figure L7.4). Assume all 1500 boards are in
stock when the assembly operations begin. The process and routing tables are developed as in Figure L7.5.
Run the simulation model. Note that the whole batch of 1500 printed circuit
boards (500 of each) takes a total of 2 hours and 27 minutes to be processed.
FIGURE L7.6
Single unit of multicapacity location.
FIGURE L7.7
Multiple units of single-capacity locations.
FIGURE L7.8
Multiple single-capacity locations.
Problem Statement
At San Dimas Electronics, jobs arrive at three identical inspection machines according to an exponential distribution with a mean interarrival time of 12 minutes. The first available machine is selected. Processing on any of the parallel machines is normally distributed with a mean of 10 minutes and a standard deviation of 3 minutes. Upon completion, all jobs are sent to a fourth machine, where they queue up for date stamping and packing for shipment; this takes five minutes, normally distributed with a standard deviation of two minutes. Completed jobs then leave the system. Run the
simulation for one month (20 days, eight hours each). Calculate the average
utilization of the four machines. Also, how many jobs are processed by each of
the four machines?
Define a location called Inspect. Change its units to 3. Three identical parallel locations—that is, Inspect.1, Inspect.2, and Inspect.3—are thus created. Also, define a location for all the raw material to arrive (Material_Receiving). Change the capacity of this location to infinite. Define a location for Packing (Figure L7.9). Select Background Graphics from the Build menu. Make up a label "San Dimas Electronics." Add a rectangular border. Change the font and color appropriately.
FIGURE L7.9
The locations and the layout of San Dimas Electronics.
FIGURE L7.10
Arrivals of PCB at San Dimas Electronics.
Define an entity called PCB. Define the frequency of arrival of the entity
PCB as exponential with a mean interarrival time of 12 minutes (Figure L7.10).
Define the process and routing at San Dimas Electronics as shown in Figure
L7.11.
In the Simulation menu select Options. Enter 160 in the Run Hours box.
Run the simulation model. The average utilization and the number of jobs
processed at the four locations are given in Table L7.2.
FIGURE L7.11
Process and routing tables at San Dimas Electronics.
Problem Statement
Amar, Akbar, and Anthony are three tellers in the local branch of Bank of India.
Figure L7.12 shows the layout of the bank. Assume that customers arrive at the
bank according to a uniform distribution (mean of five minutes and half-width of
four minutes). All the tellers service the customers according to another uniform
FIGURE L7.12
Layout of the Bank of
India.
FIGURE L7.13
Locations at the Bank of India.
FIGURE L7.14
Queue menu.
FIGURE L7.15
Customer arrival at the Bank of India.
FIGURE L7.16
Process and routing tables at the Bank of India.
% Utilization (in listed order) (by turn)
Amar 79 63.9
Akbar 64.7 65.1
Anthony 46.9 61.5
customer arrival process is shown in Figure L7.15. The processes and routings
are shown in Figure L7.16. Note that the customers go to the tellers Amar,
Akbar, and Anthony in the order they are specified in the routing table.
The results of the simulation model are shown in Table L7.3. Note that Amar, being the favorite teller, is much busier than Akbar and Anthony.
If the customers were routed to the three tellers in turn (selected in rotation), the process and routing tables would be as in Figure L7.17. Note that By Turn was selected from the Rule menu in the routing table. The results of this simulation model are also shown in Table L7.3. Note that Amar, Akbar, and Anthony are now utilized almost equally.
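The By Turn rule is plain round-robin dispatch. A minimal Python sketch of the resulting order (teller names as in the example, code our own):

```python
from itertools import cycle

# Rotate through the three tellers, as the By Turn routing rule does
tellers = cycle(["Amar", "Akbar", "Anthony"])

# Dispatch the first nine customers in rotation
assignments = [next(tellers) for _ in range(9)]
counts = {name: assignments.count(name) for name in ("Amar", "Akbar", "Anthony")}
# Each teller receives exactly three of the nine customers
```

Round-robin equalizes the number of customers each teller receives, which is why the utilizations in Table L7.3 come out nearly equal.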
FIGURE L7.17
Process and routing tables for tellers selected by turn.
L7.4 Variables
Variables are placeholders for either real or integer numbers that may change
during the simulation. Variables are typically used for making decisions or for
gathering data. Variables can be defined to track statistics and monitor other
activities during a simulation run. This is useful when the built-in statistics don’t
capture a particular performance metric of interest. Variables might be defined
to track
• The number of customers waiting in multiple queues.
• Customer waiting time during a specific time period.
• Customer time in the bank.
• Work-in-process inventory.
• Production quantity.
In ProModel two types of variables are used—local variables and global variables.
• Global variables are accessible from anywhere in the model and at any
time. Global variables are defined through the Variables(global) editor
in the Build menu. The value of a global variable may be displayed
dynamically during the simulation. It can also be changed interactively.
Global variables can be referenced anywhere a numeric expression is
valid.
• Local variables are temporary variables that are used for quick
convenience when a variable is needed only within a particular operation
(in the Process table), move logic (in the Routing table), logic (in the
Arrivals, Resources, or Subroutine tables), the initialization or termination
logic (in the General Information dialog box), and so forth. Local
variables are available only within the logic in which they are declared
and are not defined in the Variables edit table. They are created for each
entity, downtime occurrence, or the like executing a particular section of
logic. A new local variable is created for each entity that encounters an
INT or REAL statement. It exists only while the entity processes the
logic that declared the local variable. Local variables may be passed to
subroutines as parameters and are available to macros.
A local variable must be declared before it is used. To declare a local
variable, use the following syntax:
INT or REAL <name1>{= expression}, <name2>{= expression}
Examples:
INT HourOfDay, WIP
REAL const1 = 2.5, const2 = 5.0
INT Init_Inventory = 170
In Section L7.11 we show you how to use a local variable in your simulation
model logic.
FIGURE L7.18
Layout of Poly Casting Inc.
FIGURE L7.19
Locations at Poly Casting Inc.
FIGURE L7.20
Arrival of castings at Poly Casting Inc.
FIGURE L7.21
Processes and routings for the Poly Casting Inc. model.
FIGURE L7.22
Variables for the Poly Casting Inc. model.
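Global variables such as these behave like simple counters that the process logic increments and decrements as entities move through the shop. A Python analogy (the class and method names are ours, not from the model):

```python
class ShopCounters:
    """Mimics global variables updated by the process logic."""
    def __init__(self):
        self.wip = 0        # work-in-process inventory
        self.prod_qty = 0   # finished production quantity

    def enter_shop(self):
        # e.g., incrementing WIP when a casting leaves the receiving dock
        self.wip += 1

    def leave_shop(self):
        # e.g., decrementing WIP and counting production at the finished store
        self.wip -= 1
        self.prod_qty += 1

c = ShopCounters()
for _ in range(4):   # a batch of four castings arrives
    c.enter_shop()
c.leave_shop()       # one casting finishes
# c.wip == 3, c.prod_qty == 1
```

Displaying such variables dynamically during the run gives the same picture as the WIP plots shown later in this lab.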
Problem Statement
Poly Casting Inc. in Section L7.4 decides to add an inspection station at the end
of the machine shop, after the grinding operation. After inspection, 30 percent
of the widgets are sent back to the mill for rework, 10 percent are sent back to
the grinder for rework, and 5 percent are scrapped. The balance, 55 percent,
pass inspection and go on to the finished parts store. The inspection takes a
time that is triangularly distributed with a minimum of 4, mode of 5, and
maximum of 6 minutes. The process times for rework are the same as those for
new jobs. Track the amount of rework at the mill and the grinder. Also
track the amount of scrapped parts and finished production. Run the simulation
for 100 hours.
The locations are defined as mill, grinder, inspection, finish parts store, receiving dock, scrap parts, mill rework, and grind rework, as shown in Figure L7.23.
FIGURE L7.23
Simulation model layout for Poly Castings Inc. with inspection.
FIGURE L7.24
Variables for the Poly Castings Inc. with inspection model.
The last four locations are defined with infinite capacity. The arrivals of castings are defined in batches of four every hour. Next we define five variables (Figure L7.24) to track work in process, production quantity, mill rework, grind rework, and scrap quantity. The processes and routings are defined as in Figure L7.25.
FIGURE L7.25
Processes and routings for the Poly Castings Inc. with inspection model.
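The 30/10/5/55 split after inspection is a probabilistic routing. A hedged Python sketch of the branch, using cumulative thresholds the way a percentage-based routing rule would apply them (function name is ours):

```python
import random

def route_after_inspection(u):
    """Map a uniform(0, 1) draw to the four routing outcomes."""
    if u < 0.30:
        return "mill_rework"    # 30% back to the mill
    if u < 0.40:
        return "grind_rework"   # next 10% back to the grinder
    if u < 0.45:
        return "scrap"          # next 5% scrapped
    return "finished"           # remaining 55% pass inspection

random.seed(0)
outcomes = [route_after_inspection(random.random()) for _ in range(10000)]
share_finished = outcomes.count("finished") / len(outcomes)  # close to 0.55
```

Over a long run the observed shares converge to the specified routing percentages, which is what the rework and scrap counters in Figure L7.24 accumulate.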
Problem Statement
El Segundo Composites receives orders for aerospace parts that go through cutting, lay-up, and bonding operations. Cutting and lay-up take uniform (20,5) minutes and uniform (30,10) minutes, respectively. The bonding is done in an autoclave in batches of five parts and takes uniform (100,10) minutes. After bonding, the parts go to the shipment clerk individually. The shipment clerk takes normal (20,5) minutes to get each part ready for shipment. The orders are received on average once every 60 minutes, exponentially distributed. The time to transport these parts from one machine to another takes on average 15 minutes. Figure out the amount of WIP in the shop. Simulate for six months or 1000 working hours.
FIGURE L7.26
Layout of El Segundo Composites.
FIGURE L7.27
Process and routing tables for El Segundo Composites.
FIGURE L7.28
Work in process value history.
Problem Statement
At the Garden Reach plant of the Calcutta Tea Company, the filling machine
fills empty cans with 50 bags of the best Darjeeling tea at the rate of one can
every 1±0.5 seconds uniformly distributed. The tea bags arrive at the packing line with a mean interarrival time of one second exponentially distributed. The
filled cans go to a packing machine where 20 cans are combined into one box.
The packing operation takes uniform (20±10) seconds. The boxes are shipped
to the dealers. This facility runs 24 hours a day. Simulate for one day.
The various locations at the Calcutta Tea Company plant are shown in Fig-
ure L7.29. Three entities (Teabag, Can, and Box) are defined next. Teabags are
defined to arrive with exponential interarrival time with a mean of one second.
The processes and routing logic are shown in Figure L7.30. The layout of the
Calcutta Tea Company plant is shown in Figure L7.31.
FIGURE L7.29
Locations at the Calcutta Tea Company.
FIGURE L7.30
Process and routing tables at the Calcutta Tea Company.
FIGURE L7.31
Layout of the Calcutta
Tea Company.
Problem Statement
At Shipping Boxes Unlimited, computer monitors arrive at Monitor_Q at the rate of one every 15 minutes (exponential) and are moved to the packing table. Boxes arrive at Box_Q at an average rate of one every 15 minutes (exponential)
and are also moved to the packing table. At the packing table, monitors are
packed into boxes. The packing operation takes normal (5,1) minutes. Packed
boxes are sent to the inspector (Inspect_Q). The inspector checks the contents
of the box and tallies with the packing slip. Inspection takes normal (4,2)
minutes. After inspec- tion, the boxes are loaded into trucks at the shipping
dock (Shipping_Q). The loading takes uniform (5,1) minutes. Simulate for 100
hours. Track the number of monitors shipped and the WIP of monitors in the
system.
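Packing a monitor into a box is a join-type operation: one entity of each kind must be present before the combined entity can exist. A Python sketch of the pairing rule (illustrative only; names are ours):

```python
from collections import deque

monitors, boxes = deque(), deque()

def arrive(kind, item):
    """Queue the arrival; pack a Full_Box as soon as both parts are on hand."""
    (monitors if kind == "monitor" else boxes).append(item)
    if monitors and boxes:
        return ("Full_Box", monitors.popleft(), boxes.popleft())
    return None   # still waiting for the matching entity

arrive("monitor", "M1")        # no box yet, so nothing is packed
packed = arrive("box", "B1")   # -> ("Full_Box", "M1", "B1")
</```

Whichever entity arrives first simply waits in its queue, which is why both queue locations are given infinite capacity in the model.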
The locations at Shipping Boxes Unlimited are defined as Monitor_Q,
Box_Q, Shipping_Q, Inspect_Q, Shipping_Dock, Packing_Table, and Inspector.
All the queue locations are defined with a capacity of infinity. Three entities
(Monitor, Empty Box, and Full_Box) are defined next. The arrivals of monitors
and empty boxes are shown in Figure L7.32. The processes and routings
are shown in Figure L7.33. A snapshot of the simulation model is captured in
FIGURE L7.32
Arrival of monitors and empty boxes at Shipping Boxes Unlimited.
FIGURE L7.33
Processes and routings for Shipping Boxes Unlimited.
FIGURE L7.34
A snapshot of the simulation model for Shipping Boxes Unlimited.
Figure L7.34. The plot of the work-in-process inventory for the 100 hours of simulation run is shown in Figure L7.35. Note that the work-in-process inventory rises to as much as 12 in the beginning. However, after achieving steady state, the WIP inventory stays mostly within the range of 0 to 3.
FIGURE L7.35
Time-weighted plot of the WIP inventory at Shipping Boxes Unlimited.
Problem Statement
For the Shipping Boxes Unlimited problem in Section L7.7.1, assume the inspector places (loads) a packed box on an empty pallet. The loading takes anywhere from two to four minutes, uniformly distributed. The loaded pallet is sent to the shipping dock and waits in the shipping queue. At the shipping dock, the packed boxes are unloaded from the pallet. The unloading time is also uniformly distributed: U(3,1) minutes. The boxes go onto a waiting truck. The empty pallet is returned, via the pallet queue, to the inspector. One pallet is used and recirculated in the system. Simulate for 100 hours. Track the number of monitors shipped and the WIP of monitors.
The locations defined in this model are Monitor_Q, Box_Q, Inspect_Q, Shipping_Q, Pallet_Q, Packing_Table, Inspector, and Shipping_Dock (Figure L7.36). All queue locations have infinite capacity. The layout of Shipping Boxes Unlimited is shown in Figure L7.37. Five entities are defined: Monitor, Box, Empty_Box, Empty_Pallet, and Full_Pallet, as shown in Figure L7.38. The
FIGURE L7.36
The locations at Shipping Boxes Unlimited.
FIGURE L7.37
Boxes loaded on
pallets at Shipping
Boxes Unlimited.
arrivals of monitors, empty boxes, and empty pallets are shown in Figure L7.39.
The processes and routings are shown in Figure L7.40. Note that comments can
be inserted in a line of code as follows (Figure L7.40):
/* inspection time */
FIGURE L7.38
Entities at Shipping Boxes Unlimited.
FIGURE L7.39
Arrival of monitors, empty boxes, and empty pallets at Shipping Boxes Unlimited.
FIGURE L7.40
Process and routing tables at Shipping Boxes Unlimited.
FIGURE L7.41
Time-weighted plot of the WIP inventory at Shipping Boxes Unlimited.
Problem Statement
Visitors arrive at California Adventure Park in groups that vary in size from
two to four (uniformly distributed). The average time between arrival of two
successive groups is five minutes, exponentially distributed. All visitors wait
in front of the gate until five visitors have accumulated. At that point the gate
opens and allows the visitors to enter the park. On average, a visitor spends
20±10 minutes (uniformly distributed) in the park. Simulate for 1000 hours.
Track how many visitors are waiting outside the gate and how many are in
the park.
Three locations (Gate_In, Walk_In_Park, and Gate_Out) are defined in this
model. Visitor is defined as the entity. The processes and the layout of the adven-
ture park are shown in Figures L7.42 and L7.43.
FIGURE L7.42
Process and routing tables for California Adventure Park.
FIGURE L7.43
Layout of California Adventure Park.
Problem Statement
The cafeteria at San Dimas High School receives 10 cases of milk from a
vendor each day before the lunch recess. On receipt, the cases are split open and
individual
cartons (10 per case) are stored in the refrigerator for distribution to students during lunchtime. The distribution of milk cartons takes triangular(.1,.15,.2) minute per student. The time to split open the cases takes a minimum of 5 minutes and a maximum of 7 minutes (uniform distribution) per case. Moving the cases from receiving to the refrigerator area takes five minutes per case, and moving the cartons from the refrigerator to the distribution area takes 0.2
minute per carton. Students wait in the lunch line to pick up one milk carton
each. There are only 100 students at this high school. Students show up for
lunch with a mean interarrival time of 1 minute (exponential). On average, how
long does a carton stay in the cafeteria before being distributed and consumed?
What are the maximum and the minimum times of stay? Simulate for 10 days.
The layout of the San Dimas High School cafeteria is shown in Figure
L7.44. Three entities—Milk_Case, Milk_Carton, and Student—are defined.
Ten milk cases arrive with a frequency of 480 minutes. One hundred students
show up for lunch each day. The arrival of students and milk cases is shown in
Figure L7.45. The processing and routing logic is shown in Figure L7.46.
FIGURE L7.44
Layout of the San Dimas High School cafeteria.
FIGURE L7.45
Arrival of milk and students at the San Dimas High School cafeteria.
FIGURE L7.46
Process and routing logic at the San Dimas High School cafeteria.
causes the program to take action1 if condition is true and action2 if condition is
false. Each action consists of one or more ProModel statements. After an action
is taken, execution continues with the line after the IF block.
Problem Statement
The Bombay Restaurant offers only a drive-in facility. Customers arrive at the
rate of six each hour (exponential interarrival time). They place their orders at
the first window, drive up to the next window for payment, pick up food from
the last window, and then leave. The activity times are given in Table L7.4. The
drive-in facility can accommodate 10 cars at most. However, customers
typically leave
FIGURE L7.47
Control statements
available in
ProModel.
and go to the Madras Café across the street if six cars are waiting in line when they arrive. Simulate for 100 days (8 hours each day). Estimate the number of customers served each day. Estimate on average how many customers are lost each day to the competition.
An additional location (Arrive) is added to the model. After the customers
arrive, they check if there are fewer than six cars at the restaurant. If yes, they
join the line and wait; if not, they leave and go across the street to the Madras
Café. An IF-THEN-ELSE statement is added to the logic window in the
processing table (Figure L7.48). A variable (Customer_Lost) is added to the
model to keep track of the number of customers lost to the competition. The
layout of the Bombay Restaurant is shown in Figure L7.49. Note that Q_1 and
Q_2 are each 100 feet long and Q_3 is 200 feet long.
FIGURE L7.48
Process and routing logic at the Bombay Restaurant.
FIGURE L7.49
Layout of the Bombay
Restaurant.
The total number of customers lost is 501 in 100 days. The number of
customers served in 100 days is 4791. The average cycle time per customer is
36.6 minutes.
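The admission test in this logic can be imitated outside ProModel with a rough Python sketch. Everything below other than the fewer-than-six-cars rule is a simplifying assumption for illustration: the three service windows are collapsed into one pooled stage, the 8-minute mean service time is invented (the actual activity times are in Table L7.4), and the random seed is fixed for reproducibility.

```python
import random

def bombay_day(sim_minutes=480, mean_interarrival=10.0,
               balk_threshold=6, mean_service=8.0, rng=None):
    """One simulated day at a simplified, single-stage drive-in.

    Customers arrive with exponential interarrival times (6 per hour
    gives a 10-minute mean) and balk when six cars are already waiting.
    """
    rng = rng or random.Random(42)
    t = 0.0                 # simulation clock, in minutes
    server_free_at = 0.0    # when the pooled service stage frees up
    departures = []         # departure times of admitted cars
    served = lost = 0
    while True:
        t += rng.expovariate(1.0 / mean_interarrival)
        if t > sim_minutes:
            break
        departures = [d for d in departures if d > t]  # cars still present
        if len(departures) >= balk_threshold:
            lost += 1       # balk and head to the Madras Café
        else:
            start = max(t, server_free_at)
            server_free_at = start + rng.expovariate(1.0 / mean_service)
            departures.append(server_free_at)
            served += 1
    return served, lost
```

Calling bombay_day() repeatedly and averaging the two counts parallels the 100-day experiment, though the numbers will not match the ProModel results because of the simplifications above.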
FIGURE L7.50
An example of the WHILE-DO logic for Shipping Boxes Unlimited.
Problem Statement
The inspector in Section L7.7.2 is also the supervisor of the shop. As such, she
inspects only when at least five full boxes are waiting for inspection in the
Inspect_Q. A WHILE-DO loop is used to check if the queue has five or more
boxes waiting for inspection (Figure L7.50). The loop will be executed
every hour. Figure L7.51 shows a time-weighted plot of the contents of the
inspection queue. Note how the queue builds up to 5 (or more) before the
inspector starts inspecting the full boxes.
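The hourly polling pattern behind that WHILE-DO can be sketched in plain Python. This is only an illustration of the pattern, not ProModel syntax; the callable queue reader and the list-based clock cell are inventions of the example.

```python
def wait_until_batch(queue_len, clock, threshold=5, poll_minutes=60):
    """Idle until at least `threshold` full boxes are waiting.

    queue_len is a callable returning the current queue contents;
    clock is a one-element list holding simulation time in minutes,
    advanced by one poll interval per unsatisfied check.
    """
    while queue_len() < threshold:   # the WHILE-DO condition
        clock[0] += poll_minutes     # re-check an hour later
    return clock[0]                  # time at which inspection may start
```

With queue contents of 2, 3, 4, and then 5 on successive hourly checks, inspection starts at minute 180.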
Problem Statement
For the Poly Casting Inc. example in Section L7.4, we would like to assign
an upper limit to the work-in-process inventory in the shop. A DO-WHILE loop
will be used to check if the level of WIP is equal to or above five; if so, the
incoming castings will wait at the receiving dock (Figure L7.52). The loop will
be executed once every hour to check if the level of WIP has fallen below five.
Figure L7.53 shows a time-weighted plot of the value of WIP at Poly Castings
Inc. Note that the level of WIP is kept at or below five.
FIGURE L7.51
A plot of the contents of the inspection queue.
FIGURE L7.52
An example of a DO-WHILE loop.
FIGURE L7.53
A plot of the value of WIP at Poly Castings Inc.
FIGURE L7.54
The layout of the
Indian Bank.
FIGURE L7.55
An example of a GOTO statement.
Problem Statement
The Bank of India in Section L7.3 opens for business each day for 8 hours (480
minutes). Assume that the customers arrive to the bank according to an
exponential distribution (mean of 4 minutes). All the tellers service the
customers according to a uniform distribution (mean of 10 minutes and half-
width of 6 minutes). Customers are routed to the three tellers in turn (selected in
rotation).
Each day after 300 minutes (5 hours) of operation, the front door of the
bank is locked, and any new customers arriving at the bank are turned away.
Customers already inside the bank continue to get served. The bank reopens
the front door 90 minutes (1.5 hours) later to new customers. Simulate the
system for 480 minutes (8 hours). Make a time-series plot of the Teller_Q to
show the effect of locking the front door on the bank.
The logic for locking the front door is shown in Figure L7.56. The
simulation clock time clock(min) is a cumulative counter of minutes elapsed since
the start of the simulation run. The current time of any given day can be
determined by modulus dividing the current simulation time, clock(min), by
480 minutes. If the remainder of clock(min) divided by 480 minutes is
between 300 minutes (5 hours) and 390 minutes (6.5 hours), arriving
customers are turned away (disposed of). Otherwise, they are allowed into the
bank. An IF-THEN-ELSE logic block as described in Section L7.10.1 is used
here.
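The test reduces to one line of modular arithmetic. Here it is as a Python sketch; the function name and defaults are ours, but the arithmetic mirrors the clock(min) logic just described.

```python
def front_door_open(clock_min, day_len=480, lock_at=300, reopen_at=390):
    """True when an arriving customer may enter the bank.

    The door is locked from minute 300 to minute 390 of each
    480-minute simulated day.
    """
    time_of_day = clock_min % day_len   # minutes into the current day
    return not (lock_at <= time_of_day < reopen_at)
```

For example, front_door_open(785) is False, since 785 mod 480 = 305 falls inside the locked window.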
A time-series plot of the contents of the Teller_Q is shown in Figure L7.57.
This plot clearly shows how the Teller_Q (and the whole bank) builds up during
the day; then, after the front door is locked at the fifth hour (300 minutes) into
the simulated day, customers remaining in the queue are processed and the
queue length decreases (down to zero in this particular simulation run). The
queue length picks back up when the bank reopens the front door at simulation
time 6.5 hours (390 minutes).
FIGURE L7.56
Process and routing logic for the Bank of India.
FIGURE L7.57
Time-series plot of Teller_Q at the Bank of India.
FIGURE L7.58
Histogram of Teller_Q contents at the Bank of India.
The histogram of the same queue (Figure L7.58) shows that approximately
49% of the time the queue was empty. About 70% of the time there are 3 or
fewer customers waiting in line. What is the average time a customer spends
in the bank? Would you recommend that the bank not close the door after 5
hours of operation (customers never liked this practice anyway)? Will the
average customer stay longer in the bank?
L7.12 Exercises
1. Visitors arrive at Kid’s World entertainment park according to an
exponential interarrival time distribution with mean 2.5 minutes.
The travel time from the entrance to the ticket window is normally
distributed with a mean of three minutes and a standard deviation of
1.5 minutes. At the ticket window, visitors wait in a single line until one
of four cashiers is available to serve them. The time for the purchase of
tickets is normally distributed with a mean of five minutes and a
standard deviation of one minute. After purchasing tickets, the visitors
go to their respective gates to enter the park. Create a simulation model,
with animation, of this system. Run the simulation model for 200 hours
to determine
a. The average and maximum length of the ticketing queue.
b. The average number of customers completing ticketing per hour.
c. The average utilization of the cashiers.
d. Whether management should add more cashiers.
2. A consultant for Kid’s World recommended that four individual queues be formed at the ticket window
(one for each cashier) instead of one common queue. Create a simulation model, with animation, of this
system. Run the simulation model for 200 hours to determine
a. The average and maximum length of the ticketing queues.
b. The average number of customers completing ticketing per hour.
c. The average utilization of the cashiers.
d. Whether you agree with the consultant’s decision. Would you
recommend a raise for the consultant?
3. At the Kid’s World entertainment park in Exercise 1, the operating hours are 8 A.M. till 10 P.M. each
day (all week). Simulate for a whole year (365 days) and answer questions a– d as given in Exercise
1.
4. At Southern California Airline’s traveler check-in facility, three types of customers arrive: passengers
with e-tickets (Type E), passengers with paper tickets (Type T), and passengers that need to purchase
tickets (Type P). The interarrival distribution and the service times for these passengers are given in
Table L7.5. Create a simulation model, with animation, of this system. Run the simulation model for
2000 hours.
If separate gate agents serve each type of passenger, determine the
following:
a. The average and maximum length of the three queues.
b. The average number of customers of each type completing check-in
procedures per hour.
c. The average utilization of the gate agents.
Type     Interarrival Distribution       Service Time
Type E   Exponential (mean 5.5 min.)     Normal (mean 3 min., std. dev. 1 min.)
Type T   Exponential (mean 10.5 min.)    Normal (mean 8 min., std. dev. 3 min.)
Type P   Exponential (mean 15.5 min.)    Normal (mean 12 min., std. dev. 3 min.)
From To Probability
However, every seven hours (420 minutes) the front door is locked for
an hour (60 minutes). No new patients are allowed in the nursing home
during this time. Patients already in the system continue to get served.
Simulate for one year (365 days, 24 hours per day).
a. Figure out the utilization of each department.
b. What are the average and maximum numbers of patients in each
department?
c. Which is the bottleneck department?
d. What is the average time spent by a patient in the nursing home?
7. United Electronics manufactures small custom electronic assemblies. Parts must be processed through
four stations: assembly, soldering, painting, and inspection. Orders arrive with an exponential
interarrival distribution (mean 20 minutes). The process time distributions are shown in Table L7.8.
The soldering operation can be performed on three jobs at a time.
Painting can be done on four jobs at a time. Assembly and inspection
are performed on one job at a time. Create a simulation model, with
animation, of this system. Simulate this manufacturing system for
100 days, eight hours each day. Collect and print statistics on the
utilization of each station, associated queues, and the total number
of jobs manufactured during each eight-hour shift (average).
8. In United Electronics in Exercise 7, 10 percent of all finished assemblies are sent back to soldering
for rework after inspection, five percent are sent back to assembly for rework after inspection, and one
1 .7 .2 .2 .05 .1 .2 .05
2 .75 .25 .2 .05 .05 .15 .04
3 .8 .15 .15 .03 .03 .1 .02
[Figure: Loader 1 and Loader 2 serve a common loader queue; trucks then join the weighing queue for the weighing scale.]
Job    Number of   Jobs per   Assembly        Soldering        Painting          Inspection        Time between
Type   Batches     Batch      Time            Time             Time              Time              Batch Arrivals
1      15          5          Tria (5,7,10)   Normal (36,10)   Uniform (55±15)   Exponential (8)   Exp (14)
2      25          3          Tria (7,10,15)                   Uniform (35±5)    Exponential (5)   Exp (10)
to the scale to be weighed as soon as possible. Both the loaders and the
scale have a first-come, first-served waiting line (or queue) for trucks.
Travel time from a loader to the scale is considered negligible. After
being weighed, a truck begins travel time (during which time the truck
unloads), and then afterward returns to the loader queue. The
distributions of loading time, weighing time, and travel time are shown
in Table L7.11.
a. Create a simulation model, with animation, of this system. Simulate
for 200 days, eight hours each day.
b. Collect statistics to estimate the loader and scale utilization
(percentage of time busy).
c. About how many trucks are loaded each day on average?
12. At the Pilot Pen Company, a molding machine produces pen barrels of
two different colors—red and blue—in the ratio of 3:2. The molding time is triangular (3,4,6) minutes
per barrel. The barrels go to a filling machine, where ink of appropriate color is filled at the rate of 20
pens per hour (exponentially distributed). Another molding machine makes caps of the same two
colors in the ratio of 3:2. The molding time is triangular (3,4,6) minutes per cap. At the next station,
caps and filled barrels of matching colors are joined together. The joining time is exponentially
distributed with a mean of 1 min. Simulate for 2000 hours. Find the average number of pens produced
per hour. Collect statistics on the utilization of the molding machines and the joining equipment.
13. Customers arrive at the NoWaitBurger hamburger stand with an
interarrival time that is exponentially distributed with a mean of one minute. Out of 10 customers, 5
buy a hamburger and a drink, 3 buy a hamburger, and 2 buy just a drink. One server handles the
hamburger while another handles the drink. A person buying both items needs to wait in line for
both servers. The time it takes to serve a customer is N(70,10) seconds for each item. Simulate for
100 hours. Collect statistics on the number of customers served per hour, size of the queues, and
utilization of the servers. What changes would you suggest to make the system more efficient?
14. Workers who work at the Detroit ToolNDie plant must check out tools
from a tool crib. Workers arrive according to an exponential distribution with a mean time between
arrivals of five minutes. At present, three tool crib clerks staff the tool crib. The time to serve a
worker is normally distributed with a mean of 10 minutes and a standard deviation of
2 minutes. Compare the following servicing methods. Simulate for
2000 hours and collect data.
a. Workers form a single queue, choosing the next available tool crib
clerk.
b. Workers enter the shortest queue (each clerk has his or her own
queue).
c. Workers choose one of three queues at random.
15. At the ShopNSave, a small family-owned grocery store, there are only
four aisles: aisle 1—fruits/vegetables, aisle 2—packaged goods (cereals and the like), aisle 3—dairy
products, and aisle 4—meat/fish. The time between two successive customer arrivals is exponentially
distributed with a mean of 5 minutes. After arriving at the store, each customer grabs a shopping cart.
Twenty percent of all customers go to aisle 1,
30 percent go to aisle 2, 50 percent go to aisle 3, and 70 percent go to
aisle 4. The number of items selected for purchase in each aisle is
uniformly distributed between 2 and 8. The time spent to browse and
pick up each item is normally distributed: N(5,2) minutes. There are
three identical checkout counters; each counter has its own checkout
line. The customer chooses the shortest line. Once a customer joins a
line, he or she is not allowed to leave or switch lines. The checkout time
is given by the following regression equation:
Checkout time = N(3, 0.3) + (# of items) × N(0.5, 0.15) minutes
The first term of the checkout time is for receiving cash or a check or
credit card from the customer, opening and closing the cash register,
and handing over the receipt and cash to the customer. After checking
out, a customer drops off the cart at the front of the store and leaves. Build
a simulation model for the grocery store. Use the model to simulate a
14-hour day.
a. The percentages of customers visiting each aisle do not add up
to 100 percent. Why?
b. What is the average amount of time a customer spends at the
grocery store?
c. How many customers check out per cashier per hour?
d. What is the average amount of time a customer spends waiting in
the checkout line?
e. What is the average utilization of the cashiers?
f. Assuming there is no limit to the number of shopping carts,
determine the average and maximum number of carts in use at any
time.
g. On average how many customers are waiting in line to check out?
h. If the owner adopts a customer service policy that there will never
be any more than three customers in any checkout line, how many
cashiers are needed?
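As a quick sanity check on that regression, the following Python sketch draws one checkout time. The clamp at zero and the fixed seed are additions for the example, not part of the exercise statement.

```python
import random

_rng = random.Random(7)   # fixed seed so the sketch is reproducible

def checkout_time(n_items):
    """Sample the checkout-time regression, in minutes:
    N(3, 0.3) + (# of items) * N(0.5, 0.15)."""
    base = _rng.normalvariate(3.0, 0.3)        # register/payment handling
    per_item = _rng.normalvariate(0.5, 0.15)   # per-item scanning term
    return max(0.0, base + n_items * per_item) # clamp: a normal draw can go negative
```

Averaged over many draws, a 5-item basket should take about 3 + 5(0.5) = 5.5 minutes.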
Embellishments:
I. The store manager is considering designating one of the checkout lines
as Express, for customers checking out with 10 or fewer items. Is that a
good idea? Why or why not?
II. In reality, there are only 10 shopping carts at the ShopNSave store.
If there are no carts available when customers arrive, they leave
immediately and go to the more expensive ShopNSpend store down the
street. Modify the simulation model to reflect this change. How many
customers are lost per hour? How many shopping carts should they
have so that no more than 5 percent of customers are lost?
16. Planes arrive at the Netaji Subhash Chandra Bose International
Airport, Calcutta with interarrival times that are exponentially distributed with a mean time of 30
minutes. If there is no room at the airport when a plane arrives, the pilot flies around and comes back to
land after a normally distributed time having a mean of 20 minutes and a standard deviation of 5
minutes. There are two runways and three gates at this small airport. The time from touchdown to
arrival at a gate is normally distributed having a mean of five minutes and a standard
L A B
8  MODEL VERIFICATION AND VALIDATION
Dew knot trussed yore spell chequer two fined awl yore mistakes.
—Brendan Hills
In this lab we describe the verification and validation phases in the development
and analysis of simulation models. In Section L8.1 we describe an inspection
and rework model. In Section L8.2 we show how to verify the model by
tracing the events in it. Section L8.3 shows how to debug a model. The
ProModel logic, basic, and advanced debugger options are also discussed.
FIGURE L8.1
Layout of the Bombay
Clothing Mill.
FIGURE L8.2
Locations at the
Bombay Clothing
Mill warehouse.
FIGURE L8.3
Process and routing tables at the Bombay Clothing Mill.
FIGURE L8.6
Tracing the simulation
model of the Bombay
Clothing Mill
warehouse.
FIGURE L8.7
Plots of garment and relabel queue contents.
Lab 8 Model Verification and Validation 513
FIGURE L8.8
Debugger options
menu.
FIGURE L8.9
The ProModel
Debugger menu.
The user can launch the debugger using a DEBUG statement within the
model code or from the Options menu during run time. The system state can be
monitored to see exactly when and why things occur. Combined with the
Trace window, which shows the events that are being scheduled and executed,
the debugger enables a modeler to track down logic errors or model bugs.
FIGURE L8.10
An example of a DEBUG statement in the processing logic.
FIGURE L8.11
The Debugger window.
• Run: Continues the simulation, but still checks the debugger options
selected in the Debugger Options dialog box.
• Next Statement: Jumps to the next statement in the current logic. Note
that if the last statement executed suspends the thread (for example, if
the entity is waiting to capture a resource), another thread that also meets
the debugger conditions may be displayed as the next statement.
• Next Thread: Brings up the debugger at the next thread that is initiated or
resumed.
• Into Subroutine: Steps to the first statement in the next subroutine
executed by this thread. Again, if the last statement executed suspends
the thread, another thread that also meets debugger conditions may be
displayed first. If no subroutine is found in the current thread, a message
is displayed in the Error Display box.
• Options: Brings up the Debugger Options dialog box. You may also bring
up this dialog box from the Simulation menu.
• Advanced: Changes the debugger to Advanced mode.
FIGURE L8.12
The ProModel
Advanced Debugger
options.
L8.4 Exercises
1. For the example in Section L8.1, insert a DEBUG statement when a
garment is sent back for rework. Verify that the simulation model is
actually sending back garments for rework to the location named
Label_Q.
2. For the example in Section L7.1 (Pomona Electronics), trace the model
to verify that the circuit boards of type B are following the routing given
in Table L7.1.
3. For the example in Section L7.5 (Poly Casting Inc.), run the simulation
model and launch the debugger from the Options menu. Turn on the
Local Information in the Basic Debugger. Verify the values of the
variables WIP and PROD_QTY.
4. For the example in Section L7.3 (Bank of India), trace the model to
verify that successive customers are in fact being served by the three
tellers in turn.
5. For the example in Section L7.7.2 (Shipping Boxes Unlimited), trace the
model to verify that the full boxes are in fact being loaded on empty
pallets at the Inspector location and are being unloaded at the Shipping
location.
Harrell, Ghosh, and Bowden: Simulation Using ProModel, Second Edition. II. Labs. 9. Simulation Output Analysis. © The McGraw-Hill Companies.
L A B
9  SIMULATION OUTPUT ANALYSIS
Nothing has such power to broaden the mind as the ability to investigate
systematically and truly all that comes under thy observation in life.
—Marcus Aurelius
FIGURE L9.1
Layout of the Spuds-n-More simulation model.
Order_Q have been processed and the simulation is terminated by the Stop
statement. Notice that the combined capacity of the Entry and Order_Q
locations is five in order to satisfy the requirement that the waiting area in front
of the restaurant accommodates up to five customers.
We have been viewing the output from our simulations with ProModel’s
new Output Viewer 3DR. Let’s conduct this lab using ProModel’s traditional
Output Viewer, which serves up the same information as the 3DR viewer but
does so a little faster. To switch viewers, select Tools from the ProModel main
menu bar and then select Options. Select the Output Viewer as shown in Figure
L9.3.
L9.2.2 Replications
Run the simulation for five replications to record the number of customers
served each day for five successive days. To run the five replications, select
Options from under the Simulation main menu. Figure L9.4 illustrates the
simulation options set to run five replications of the simulation. Notice that no
run hours are specified.
Lab 9 Simulation Output Analysis 523
FIGURE L9.2
The simulation model of Spuds-n-More.
After ProModel runs the five replications, it displays a message asking if you
wish to see the results. Answer yes.
Next ProModel displays the General Report Type window (Figure L9.5).
Here you specify your desire to see the output results for “<All>” replications
and then click on the Options . . . button to specify that you wish the output
report to also include the sample mean (average), sample standard deviation,
and 90 percent confidence interval of the five observations collected via the
five replications. The ProModel output report is shown in Figure L9.6. The
number of customers processed each day can be found under the Current Value
column of the output report in the VARIABLES section. The simulation results
indicate that the number of customers processed each day fluctuates randomly.
The fluctuation may be
FIGURE L9.3
ProModel’s Default
Output Viewer set to
Output Viewer.
FIGURE L9.4
ProModel’s Simulation
Options window set to
run five replications of
the simulation.
FIGURE L9.5
ProModel General
Report Options set to
display the results
from all replications as
well as the average,
standard deviation,
and 90 percent
confidence interval.
FIGURE L9.6
ProModel output report for five replications of the Spuds-n-More simulation.
the sample mean and sample standard deviation. With approximately 90 percent
confidence, the true but unknown mean number of customers processed per day
falls between 59.08 and 65.32 customers. These results convince Mr. Taylor that
the model is a valid representation of the actual restaurant, but he wants to get
a better estimate of the number of customers served per day.
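An interval like this follows directly from the five replication values via the Student t distribution. The sketch below uses made-up daily counts, not the actual values from Figure L9.6, and hard-codes the t critical value for 4 degrees of freedom.

```python
import math

def ci90_n5(samples):
    """90 percent confidence interval for the mean of five replications,
    using t(0.05, 4 df) = 2.132."""
    n = len(samples)
    assert n == 5
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)  # sample variance
    half = 2.132 * math.sqrt(var / n)                      # t * s / sqrt(n)
    return mean - half, mean + half

# Hypothetical daily counts standing in for the Figure L9.6 replications.
low, high = ci90_n5([58, 66, 60, 63, 64])
```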
FIGURE L9.7
Saving a ProModel
output report into
a Microsoft Excel
file format.
FIGURE L9.8
The ProModel output report displayed within a Microsoft Excel spreadsheet.
saved the ProModel output to, click on the VARIABLES sheet tab at the bottom
of the spreadsheet (Figure L9.8), highlight the 100 Current Value observations of
the Processed variable—making sure not to include the average, standard
deviation, 90% C.I. Low, and 90% C.I. High values (the last four values in the
column)—and paste the 100 observations into the Stat::Fit data table (Figure
L9.9).
After the 100 observations are pasted into the Stat::Fit Data Table,
display a histogram of the data. Based on the histogram, the observations
appear somewhat normally distributed (Figure L9.9). Furthermore, Stat::Fit
estimates that the normal distribution with mean µ = 59.30 and standard
deviation σ = 4.08 provides an acceptable fit to the data. Therefore, we
probably could have dropped the word “approximate” when we presented
our confidence interval to Mr. Taylor. Be sure to verify all of this for yourself
using the software.
Note that if you were to repeat this procedure in practice because the
problem requires you to be as precise and as sure as possible in your
conclusions, then you would report your confidence interval based on the larger
number of observations. Never discard precious data.
Before leaving this section, look back at Figure L9.6. The Total Failed col-
umn in the Failed Arrivals section of the output report indicates the number of
customers that arrived to eat at Spuds-n-More but left because the waiting
line
FIGURE L9.9
Stat::Fit analysis of the 100 observations of the number of customers processed per
day by Spuds-n-More.
was full. Mr. Taylor thinks that his proposed expansion plan will allow him to
capture some of the customers he is currently losing. In Lab Chapter 10, we
will add embellishments to the as-is simulation model to reflect Mr. Taylor’s
expansion plan to see if he is right.
Problem Statement
A simulation model of the Green Machine Manufacturing Company
(GMMC) owned and operated by Mr. Robert Vaughn is shown in Figure L9.10.
The interarrival time of jobs to the GMMC is constant at 1.175 minutes. Jobs
require processing by each of the four machines. The processing time for a job
at each green machine is given in Table L9.2.
FIGURE L9.10
The layout of the Green Machine Manufacturing Company simulation model.
FIGURE L9.11
The simulation model of the Green Machine Manufacturing Company.
FIGURE L9.12
Work-in-process (WIP) inventory value history for one replication of the GMMC simulation. (a) WIP value history
without warm-up phase removed. Statistics will be biased low by the transient WIP values. (b) WIP value history
with 100-hour warm-up phase removed. Statistics will not be biased low.
(a) (b)
FIGURE L9.13
ProModel’s Simulation
Options window set
for a single replication
with a 100-hour
warm-up period
followed by a 150-
hour run length.
details on this subject, and exercise 4 in Lab Chapter 11 illustrates how to use
the Welch method implemented in SimRunner.
FIGURE L9.14
SimRunner screen for estimating the end of the warm-up time.
Figure L9.14 was produced by SimRunner by recording the time-average
WIP levels of the GMMC simulation over successive one-hour time periods.
The results from each period were averaged across five replications to
produce the “raw data” plot, which is the more erratic line that appears red on
the computer screen. A 54-period moving average (the smooth line that appears
green on the computer) indicates that the end of the warm-up phase occurs
between the 33rd and 100th periods. Given that we need to avoid underestimating
the warm-up, let’s declare 100 hours as the end of the warm-up time. We feel
much more comfortable basing our estimate of the warm-up time on five
replications.
Notice that SimRunner indicates that at least 10 replications are needed to
estimate the average WIP to within a 7 percent error and a confidence level of
90 percent using a warm-up of 100 periods (hours). You will see how this was
done with SimRunner in Exercise 4 of Section L11.4 in Lab Chapter 11.
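The smoothing at the heart of Welch's procedure is just a centered moving average over the period averages. A minimal sketch, assuming those averages are already collected in a list:

```python
def moving_average(xs, w):
    """Centered moving average with half-width w.

    Near the ends the window shrinks so the output keeps the
    same length as the input.
    """
    out = []
    for i in range(len(xs)):
        k = min(w, i, len(xs) - 1 - i)   # usable half-width at position i
        window = xs[i - k : i + k + 1]
        out.append(sum(window) / len(window))
    return out
```

Plotting moving_average(period_averages, 54) against the raw series reproduces the kind of smooth-versus-erratic comparison shown in Figure L9.14.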
Why did we choose an initial run length of 250 hours to produce the time-
series plot in Figure L9.14? Well, you have to start with something, and we
picked 250 hours. You can rerun the experiment with a longer run length if
the time-series plot, produced by averaging the output from several
replications of the simulation, does not indicate a steady-state condition. Long
runs will help prevent
you from wondering what the time-series plot looks like beyond the point at
which you stopped it. Do you wonder what our plot does beyond 250 hours?
Let’s now direct our attention to answering the question of how long to
run the model past its warm-up time to estimate our steady-state statistic,
mean WIP inventory. We will somewhat arbitrarily pick 100 hours not
because that is equal to the warm-up time but because it will allow ample
time for the simulation events to happen thousands of times. In fact, the 100-
hour duration will allow approximately 5,100 jobs to be processed per
replication, which should give us decently accurate results. How did we
derive the estimate of 5,100 jobs processed per replication? With a 1.175-
minute interarrival time of jobs to the system, 51 jobs arrive per hour (60
minutes/1.175 minutes) to the system. Running the simulation for 100 hours
should result in about 5,100 jobs (100 hours × 51 jobs) exiting the system.
You will want to check the number of Total Exits in the Entity Activity sec-
tion of the ProModel output report that you just produced to verify this.
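The arithmetic behind the 5,100-job estimate, in two lines:

```python
interarrival_min = 1.175
jobs_per_hour = 60 / interarrival_min     # roughly 51 arrivals per hour
jobs_in_100_hours = 100 * jobs_per_hour   # roughly 5,100 jobs per replication
```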
FIGURE L9.15
Specification of
warm-up hours and
run hours in the
Simulation Options
menu for running
replications.
FIGURE L9.16
Ten replications of the Green Machine Manufacturing Company simulation using a 100-hour warm-up
and a 100-hour run length.
------------------------------------------------------------------------------------
General Report
Output from C:\Bowden Files\Word\McGraw 2nd Edition\GreenMachine.MOD
Date: Jul/16/2002 Time: 01:04:42 AM
------------------------------------------------------------------------------------
Scenario : Normal Run
Replication : All
Period : Final Report (100 hr to 200 hr Elapsed: 100 hr)
Warmup Time : 100
Simulation Time : 200 hr
------------------------------------------------------------------------------------
VARIABLES
Average
Variable Total Minutes Minimum Maximum Current Average
Name Changes Per Change Value Value Value Value
-------- ------- ---------- ------- ------- ------- -------
WIP 10202 0.588 13 31 27 24.369 (Rep 1)
WIP 10198 0.588 9 32 29 18.928 (Rep 2)
WIP 10207 0.587 14 33 22 24.765 (Rep 3)
WIP 10214 0.587 10 26 22 18.551 (Rep 4)
WIP 10208 0.587 8 27 25 16.929 (Rep 5)
WIP 10216 0.587 13 27 17 19.470 (Rep 6)
WIP 10205 0.587 8 30 24 19.072 (Rep 7)
WIP 10215 0.587 8 25 14 15.636 (Rep 8)
WIP 10209 0.587 9 26 20 17.412 (Rep 9)
WIP 10209 0.587 11 27 24 18.504 (Rep 10)
WIP 10208.3 0.587 10.3 28.4 22.4 19.364 (Average)
WIP 5.735 0.0 2.311 2.836 4.501 2.973 (Std. Dev.)
WIP 10205 0.587 8.959 26.756 19.790 17.640 (90% C.I. Low)
WIP 10211.6 0.587 11.64 30.044 25.009 21.087 (90% C.I. High)
If we decide to make one single long run and divide it into batch intervals
to estimate the expected WIP inventory, we would select the Batch Mean option
in the Output Reporting section of the Simulation Options menu (Figure L9.17).
A guideline in Chapter 9 for making an initial assignment to the length of time
for each batch interval was to set the batch interval length to the simulation run
time you would use for replications, in this case 100 hours. Based on the guide-
line, the Simulation Options menu would be configured as shown in Figure
L9.17 and would produce the output in Figure L9.18. To get the output report to
display the results of all 10 batch intervals, you specify “<All>” Periods in the
General Report Type settings window (Figure L9.19). We are approximately
90 percent confident that the true but unknown mean WIP inventory is between
18.88 and 23.19 jobs. If we desire a smaller confidence interval, we can increase
the sample size by extending the run length of the simulation in increments of
the batch interval length. For example, we would specify run hours of 1500 to
collect 15 observations of average WIP inventory. Try this and see if the confi-
dence interval becomes smaller. Note that increasing the batch interval length is
also helpful.
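The batch-means interval can be recomputed by hand from the simulation output. The sketch below assumes the post-warm-up observations are already in a list and looks up t critical values only for the two batch counts this lab uses (10 batches, and the 15-batch extension).

```python
import math

def batch_means_ci90(series, n_batches):
    """Split a long output series into equal batches and return a
    90 percent confidence interval built from the batch means."""
    t_crit = {9: 1.833, 14: 1.761}    # t(0.05, df) for 10 and 15 batches
    size = len(series) // n_batches
    means = [sum(series[i * size:(i + 1) * size]) / size
             for i in range(n_batches)]
    m = sum(means) / n_batches
    var = sum((x - m) ** 2 for x in means) / (n_batches - 1)
    half = t_crit[n_batches - 1] * math.sqrt(var / n_batches)
    return m - half, m + half
```

Extending the run by whole batch intervals simply appends more batch means, which shrinks the half-width through both the larger sample size and the smaller t value.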
FIGURE L9.17
Warm-up hours, run
hours, and batch
interval length in the
Simulation Options
menu for running
batch intervals. Note
that the time units are
specified on the batch
interval length.
FIGURE L9.18
Ten batch intervals of the Green Machine Manufacturing Company simulation using a 100-hour warm-up
and a 100-hour batch interval length.
------------------------------------------------------------------------------------
General Report
Output from C:\Bowden Files\Word\McGraw 2nd Edition\GreenMachine.MOD
Date: Jul/16/2002 Time: 01:24:24 AM
------------------------------------------------------------------------------------
Scenario : Normal Run
Replication : 1 of 1
Period : All
Warmup Time : 100 hr
Simulation Time : 1100 hr
------------------------------------------------------------------------------------
VARIABLES
Average
Variable Total Minutes Minimum Maximum Current Average
Name Changes Per Change Value Value Value Value
-------- ------- ---------- ------- ------- ------- -------
WIP 10202 0.588 13 31 27 24.369 (Batch 1)
WIP 10215 0.587 16 29 26 23.082 (Batch 2)
WIP 10216 0.587 9 30 22 17.595 (Batch 3)
WIP 10218 0.587 13 30 16 22.553 (Batch 4)
WIP 10202 0.588 13 33 28 23.881 (Batch 5)
WIP 10215 0.587 17 37 25 27.335 (Batch 6)
WIP 10221 0.587 8 29 18 20.138 (Batch 7)
WIP 10208 0.587 10 26 22 17.786 (Batch 8)
WIP 10214 0.587 9 29 20 16.824 (Batch 9)
WIP 10222 0.586 9 23 12 16.810 (Batch 10)
WIP 10213.3 0.587 11.7 29.7 21.6 21.037 (Average)
WIP 7.103 0.0 3.164 3.743 5.168 3.714 (Std. Dev.)
WIP 10209.2 0.587 9.865 27.530 18.604 18.884 (90% C.I. Low)
WIP 10217.4 0.587 13.534 31.869 24.595 23.190 (90% C.I. High)
538 Part II Labs
FIGURE L9.19 ProModel General Report Options set to display the results from all batch intervals.
FIGURE L9.20
Stat::Fit Autocorrelation plot of the observations collected over 100 batch intervals. Lag-1
autocorrelation is within the –0.20 to +0.20 range.
observations under the Average Value column from the VARIABLES sheet tab of
the Excel spreadsheet into Stat::Fit.
Figure L9.20 illustrates the Stat::Fit results that you will want to verify. The lag-1 autocorrelation value is the first value plotted in Stat::Fit's Autocorrelation of Input Data plot. Note that the plot begins at lag-1 and continues to lag-20. For this range of lag values, the highest autocorrelation is 0.192 and the lowest value is −0.132 (see correlation(0.192, −0.132) at the bottom of the plot). Therefore, we know that the lag-1 autocorrelation is within the −0.20 to +0.20 range recommended in Section 9.6.2 of Chapter 9, which is required before proceeding to the final step of "rebatching" the data into between 10 and 30 larger batches. We have enough data to form 10 batches with a length of 1000 hours (10 batches at 1000 hours each equals 10,000 hours of simulation time, which we just did).
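Stat::Fit computes the autocorrelation for us, but the lag-1 statistic is easy to check by hand. A rough Python sketch of the standard sample autocorrelation at lag k (the function and variable names are ours, not Stat::Fit's):

```python
def autocorrelation(x, k):
    """Sample autocorrelation of series x at lag k."""
    n = len(x)
    mean = sum(x) / n
    num = sum((x[i] - mean) * (x[i + k] - mean) for i in range(n - k))
    den = sum((xi - mean) ** 2 for xi in x)
    return num / den

# An alternating series is strongly negatively correlated at lag 1
series = [1.0, -1.0] * 4
print(autocorrelation(series, 1))   # prints -0.875
```

For batch means, a lag-1 value inside the −0.20 to +0.20 band suggests the batches are approximately independent, which is what the rebatching step requires.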
The results of “rebatching” the data in the Microsoft Excel spreadsheet are
shown in Table L9.3. Note that the first batch mean of 21.037 in Table L9.3 is
the average of the first 10 batch means that we originally collected. The second
batch mean of 19.401 is the average of the next 10 batch means and so on.
TABLE L9.3
21.037 (Batch 1)
19.401 (Batch 2)
21.119 (Batch 3)
20.288 (Batch 4)
20.330 (Batch 5)
20.410 (Batch 6)
22.379 (Batch 7)
22.024 (Batch 8)
19.930 (Batch 9)
22.233 (Batch 10)
20.915 (Average)
1.023 (Std. Dev.)
20.322 (90% C.I. Low)
21.508 (90% C.I. High)
The confidence interval is very narrow due to the long simulation time (1000 hours) of each batch. The owner of GMMC, Mr. Robert Vaughn, should be very pleased with the precision of the time-average WIP inventory estimate.
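The rebatching arithmetic done in the Excel spreadsheet can be sketched as follows. The `rebatch` helper assumes the 100 original batch means are available in a list (only the 10 resulting rebatched means from Table L9.3 are reproduced in this lab), and the confidence interval uses t(0.95, 9) = 1.833 from a standard t table:

```python
import math
import statistics

def rebatch(batch_means, group_size):
    """Average consecutive groups of batch means into larger batches."""
    return [statistics.mean(batch_means[i:i + group_size])
            for i in range(0, len(batch_means), group_size)]

# The 10 rebatched means from Table L9.3 (each is the average of 10
# of the original 100 batch means)
rebatched = [21.037, 19.401, 21.119, 20.288, 20.330,
             20.410, 22.379, 22.024, 19.930, 22.233]

xbar = statistics.mean(rebatched)
s = statistics.stdev(rebatched)
half = 1.833 * s / math.sqrt(len(rebatched))   # t(0.95, df = 9) = 1.833
print(f"90% C.I. = [{xbar - half:.3f}, {xbar + half:.3f}]")
```

The computed interval of roughly 20.322 to 21.508 jobs matches the last two rows of Table L9.3.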
The batch interval method requires a lot of work to get it right. Therefore, if the time to simulate through the warm-up period is relatively short, the replications method should be used. Reserve the batch interval method for simulations that require a very long time to reach steady state.
L9.4 Exercises
1. An average of 100 customers per hour arrive at the Picayune Mutual Bank. It takes a teller an average of two minutes to serve a customer.
Interarrival and service times are exponentially distributed. The bank
currently has four tellers working. Bank manager Rich Gold wants to
compare the following two systems with regard to the average time
customers spend in the bank.
System #1
A separate queue is provided for each teller. Assume that customers
choose the shortest queue when entering the bank, and that customers
cannot jockey between queues (jump to another queue).
System #2
A single queue is provided for customers to wait for the first available
teller.
L A B
10 COMPARING
ALTERNATIVE SYSTEMS
In this lab we see how ProModel is used with some of the statistical methods presented in Chapter 10 to compare alternative designs of a system with the goal of identifying the superior system relative to some performance measure. We will also learn how to program ProModel to use common random numbers (CRN) to drive the simulation models that represent alternative designs for the systems being compared. The use of CRN allows us to run the opposing simulations under identical experimental conditions to facilitate an objective evaluation of the systems.
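The idea behind CRN can be sketched outside ProModel with Python's random module: give each stochastic element its own dedicated stream, seeded identically across alternatives, so that both designs see exactly the same arrivals and the same service-time draws. The two-teller illustration below is our own toy example, not from the lab:

```python
import random

def simulate(num_tellers, seed=42):
    """Toy illustration: per-teller workload over 100 customers.
    A dedicated, identically seeded stream per stochastic element means
    every alternative receives exactly the same random inputs."""
    arrival_stream = random.Random(seed)       # stream 1: interarrival times
    service_stream = random.Random(seed + 1)   # stream 2: service times
    clock, busy = 0.0, 0.0
    for _ in range(100):
        clock += arrival_stream.expovariate(100 / 60)   # ~100 arrivals/hour
        busy += service_stream.expovariate(1 / 2.0)     # 2-minute mean service
    return busy / num_tellers   # rough per-teller workload

# Identical seeds -> identical random inputs for both alternatives, so any
# difference in output is due to the design, not to sampling noise.
print(simulate(4) - simulate(5))
```

Because both calls consume the same random draws, the difference between the alternatives is free of the extra variability that independent streams would introduce, which is the whole point of CRN.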
The Bonferroni approach is used for comparing from three to about five alternatives.
Either the Welch method or paired-t method is used with the Bonferroni
approach to make the comparisons. The Bonferroni approach does not
identify the best alternative but rather identifies which pairs of alternatives
perform differently. This information is helpful for identifying the better
alternatives, if in fact there is a significant difference in the performance of the
competing alternatives.
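The Bonferroni adjustment itself is simple arithmetic: to hold the overall error level at α across m pairwise comparisons, each individual confidence interval is built at level α/m. A quick illustrative sketch (the α = 0.06 figure below is our example, not from the lab):

```python
def bonferroni_levels(overall_alpha, num_comparisons):
    """Split the overall significance level across pairwise comparisons."""
    individual_alpha = overall_alpha / num_comparisons
    return individual_alpha, 1 - individual_alpha

# Three alternatives -> 3 pairwise comparisons; overall alpha of 0.06
alpha_i, conf_i = bonferroni_levels(0.06, 3)
print(alpha_i, conf_i)
```

Each of the three pairwise intervals would then be constructed at 98 percent confidence so that all three statements hold jointly with at least 94 percent confidence.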
Chapter 10 recommends that the technique of analysis of variance (ANOVA) be used when more than about five alternative system designs are being compared. The first step in ANOVA is to evaluate the hypothesis that the mean performances of the systems are equal against the alternative hypothesis that at least one pair of the means is different. To figure out which means differ from which other ones, a multiple comparison test is used. There are several multiple comparison tests available, and Chapter 10 presents Fisher's protected least significant difference (LSD) test.
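The first ANOVA step, testing whether all means are equal, reduces to comparing between-group and within-group variation. A compact sketch of the one-way F statistic (the textbook formula, with our own variable names and made-up sample data):

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over lists of observations."""
    k = len(groups)                        # number of alternatives
    n = sum(len(g) for g in groups)        # total observations
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    ms_between = ss_between / (k - 1)      # between-group mean square
    ms_within = ss_within / (n - k)        # within-group mean square
    return ms_between / ms_within

print(one_way_anova_f([[1, 2, 3], [2, 3, 4], [3, 4, 5]]))   # prints 3.0
```

A large F relative to the appropriate F-distribution critical value rejects the hypothesis of equal means, at which point a multiple comparison test such as protected LSD identifies which pairs actually differ.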
The decision to use paired-t confidence intervals, Welch confidence intervals, the Bonferroni approach, or ANOVA with protected LSD depends on the statistical requirements placed on the data (observations from the simulations) by the procedure. In fact, simulation observations produced using common random numbers (CRN) cannot be used with the Welch confidence interval method or the technique of ANOVA. Please see Chapter 10 for details.
In this lab we will use the paired-t confidence interval method because it places the fewest statistical requirements on the observations collected from the simulation model. Therefore, we are less likely to get into trouble with faulty assumptions about our data (observations) when making decisions based on paired-t confidence intervals. Furthermore, one of the main purposes of this lab is to demonstrate how the CRN technique is implemented using ProModel. Out of the comparison procedures of Chapter 10, only the ones based on paired-t confidence intervals can be used in conjunction with the CRN technique.
Customers already in the waiting area by closing time are served before the kitchen shuts down for the day.
Mr. Taylor is thinking about expanding into the small newspaper stand next
to Spuds-n-More, which has been abandoned. This would allow him to add a
second window to serve his customers. The first window would be used for
taking orders and collecting payments, and the second window would be used
for filling orders. Before investing in the new space, however, Mr. Taylor needs
to convince the major stockholder of Spuds-n-More, Pritchard Enterprises, that
the investment would allow the restaurant to serve more customers per day.
A baseline (as-is) model of the current restaurant configuration was developed and validated in Section L9.2 of Lab Chapter 9. See Figures L9.1 and L9.2 for the layout of the baseline model and the printout of the ProModel model. Our task is to build a model of the proposed restaurant with a second window to determine if the proposed design would serve more customers per day than does the current restaurant. Let's call the baseline model Spuds-n-More1 and the proposed model Spuds-n-More2. The model is included on the CD accompanying the book under file name Lab 10_2 Spuds-n-More2.MOD.
The ProModel simulation layout of Spuds-n-More2 is shown in Figure L10.1, and the ProModel printout is given in Figure L10.2. After customers wait in the order queue (location Order_Q) for their turn to order, they move to the first window to place their orders and pay the order clerk (location Order_Clerk). Next customers proceed to the pickup queue (location Pickup_Q) to wait for their turn to be served by the pickup clerk (Pickup_Clerk) at the second window. There is enough space for one customer in the pickup queue. The pickup clerk processes one order at a time.
FIGURE L10.1 Layout of Spuds-n-More2 simulation model.
FIGURE L10.2 The Spuds-n-More2 simulation model.
As the Spuds-n-More2 model was being built, Mr. Taylor determined that for additional expense he could have a carpenter space the two customer service windows far enough apart to accommodate up to three customers in the order pickup queue. Therefore, he requested that a third alternative design with an order pickup queue capacity of three be simulated. To do this, we only need to change the capacity of the pickup queue from one to three in our Spuds-n-More2 model. We shall call this model Spuds-n-More3. Note that for our Spuds-n-More2 and Spuds-n-More3 models, we assigned a length of 25 feet to the Order_Q, a length of 12 feet to the Pickup_Q, and a Customer entity travel speed of 150 feet per minute. These values affect the entity's travel time in the queues (time for a customer to walk to the end of the queues) and are required to match the results presented in this lab.
FIGURE L10.3
Unique random number streams assigned to the Spuds-n-More model.
FIGURE L10.4 ProModel's Simulation Options set to run 25 replications using Common Random Numbers.
TABLE L10.1 Comparison of the Three Restaurant Designs Based on Paired Differences
Rep. (j)  Spuds-n-More1  Spuds-n-More2  Spuds-n-More3  (1)−(2)  (1)−(3)  (2)−(3)
1 60 72 75 −12 −15 −3
2 57 79 81 −22 −24 −2
3 58 84 88 −26 −30 −4
4 53 69 72 −16 −19 −3
5 54 67 69 −13 −15 −2
6 56 72 74 −16 −18 −2
7 57 70 71 −13 −14 −1
8 55 65 65 −10 −10 0
9 61 84 84 −23 −23 0
10 60 76 76 −16 −16 0
11 56 66 70 −10 −14 −4
12 66 87 90 −21 −24 −3
13 64 77 79 −13 −15 −2
14 58 66 66 −8 −8 0
15 65 85 89 −20 −24 −4
16 56 78 83 −22 −27 −5
17 60 72 72 −12 −12 0
18 55 75 77 −20 −22 −2
19 57 78 78 −21 −21 0
20 50 69 70 −19 −20 −1
21 59 74 77 −15 −18 −3
22 58 73 76 −15 −18 −3
23 58 71 76 −13 −18 −5
24 56 70 72 −14 −16 −2
25 53 68 69 −15 −16 −1
Sample mean x̄(i−i′), for all i and i′ between 1 and 3, with i < i′:  −16.20  −18.28  −2.08
Sample standard dev s(i−i′), for all i and i′ between 1 and 3, with i < i′:  4.66  5.23  1.61
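The summary rows of Table L10.1 follow directly from the 25 paired differences. A hedged Python sketch for the Spuds-n-More1 versus Spuds-n-More2 column (the critical value t(0.975, 24) = 2.064 is from a standard t table; the lab may use a different confidence level):

```python
import math
import statistics

# Paired differences (1)-(2) from Table L10.1, replications 1-25
diffs = [-12, -22, -26, -16, -13, -16, -13, -10, -23, -16, -10, -21, -13,
         -8, -20, -22, -12, -20, -21, -19, -15, -15, -13, -14, -15]

n = len(diffs)
dbar = statistics.mean(diffs)     # sample mean difference
s = statistics.stdev(diffs)       # sample standard deviation
half = 2.064 * s / math.sqrt(n)   # t(0.975, df = 24) = 2.064

print(f"mean difference = {dbar:.2f}, std dev = {s:.2f}")
print(f"95% C.I. = [{dbar - half:.2f}, {dbar + half:.2f}]")
```

The mean of −16.20 and standard deviation of 4.66 match the table's summary rows, and the resulting interval lies entirely below zero, indicating that Spuds-n-More2 serves significantly more customers per day than Spuds-n-More1 at this confidence level.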
L10.5 Exercises
1. Without using the Common Random Numbers (CRN) technique,
compare the Spuds-n-More1, Spuds-n-More2, and Spuds-n-More3
models with respect to the average number of customers served per day.
To avoid the use of CRN, assign ProModel Stream 1 to each stochastic element in the model and do not select the CRN option from the Simulation Options menu.
L A B
11 SIMULATION
OPTIMIZATION
WITH SIMRUNNER
Step 1. Create, verify, and validate a simulation model using ProModel, MedModel, or ServiceModel. Next, create a macro and include it in the run-time interface for each input factor that is believed to influence the output of the simulation model. The input factors are the variables for which you are seeking optimal values, such as the number of nurses assigned to a shift or the number of machines to be placed in a work cell. Note that SimRunner can test only those factors identified as macros in ProModel, MedModel, or ServiceModel.
FIGURE L11.1 Relationship between SimRunner's optimization algorithms and ProModel simulation model. (Diagram labels: Optimization Algorithms, Simulation, Input Factors.)
Step 2. Create a new SimRunner project and select the input factors you wish to test. For each input factor, define its numeric data type (integer or real) and its lower bound (lowest possible value) and upper bound (highest possible value). SimRunner will generate solutions by varying the values of the input factors according to their data type, lower bounds, and upper bounds. Care should be taken when defining the lower and upper bounds of the input factors to ensure that a combination of values will not be created that leads to a solution that was not envisioned when the model was built.
Step 4. Select the optimization profile and begin the search by starting the optimization algorithms. The optimization profile sets the size of the evolutionary algorithm's population. The population size defines the number of solutions evaluated by the algorithm during each generation of its search. SimRunner provides three population sizes: small, medium, and large. The small population size corresponds to the aggressive optimization profile, the medium population size corresponds to the moderate optimization profile, and the large population size corresponds to the cautious profile. In general, as the population size is increased, the likelihood that SimRunner will find the optimal solution increases, as does the time required to conduct the search (Figure L11.2).
FIGURE L11.2 Generally, the larger the size of the population, the better the result. (Plot: maximum f(x) over time for the Cautious, Moderate, and Aggressive profiles.)
Step 5. Study the top solutions found by SimRunner and pick the best.
SimRunner will show the user the data from all experiments conducted and
will rank each solution based on its utility, as measured by the objective
function. Remember that the value of an objective function is a random variable
because it is produced from the output of a stochastic simulation model.
Therefore, be sure that each experiment is replicated an appropriate number of
times during the optimization.
Another point to keep in mind is that the list of solutions presented by
SimRunner represents a rich source of information about the behavior, or
response surface, of the simulation model. SimRunner can sort and graph the
solutions many different ways to help you interpret the “meaning” of the data.
FIGURE L11.3 The ideal production system for Prosperity Company. (Diagram labels: Ideal Production System, Milling Machine.)
FIGURE L11.4 ProModel model of Prosperity Company.
FIGURE L11.5 Relationship between the mean processing time and the mean number of entities waiting in the queue. (Response surface plot: No. in Queue versus Process Time from 0 to 3.)
FIGURE L11.6
ProModel macro editor.
mean processing time of the milling machine to zero minutes, which of course is a theoretical value. For complex systems, you would not normally know the answer in advance, but it will be fun to see how SimRunner moves through this known response surface as it seeks the optimal solution.
The first step in the five-step process for setting up a SimRunner project is to define the macros and their Run-Time Interface (RTI) in the simulation model. In addition to defining ProcessTime as a macro (Figure L11.6), we shall also define the time between arrivals (TBA) of plates to the system as a macro to be used later in the second scenario that management has asked us to look into. The identification for this macro is entered as TBA. Be sure to set each macro's "Text . . ." value as shown in Figure L11.6. The Text value is the default value of the macro. In this case, the default value for ProcessTime is 2 and the default value of TBA is 3. If you have difficulty creating the macros or their RTI, please see Lab Chapter 14.
Next we activate SimRunner from ProModel's Simulation menu. SimRunner opens in the Setup Project mode (Figure L11.7). The first step in the Setup Project module is to select a model to optimize or to select an existing optimization project (the results of a prior SimRunner session). For this scenario, we are optimizing a model for the first time. Launching SimRunner from the ProModel Simulation menu will automatically load the model you are working on into SimRunner. See the model file name loaded in the box under "Create new project—Select model" in Figure L11.7. Note that the authors named their model ProsperityCo.Mod.
FIGURE L11.7 The opening SimRunner screen.
With the model loaded, the input factors and objective function are defined
to complete the Setup Project module. Before doing so, however, let’s take a
moment to review SimRunner’s features and user interface. After completing the
Setup Project module, you would next run either the Analyze Model module or
the Optimize Model module. The Analyze Model module helps you determine
the number of replications to run to estimate the expected value of
performance measures and/or to determine the end of a model’s warm-up
period using the techniques described in Chapter 9. The Optimize Model
module automatically seeks the values for the input factors that optimize the
objective function using the techniques described in Chapter 11. You can
navigate through SimRunner by selecting items from the menus across the top
of the window and along the left side of the window or by clicking the <Previous or Next> buttons near the bottom right corner of the window.
FIGURE L11.8 Single term objective function setup for scenario one.
Clicking the Next> button takes you to the section for defining the objective function. The objective function, illustrated in Figure L11.8, indicates the desire to minimize the average contents (in this case, plates) that wait in the location called InputPalletQue. The InputPalletQue is a location category. Therefore, to enter this objective, we select Location from the Response Category list under Performance Measures by clicking on Location. This will cause SimRunner to display the list of location statistics in the Response Statistic area. Click on the response statistic InputPalletQue—AverageContents and then press the button below with the down arrows. This adds the statistic to the list of response statistics selected for the objective function. The default objective for each response statistic is maximize. In this example, however, we wish to minimize the average contents of the input pallet queue. Therefore, click on Location:Max:1*InputPalletQue—AverageContents, which appears under the area labeled Response Statistics Selected for the Objective Function; change the objective for the response statistic to Min; and click the Update button. Note that we accepted the default value of one for the weight of the factor. Please refer to the SimRunner Users Guide if you have difficulty performing this step.
Clicking the Next> button takes you to the section for defining the input factors. The list of possible input factors (macros) to optimize is displayed at the top of this section under Macros Available for Input (Figure L11.9). The input factor to be optimized in this scenario is the mean processing time of the milling machine, ProcessTime. Select this macro by clicking on it and then clicking the button below with the down arrows. This moves the ProcessTime macro to the list of Macros Selected as Input Factors (Figure L11.9). Next, indicate that you want to consider integer values between one and five for the ProcessTime macro. Ignore the default value of 2.00. If you wish to change the data type or lower and upper bounds, click on the input factor, make the desired changes, and click the Update button. Please note that an input factor is designated as an integer when the lower and upper bounds appear without a decimal point in the Macros properties section. When complete, SimRunner should look like Figure L11.9.
FIGURE L11.9 Single input factor setup for scenario one.
From here, you click the Next> button until you enter the Optimize Model module, or click on the Optimize Model module button near the top right corner of the window to go directly to it. The first step here is to specify Optimization options (Figure L11.10). Select the Aggressive Optimization Profile. Accept the default value of 0.01 for Convergence Percentage, the default of one for Min Generations, and the default of 99999 for Max Generations.
The convergence percentage, minimum number of generations, and maximum number of generations control how long SimRunner's optimization algorithms will run experiments before stopping. With each experiment, SimRunner records the objective function's value for a solution in the population. The evaluation of all solutions in the population marks the completion of a generation. The maximum number of generations specifies the most generations SimRunner will use to conduct its search for the optimal solution. The minimum number of generations specifies the fewest generations SimRunner will use to conduct its search for the optimal solution. At the end of a generation, SimRunner computes the population's average objective function value and compares it with the population's best (highest) objective function value. When the best and the average are at or near the same value at the end of a generation, all the solutions in the population are beginning to look alike (their input factors are converging to the same setting). It is difficult for the algorithms to locate a better solution to the problem once the population of solutions has converged. Therefore, the optimization algorithm's search is usually terminated at this point.
FIGURE L11.10 Optimization and simulation options.
The convergence percentage controls how close the best and the average
must be to each other before the optimization stops. A convergence percentage
near zero means that the average and the best must be nearly equal before the
optimization stops. A high percentage value will stop the search early, while a
very small percentage value will run the optimization longer. High values for the
maximum number of generations allow SimRunner to run until it satisfies the
convergence percentage. If you want to force SimRunner to continue searching
after the convergence percentage is satisfied, specify very high values for both
the minimum number of generations and maximum number of generations.
Generally, the best approach is to accept the default values shown in Figure L11.10 for the convergence percentage, maximum generations, and minimum generations.
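The interplay of population, generations, and the convergence test can be sketched with a toy population search. This is our own illustrative code, not SimRunner's actual algorithm, and it is deliberately deterministic (SimRunner's real evolutionary algorithms are randomized):

```python
def optimize(objective, lower, upper, pop_size=5, conv_pct=0.01,
             min_gen=1, max_gen=50):
    """Toy population search over an integer input factor."""
    # spread the initial population across the factor's range
    pop = [lower + round(i * (upper - lower) / (pop_size - 1))
           for i in range(pop_size)]
    for gen in range(1, max_gen + 1):
        scores = [objective(x) for x in pop]   # one "experiment" per solution
        best = max(scores)
        avg = sum(scores) / len(scores)
        # a generation ends once every solution is evaluated; stop when the
        # best and average objective values agree to within conv_pct
        if gen >= min_gen and abs(best - avg) <= conv_pct * max(abs(best), 1e-9):
            break
        ranked = [x for _, x in sorted(zip(scores, pop), reverse=True)]
        champ = ranked[0]
        # keep the better solutions; refill with neighbors of the champion
        pop = ranked[:3] + [max(lower, champ - 1), min(upper, champ + 1)]
    scores = [objective(x) for x in pop]
    return pop[scores.index(max(scores))]

# Minimize (ProcessTime - 1)^2 by maximizing its negative, as SimRunner does
print(optimize(lambda x: -(x - 1) ** 2, 1, 5))   # prints 1
```

If the population never converges, the generation cap plays the role of SimRunner's maximum number of generations and ends the search.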
After you specify the optimization options, set the simulation options. Typically you will want to disable the animation as shown in Figure L11.10 to make the simulation run faster. Usually you will want to run more than one replication to estimate the expected value of the objective function for a solution in the population. When more than one replication is specified, SimRunner will display the objective function's confidence interval for each solution it evaluates. Note that the confidence level for the confidence interval is specified here. Confidence intervals can help you to make better decisions at the end of an optimization as discussed in Section 11.6.2 of Chapter 11. In this case, however, use one replication to speed things along so that you can continue learning other features of the SimRunner software. As an exercise, you should revisit the problem and determine an acceptable number of replications to run per experiment. As indicated in Figure L11.10, set the simulation warm-up time to 50 hours and the simulation run time to 250 hours. You are now ready to have SimRunner seek the optimal solution to the problem.
With these operations completed, click the Next> button (Figure L11.10) and then click the Run button on the Optimize Model module (Figure L11.11) to start the optimization. For this scenario, SimRunner runs all possible experiments, locating the optimum processing time of one minute on its third experiment. The Experimental Results table shown in Figure L11.11 records the history of SimRunner's search. The first solution SimRunner evaluated called for a mean processing time at the milling machine of three minutes. The second solution evaluated assigned a processing time of two minutes. These sequence numbers are recorded in the Experiment column, and the values for the processing time (ProcessTime) input factor are recorded in the ProcessTime column. The value of the term used to define the objective function (minimize the mean number of plates waiting in the input pallet queue) is recorded in the InputPalletQue:AverageContents column. This value is taken from the output report generated at the end of a simulation. Therefore, for the third experiment, we can see that setting the ProcessTime macro equal to one results in an average of 0.162 plates waiting in the input pallet queue. If you were to conduct this experiment manually with ProModel, you would set the ProcessTime macro to one, run the simulation, display output results at the end of the run, and read the average contents for the InputPalletQue location from the report. You may want to verify this as an exercise.
FIGURE L11.11 Experimental results table for scenario one.
Because the objective function was to minimize the mean number of plates waiting in the input pallet queue, the same values from the InputPalletQue:AverageContents column also appear in the Objective Function column. However, notice that the values in the Objective Function column are preceded by a negative sign (Figure L11.11). This has to do with the way SimRunner treats a minimization objective. SimRunner's optimization algorithms view all problems as maximization problems. Therefore, if we want to minimize a term called Contents in an objective function, SimRunner multiplies the term by a negative one {(−1)Contents}. Thus SimRunner seeks the minimal value by seeking the maximum negative value. Figure L11.12 illustrates this for the ideal production system's response surface.
FIGURE L11.12 SimRunner's process for converting minimization problems to maximization problems. (Plots of 1 × Contents and −1 × Contents versus Process Time from 0 to 3.)
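The sign flip is worth internalizing: minimizing a response is identical to maximizing its negative, so a maximizing optimizer handles both cases. A two-line sketch (only the 0.162 figure for ProcessTime 1 appears in the lab; the contents values for ProcessTime 2 and 3 below are hypothetical placeholders):

```python
# average contents of InputPalletQue by ProcessTime; the value for 1 is
# from the lab, the values for 2 and 3 are hypothetical placeholders
contents = {1: 0.162, 2: 0.879, 3: 2.140}

best_min = min(contents, key=lambda x: contents[x])    # direct minimum
best_max = max(contents, key=lambda x: -contents[x])   # maximize -1 * term
print(best_min, best_max)   # both report ProcessTime 1
```

Whichever direction you compute, the same input factor value wins, which is why SimRunner can report minimization results through a maximization engine.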
Figure L11.13 illustrates SimRunner’s Performance Measures Plot for this
optimization project. The darker colored line (which appears red on the
computer screen) at the top of the Performance Measures Plot represents the
best value of the objective function found by SimRunner as it seeks the
optimum. The lighter colored line (which appears green on the computer screen)
represents the value of the objective function for all of the solutions that
SimRunner tried.
The last menu item of the Optimize Model module is the Response Plot
(Figure L11.11), which is a plot of the model’s output response surface based on
the solutions evaluated during the search. We will skip this feature for now and
cover it at the end of the lab chapter.
FIGURE L11.13 SimRunner's Performance Plot indicates the progress of the optimization for scenario one.
FIGURE L11.14
Multiterm objective function for scenario two.
FIGURE L11.15
Multiple input factors setup for scenario two.
FIGURE L11.16
Experimental results table for scenario two.
per experiment. (Remember, you will want to run multiple replications on real applications.) Also, set the simulation run hours to 250 and the warm-up hours to 50 for now.
At the conclusion of the optimization, SimRunner will have run four generations as it conducted 23 experiments (Figure L11.16). What values do you recommend to management for TBA and ProcessTime?
Explore how sensitive the solutions listed in the Experimental Results table for this project are to changes in the weight assigned to the maximum contents statistic. Change the weight of this second term in the objective function from 100 to 50. To do this, go back to the Define Objectives section of the Setup Project module and update the weight assigned to the InputPalletQue—MaximumContents response statistic from 100 to 50. Upon doing this, SimRunner warns you that the action will clear the optimization data that you just created. You can save the optimization project with the File Save option if you wish to keep the results from the original optimization. For now, do not worry about saving the data and click the Yes button below the warning message. Now rerun the optimization and study the result (Figure L11.17). Notice that a different solution is reported as optimum for the new objective function. Running a set of preliminary experiments with SimRunner is a good way to help fine-tune the weights assigned to terms in an objective function. Additionally, you may decide to delete terms or add additional ones to better express your desires. Once the objective function takes its final form, rerun the optimization with the proper number of replications.
FIGURE L11.17 Experimental results table for scenario two with modified objective function.
For this application scenario, the managers of the ideal production system
have specified that the mean time to process gears through the system should
range between four and seven minutes. This time includes the time a plate waits
in the input pallet queue plus the machining time at the mill. Recall that we built
the model with a single entity type, named Gear, to represent both plates and
gears. Therefore, the statistic of interest is the average time that the gear entity is
in the system. Our task is to determine values for the input factors ProcessTime
and TBA that satisfy management’s objective.
The target range objective function is represented in SimRunner as shown in Figure L11.18. Develop a SimRunner project using this objective function to seek the optimal values for the input factors (macros) TBA and ProcessTime. Specify that the input factors are integers between one and five, and use the aggressive optimization profile with the convergence percentage set to 0.01, maximum generations equal to 99999, and minimum generations equal to one.
FIGURE L11.18 Target range objective function setup for scenario three.
To save time, set the number of replications per experiment to one. (Remember, you will want to run multiple replications on real applications.) Also, set the simulation run hours to 250 and the warm-up hours to 50 and run the optimization. Notice that only the solutions producing a mean time in the system of between four and seven minutes for the gear received a nonzero value for the objective function (Figure L11.19). What values do you recommend to management for TBA and ProcessTime?
FIGURE L11.19 Experimental results table with Performance Measures Plot for scenario three.
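A target range objective rewards a response only when it falls inside the desired band. A minimal sketch of the idea (our own formulation; SimRunner's internal scoring may differ):

```python
def target_range_score(response, low=4.0, high=7.0):
    """Nonzero objective value only when the response is inside [low, high]."""
    return 1.0 if low <= response <= high else 0.0

# mean time-in-system values for four hypothetical solutions
for minutes in (3.2, 4.5, 6.9, 7.4):
    print(minutes, target_range_score(minutes))
```

Only the 4.5- and 6.9-minute solutions score nonzero, mirroring how SimRunner's experimental results single out solutions whose mean time in system falls between four and seven minutes.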
Now plot the solutions SimRunner presented in the Experimental Results table by selecting the Response Plot button on the Optimize Model module (Figure L11.19). Select the independent variables as shown in Figure L11.20 and click the Update Chart button. The graph should appear similar to the one in Figure L11.20. The plot gives you an idea of the response surface for this objective function based on the solutions that were evaluated by SimRunner. Click the Edit Chart button to access the 3D graph controls to format the plot and to reposition it for different views of the response surface.
FIGURE L11.20
Surface response plot for scenario three.
L11.3 Conclusions
Sometimes it is useful to conduct a preliminary optimization project using only one replication to help you set up the project. However, you should rarely, if ever, make decisions based on an optimization project that used only one replication per experiment. Therefore, you will generally conduct your final project using multiple replications. In fact, SimRunner displays a confidence interval about the objective function when experiments are replicated more than once. Confidence intervals indicate how accurate the estimate of the expected value of the objective function is and can help you make better decisions, as noted in Section 11.6.2 of Chapter 11.
Even though it is easy to use SimRunner, do not fall into the trap of letting
SimRunner, or any other optimizer, become the decision maker. Study the top
solutions found by SimRunner as you might study the performance records of
different cars for a possible purchase. Kick their tires, look under their hoods,
and drive them around the block before buying. Always remember that the
optimizer is not the decision maker. SimRunner can only suggest a possible
course of action. It is your responsibility to make the final decision.
L11.4 Exercises
Simulation Optimization Exercises
1. Rerun the optimization project presented in Section L11.2.1, setting the
number of replications to five. How do the results differ from the original
results?
2. Conduct an optimization project on the buffer allocation problem
presented in Section 11.6 of Chapter 11. The model’s file name is
Lab 11_4 BufferOpt Ch11.Mod and is included on the CD
accompanying the textbook. To get your results to appear as shown in
Figure 11.5 of Chapter 11, enter Buffer3Cap as the first input factor,
Buffer2Cap as the second input factor, and Buffer1Cap as the third input
factor. For each input factor, the lower bound is one and the upper bound
is nine. The objective is to maximize profit. Profit is computed in the
model’s termination logic by
Profit = (10*Throughput) - (1000*(Buffer1Cap + Buffer2Cap + Buffer3Cap))
Figure L11.21 is a printout of the model. See Section 11.6.2 of Chapter
11 for additional details. Use the Aggressive optimization profile and
set the number of replications per experiment to 10. Specify a warm-up
time of 240 hours, a run time of 720 hours, and a confidence level of
95 percent. Note that the student version of SimRunner will halt at
25 experiments, which will be before the search is completed. However,
it will provide the data necessary for answering these questions:
a. How do the results differ from those presented in Chapter 11 when
only five replications were run per experiment?
b. Are the half-widths of the confidence intervals narrower?
c. Do you think that the better estimates obtained by using 10
replications will make it more likely that SimRunner will find the true
optimal solution?
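The objective in Exercise 2 can be written directly as a function of the decision variables; SimRunner maximizes this quantity over the one-to-nine capacity ranges. A sketch, where the throughput value is an illustrative stand-in for the model's simulated output:

```python
def profit(throughput: float, buf1: int, buf2: int, buf3: int) -> float:
    """Objective from the exercise: revenue on throughput minus buffer cost."""
    return 10 * throughput - 1000 * (buf1 + buf2 + buf3)

# Illustrative: 1500 parts of throughput with buffer capacities 4, 6, 2.
print(profit(1500, 4, 6, 2))  # 3000
```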
3. In Exercise 4 of Lab Section L10.5, you increased the amount of coal
delivered to the railroad by the DumpOnMe facility by adding more
dump trucks to the system. Your solution received high praise from
everyone but the lead engineer at the facility. He is concerned about the
maintenance needs for the scale because it is now consistently operated
in excess of 90 percent. A breakdown of the scale will incur substantial
repair costs and loss of profit due to reduced coal deliveries. He wants
to know the number of trucks needed at the facility to achieve a target
scale utilization of between 70 percent and 75 percent. This will allow
time for proper preventive maintenance on the scale. Add a macro to
the
simulation model to control the number of dump trucks circulating in the
system. Use the macro in the Arrivals Table to specify the number of
dump trucks that are placed into the system at the start of each
simulation. In SimRunner, select the macro as an input factor and assign
FIGURE L11.21
Buffer allocation model from Chapter 11 (Section 11.6.2).
FIGURE L11.22
SimRunner parameters for the GMMC warm-up example.
L A B
12 INTERMEDIATE MODELING CONCEPTS
All truths are easy to understand once they are discovered; the point is to
discover them.
—Galileo Galilei
L12.1 Attributes
Attributes can be defined for entities or for locations. Attributes are
placeholders similar to variables but are attached to specific entities or
locations and usually contain information about that entity or location. Attributes are changed and assigned when an entity executes the line of logic that contains an operator, much like the way variables work. Some examples of
attributes are part type, customer number, and time of arrival of an entity, as
well as length, weight, volume, or some other characteristic of an entity.
Customer Type    Haircut Time (minutes): Mean    Standard Deviation
Children         8                               2
Women            12                              3
Men              10                              2
Lab 12 Intermediate Modeling Concepts 581
FIGURE L12.2
Process and routing tables for Fantastic Dan.
FIGURE L12.3
Arrival of customers at Fantastic Dan.
FIGURE L12.4
Simulation model for Fantastic Dan.
FIGURE L12.5
Setting the attribute Time_In and logging the cycle time.
FIGURE L12.6
The minimum,
maximum, and
average cycle times
for various
customers.
FIGURE L12.8
Keeping track of machined_qty and probabilistic routings at the Inspect location.
FIGURE L12.9
Simulation model for Widgets-R-Us.
after inspection. Figure L12.9 shows the complete simulation model with counters added for keeping track of the number of widgets reworked and the number of widgets shipped.
Problem Statement
Poly Casting Inc. (Lab 7, Section L7.4) decides to merge with El Segundo Composites (Lab 7, Section L7.6.1). The new company is named El Segundo
Castings N’ Composites. Merge the model for Section L7.4 with the model for Section L7.6.1. All the finished products—castings as well as composites—are now sent to the shipping queue and shipping clerk. The model in Section L7.4 is shown in Figure L12.10. The complete simulation model, after the model for Section L7.6.1 is merged, is shown in Figure L12.11. After merging, make the necessary modifications in the process and routing tables.
We will make suitable modifications in the Processing module to reflect these changes (Figure L12.12). Also, the two original variables in Section L7.4 (WIP and PROD_QTY) are deleted and four variables are added: WIPCasting, WIPComposite, PROD_QTY_Casting, and PROD_QTY_Composite.
FIGURE L12.10
The layout of the
simulation model for
Section L7.4.
FIGURE L12.11
Merging the models from Section L7.4 and Section L7.6.1.
FIGURE L12.12
Changes made to the process table after merging.
FIGURE L12.13
The logic for preventive maintenance at Widgets-R-Us Manufacturing Inc.
FIGURE L12.14
Processes and routings at Widgets-R-Us Manufacturing Inc.
FIGURE L12.15
Complete simulation model for Widgets-R-Us Manufacturing Inc.
routings are shown in Figure L12.14. Figure L12.15 shows the complete simulation model.
Problem Statement
The turning center in this machine shop (Figure L12.16) has a time to failure
(TTF) distribution that is exponential with a mean of 10 minutes. The repair
time (TTR) is also distributed exponentially with a mean of 10 minutes.
This model shows how to get ProModel to implement downtimes that use time to failure (TTF) rather than time between failures (TBF). In practice, you most likely will want to use TTF because that is how data will likely be available to you, assuming you have unexpected failures. If you have regularly scheduled downtimes, it may make more sense to use TBF. In this example, the theoretical percentage of uptime is MTTF/(MTTF + MTTR), where M indicates a mean value. The first time to failure and time to repair are set in the variable initialization section (Figure L12.17). Others are set in the downtime logic (Figure L12.18).
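As a quick check of the uptime formula above: with MTTF = MTTR = 10 minutes, the turning center should be up about half the time. A short sketch, first computing the theoretical value and then confirming it by sampling exponential failure and repair times:

```python
import random

# Theoretical uptime for a failure/repair cycle: MTTF / (MTTF + MTTR).
mttf = 10.0  # mean time to failure (minutes)
mttr = 10.0  # mean time to repair (minutes)

uptime = mttf / (mttf + mttr)
print(f"Theoretical uptime: {uptime:.0%}")  # Theoretical uptime: 50%

# Monte Carlo confirmation: accumulate alternating up and down periods.
random.seed(42)
up = down = 0.0
for _ in range(100_000):
    up += random.expovariate(1 / mttf)
    down += random.expovariate(1 / mttr)
print(f"Simulated uptime: {up / (up + down):.3f}")  # close to 0.500
```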
The processing and routing tables are shown in Figure L12.19. Run this model for about 1000 hours, then view the batch mean statistics for downtime by picking “averaged” for the period (Figure L12.20) when the output analyzer (classical) comes up. The batch mean statistics for downtime for the turning center are shown in Figure L12.21. (This problem was contributed by Dr. Stephen Chick, University of Michigan, Ann Arbor.)
FIGURE L12.16
Layout of the machine
shop—modeling
breakdown with TTF
and TTR.
FIGURE L12.17
Variable
initializations.
FIGURE L12.18
Clock downtime logic.
FIGURE L12.19
Process and routing tables.
FIGURE L12.20
Average of all the
batches in the
Classical ProModel
Output Viewer.
FIGURE L12.21
Batch mean statistics for downtime.
Problem Statement
Orders for two types of widgets (widget A and widget B) are received by
Widgets-R-Us Manufacturing Inc. Widget A orders arrive on average every
5 minutes (exponentially distributed), while widget B orders arrive on average
every 10 minutes (exponentially distributed). Both widgets arrive at the input
queue. An attribute Part_Type is defined to differentiate between the two types of
widgets.
Widget A goes on to the lathe for turning operations that take Normal(5,1) minutes. Widget B goes on to the mill for processing that takes Uniform(4,8) minutes. Both widgets go on to an inspection queue, where every fifth part is
inspected. Inspection takes Normal(6,2) minutes. After inspection, 70 percent of
the widgets pass and leave the system, while 30 percent of the widgets fail and
are sent back to the input queue for rework.
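The routing logic in this paragraph (inspect every fifth part; pass 70 percent of inspected parts) can be sketched outside ProModel as follows. The counter and the random draw are illustrative stand-ins for the model's attribute and probabilistic-routing entries:

```python
import random

random.seed(1)
inspected = passed = reworked = 0
counter = 0  # counts parts reaching the inspection queue

for part in range(1000):
    counter += 1
    if counter % 5 == 0:             # every fifth part is inspected
        inspected += 1
        if random.random() < 0.70:   # 70 percent pass and leave the system
            passed += 1
        else:                        # 30 percent fail and go back for rework
            reworked += 1

print(inspected, passed, reworked)   # 200 parts inspected in total
```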
An operator is used to process the parts at both the lathe and the mill. The
operator is also used to inspect the part. The operator moves the parts from
the input queue to the machines as well as to the inspection station. The operator
is on a shift from 8 A.M. until 5 P.M. with breaks as shown in Table L12.3 and
Figures L12.22 and L12.23.
Use the DISPLAY statement to notify the user when the operator is on a break. Set up a break area (Figure L12.24) for the operator by extending the path network to a break room and indicating the node as the break node in Resource specs. The processes and the routings at Widgets-R-Us are shown in Figure L12.25. Determine the following:
a. How many widgets of each type are shipped each week (40-hour
week).
b. The cycle time for each type of widget.
c. The maximum and minimum cycle times.
TABLE L12.3  Operator breaks (From, To)
FIGURE L12.22
The operator Joe’s weekly work and break times at Widgets-R-Us.
FIGURE L12.23
Assigning the Shift File to the operator Joe.
FIGURE L12.24
The layout and path network at Widgets-R-Us.
FIGURE L12.25
Processes and routings at Widgets-R-Us.
FIGURE L12.26
A snapshot during
the simulation model
run for Widgets-R-
Us.
Problem Statement
In Joe’s Jobshop there are three machines through which three types of jobs are
routed. All jobs go to all machines, but with different routings. The data for job
routings and processing times (minutes) are given in Table L12.4. The process
times are exponentially distributed with the given average values. Jobs arrive at
the rate of Exponential(30) minutes. Simulate for 10 days (80 hours). How many
jobs of each type are processed in 10 days?
Figures L12.27, L12.28, L12.29, L12.30, and L12.31 show the locations,
entities, variables, processes, and layout of Joe’s Jobshop.
FIGURE L12.27
Locations at Joe’s Jobshop.
FIGURE L12.28
Entities at Joe’s
Jobshop.
FIGURE L12.29
Variables to track jobs
processed at Joe’s
Jobshop.
FIGURE L12.30
Processes and routings at Joe’s Jobshop.
FIGURE L12.31
Layout of Joe’s Jobshop.
Problem Statement
At Wang’s Export Machine Shop in the suburbs of Chicago, two types of jobs
are processed: domestic and export. The rate of arrival of both types of jobs is
FIGURE L12.32
Choosing among upstream processes.
FIGURE L12.33
Forklift resource specified for Wang’s Export Machine Shop.
FIGURE L12.34
Definition of path network for forklift for Wang’s Export Machine Shop.
FIGURE L12.35
Processes and routings defined for Wang’s Export Machine Shop.
Problem Statement
At Wang’s Export Machine Shop, two types of jobs are processed: domestic
and export. Mr. Wang is both the owner and the operator. The rate of arrival of
both types of jobs is Exponential(60) minutes. Export jobs are processed on
machining center E, and the domestic jobs are processed on machining
center D. The processing times for all jobs are triangularly distributed (10,
12, 18) minutes. Mr. Wang gives priority to export jobs over domestic jobs. The
distance between the stations is given in Table L12.7.
Five locations (Machining_Center_D, Machining_Center_E, In_Q_Domestic, In_Q_Export, and Outgoing_Q) are defined. Two types of jobs (domestic and export) arrive with an exponential interarrival frequency distribution of 60 minutes. Mr. Wang is defined as a resource in Figure L12.36. The path network and the processes are shown in Figures L12.37 and L12.38 respectively. Mr. Wang is getting old and can walk only 20 feet/minute with a load and 30 feet/minute without a load. Simulate for 100 hours.
Priorities of resource requests can be assigned through a GET, JOINTLY GET, or USE statement in operation logic, downtime logic, or move logic or the subroutines called from these logics. Priorities for resource downtimes are assigned in the Priority field of the Clock and Usage downtime edit tables.
Note that the priority of the resource (Mr_Wang) is assigned through the GET statement in the operation logic (Figure L12.38). The domestic orders have a resource request priority of 1, while that of the export orders is 10.
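The effect of the GET priorities can be sketched with a simple priority queue: whenever Mr. Wang becomes free, the waiting request with the highest priority value is served first (in ProModel, larger numbers mean higher priority). The job names below are illustrative:

```python
import heapq

# Pending requests as (negated priority, arrival order, job name).
# heapq pops the smallest tuple, so negating the priority makes the
# highest-priority request come out first; arrival order breaks ties
# first-come, first-served.
waiting = []
for order, (job, priority) in enumerate(
    [("domestic-1", 1), ("export-1", 10), ("domestic-2", 1), ("export-2", 10)]
):
    heapq.heappush(waiting, (-priority, order, job))

served = [heapq.heappop(waiting)[2] for _ in range(len(waiting))]
print(served)  # ['export-1', 'export-2', 'domestic-1', 'domestic-2']
```

This is why the lower-priority domestic jobs accumulate the longer waiting times reported in Figure L12.39.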
FIGURE L12.36
Resource defined for Wang’s Export Machine Shop.
FIGURE L12.37
Path network defined at Wang’s Export Machine Shop.
FIGURE L12.38
Processes and routings defined at Wang’s Export Machine Shop.
The results (partial) are shown in Figure L12.39. Note that the average time
waiting for the resource (Mr_Wang) is about 50 percent more for the domestic
jobs (with lower priority) than for the export jobs. The average time in the
system for domestic jobs is also considerably more than for the export jobs.
FIGURE L12.39
Part of the results showing the entity activities.
Problem Statement
In the Milwaukee Machine Shop, two types of jobs are processed within a machine cell. The cell consists of one lathe and one mill. Type 1 jobs must be processed first on the lathe and then on the mill. Type 2 jobs are processed only on the mill (Table L12.8). All jobs are processed on a first-in, first-out basis.
Brookfield Forgings is a vendor for the Milwaukee Machine Shop and produces all the raw material for them. Forgings are produced in batches of five every day (exponential with a mean of 24 hours). However, the vendor supplies them only on demand. In other words, when orders arrive at the Milwaukee Machine Shop, the raw forgings are supplied by the vendor (a pull system of shop loading). Simulate for 100 days (2400 hours). Track the work-in-process inventories and the production quantities for both the job types.
Six locations (Mill, Lathe, Brookfield_Forgings, Order_Arrival, Lathe_Q,
and Mill_Q) and four entities (Gear_1, Gear_2, Orders_1, and Orders_2) are
defined. The arrivals of various entities are defined as in Figure L12.40. Four
variables are defined as shown in Figure L12.41. The processes and routings are
FIGURE L12.40
Arrival of orders at the Milwaukee Machine Shop.
FIGURE L12.41
Variables defined for
the Milwaukee
Machine Shop.
shown in Figure L12.42. Note that as soon as a customer order is received at the Milwaukee Machine Shop, a signal is sent to Brookfield Forgings to ship a gear forging (of the appropriate type). Thus the arrival of customer orders pulls the raw material from the vendor. When the gears are fully machined, they are united (JOINed) with the appropriate customer order at the orders arrival location. Figure L12.43 shows a layout of the Milwaukee Machine Shop and a snapshot of the simulation model.
FIGURE L12.42
Processes and routings defined for the Milwaukee Machine Shop.
FIGURE L12.43
Simulation model for the Milwaukee Machine Shop.
“Kanban” literally means “visual record.” The word kanban refers to the signboard of a store or shop, but at Toyota it simply means any small sign displayed in front of a worker. The kanban contains information that serves as a work order. It gives information concerning what to produce, when to produce it, in what quantity, by what means, and how to transport it.
Problem Statement
A consultant recommends implementing a production kanban system for Section L12.9.1’s Milwaukee Machine Shop. Simulation is used to find out how many kanbans should be used. Model the shop with a total of five kanbans.
The kanban procedure operates in the following manner:
1. As soon as an order is received by the Milwaukee Machine Shop,
they communicate it to Brookfield Forgings.
2. Brookfield Forgings holds the raw material in their own facility in the
forging queue in the sequence in which the orders were received.
3. The production of jobs at the Milwaukee Machine Shop begins only
when a production kanban is available and attached to the production
order.
4. As soon as the production of any job type is finished, the kanban is
detached and sent to the kanban square, from where it is pulled by
Brookfield Forgings and attached to a forging waiting in the forging
queue to be released for production.
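The four-step kanban loop above can be sketched as a token pool: production starts only when a free kanban is available, and finishing a job returns its kanban to the kanban square, which immediately pulls the next waiting forging. The five-token pool matches the problem statement; the order names are illustrative:

```python
from collections import deque

NUM_KANBANS = 5
free_kanbans = NUM_KANBANS  # tokens sitting in the kanban square
forging_queue = deque(f"order-{i}" for i in range(1, 9))  # FIFO raw material
in_production = deque()
finished = []

# Steps 2-3: release work only while a kanban is free.
while forging_queue and free_kanbans > 0:
    in_production.append(forging_queue.popleft())
    free_kanbans -= 1

# Step 4: finishing a job detaches its kanban, which immediately
# pulls the next waiting forging into production.
while in_production:
    finished.append(in_production.popleft())
    free_kanbans += 1
    if forging_queue:
        in_production.append(forging_queue.popleft())
        free_kanbans -= 1

print(len(finished), free_kanbans)  # 8 jobs done, all 5 kanbans free
```

At no point is more work in process than there are kanbans, which is how the kanban count caps WIP.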
The locations at the Milwaukee Machine Shop are defined as shown in Figure L12.44. Kanbans are defined as virtual entities in the entity table (Figure L12.45).
The arrival of two types of gears at Brookfield Forgings is shown in the arrivals table (Figure L12.46). This table also shows the arrival of two types of customer orders at the Milwaukee Machine Shop. A total of five kanbans are generated at the beginning of the simulation run. These are recirculated through the system.
FIGURE L12.44
Locations at the Milwaukee Machine Shop.
FIGURE L12.45
Entities defined for
Milwaukee Machine
Shop.
FIGURE L12.46
Arrival of orders at Milwaukee Machine Shop.
FIGURE L12.47
Simulation model of a kanban system for the Milwaukee Machine Shop.
FIGURE L12.48
Process and routing tables for the Milwaukee Machine Shop.
Figure L12.47 shows the layout of the Milwaukee Machine Shop. The processes and the routings are shown in Figure L12.48. The arrival of a customer order (type 1 or 2) at the orders arrival location sends a signal to Brookfield Forgings in the form of a production kanban. The kanban is temporarily attached (LOADed) to a gear forging of the right type at the Order_Q. The gear forgings are sent to the Milwaukee Machine Shop for processing. After they are fully processed, the kanban is separated (UNLOADed). The kanban goes back to the kanban square. The finished gear is united (JOINed) with the appropriate customer order at the orders arrival location.
FIGURE L12.49
The Cost option in the
Build menu.
FIGURE L12.50
The Cost dialog box—Locations option.
Locations
The Locations Cost dialog box (Figure L12.50) has two fields: Operation Rate and Per. Operation Rate specifies the cost per unit of time to process at the selected location. Costs accrue when an entity waits at the location or uses the location. Per is a pull-down menu to set the time unit for the operation rate as second, minute, hour, or day.
Resources
The Resources Cost dialog box (Figure L12.51) has three fields: Regular Rate, Per, and Cost Per Use. Regular Rate specifies the cost per unit of time for a resource used in the model. This rate can also be set or changed during run time
FIGURE L12.51
The Cost dialog box—Resources option.
FIGURE L12.52
The Cost dialog box—Entities option.
Entities
The Entities Cost dialog box (Figure L12.52) has only one field: Initial Cost. Initial Cost is the cost of the entity when it arrives to the system through a scheduled arrival.
Increment Cost
The costs of a location, resource, or entity can be incremented by a positive or
negative amount using the following operation statements:
• IncLocCost—Enables you to increment the cost of a location.
• IncResCost—Enables you to increment the cost of a resource.
• IncEntCost—Enables you to increment the cost of an entity.
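A rough outside-ProModel picture of how these increment statements behave: each cost bucket is just an accumulator that can move up or down. The bucket names and amounts below are illustrative, not ProModel syntax:

```python
# Minimal cost accumulators mirroring IncLocCost / IncResCost / IncEntCost.
# The entity bucket starts at its Initial Cost, per the Entities dialog.
costs = {"location": 0.0, "resource": 0.0, "entity": 100.0}

def increment(bucket: str, amount: float) -> None:
    """Add a positive or negative amount to one cost bucket."""
    costs[bucket] += amount

increment("location", 12.50)   # e.g. an extra processing charge
increment("resource", 5.00)    # e.g. a per-use surcharge
increment("entity", -7.25)     # negative increments (credits) are allowed

print(costs)  # {'location': 12.5, 'resource': 5.0, 'entity': 92.75}
```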
Problem Statement
Raja owns a manufacturing cell consisting of two mills and a lathe. All jobs are processed in the same sequence, consisting of an arriving station, a lathe, mill 1, mill 2, and an exit station. The processing time on each machine is normally distributed with a mean of 60 seconds and a standard deviation of 5 seconds. Job interarrival times are exponentially distributed with a mean of 120 seconds.
Raja, the material handler, transports the jobs between the machines and the
arriving and exit stations. Job pickup and release times are uniformly distributed
between six and eight seconds. The distances between the stations are given in
Table L12.9. Raja can walk at the rate of 150 feet/minute when carrying no load.
However, he can walk only at the rate of 80 feet/minute when carrying a load.
The operation costs for the machines are given in Table L12.10. Raja gets
paid at the rate of $60 per hour plus $5 per use. The initial cost of the jobs
when they enter the system is $100 per piece. Track the number of jobs
produced, the total cost of production, and the cost per piece of production.
Simulate for 80 hours.
TABLE L12.9  Distances between stations (feet)
From       To        Distance
Arriving   Lathe     40
Lathe      Mill 1    80
Mill 1     Mill 2    60
Mill 2     Exit      50
Exit       Arrive    80

TABLE L12.10  Operation costs
Machine    Operation Cost
Lathe      $10/minute
Mill 1     $18/minute
Mill 2     $22/minute
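The cost bookkeeping requested in the problem statement reduces to a few accumulations. A sketch using the stated rates ($60 per hour plus $5 per use for Raja, $100 initial cost per job, machine rates from Table L12.10); the production count, machine busy times, and the four-moves-per-job figure are illustrative assumptions, not model output:

```python
SIM_HOURS = 80
machine_rates = {"Lathe": 10, "Mill 1": 18, "Mill 2": 22}  # $/minute (Table L12.10)

# Illustrative stand-ins for simulation output.
jobs_produced = 2300
machine_busy_min = {"Lathe": 2300, "Mill 1": 2300, "Mill 2": 2300}
moves_per_job = 4  # assumed: arrive->lathe, lathe->mill 1, mill 1->mill 2, mill 2->exit

machine_cost = sum(machine_rates[m] * machine_busy_min[m] for m in machine_rates)
raja_cost = 60 * SIM_HOURS + 5 * moves_per_job * jobs_produced
initial_cost = 100 * jobs_produced

total_cost = machine_cost + raja_cost + initial_cost
print(f"cost per piece: ${total_cost / jobs_produced:.2f}")  # cost per piece: $172.09
```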
FIGURE L12.53
Processes and routings at Raja’s manufacturing cell.
FIGURE L12.54
Simulation model of Raja’s manufacturing cell.
AutoCAD drawings can also be copied to the clipboard and pasted into the background. The procedure is as follows:
1. With the graphic on the screen, press <Ctrl> and <C> together.
Alternatively, choose Copy from the Edit menu. This will copy the
graphic into the Windows clipboard.
2. Open an existing or new model file in ProModel.
3. Press <Ctrl> and <V> together. Alternatively, choose Paste from the
Edit menu.
This action will paste the graphic as a background on the layout of the model.
Another way to import backgrounds is to use the Edit menu in ProModel:
1. Choose Background Graphics from the Build menu.
2. Select Front of or Behind grid.
3. Choose Import Graphic from the Edit menu.
4. Select the desired file and file type. The image will be imported into
the layout of the model.
5. Left-click on the graphic to reposition and resize it, if necessary.
“Front of grid” means the graphic will not be covered by grid lines when the
grid is on. “Behind grid” means the graphic will be covered with grid lines when
the grid is on.
Problem Statement
For the Shipping Boxes Unlimited example in Lab 7, Section L7.7.2, define the following six views: Monitor Queue, Box Queue, Inspect Queue, Shipping Queue, Pallet Queue, and Full View. Each view must be at 300 percent magnification. Show Full View at start-up. Show the Pallet Queue when an empty pallet arrives at the pallet queue location. Go back to showing the Full View when the box is at the shipping dock. Here are the steps in defining and naming views:
1. Select the View menu after the model layout is finished.
2. Select Views from the View menu.
The Views dialog box is shown in Figure L12.56. Figure L12.57 shows the Add
View dialog box. The Full View of the Shipping Boxes Inc. model is shown in
Figure L12.58.
FIGURE L12.55
Views command in the View menu.
FIGURE L12.56
The Views dialog box.
FIGURE L12.57
The Add View
dialog box.
FIGURE L12.58
Full View of the Shipping Boxes Inc. model.
FIGURE L12.59
General Information
for the Shipping Boxes
Inc. model.
FIGURE L12.60
Processes and routings at Shipping Boxes Inc. incorporating the change of views.
Example
For the Widgets-R-Us example in Section L12.6, make a model package that includes the model file and the shift file for operator Joe. Save the model package on a floppy disk. Figure L12.61 shows the Create Model Package dialog.
Unpack
To unpack and install the model package, double-click on the package file. In the
Unpack Model Package dialog select the appropriate drive and directory path to
install the model file and its associated files (Figure L12.62). Then click Install.
After the package file has been installed, ProModel prompts you (Figure
L12.63) for loading the model. Click Yes.
FIGURE L12.61
Create Model Package
dialog.
FIGURE L12.62
Unpack Model
Package dialog.
FIGURE L12.63
Load Model dialog.
L12.14 Exercises
1. Five different types of equipment are available for processing a special
type of part for one day (six hours) of each week. Equipment 1 is
available on Monday, equipment 2 on Tuesday, and so forth. The
processing time data follow:
Equipment    Processing Time (minutes)
1            5 ± 2
2            4 ± 2
3            3 ± 1.5
4            6 ± 1
5            5 ± 1
Customer Type    Haircut Time (minutes): Mean    Standard Deviation
Children         8                               2
Women            12                              3
Men              10                              2
The initial greetings and signing in take Normal(2, .2) minutes, and the transaction of money at the end of the haircut takes Normal(3, .3) minutes.
Run the simulation model for 100 working days (480 minutes each).
a. About how many customers of each type does Dan process per day?
b. What is the average number of customers of each type waiting to get
a haircut? What is the maximum?
c. What is the average time spent by a customer of each type in the
salon? What is the maximum?
3. Poly Castings Inc. receives castings from its suppliers in batches of one every eleven minutes (exponentially distributed). All castings arrive at the raw material store. Of these castings, 70 percent are
used to make widget A, and the rest are used to make widget B. Widget A goes from the raw material
store to the mill, and then on to the grinder. Widget B goes directly to the grinder. After grinding, all
widgets go to degrease for cleaning. Finally, all widgets are sent to the finished parts store. Simulate
for 1000 hours.
c. What percentage of the time does each mill spend in cleaning and
tool change operations?
d. What is the average time a casting spends in the system?
e. What is the average work-in-process of castings in the system?
7. Consider the NoWaitBurger stand in Exercise 13 in Section L7.12 and answer the following questions.
a. What is the average amount of time spent by a customer at the
hamburger stand?
b. Run 10 replications and compute a 90 percent confidence interval for the
average amount of time spent by a customer at the stand.
c. Develop a 90 percent confidence interval for the average number of
customers waiting at the burger stand.
8. Sharukh, Amir, and Salman wash cars at the Bollywood Car Wash. Cars arrive every 10 ± 6 minutes.
They service customers at the rate of one every 20 ± 10 minutes. However, the customers prefer Sharukh to Amir, and Amir to Salman. If the attendant of choice is busy, the
customers choose the first available attendant. Simulate the car wash
system for 1000 service completions (car washes). Answer the
following questions:
a. Estimate Sharukh’s, Amir’s, and Salman’s utilization.
b. On average, how long does a customer spend at the car wash?
c. What is the longest time any customer spent at the car wash?
d. What is the average number of customers at the car wash?
Embellishment: The customers are forced to choose the first available
attendant; no individual preferences are allowed. Will this make a
significant enough difference in the performance of the system to justify
this change? Answer questions a through d to support your argument.
9. Cindy is a pharmacist and works at the Save-Here Drugstore. Walk-in customers arrive at a rate of one
every 10 ± 3 minutes. Drive-in customers arrive at a rate of one every 20 ± 10 minutes. Drive-in
customers are given higher priority than walk-in customers. The number of items in a prescription varies
from 1 to 5 (3 ± 2). Cindy can fill one item in 6 ± 1 minutes. She works from 8 A.M. until 5 P.M. Her
lunch break is from 12 noon until 1 P.M. She also takes two 15-minute
breaks: at 10 A.M. and at 3 P.M. Define a shift file for Cindy named
Cindy.sft. Model the pharmacy for a year (250 days) and answer the
following questions:
a. Estimate the average time a customer (of each type) spends at the
drugstore.
b. What is the average number of customers (of each type) waiting for
service at the drugstore?
c. What is the utilization of Cindy (percentage of time busy)?
d. Do you suggest that we add another pharmacist to assist Cindy? How
many pharmacists should we add?
Class            1    2    3
Operation        Time Required (minutes)
Assemble         30 ± 5
Fire             8 ± 2
Cost Information
Item             ($)
85%
L A B
13 MATERIAL HANDLING CONCEPTS
With malice toward none, with charity for all, with firmness in the right, as God
gives us to see the right, let us strive on to finish the work we are in . . .
—Abraham Lincoln (1809–1865)
L13.1 Conveyors
Conveyors are continuous material handling devices for transferring or moving
objects along a fixed path having fixed, predefined loading and unloading points.
Some examples of conveyors are belt, chain, overhead, power-and-free, roller,
and bucket conveyors.
In ProModel, conveyors are locations represented by a conveyor graphic. A conveyor is defined graphically by a conveyor path on the layout. Once the path has been laid out, the length, speed, and visual representation of the conveyor can be edited by double-clicking on the path and opening the Conveyor/Queue dialog box (Figure L13.1). The various conveyor options are specified in the Conveyor Options dialog box, which is accessed by clicking on the Conveyor Options button in the Conveyor/Queue dialog box (Figure L13.2).
In an accumulating conveyor, if the lead entity comes to a stop, the trailing entities queue behind it. In a nonaccumulating conveyor, if the lead entity is unable to exit the conveyor, then the conveyor and all other entities stop.
FIGURE L13.1
The Conveyor/Queue
dialog box.
FIGURE L13.2
The Conveyor
Options dialog box.
FIGURE L13.3
Processes and routings for the Ship’n Boxes Inc. model.
FIGURE L13.4
Simulation
model layout for
Ship’n Boxes
Inc.
A path network defines the way a resource travels between locations. The
specifications of path networks allow you to define the nodes at which the
resource parks, the motion of the resource, and the path on which the resource
travels. Path networks consist of nodes that are connected by path segments. A
beginning node and an ending node define a path segment. Path segments may
be unidirectional or bidirectional. Multiple path segments, which may be
straight or joined, are connected at path nodes. To create path networks:
1. Select the Path button and then left-click in the layout where you want
the path to start.
2. Subsequently, left-click to put joints in the path and right-click to end the
path.
Interfaces are where the resource interacts with the location when it is on
the path network. To create an interface between a node and a location:
1. Left-click and release on the node (a dashed line appears).
2. Then left-click and release on the location.
Multiple interfaces from a single node to locations can be created, but only
one interface may be created from the same path network to a particular
location.
FIGURE L13.5
Process and routing logic at Ghosh’s Gear Shop.
FIGURE L13.6
Layout of Ghosh’s Gear Shop.
We ran this model with one, two, or three operators working together. The results are summarized in Tables L13.2 and L13.3. The production quantities and the average time in system (minutes) are obtained from the output of the simulation analysis. The profit per hour, the expected delay cost per piece, and the
additional throughput can be sold for $20 per item profit. Should we replace one
operator in the previous manufacturing system with a conveyor system? Build a
simulation model and run it for 100 hours to help in making this decision.
The layout, locations, and processes and routings are defined as shown
in Figures L13.7, L13.8, and L13.9 respectively. The length of each conveyor is
40 feet (Figure L13.10). The speeds of all three conveyors are assumed to be
50 feet/minute (Figure L13.11).
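A quick sanity check on the conveyor parameters above: at 50 feet/minute, a part entering an empty 40-foot conveyor reaches the far end in 0.8 minutes (48 seconds), before any accumulation or queuing effects:

```python
# Travel time on an empty conveyor is simply length divided by speed.
length_ft = 40.0   # length of each conveyor segment (feet)
speed_fpm = 50.0   # conveyor speed (feet per minute)

travel_min = length_ft / speed_fpm
print(f"{travel_min} min = {travel_min * 60:.0f} s")  # 0.8 min = 48 s
```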
The results of this model are summarized in Tables L13.4 and L13.5. The
production quantities and the average time in system (minutes) are obtained
from the output of the simulation analysis. The profit per hour, the expected
delay cost
FIGURE L13.7
Layout of Ghosh’s Gear Shop with conveyors.
FIGURE L13.8
Locations at Ghosh’s
Gear Shop with
conveyors.
FIGURE L13.9
Process and routing at Ghosh’s Gear Shop with conveyors.
FIGURE L13.10
Conveyors at Ghosh’s
Gear Shop.
FIGURE L13.11
Conveyor options at
Ghosh’s Gear Shop.
Lab 13 Material Handling Concepts 631
(Table columns: Mode of Transportation; Production Quantity per 100 Hours; Production Rate per Hour; Gross Profit per Hour; Average Time in System (minutes).)
per piece, and the expected delay cost per hour are calculated as follows:
Profit per hour = Production rate per hour × Profit per piece
Expected delay cost ($/piece) = (Average time in system (min)/60) × $0.1/piece/hour
Expected delay cost ($/hour) = Expected delay cost ($/piece) × Production rate per hour
To calculate the service cost of the conveyor, we assume that it is used about
2000 hours per year (8 hrs/day × 250 days/year). Also, for depreciation
purposes, we assume straight-line depreciation over three years.
Total cost of installation = $50/ft × 40 ft/conveyor segment × 3 conveyor segments = $6000
Depreciation per year = $6000/3 = $2000/year
Maintenance cost = $30,000/year
Total service cost/year = Depreciation cost + Maintenance cost = $32,000/year
Total service cost/hour = $32,000/2000 hours = $16/hour
The total net profit per hour after deducting the expected delay and the ex-
pected service costs is
Total net profit/hour = Gross profit/hour − Expected delay cost/hour − Expected service cost/hour
Comparing the net profit per hour between manual material handling and
conveyorized material handling, it is evident that conveyors should be installed
to maximize profit.
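The cost arithmetic above can be collected into a short script. This is only a sketch: the production rates and times in system passed in below are hypothetical placeholders, since the actual values come from the simulation output summarized in Tables L13.4 and L13.5.

```python
# Net-profit comparison sketch for Ghosh's Gear Shop.
# NOTE: the production rates and times in system used below are
# hypothetical placeholders; the real values come from the simulation
# output summarized in Tables L13.4 and L13.5.

def net_profit_per_hour(rate_per_hr, avg_time_in_system_min,
                        profit_per_piece=20.0,     # $20 profit per item
                        delay_cost_rate=0.1,       # $0.1/piece/hour in system
                        service_cost_per_hr=0.0):  # $16/hour for the conveyor
    gross_profit = rate_per_hr * profit_per_piece
    delay_cost_per_piece = (avg_time_in_system_min / 60.0) * delay_cost_rate
    delay_cost_per_hr = delay_cost_per_piece * rate_per_hr
    return gross_profit - delay_cost_per_hr - service_cost_per_hr

# Hypothetical comparison: manual handling vs. conveyors.
manual = net_profit_per_hour(10.0, 30.0)
conveyor = net_profit_per_hour(12.0, 20.0, service_cost_per_hr=16.0)
print(round(manual, 2), round(conveyor, 2))  # → 199.5 223.6
```

Whichever mode of transportation yields the larger net profit per hour is preferred, which is the comparison made in the text.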
Problem Statement
Modify the example in Section L7.7.1 by having a material handler (let us call
him Joe) return the pallets from the shipping dock to the packing workbench.
Eliminate the pallet return conveyor. Also, let’s have Joe load the boxes onto
pallets [Uniform(3,1) min] and the loaded pallets onto the shipping conveyor.
Assume one pallet is in the system. Simulate for 10 hours and determine the
following:
1. The number of monitors shipped.
2. The utilization of the material handler.
Seven locations (Conv1, Conv2, Shipping_Q, Shipping_Conv, Shipping_Dock,
Packing_Table, and Load_Zone) are defined. The resource (Joe) is defined as
shown in Figure L13.12. The path network for the resource and the processes
and routings are shown in Figures L13.13 and L13.14 respectively. The arrivals
of the entities are shown in Figure L13.15. The layout of Ship’n Boxes Inc.
with conveyors is shown in Figure L13.16.
FIGURE L13.12
Material handler as a resource.
FIGURE L13.13
Path network for the material handler.
FIGURE L13.14
Processes and routings for the example in Section L13.2.3.
FIGURE L13.15
Arrivals for the example in Section L13.2.3.
FIGURE L13.16
The layout for
Shipping Boxes Inc.
and exit station. The processing time on each machine is normally distributed
with a mean of 60 seconds and a standard deviation of 5 seconds. The arrival
rate of jobs is exponentially distributed with a mean of 120 seconds.
Raja also transports the jobs between the machines and the arrival and exit
stations. Job pickup and release times are uniformly distributed between six and
eight seconds. The distances between the stations are given in Table L13.6. Raja
can travel at the rate of 150 feet/minute when carrying no load. However, he can
walk at the rate of only 80 feet/minute when carrying a load. Simulate for 80
hours.
The layout, locations, path networks, resource specification, and processes
and routings are shown in Figures L13.17, L13.18, L13.19, L13.20, and L13.21,
respectively.
FIGURE L13.17
Layout of Raja’s manufacturing cell.
FIGURE L13.18
Locations at Raja’s manufacturing cell.
FIGURE L13.19
Path networks at Raja’s manufacturing cell.
FIGURE L13.20
Resource specification at Raja’s manufacturing cell.
FIGURE L13.21
Process and routing tables at Raja’s manufacturing cell.
Problem Statement
Pritha takes over the manufacturing operation (Section L13.2.4) from Raja and
renames it Pritha’s manufacturing cell. After taking over, she installs an
overhead crane system to handle all the material movements between the
stations. Job pickup and release times by the crane are uniformly distributed
between six and eight seconds. The coordinates of all the locations in the cell
are given in Table L13.7. The crane can travel at the rate of 150 feet/minute
with or without a load. Simulate for 80 hours.
Five locations (Lathe, Mill 1, Mill 2, Exit_Station, and Arrive_Station) are
defined for Pritha’s manufacturing cell. The path networks, crane system
resource, and processes and routings are shown in Figures L13.22, L13.23, and
L13.24.
FIGURE L13.22
Path networks for the crane system at Pritha’s manufacturing cell.
Location    X     Y
Arriving    10    100
Lathe       50    80
Mill 1      90    80
Mill 2      80    20
Exit        0     20
FIGURE L13.23
The crane system resource defined.
FIGURE L13.24
Process and routing tables defined for Pritha’s manufacturing cell.
Select Path Network from the Build menu. From the Type menu, select
Crane. The following four nodes are automatically created when we select the
crane type of path network: Origin, Rail1 End, Rail2 End, and Bridge End.
Define the five nodes N1 through N5 to represent the three machines, the arrival
station, and the exit station. Click on the Interface button in the Path Network
menu and define all the interfaces for these five nodes.
Select Resource from the Build menu. Name the crane resource. Enter 1 in
the Units column (one crane unit). Click on the Specs. button. The Specifications
menu opens up. Select Net1 for the path network. Enter the empty and full speed
of the crane as 150 ft/min. Also, enter Uniform(7 ± 1) seconds as the pickup
and deposit time, matching the six-to-eight-second range given in the problem statement.
L13.4 Exercises
1. Consider the DumpOnMe facility in Exercise 11 of Section L7.12 with
the following enhancements. Consider the dump trucks as material
handling resources. Assume that 10 loads of coal arrive to the loaders
every hour (randomly; the interarrival time is exponentially
distributed). Create a simulation model, with animation, of this system.
Simulate for 100 days, eight hours each day. Collect statistics to
estimate the loader and scale utilization (percentage of time busy).
About how many trucks are loaded each day on average?
2. For the Widgets-R-Us Manufacturing Inc. example in Section
L12.5.1, consider that a maintenance mechanic (a resource) will be
hired to do the
repair work on the lathe and the mill. Modify the simulation model and
run it for 2000 hours. What is the utilization of the maintenance
mechanic?
Hint: Define maintenance_mech as a resource. In the Logic field of the
Clock downtimes for Lathe enter the following code:
GET maintenance_mech
DISPLAY "The Lathe is Down for Maintenance"
Wait N(10,2) min
FREE maintenance_mech
Enter similar code for the Mill.
3. At Forge Inc. raw forgings arrive at a circular conveyor system (Figure
L13.25). They await a loader in front of the conveyor belt. The conveyor
delivers the forgings (one foot long) to three machines located one after the
other. A forging is off-loaded to a machine only if the machine is not in use.
Otherwise, the forging moves on to the next machine. If the forging cannot
gain access to any machine in a given pass, it is recirculated. The conveyor
moves at 30 feet per minute. The distance from the loading station to the first
machine is 30 feet. The distance between each machine is 10 feet. The
distance from the last machine back to the loader is 30 feet. Loading and
unloading take
30 seconds each. Forgings arrive to the loader with an exponential
interarrival time and a mean of 10 seconds. The machining times are
also exponentially distributed with a mean of 20 seconds. Simulate the
conveyor system for 200 hours of operation.
a. Collect statistics to estimate the loader and machine utilizations.
b. What is the average time a forging spends in the system?
c. What fraction of forgings cannot gain access to the machines in
the first pass and need to be recirculated?
d. What is the production rate of forgings per hour at Forge Inc.?
FIGURE L13.25
The circular conveyor system with loader at Forge Inc.
4. U.S. Construction Company has one bulldozer, four trucks, and two loaders. The bulldozer
stockpiles material for the loaders. Two piles of material must be stocked prior to the initiation of any
load operation. The time for the bulldozer to stockpile material is Erlang distributed and consists of
the sum of two exponential variables, each with a mean of 4 (this corresponds to an Erlang variable
with a mean of 8 and a variance of 32). In addition to this material, a loader and an unloaded truck
must be available before the loading operation can begin. Loading time is exponentially distributed
with a mean time of 14 minutes for server 1 and 12 minutes for server 2.
After a truck is loaded, it is hauled and then dumped; it must
be returned before it is available for further loading. Hauling time
is normally distributed. When loaded, the average hauling time is
22 minutes. When unloaded, the average time is 18 minutes. In both
cases, the standard deviation is three minutes. Dumping time is
uniformly distributed between two and eight minutes. Following a
loading operation, the loader must rest for five minutes before it is
available to begin loading again. Simulate this system at the U.S.
Construction Co. for a period of one year (2000 working hours) and
analyze it.
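The parenthetical arithmetic in Exercise 4 checks out: the sum of two independent exponential variables with mean 4 has mean 4 + 4 = 8 and variance 4² + 4² = 32. A quick empirical check of that claim:

```python
import random
import statistics

# Verify the Erlang note in Exercise 4: the sum of two independent
# exponential variables, each with mean 4, has mean 8 and variance 32.
random.seed(42)
samples = [random.expovariate(1 / 4) + random.expovariate(1 / 4)
           for _ in range(200_000)]
mean = statistics.fmean(samples)
var = statistics.variance(samples)
print(round(mean, 1), round(var, 1))  # close to 8 and 32
```

The same construction (summing exponential samples) is a convenient way to generate Erlang stockpiling times in any simulation tool that lacks a built-in Erlang distribution.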
5. At Walnut Automotive, machined castings arrive randomly (exponential, mean of six minutes)
from the supplier to be assembled at one of five identical engine assembly stations. A forklift truck
delivers the castings from the shipping dock to the engine assembly department. A loop conveyor
connects the assembly stations.
The forklift truck moves at a velocity of five feet per second. The
distance from the shipping dock to the assembly department is 1000
feet. The conveyor is 5000 feet long and moves at a velocity of five feet
per second.
At each assembly station, no more than three castings can be
waiting for assembly. If a casting arrives at an assembly station and
there is no room for the casting (there are already three castings
waiting), it goes around for another try. The assembly time is normally
distributed with a mean of five minutes and standard deviation of two
minutes. The assembly stations are equally distributed around the belt.
The load/unload station is located halfway between stations 5 and 1.
The forklift truck delivers the castings to the load/unload station. It also
picks up the completed assemblies from the load/unload station and
delivers them back to the shipping dock.
Create a simulation model, with animation, of this system. Run the
simulation model until 1000 engines have been assembled.
a. What is the average throughput time for the engines in the
manufacturing system?
b. What are the utilization figures for the forklift truck and the conveyor?
c. What is the maximum number of engines in the manufacturing system?
d. What are the maximum and average numbers of castings on the
conveyor?
6. At the Grocery Warehouse, a sorting system consists of one incoming conveyor and three sorting
conveyors, as shown in Figure L13.26. Cases enter the system from the left at a rate of 100 per minute at
random times. The incoming conveyor is 150 feet long. The sorting conveyors are 100 feet long. They
are numbered 1 to 3 from left to right and are 10 feet apart. The incoming conveyor runs at 10 feet per
minute and all the sorting conveyors at 15 feet per minute. All conveyors are accumulating type.
Incoming cases are distributed to the three lanes in the following proportions: Lane 1—30 percent; Lane
2—50 percent; and Lane 3—20 percent.
At the end of each sorting conveyor, a sorter (from a group of
available sorters) scans each case with a bar code scanner, applies a
label, and then places the case on a pallet. One sorter can handle
10 cases per minute on the average (normally distributed with a
standard deviation of 0.5 minute).
When a pallet is full (40 cases), a forklift arrives from the shipping
dock to take it away, unload the cases, and bring back the empty pallet
to an empty pallet queue near the end of the sorting conveyor. A total
of five pallets are in circulation. The data shown in Table L13.8 are
available for the forklift operation.
FIGURE L13.26
The conveyor sorting system at the Grocery Warehouse.
Simulate for one year (250 working days, eight hours each day).
Answer the following questions:
a. How many sorters are required? The objective is to have the
minimum number of sorters but also avoid overflowing the
conveyors.
b. How many forklifts do we need?
c. Report on the sorter utilization, total number of cases shipped, and
the number of cases palletized by lane.
7. Repeat Exercise 6 with a dedicated sorter in each sorting lane. Address all the same issues.
8. Printed circuit boards arrive randomly from the preparation department. The boards are moved in sets
of five by a hand truck to the component assembly department, where the board components are
manually assembled. Five identical assembly stations are connected by a loop conveyor.
When boards are placed onto the conveyor, they are directed to the
assembly station with the fewest boards waiting to be processed. After
the components are assembled onto the board, they are set aside and
removed for inspection at the end of the shift. The time between boards
arriving from preparation is exponentially distributed with a mean of
five seconds. The hand truck moves at a velocity of five feet per second
and the conveyor moves at a velocity of two feet per second. The
conveyor is 100 feet long. No more than 20 boards can be placed on the
belt at any one time.
At each assembly station, no more than two boards can be waiting
for assembly. If a board arrives at an assembly station and there is no
room for the board (there are already two boards waiting), the board
goes around the conveyor another time and again tries to enter the
station. The assembly time is normally distributed with a mean of
35 seconds and standard deviation of 8 seconds. The assembly stations
are uniformly distributed around the belt, and boards are placed onto
the belt four feet before the first station. After all five boards are
placed onto the belt, the hand truck waits until five boards have arrived
from the preparation area before returning for another set of boards.
Simulate until 100 boards have been assembled. Report on the
utilization of the hand truck, conveyor, and the five operators. How
many assemblies are produced at each of the five stations? (Adapted
from Hoover and Perry, 1989.)
9. In this example we will model the assembly of circuit boards at four identical assembly stations
located along a closed loop conveyor (100 feet long) that moves at 15 feet per minute. The boards
are
assembled from kits, which are sent in totes from a load/unload station
to the assembly stations via the top loop of the conveyor. Each kit can
be assembled at any one of the four assembly stations. Completed
boards are returned in their totes from the assembly stations back to the
loading/unloading station. The loading/unloading station (located at
the left end of the conveyor) and the four assembly stations (identified
by the letters A through D) are equally spaced along the conveyor, 20
feet
FIGURE L13.27
Crane system at the Rancho Cucamonga Coil Company: steel coils enter at input station A and copper coils at input station B; cranes 1 and 2 serve output stations C, D, and E.
Reference
S. V. Hoover and R. F. Perry, Simulation: A Problem Solving Approach, Addison-Wesley,
1989, pp. B93–B95.
Harrell, Ghosh, and Bowden: Simulation Using ProModel, Second Edition. Part II, Labs. Lab 14: Additional Modeling Concepts. © The McGraw-Hill Companies.
L A B
14 ADDITIONAL MODELING
CONCEPTS
In this lab we discuss some of the advanced but very useful concepts in
ProModel. In Section L14.1 we model a situation where customers balk (leave)
when there is congestion in the system. In Section L14.2 we introduce the
concepts of macros and runtime interfaces. In Section L14.3 we show how to
generate multiple scenarios for the same model and how to run multiple
replications. In Section L14.4 we show how to set up and import data from
external files. In Section L14.5 we discuss arrays of data. Table functions are
introduced in Section L14.6. Subroutines are explained with examples in
Section L14.7. Section L14.8 introduces the concept of arrival cycles.
Section L14.9 shows the use of user distributions. Section L14.10 introduces
the concepts and use of random number streams.
Problem Statement
All American Car Wash is a five-stage operation that takes 2 ± 1 minutes
for each stage of wash (Figure L14.1). The queue for the car wash facility can
hold up to three cars. The cars move through the washing stages in order, one car
not being
FIGURE L14.1
Locations and layout of All American Car Wash.
FIGURE L14.2
Customer arrivals at All American Car Wash.
able to move until the car ahead of it moves. Cars arrive every 2.5 ± 2 minutes
for a wash. If a car cannot get into the car wash facility, it drives across the
street to Better Car Wash. Simulate for 100 hours.
a. How many customers does All American Car Wash lose to
its competition per hour (balking rate per hour)?
b. How many cars are served per hour?
c. What is the average time spent by a customer at the car wash facility?
The customer arrivals and processes/routings are defined as shown in
Figures L14.2 and L14.3. The simulation run hours are set at 100. The total
number of cars that balked and went away to the competition across the street
is 104 in 100 hours (Figure L14.4). That is about 1.04 cars per hour. The total
number of customers served is 2377 in 100 hours (Figure L14.5). That is about
23.77 cars per hour. We can also see that, on average, the customers spent 14
minutes in the car wash facility.
Lab 14 Additional Modeling Concepts 649
FIGURE L14.3
Process and routing tables at All American Car Wash.
FIGURE L14.4
Cars that balked.
FIGURE L14.5
Customers served.
The runtime interface (RTI) is a useful feature through which the user can
interact with and supply parameters to the model without having to rewrite it.
Every time the simulation is run, the RTI allows the user to change model
parameters defined in the RTI. The RTI provides a user-friendly menu to
change only the macros that the modeler wants the user to change. An RTI is a
custom interface defined by a modeler that allows others to modify the
model or conduct multiple-scenario experiments without altering the actual
model data. All changes are saved along with the model so they are preserved
from run to run. RTI parameters are based on macros, so they may be used to
change any model parameter that can be defined using a macro (that is, any
field that allows an expression or any logic definition).
An RTI is created and used in the following manner:
1. Select Macros from the Build menu and type in a macro ID.
2. Click the RTI button and choose Define from the submenu. This opens
the RTI Definition dialog box.
3. Enter the Parameter Name that you want the macro to represent.
4. Enter an optional Prompt to appear for editing this model parameter.
5. Select the parameter type, either Unrestricted Text or Numeric Range.
6. For numeric parameters:
a. Enter the lower value in the From box.
b. Enter the upper value in the To box.
7. Click OK.
8. Enter the default text or numeric value in the Macro Text field.
9. Use the macro ID in the model to refer to the runtime parameter (such
as operation time or resource usage time) in the model.
10. Before running the model, use the Model Parameters dialog box or
the Scenarios dialog box to edit the RTI parameter.
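The essence of an RTI numeric parameter (a macro-backed value with a prompt and a From/To range, validated each time the user edits it) can be mimicked outside ProModel like this. The class and names below are purely illustrative and are not part of any ProModel API.

```python
from dataclasses import dataclass

# A toy analog of an RTI numeric parameter: a macro-like named value
# with a prompt and a From/To range, validated on every change.
# All names here are illustrative, not ProModel constructs.
@dataclass
class RuntimeParameter:
    name: str
    prompt: str
    lower: float
    upper: float
    value: float

    def change(self, new_value: float) -> None:
        if not (self.lower <= new_value <= self.upper):
            raise ValueError(f"{self.name} must lie in [{self.lower}, {self.upper}]")
        self.value = new_value

mill_time = RuntimeParameter("Mill_Time", "Mean milling time (min):", 1.0, 20.0, 5.0)
mill_time.change(7.5)     # accepted: inside the From/To range
print(mill_time.value)    # → 7.5
```

In ProModel the same guard rails come for free: the Model Parameters dialog refuses values outside the numeric range defined in steps 6a and 6b.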
Problem Statement
Widgets-R-Us Inc. receives various kinds of widget orders. Raw castings of
widgets arrive in batches of one every five minutes. Some widgets go from
the raw material store to the mill, and then on to the grinder. Other widgets go
directly to the grinder. After grinding, all widgets go to degrease for cleaning.
Finally, all widgets are sent to the finished parts store. The milling and
grinding times vary depending on the widget design. However, the degrease
time is seven minutes per widget. The layout of Widgets-R-Us is shown in
Figure L14.6.
Define a runtime interface to allow the user to change the milling and
grinding times every time the simulation is run. Also, allow the user to change the
total quantity of widgets being processed. Track the work-in-process inventory
(WIP) of widgets. In addition, define another variable to track the production
(PROD_QTY) of finished widgets.
The macros are defined as shown in Figure L14.7. The runtime interface for
the milling time is shown in Figure L14.8. Figure L14.9 shows the use of the
parameters Mill_Time and Grind_Time in the process and routing tables. To
FIGURE L14.6
Layout of
Widgets-R-Us.
FIGURE L14.7
Macros created for Widgets-R-Us simulation model.
FIGURE L14.8
The runtime interface
defined for the
milling operation.
FIGURE L14.9
The process and routing tables showing the Mill_Time and Grind_Time parameters.
FIGURE L14.12
The
Grind_Time_Halfwidth
parameter dialog box.
view or change any of the model parameters, select Model Parameters from
the Simulation menu (Figure L14.10). The model parameters dialog box is
shown in Figure L14.11. To change the Grind_Time_Halfwidth, first select it
from the model parameters list, and then press the Change button. The
Grind_Time_Halfwidth dialog box is shown in Figure L14.12.
Lab 14 Additional Modeling Concepts 653
FIGURE L14.15
Editing the scenario
parameter.
FIGURE L14.16
Arrivals table for Shipping Boxes Unlimited in Section L7.7.2.
FIGURE L14.17
Reports created for both the scenarios.
General Read File. These files contain numeric values read into a simulation
model using a READ statement. A space, comma, or end of line can separate
the data values. Any nonnumeric data are skipped automatically. The syntax of
the READ statement is as follows:
READ <file ID>, <variable name>
If the same file is to be read more than once in the model, it may be necessary to
reset the file between each reading. This can be achieved by adding an arbitrary
end-of-file marker 99999 and the following two lines of code:
Read MydataFile1, Value1
If Value1 = 99999 Then Reset MydataFile1
The data stored in a general read file must be in ASCII format. Most
spreadsheet programs (Lotus 1-2-3, Excel, and others) can convert spreadsheets
to ASCII files (MydataFile1.TXT).
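The parsing rules described above (values separated by spaces, commas, or line ends; nonnumeric tokens skipped) can be mimicked in a few lines of Python. This is only a sketch of the behavior, with 99999 as the arbitrary end-of-file marker from the text.

```python
# A sketch of how a ProModel "general read" file is consumed: numeric
# tokens separated by spaces, commas, or line ends are read in order,
# and any nonnumeric token is skipped. 99999 is the arbitrary
# end-of-file marker used in the text.
def read_values(text):
    values = []
    for token in text.replace(",", " ").split():
        try:
            values.append(float(token))
        except ValueError:
            pass  # nonnumeric data are skipped automatically
    return values

data = "12.5, 7 header\n3.25\n99999\n"   # contents of a hypothetical MydataFile1.TXT
vals = read_values(data)
print(vals)  # → [12.5, 7.0, 3.25, 99999.0]
```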
General Write. These files are used to write text strings and numeric values
using the WRITE and WRITELINE statements. Text strings are enclosed in
quotes when written to the file, with commas automatically appended to strings.
This enables the files to be read into spreadsheet programs like Excel or Lotus
1-2-3 for viewing. Write files can also be written using the XWRITE statement,
which gives the modeler full control over the output and formatting. If you
write to the same file more than once, either during multiple replications or
within a single replication, the new data are appended to the previous data.
The WRITE statement writes to a general write file. The next item is written
to the file immediately after the previous item. Any file that is written to with
the WRITE statement becomes an ASCII text file and ProModel attaches an end-
of-file marker automatically when it closes the file.
WRITE <file ID>, <string or numeric expression>
The WRITELINE statement writes information to a general write file and
starts a new line. It always appends to the file unless you have RESET the file.
Any file that is written to with the WRITELINE statement becomes an ASCII text
file, and ProModel attaches an end-of-file marker automatically when it closes
the file.
Examples of the WRITE and WRITELINE statements are as follows:
WRITE MyReport, “Customer Service Completed At:”
WRITELINE MyReport, CLOCK(min)
The XWRITE statement allows the user to write in any format he or she chooses.
XWRITE <file ID>, <string or numeric expression>
XWRITE MyReport2, “Customer Service Completed At:” $FORMAT(Var1,5,2)
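A rough Python analog of the quoting behavior described above: strings are written in quotes with a comma appended (so spreadsheets can parse the file), numbers are written as-is, and WRITELINE starts a new line. The function and variable names are illustrative only.

```python
# A sketch of the WRITE / WRITELINE behavior described above: strings
# are written in quotes with a comma appended, numbers are written
# as-is, and writeline starts a new line. Names are illustrative.
lines = [""]

def write(item):
    if isinstance(item, str):
        lines[-1] += f'"{item}",'
    else:
        lines[-1] += str(item)

def writeline(item):
    write(item)
    lines.append("")  # subsequent writes go on a new line

write("Customer Service Completed At:")
writeline(125.75)   # stands in for CLOCK(min)
print(lines[0])     # → "Customer Service Completed At:",125.75
```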
Arrival. An arrivals file is a spreadsheet file (.WK1 format only) containing
arrival information normally specified in the Arrival Editor. One or more arrival
files may be defined and referenced in the External Files Editor. Arrival files are
automatically read in following the reading of the Arrival Editor data. The
column entries must be as in Table L14.1.
Shift. A shift file record is automatically created in the External Files Editor
when a shift is assigned to a location or resource. If shifts have been assigned,
the name(s) of the shift file(s) will be created automatically in the External Files
Editor. Do not attempt to create a shift file record in the External Files Editor
yourself.
.DLL. A .DLL file is needed when using external subroutines through the
XSUB() function.
Problem Statement
At Pomona Castings, Inc., castings arrive for processing at the rate of 12
per hour (average interarrival time assumed to be five minutes). Seventy
percent
Column Data
A Entity name
B Location name
C Quantity per arrival
D Time of first arrival
E Number of arrivals
F Frequency of arrivals
G Attribute assignments
FIGURE L14.18
An external
entity-location
file in .WK1
format.
FIGURE L14.19
File ID and file name created for the external file.
FIGURE L14.20
The process table referring to the file ID of the external file.
of the castings are processed as casting type A, while the rest are processed as
casting type B.
For Pomona Castings, Inc., create an entity–location file named P14_5.WK1
to store the process routing and process time information (Figure
L14.18). In the simulation model, read from this external file to obtain all the
process information. Build a simulation model and run it for 100 hours. Keep
track of the work-in-process inventory and the production quantity.
Choose Build/More Elements/External Files. Define the ID as SvcTms. The
Type of file is Entity Location. The file name (and the correct path) is also
provided (Figure L14.19). In the Process definition, use the file ID (Figure
L14.20) instead of the actual process time—for example, WAIT SvcTms().
Change the file path to point to the appropriate directory and drive where the
external file is located. A snapshot of the simulation model is shown in Figure
L14.21.
FIGURE L14.21
A snapshot of the
simulation model for
Pomona Castings, Inc.
L14.5 Arrays
An array is a collection of values that are related in some way such as a list of
test scores, a collection of measurements from some experiment, or a sales tax
table. An array is a structured way of representing such data.
An array can have one or more dimensions. A two-dimensional array is
useful when the data can be arranged in rows and columns. Similarly, a
three-dimensional array is appropriate when the data can be arranged in rows,
columns, and ranks. When several characteristics are associated with the data,
still higher dimensions may be appropriate, with each dimension
corresponding to one of these characteristics.
Each cell in an array works much like a variable. A reference to a cell in
an array can be used anywhere a variable can be used. Cells in arrays are usually
initialized to zero, although initializing cells to some other value can be done
in the initialization logic. A WHILE-DO loop can be used for initializing array cell
values.
Suppose that electroplating bath temperatures are recorded four times a day
at each of three locations in the tank. These temperature readings can be
arranged in an array having four rows and three columns (Table L14.2).
These 12 data items can be conveniently stored in a two-dimensional array
named Temp[4,3] with four rows and three columns.
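In Python terms, the Temp[4,3] array and its default zero initialization might be sketched as follows. The sample temperature value is hypothetical, since the readings in Table L14.2 are not reproduced here.

```python
# The 4-row x 3-column bath-temperature array Temp[4,3] as a Python
# list of lists: 4 readings per day, 3 tank locations.
ROWS, COLS = 4, 3

# Initialize every cell to zero, as ProModel does by default
# (a WHILE-DO loop in the initialization logic would do the same).
temp = [[0.0] * COLS for _ in range(ROWS)]

temp[0][2] = 150.5  # first reading at location 3 (hypothetical value)
print(temp[0][2], temp[3][1])  # a cell reads like a variable → 150.5 0.0
```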
An external Excel file (BathTemperature.xls) contains these bath
temperature data (Figure L14.22). The information from this file can be
imported into an array in ProModel (Figure L14.23) using the Array Editor.
When you import data from an external Excel spreadsheet into an array,
ProModel loads the data from left to right, top to bottom. Although there is no
limit to the quantity of values you may use, ProModel supports only
two-dimensional arrays. Figure L14.24 shows the Import File dialog in the Array
Editor.
FIGURE L14.22
An external file
containing
bath
temperatures.
FIGURE L14.23
The Array Editor in
ProModel.
FIGURE L14.24
The Import File dialog
in the Array Editor.
Problem Statement
Table L14.3 shows the status of orders at the beginning of the month at Joe’s
Jobshop. In his shop, Joe has three machines through which three types of
jobs are routed. All jobs go to all machines, but with different routings. The
data for job routings and processing times (exponential) are given in Table
L14.4. The processing times are given in minutes. Use a one-dimensional array
to hold the information on the order status. Simulate and find out how long it
will take for Joe to finish all his pending orders.
The layout, locations, and arrival of jobs at Joe’s Jobshop are shown in
Figures L14.25, L14.26, and L14.27. The pending order array is shown in
FIGURE L14.25
Layout of Joe’s Jobshop.
TABLE L14.3 Data for Pending Orders at Joe’s Jobshop
TABLE L14.4 Process Routings and Average Process Times
Figure L14.28. The initialization logic is shown in Figure L14.29. The processes
and routings are shown in Figure L14.30.
It took about 73 hours to complete all the pending work orders. At eight
hours per day, it took Joe a little over nine days to complete the backlog of
orders.
FIGURE L14.26
Locations at Joe’s Jobshop.
FIGURE L14.27
Arrival of jobs at Joe’s Jobshop.
FIGURE L14.28
Pending order array for Joe’s Jobshop.
FIGURE L14.29
Initialization logic in the General Information menu.
FIGURE L14.30
Process and routing tables for Joe’s Jobshop.
Problem Statement
Customers arrive at the Save Here Grocery store with a mean time between
arrivals (exponential) that is a function of the time of day (the number of hours
elapsed since the store opening), as shown in Table L14.5. The grocery store
consists of two aisles and a cashier. Once inside the store, customers may
choose to shop in one, two, or none of the aisles. The probability of shopping
aisle one is 0.75, and for aisle two it is 0.5. The number of items selected in
each aisle is described by a normal distribution with a mean of 15 and a
standard deviation of 5. The time these shoppers require to select an item is five
minutes. When all desired items have been selected, customers queue up at the
cashier to pay for them. The time to check out is a uniformly distributed random
variable that depends on the number of items purchased. The checkout time
per item is 0.6 ± 0.5 minutes (uniformly distributed). Simulate for 10 days
(16 hours/day).
The locations, the arrival of entities, the processes and routings, and the
layout of the grocery store are shown in Figures L14.31, L14.32, L14.33, and
L14.34, respectively. The arrival frequency is derived from the table function
arrival_time(HR):
e(arrival_time(CLOCK(HR) - 16*TRUNC(CLOCK(HR)/16))) min
The table function arrival_time is shown in Figure L14.35.
Hours Since Store Opening    Mean Time between Arrivals (minutes)
0                            50
2                            40
4                            25
6                            20
8                            20
10                           20
12                           25
14                           30
16                           30
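A sketch of one plausible evaluation of the arrival_time table function: linear interpolation between the tabulated points, with simulation time wrapped into the current 16-hour day exactly as the CLOCK expression above does. This is an illustrative reading, not ProModel's internal implementation.

```python
import bisect

# The arrival_time table function: hours elapsed since store opening
# mapped to a mean time between arrivals, with linear interpolation
# between the tabulated points (an illustrative reading of how the
# table function is evaluated).
HOURS = [0, 2, 4, 6, 8, 10, 12, 14, 16]
MEANS = [50, 40, 25, 20, 20, 20, 25, 30, 30]

def arrival_time(hr):
    hr = hr - 16 * int(hr // 16)       # wrap into the current 16-hour day
    i = bisect.bisect_right(HOURS, hr) - 1
    if i >= len(HOURS) - 1:
        return MEANS[-1]
    x0, x1 = HOURS[i], HOURS[i + 1]
    y0, y1 = MEANS[i], MEANS[i + 1]
    return y0 + (y1 - y0) * (hr - x0) / (x1 - x0)

print(arrival_time(3))    # halfway between 40 and 25 → 32.5
print(arrival_time(19))   # hour 19 wraps to hour 3 of day 2 → 32.5
```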
FIGURE L14.31
Locations in Save Here Grocery.
FIGURE L14.32
Arrivals at Save Here Grocery.
FIGURE L14.33
Process and routing tables for Save Here Grocery.
FIGURE L14.34
The layout of Save Here Grocery.
FIGURE L14.35
The Table Functions
dialog box.
L14.7 Subroutines
A subroutine is a separate section of code intended to accomplish a specific task.
It is a user-defined command that can be called upon to perform a block of logic
and optionally return a value. Subroutines may have parameters or local
variables (local to the subroutine) that take on the values of arguments passed
to the sub- routine. There are three variations for the use of subroutines in
ProModel:
1. A subroutine is called by its name from the main block of code.
2. A subroutine is processed independently of the calling logic so that
the calling logic continues without waiting for the subroutine to
finish. An ACTIVATE statement followed by the name of the subroutine
is needed.
3. Subroutines written in an external programming language can be called
using the XSUB() function.
Subroutines are defined in the Subroutines Editor in the More Elements section
of the Build menu. For more information on ProModel subroutines, please refer
to the ProModel Users Guide.
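As a rough Python analogy (not ProModel syntax), variation 1 corresponds to an ordinary function call, and variation 2 to starting the logic asynchronously, the way ACTIVATE lets the calling logic continue without waiting:

```python
import threading

def log_inspection(gear_id, results):
    # Variation 1: an ordinary subroutine; the caller waits for it to
    # finish and may use its return value.
    results.append(f"gear {gear_id} inspected")
    return len(results)

results = []
count = log_inspection(1, results)   # called by name, returns a value

# Variation 2: like ACTIVATE, launch the subroutine and let the calling
# logic continue without waiting for it to finish.
t = threading.Thread(target=log_inspection, args=(2, results))
t.start()
# ... calling logic continues here ...
t.join()  # join only to make this sketch deterministic
```

Variation 3 (XSUB) has no direct analogy here; it calls a routine compiled in an external programming language.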
Problem Statement
At California Gears, gear blanks are routed from a fabrication center to one of
three manual inspection stations. The operation logic for the gears is identical at
each station except for the processing times, which are a function of the individ-
ual inspectors. Each gear is assigned two attributes during fabrication. The first
at- tribute, OuterDia, is the dimension of the outer diameter of the gear. The
second attribute, InnerDia, is the dimension of the inner diameter of the gear.
During the fabrication process, the outer and inner diameters are machined
with an average
of 4.015 ± 0.01 and 2.015 ± 0.01 (uniformly distributed). These dimensions
are tested at the inspection stations and the values entered into a text file
(Quality.doc)
for quality tracking. After inspection, gears are routed to a shipping location if
they pass inspection, or to a scrap location if they fail inspection.
Gear blanks arrive at the rate of 12 per hour (interarrival time exponentially distributed with a mean of five minutes). The fabrication and inspection times (normal) are shown in Table L14.6. The specification limits for the outer and inner diameters are given in Table L14.7. The layout, locations, entities, and arrival of raw material are shown in Figures L14.36, L14.37, L14.38, and L14.39, respectively. The subroutine defining routing logic is shown in Figure L14.40. Figure L14.41 shows the processes and routing logic. The external file in which quality data will be written is defined in Figure L14.42. Figure L14.43 shows a portion of the actual gear rejection report.
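The test-and-route logic can be sketched in Python. The specification limits below are placeholders for illustration only (the real limits are in Table L14.7), and the helper names are hypothetical:

```python
import random

# Hypothetical spec limits for illustration; see Table L14.7 for the
# actual values.
OUTER_SPEC = (4.01, 4.02)
INNER_SPEC = (2.01, 2.02)

def machine_gear(rng):
    # OuterDia ~ uniform(4.015 +/- 0.01), InnerDia ~ uniform(2.015 +/- 0.01)
    return rng.uniform(4.005, 4.025), rng.uniform(2.005, 2.025)

def inspect(outer, inner):
    ok = (OUTER_SPEC[0] <= outer <= OUTER_SPEC[1]
          and INNER_SPEC[0] <= inner <= INNER_SPEC[1])
    return "ship" if ok else "scrap"

def quality_line(gear_id, outer, inner):
    # One record per gear, as written to the quality-tracking text file.
    return f"{gear_id}\t{outer:.4f}\t{inner:.4f}\t{inspect(outer, inner)}"

rng = random.Random(3)
records = [quality_line(i, *machine_gear(rng)) for i in range(5)]
```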
FIGURE L14.36
Layout of California
Gears.
FIGURE L14.37
Locations at
California Gears.
FIGURE L14.38
Entities at California
Gears.
FIGURE L14.39
Arrival of raw material at California Gears.
FIGURE L14.40
Subroutine defining routing logic.
FIGURE L14.41
Process and routing tables for California Gears.
FIGURE L14.42
External file for
California Gears.
FIGURE L14.43
Gear rejection report (partial) for California Gears.
Problem Statement
At the Newport Beach Burger stand, there are three peak periods of customer arrivals: breakfast, lunch, and dinner (Table L14.8). The customer arrivals taper out before and after these peak periods. The same cycle of arrivals repeats every day. A total of 100 customers visit the store on an average day (normal distribution with a standard deviation of five). Upon arrival, the customers take Uniform(5 ± 2) minutes to order and receive food and Normal(15, 3) minutes to eat, finish business discussions, gossip, read a newspaper, and so on. The restaurant currently has only one employee (who takes the order, prepares the food, serves, and takes the money). Simulate for 100 days.
FIGURE L14.44
Arrival Cycles edit menu.
The locations, processes and routings, arrivals, and arrival quantities are
shown in Figures L14.45, L14.46, L14.47, and L14.48, respectively. The arrival
cycles and the probability density function of arrivals at Newport Beach Burger
are shown in Figure L14.49. A snapshot of the simulation model is shown in
Figure L14.50.
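A minimal Python sketch of the per-day demand and per-customer service times described above (ignoring queueing for the single employee; the helper names are illustrative):

```python
import random

def one_customer(rng):
    order = rng.uniform(3, 7)         # Uniform(5 +/- 2) minutes
    eat = rng.normalvariate(15, 3)    # Normal(15, 3) minutes
    return order + eat

def customers_today(rng):
    # Daily customer count: N(100, 5), rounded and floored at zero.
    return max(0, round(rng.normalvariate(100, 5)))

rng = random.Random(11)
n = customers_today(rng)
total_service = sum(one_customer(rng) for _ in range(n))
```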
FIGURE L14.45
Locations at Newport Beach Burger.
FIGURE L14.46
Process and routing tables at Newport Beach Burger.
FIGURE L14.47
Arrivals at Newport Beach Burger.
FIGURE L14.50
A snapshot of Newport Beach Burger.
Problem Statement
The customers at the Newport Beach Burger stand arrive in group sizes of one, two, three, or four with the probabilities shown in Table L14.9. The mean time between arrivals is 15 minutes (exponential).

TABLE L14.9  Group Size Probabilities
Group Size    Probability
1             .4
2             .3
3             .1
4             .2

FIGURE L14.51
User Distributions menu.

Ordering times have the probabilities
shown in Table L14.10. The probability density function of eating times is shown in Table L14.11. Simulate for 100 hours.
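A user distribution is simply a discrete distribution sampled by inverse transform. A Python sketch using the group size values from Table L14.9:

```python
import random

# Group size distribution from Table L14.9.
GROUP_SIZES = [1, 2, 3, 4]
PROBS = [0.4, 0.3, 0.1, 0.2]

def sample_group_size(rng):
    # Inverse-transform sampling over the discrete user distribution.
    u, cum = rng.random(), 0.0
    for size, p in zip(GROUP_SIZES, PROBS):
        cum += p
        if u < cum:
            return size
    return GROUP_SIZES[-1]

rng = random.Random(5)
counts = {s: 0 for s in GROUP_SIZES}
for _ in range(10000):
    counts[sample_group_size(rng)] += 1
```

With 10,000 draws the observed frequencies track the table probabilities closely.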
The user distributions, group size distribution, order time distribution, and
eating time distribution are defined as shown in Figures L14.52, L14.53, L14.54,
and L14.55, respectively.
FIGURE L14.52
User distributions defined for Newport Beach Burger.
FIGURE L14.55
Eating time
distribution for
Newport
Beach Burger.
TABLE L14.10  Probability Density Function of Ordering Times
TABLE L14.11  Probability Density Function of Eating Times
Problem Statement
Salt Lake Machine Shop has two machines: Mach A and Mach B. A
maintenance mechanic, mechanic Dan, is hired for preventive maintenance on
these machines. The mean time between preventive maintenance is 120 ± 10
minutes. Both ma- chines are shut down at exactly the same time. The actual
maintenance takes
10 ± 5 minutes. Jobs arrive at the rate of 7.5 per hour (exponential mean time
be- tween arrival). Jobs go to either Mach A or Mach B selected on a random
basis.
Simulate for 100 hours.
FIGURE L14.56
Layout of Salt Lake Machine Shop.
The layout, resources, path network of the mechanic, and process and routing tables are shown in Figures L14.56, L14.57, L14.58, and L14.59. The arrival of customers is defined by an exponential interarrival time with a mean of three minutes. The downtimes and the definition of random number streams are shown in Figures L14.60 and L14.61, respectively. Note that the same seed value (Figure L14.61) is used in both the random number streams to ensure that both machines are shut down at exactly the same time.
FIGURE L14.57
Resources at Salt Lake Machine Shop.
FIGURE L14.58
Path network of the mechanic at Salt Lake Machine Shop.
FIGURE L14.59
Process and routing tables at Salt Lake Machine Shop.
FIGURE L14.60
Clock downtimes for machines A and B at Salt Lake Machine Shop.
FIGURE L14.61
Definition of
random number
streams.
L14.11 Exercises
1. Differentiate between the following:
a. Table functions versus arrays.
b. Subroutines versus macros.
c. Arrivals versus arrival cycles.
d. Scenarios versus replications.
e. Scenarios versus views.
f. Built-in distribution versus user distribution.
2. What are some of the advantages of using an external file in ProModel?
3. HiTek Molding, a small mold shop, produces three types of parts: Jobs
A, B, and C. The ratio of each part and the processing times
(minimum, mean, and maximum of a triangular distribution) are as
follows:
7. For the Detroit ToolNDie plant (Exercise 14, Section L7.12), generate the following three scenarios:
a. Scenario I: One tool crib clerk.
b. Scenario II: Two tool crib clerks.
c. Scenario III: Three tool crib clerks.
Run 10 replications of each scenario. Analyze and compare the results.
How many clerks would you recommend hiring?
8. In United Electronics (Exercise 7 in Section L7.12), use an array to store the process time
information. Read this information from an external Excel spreadsheet into the simulation model.
9. For Salt Lake City Electronics (Exercise 10 in Section L7.12), use external files (arrivals and entity_location files) to store all the data. Read these files directly into the simulation model.
10. For Ghosh’s Gear Shop example in Section L13.2.1, create macros
and suitable runtime interfaces for the following processing time parameters:
11. West Coast Federal, a drive-in bank, has one teller and space for five waiting cars. If a customer arrives when the line is full, he or she drives around the block and tries again. Time between arrivals is exponential with a mean of 10 minutes. Time to drive around the block is normally distributed with a mean of 3 minutes and a standard deviation of 0.6 minutes. Service time is uniform at 9 ± 3 minutes. Build a simulation model and run it for 2000 hours (approximately one year of operation).
a. Collect statistics on time in queue, time in system, teller utilization, number of customers served
per hour, and number of customers balked per hour.
b. Modify the model to allow two cars to wait after they are served to get onto the street. Waiting
time for traffic is exponential with a mean of four minutes. Collect all the statistics from part a.
c. Modify the model to reflect balking customers leaving the system and not driving around the block.
Collect all the same statistics. How many customers are lost per hour?
d. The bank’s operating hours are 9 A.M. till 3 P.M. The drive-in facility is closed at 2:30 P.M.
Customers remaining in line are served until the last customer has left the bank. Modify the model
to reflect these changes. Run for 200 days of operation.
12. San Dimas Mutual Bank has two drive-in ATM kiosks in tandem but
only one access lane (Figure L14.62). In addition, there is one indoor
FIGURE L14.62
Layout of the San Dimas Mutual Bank (drive-in lane with ATM 1 and ATM 2 in tandem, parking lot, and walk-in indoor ATM).
ATM for customers who decide to park (30 percent of all customers)
and walk in. Customers arrive at intervals that are spaced on average
five minutes apart (exponentially distributed). ATM customers are of
three types—save money (deposit cash or check), spend money
(withdraw cash), or count money (check balance). If both ATMs are
free when a customer arrives, the customer will use the “downstream”
ATM 2. A car at ATM 1 cannot pass a car at the ATM in front of it even
if it has finished.
The service time at this indoor ATM is the same as at the drive-in ATMs. Run the model until
2000 cars have been served. Analyze the following:
a. Average and maximum number of customers deciding to park and
walk in. How big should the parking lot be?
b. The average and maximum drive-in queue size.
c. The average and maximum time spent by a customer waiting in the
drive-in queue.
d. The average and maximum walk-in queue size.
e. The average and maximum time spent by a customer waiting in the
walk-in queue.
f. The average and maximum time in the system.
g. Utilization of the three ATMs.
h. Number of customers served each hour.
Harrell−Ghosh−Bowden: Simulation Using ProModel, Second Edition
APPENDIX A  COMMON CONTINUOUS AND DISCRETE DISTRIBUTIONS
© The McGraw−Hill Companies
The beta distribution is a continuous distribution that has both upper and lower finite bounds. Because many real situations can be bounded in this way, the beta distribution can be used empirically to estimate the actual distribution before many data are available. Even when data are available, the beta distribution should fit most data in a reasonable fashion, although it may not be the best fit. The uniform distribution is a special case of the beta distribution with p, q = 1.
As can be seen in the examples that follow, the beta distribution can approach zero or infinity at either of its bounds, with p controlling the lower bound and q controlling the upper bound. Values of p, q < 1 cause the beta distribution to approach infinity at that bound. Values of p, q > 1 cause the beta distribution to be finite at that bound.
Beta distributions have been used to model distributions of activity time in PERT analysis, porosity/void ratio of soil, phase derivatives in communication theory, size of progeny in Escherichia coli, dissipation rate in breakage models, proportions in gas mixtures, steady-state reflectivity, clutter and power of radar signals, construction duration, particle size, tool wear, and others. Many of these uses occur because of the doubly bounded nature of the beta distribution.
*Adapted by permission from Stat::Fit Users Guide (South Kent, Connecticut: Geer Mountain
Software Corporation, 1997).
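A quick way to experiment with a doubly bounded beta is to rescale Python's standard beta sampler onto assumed empirical bounds (the parameter values below are arbitrary examples):

```python
import random

def bounded_beta(rng, p, q, lo, hi):
    # random.betavariate samples the standard beta on (0, 1); rescale it
    # to the empirical lower and upper bounds.
    return lo + (hi - lo) * rng.betavariate(p, q)

rng = random.Random(2)
samples = [bounded_beta(rng, p=2.0, q=3.0, lo=10.0, hi=20.0)
           for _ in range(1000)]

# With p = q = 1 the beta reduces to the uniform distribution.
uniform_like = [bounded_beta(rng, 1.0, 1.0, 0.0, 1.0) for _ in range(1000)]
```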
Erlang Distribution (min, m, β)

f(x) = [(x − min)^(m−1) / (β^m Γ(m))] exp(−(x − min)/β)

min = minimum x
m = shape factor = positive integer
β = scale factor > 0
mean = min + mβ
variance = mβ²
mode = min + β(m − 1)
probabilities when shifted in time. Even when exponential models are known to be
inadequate to describe the situation, their mathematical tractability provides a good
starting point. A more complex distribution such as Erlang or Weibull may be
investigated (see Johnson et al. 1994, p. 499; Law and Kelton 1991, p. 330).
Gamma Distribution (min, α, β)

f(x) = [(x − min)^(α−1) / (β^α Γ(α))] exp(−(x − min)/β)

min = minimum x
α = shape parameter > 0
β = scale parameter > 0
mean = min + αβ
variance = αβ²
mode = min + β(α − 1) if α ≥ 1; min if α < 1
The gamma distribution is a continuous distribution bounded at the lower side. It has three distinct regions. For α = 1, the gamma distribution reduces to the exponential distribution, starting at a finite value at minimum x and decreasing monotonically thereafter. For α < 1, the gamma distribution tends to infinity at minimum x and decreases monotonically for increasing x. For α > 1, the gamma distribution is 0 at minimum x, peaks at a value that
depends on both α and β, and decreases monotonically thereafter. If α is restricted to positive integers, the gamma distribution is reduced to the Erlang distribution.
Note that the gamma distribution also reduces to the chi-square distribution for min = 0, β = 2, and α = ν/2. It can then be viewed as the distribution of the sum of squares of independent unit normal variables, with ν degrees of freedom, and is used in many statistical tests.
The gamma distribution can also be used to approximate the normal distribution, for large α, while maintaining its strictly positive values of x [actually (x − min)].
The gamma distribution has been used to represent lifetimes, lead times, personal income data, a population about a stable equilibrium, interarrival times, and service times. In particular, it can represent lifetime with redundancy (see Johnson et al. 1994, p. 343; Shooman 1990).
Examples of each of the regions of the gamma distribution are shown here. Note the peak of the distribution moving away from the minimum value for increasing α, but with a much broader distribution.
Lognormal Distribution (min, µ, σ)

f(x) = [1 / ((x − min) σ √(2π))] exp(−[ln(x − min) − µ]² / (2σ²))

min = minimum x
µ = mean of the included normal distribution
σ = standard deviation of the included normal distribution
mean = min + exp(µ + σ²/2)
variance = exp(2µ + σ²)(exp(σ²) − 1)
mode = min + exp(µ − σ²)
The lognormal distribution is used in many different areas including the distribution of particle size in naturally occurring aggregates, dust concentration in industrial atmospheres, the distribution of minerals present in low concentrations, the duration of sickness absence, physicians' consulting time, lifetime distributions in reliability, distribution of income, employee retention, and many applications modeling weight, height, and so forth (see Johnson et al. 1994, p. 207).
The lognormal distribution can provide very peaked distributions for increasing σ—indeed, far more peaked than can be easily represented in graphical form.
Normal Distribution (µ, σ)

f(x) = [1 / (σ √(2π))] exp(−(x − µ)² / (2σ²))

µ = shift parameter = mean
σ = scale parameter = standard deviation
mean = µ
variance = σ²
mode = µ
Pearson 6 Distribution (min, β, p, q)

f(x) = [(x − min)/β]^(p−1) / (β B(p, q) [1 + (x − min)/β]^(p+q))

x > min
min ∈ (−∞, ∞)
β > 0
p > 0
q > 0
B(p, q) = beta function

mean = min + βp/(q − 1) for q > 1; does not exist for 0 < q ≤ 1
variance = β²p(p + q − 1) / [(q − 1)²(q − 2)] for q > 2; does not exist for 0 < q ≤ 2
mode = min + β(p − 1)/(q + 1) for p ≥ 1; min otherwise
The Pearson 6 distribution is a continuous distribution bounded on the low side. The
Pearson 6 distribution is sometimes called the beta distribution of the second kind due to
the relationship of a Pearson 6 random variable to a beta random variable.
Like the gamma distribution, the Pearson 6 distribution has three distinct regions.
For p = 1, the Pearson 6 distribution resembles the exponential distribution, starting
at a finite value at minimum x and decreasing monotonically thereafter. For p < 1, the
Pearson 6
distribution tends to infinity at minimum x and decreases monotonically for
increasing x. For p > 1, the Pearson 6 distribution is 0 at minimum x, peaks at a value
that depends on both p and q, and decreases monotonically thereafter.
The Pearson 6 distribution appears to have found little direct use, except in its reduced
form as the F distribution, where it serves as the distribution of the ratio of independent
estimators of variance and provides the final test for the analysis of variance.
The three regions of the Pearson 6 distribution are shown here. Also note that the
distribution becomes sharply peaked just off the minimum for increasing q.
Triangular Distribution (min, max, mode)

min = minimum x
max = maximum x
mode = most likely x

mean = (min + max + mode) / 3

variance = [min² + max² + mode² − (min)(max) − (min)(mode) − (max)(mode)] / 18
The triangular distribution is a continuous distribution bounded on both sides. The triangular distribution is often used when no or few data are available; it is rarely an accurate representation of a data set (see Law and Kelton 1991, p. 341). However, it is employed as the functional form of regions for fuzzy logic due to its ease of use.
The triangular distribution can take on very skewed forms, as shown here, including
negative skewness. For the exceptional cases where the mode is either the min or max,
the triangular distribution becomes a right triangle.
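The mean and variance formulas can be checked against Python's built-in triangular sampler (the parameter values are arbitrary examples):

```python
import random

def triangular_stats(lo, hi, mode):
    # Closed-form mean and variance of the triangular distribution.
    mean = (lo + hi + mode) / 3
    var = (lo**2 + hi**2 + mode**2 - lo*hi - lo*mode - hi*mode) / 18
    return mean, var

rng = random.Random(6)
lo, hi, mode = 2.0, 10.0, 3.0      # a positively skewed example
xs = [rng.triangular(lo, hi, mode) for _ in range(20000)]
sample_mean = sum(xs) / len(xs)

mean, var = triangular_stats(lo, hi, mode)
```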
Weibull Distribution (min, α, β)

f(x) = (α/β) [(x − min)/β]^(α−1) exp(−[(x − min)/β]^α)

min = minimum x
α = shape parameter > 0
β = scale parameter > 0
mean = min + β Γ(1/α + 1)
variance = β² [Γ(2/α + 1) − Γ²(1/α + 1)]
mode = min + β[(α − 1)/α]^(1/α) if α ≥ 1; min if α < 1
The Weibull distribution is a continuous distribution bounded on the lower side. Because it provides one of the limiting distributions for extreme values, it is also referred to as the Frechet distribution and the Weibull–Gnedenko distribution. Unfortunately, the Weibull distribution has been given various functional forms in the many engineering references; the form here is the standard form given in Johnson et al. 1994, p. 628.
Like the gamma distribution, the Weibull distribution has three distinct regions. For α = 1, the Weibull distribution is reduced to the exponential distribution, starting at a finite value at minimum x and decreasing monotonically thereafter. For α < 1, the Weibull distribution tends to infinity at minimum x and decreases monotonically for increasing x. For α > 1, the Weibull distribution is 0 at minimum x, peaks at a value that depends on both α and β, and decreases monotonically thereafter. Uniquely, the Weibull distribution has negative skewness for α > 3.6.
The Weibull distribution can also be used to approximate the normal distribution for
α = 3.6, while maintaining its strictly positive values of x [actually (x-min)], although
the kurtosis is slightly smaller than 3, the normal value.
The Weibull distribution derived its popularity from its use to model the strength of materials, and has since been used to model just about everything. In particular, the Weibull distribution is used to represent wear-out lifetimes in reliability, wind speed, rainfall intensity, health-related issues, germination, duration of industrial stoppages, migratory systems, and thunderstorm data (see Johnson et al. 1994, p. 628; Shooman 1990, p. 190).
Binomial Distribution (n, p)

p(x) = C(n, x) p^x (1 − p)^(n−x)

C(n, x) = n! / [x!(n − x)!]

x = 0, 1, . . . , n
n = number of trials
p = probability of the event occurring
mean = np
variance = np(1 − p)
mode = p(n + 1) − 1 and p(n + 1) if p(n + 1) is an integer; ⌊p(n + 1)⌋ otherwise
As shown in the examples, low values of p give high probabilities for low values of
x and vice versa, so that the peak in the distribution may approach either bound. Note that
the probabilities are actually weights at each integer but are represented by broader bars
for visibility.
The binomial distribution can be used to describe
• The number of defective items in a batch.
• The number of people in a group of a particular type.
• Out of a group of employees, the number of employees who call in sick on a
given day.
It is also useful in other event sampling tests where the probability of the event is known to
be constant or nearly so. See Johnson et al. (1992, p. 134).
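For example, the number of defective items in a batch of n = 20 with defect probability p = 0.1 follows this distribution. A Python sketch confirming the mean and variance formulas:

```python
import math

def binom_pmf(n, p, x):
    # C(n, x) * p^x * (1 - p)^(n - x)
    return math.comb(n, x) * p**x * (1 - p)**(n - x)

n, p = 20, 0.1
pmf = [binom_pmf(n, p, x) for x in range(n + 1)]

mean = sum(x * w for x, w in enumerate(pmf))            # np = 2.0
var = sum((x - mean)**2 * w for x, w in enumerate(pmf))  # np(1-p) = 1.8
```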
Geometric Distribution (p)

p(x) = p(1 − p)^x

x = 0, 1, . . .
p = probability of occurrence
mean = (1 − p)/p
variance = (1 − p)/p²
mode = 0
Poisson Distribution (λ)

p(x) = (e^(−λ) λ^x) / x!

x = 0, 1, . . .
λ = rate of occurrence
mean = λ
variance = λ
mode = λ − 1 and λ if λ is an integer; ⌊λ⌋ otherwise
The Poisson distribution is a discrete distribution bounded at 0 on the low side and unbounded on the high side. The Poisson distribution is a limiting form of the hypergeometric distribution.
The Poisson distribution finds frequent use because it represents the infrequent occurrence of events whose rate is constant. This includes many types of events in time and
space such as arrivals of telephone calls, defects in semiconductor manufacturing, defects
in all aspects of quality control, molecular distributions, stellar distributions, geographical
distributions of plants, shot noise, and so on. It is an important starting point in queuing
theory and reliability theory (see Johnson et al. 1992, p. 151). Note that the time between
arrivals (defects) is exponentially distributed, which makes this distribution a particularly
convenient starting point even when the process is more complex.
The Poisson distribution peaks near λ and falls off rapidly on either side. Note that the
probabilities are actually weights at each integer but are represented by broader bars for
visibility.
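The link to exponential interarrival times can be sketched directly: generating exponential gaps with rate λ and counting events per unit time yields Poisson-distributed counts (the parameter values are arbitrary examples):

```python
import random

def poisson_count(rng, lam, horizon=1.0):
    # Count arrivals in [0, horizon) of a Poisson process with rate lam
    # by accumulating exponentially distributed interarrival times.
    t, count = rng.expovariate(lam), 0
    while t < horizon:
        count += 1
        t += rng.expovariate(lam)
    return count

rng = random.Random(9)
lam = 4.0
counts = [poisson_count(rng, lam) for _ in range(20000)]
sample_mean = sum(counts) / len(counts)   # theory: mean = lam = 4.0
```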
References
Banks, Jerry, and John S. Carson II. Discrete-Event System Simulation. Englewood Cliffs,
NJ: Prentice Hall, 1984.
Johnson, Norman L.; Samuel Kotz; and N. Balakrishnan. Continuous Univariate Distribu-
tions. Vol. 1. New York: John Wiley & Sons, 1994.
Johnson, Norman L.; Samuel Kotz; and N. Balakrishnan. Continuous Univariate Distri-
butions. Vol. 2. New York: John Wiley & Sons, 1995.
Johnson, Norman L.; Samuel Kotz; and Adrienne W. Kemp. Univariate Discrete Distribu-
tions. New York: John Wiley & Sons, 1992.
Law, Averill M., and W. David Kelton. Simulation Modeling and Analysis. New York:
McGraw-Hill, 1991.
Shooman, Martin L. Probabilistic Reliability: An Engineering Approach. Melbourne,
Florida: Robert E. Krieger, 1990.
APPENDIX B  CRITICAL VALUES FOR STUDENT'S t DISTRIBUTION AND STANDARD NORMAL DISTRIBUTION
APPENDIX C  F DISTRIBUTION FOR α = 0.05

Columns: Numerator Degrees of Freedom [df(Treatment)]. Rows: Denominator Degrees of Freedom [df(Error)].

     1     2     3     4     5     6     7     8     9    10    12    14    16    20    24    30    40    50   100   200     ∞
3 10.13 9.55 9.28 9.12 9.01 8.94 8.89 8.85 8.81 8.79 8.74 8.71 8.69 8.66 8.64 8.62 8.59 8.58 8.55 8.54 8.54
4 7.71 6.94 6.59 6.39 6.26 6.16 6.09 6.04 6.00 5.96 5.91 5.87 5.84 5.80 5.77 5.75 5.72 5.70 5.66 5.65 5.63
5 6.61 5.79 5.41 5.19 5.05 4.95 4.88 4.82 4.77 4.74 4.68 4.64 4.60 4.56 4.53 4.50 4.46 4.44 4.41 4.39 4.36
6 5.99 5.14 4.76 4.53 4.39 4.28 4.21 4.15 4.10 4.06 4.00 3.96 3.92 3.87 3.84 3.81 3.77 3.75 3.71 3.69 3.67
7 5.59 4.74 4.35 4.12 3.97 3.87 3.79 3.73 3.68 3.64 3.57 3.53 3.49 3.44 3.41 3.38 3.34 3.32 3.27 3.25 3.23
8 5.32 4.46 4.07 3.84 3.69 3.58 3.50 3.44 3.39 3.35 3.28 3.24 3.20 3.15 3.12 3.08 3.04 3.02 2.97 2.95 2.93
9 5.12 4.26 3.86 3.63 3.48 3.37 3.29 3.23 3.18 3.14 3.07 3.03 2.99 2.94 2.90 2.86 2.83 2.80 2.76 2.73 2.71
10 4.96 4.10 3.71 3.48 3.33 3.22 3.14 3.07 3.02 2.98 2.91 2.86 2.83 2.77 2.74 2.70 2.66 2.64 2.59 2.56 2.54
11 4.84 3.98 3.59 3.36 3.20 3.09 3.01 2.95 2.90 2.85 2.79 2.74 2.70 2.65 2.61 2.57 2.53 2.51 2.46 2.43 2.41
12 4.75 3.89 3.49 3.26 3.11 3.00 2.91 2.85 2.80 2.75 2.69 2.64 2.60 2.54 2.51 2.47 2.43 2.40 2.35 2.32 2.30
13 4.67 3.81 3.41 3.18 3.03 2.92 2.83 2.77 2.71 2.67 2.60 2.55 2.51 2.46 2.42 2.38 2.34 2.31 2.26 2.23 2.21
14 4.60 3.74 3.34 3.11 2.96 2.85 2.76 2.70 2.65 2.60 2.53 2.48 2.44 2.39 2.35 2.31 2.27 2.24 2.19 2.16 2.13
15 4.54 3.68 3.29 3.06 2.90 2.79 2.71 2.64 2.59 2.54 2.48 2.42 2.38 2.33 2.29 2.25 2.20 2.18 2.12 2.10 2.07
16 4.49 3.63 3.24 3.01 2.85 2.74 2.66 2.59 2.54 2.49 2.42 2.37 2.33 2.28 2.24 2.19 2.15 2.12 2.07 2.04 2.01
17 4.45 3.59 3.20 2.96 2.81 2.70 2.61 2.55 2.49 2.45 2.38 2.33 2.29 2.23 2.19 2.15 2.10 2.08 2.02 1.99 1.96
18 4.41 3.55 3.16 2.93 2.77 2.66 2.58 2.51 2.46 2.41 2.34 2.29 2.25 2.19 2.15 2.11 2.06 2.04 1.98 1.95 1.92
19 4.38 3.52 3.13 2.90 2.74 2.63 2.54 2.48 2.42 2.38 2.31 2.26 2.21 2.16 2.11 2.07 2.03 2.00 1.94 1.91 1.88
20 4.35 3.49 3.10 2.87 2.71 2.60 2.51 2.45 2.39 2.35 2.28 2.23 2.18 2.12 2.08 2.04 1.99 1.97 1.91 1.88 1.84
22 4.30 3.44 3.05 2.82 2.66 2.55 2.46 2.40 2.34 2.30 2.23 2.17 2.13 2.07 2.03 1.98 1.94 1.91 1.85 1.82 1.78
24 4.26 3.40 3.01 2.78 2.62 2.51 2.42 2.36 2.30 2.25 2.18 2.13 2.09 2.03 1.98 1.94 1.89 1.86 1.80 1.77 1.73
26 4.23 3.37 2.98 2.74 2.59 2.47 2.39 2.32 2.27 2.22 2.15 2.09 2.05 1.99 1.95 1.90 1.85 1.82 1.76 1.73 1.69
28 4.20 3.34 2.95 2.71 2.56 2.45 2.36 2.29 2.24 2.19 2.12 2.06 2.02 1.96 1.91 1.87 1.82 1.79 1.73 1.69 1.66
30 4.17 3.32 2.92 2.69 2.53 2.42 2.33 2.27 2.21 2.16 2.09 2.04 1.99 1.93 1.89 1.84 1.79 1.76 1.70 1.66 1.62
35 4.12 3.27 2.87 2.64 2.49 2.37 2.29 2.22 2.16 2.11 2.04 1.99 1.94 1.88 1.83 1.79 1.74 1.70 1.63 1.60 1.56
40 4.08 3.23 2.84 2.61 2.45 2.34 2.25 2.18 2.12 2.08 2.00 1.95 1.90 1.84 1.79 1.74 1.69 1.66 1.59 1.55 1.51
45 4.06 3.20 2.81 2.58 2.42 2.31 2.22 2.15 2.10 2.05 1.97 1.92 1.87 1.81 1.76 1.71 1.66 1.63 1.55 1.51 1.47
50 4.03 3.18 2.79 2.56 2.40 2.29 2.20 2.13 2.07 2.03 1.95 1.89 1.85 1.78 1.74 1.69 1.63 1.60 1.52 1.48 1.44
60 4.00 3.15 2.76 2.53 2.37 2.25 2.17 2.10 2.04 1.99 1.92 1.86 1.82 1.75 1.70 1.65 1.59 1.56 1.48 1.44 1.39
70 3.98 3.13 2.74 2.50 2.35 2.23 2.14 2.07 2.02 1.97 1.89 1.84 1.79 1.72 1.67 1.62 1.57 1.53 1.45 1.40 1.35
80 3.96 3.11 2.72 2.49 2.33 2.21 2.13 2.06 2.00 1.95 1.88 1.82 1.77 1.70 1.65 1.60 1.54 1.51 1.43 1.38 1.33
100 3.94 3.09 2.70 2.46 2.31 2.19 2.10 2.03 1.97 1.93 1.85 1.79 1.75 1.68 1.63 1.57 1.52 1.48 1.39 1.34 1.28
200 3.89 3.04 2.65 2.42 2.26 2.14 2.06 1.98 1.93 1.88 1.80 1.74 1.69 1.62 1.57 1.52 1.46 1.41 1.32 1.26 1.19
∞ 3.84 3.00 2.61 2.37 2.21 2.10 2.01 1.94 1.88 1.83 1.75 1.69 1.64 1.57 1.52 1.46 1.40 1.35 1.25 1.17 1.00
APPENDIX D  CRITICAL VALUES FOR CHI-SQUARE DISTRIBUTION
P A R T
III  CASE STUDY ASSIGNMENTS: EIGHT POSSIBLE SIMULATIONS
These case studies have been used in senior- or graduate-level simulation classes. Each of these case studies can be analyzed over a three- to five-week period. A single student or a group of two to three students can work together on these case studies. If you are using the student version of the software, you may need to make some simplifying assumptions to limit the size of the model. You will also need to fill in (research or assume) some of the information and data missing from the case descriptions.
A toy company produces three types (A, B, and C) of toy aluminum airplanes in the following daily volumes: A = 1000, B = 1500, and C = 1800. The company expects demand for its products to increase by 30 percent over the next six months and needs to know the total machines and operators that will be required. All planes go through five operations (10 through 50) except for plane A, which skips operation 40. Following is a list of operation times, move times, and resources used:
After die casting, planes are moved to each operation in batch sizes of 24. Input buffers exist at each operation. The factory operates eight hours a day, five days per week. The factory starts out empty at the beginning of each day and ships all parts produced at the end of the day. The die caster experiences downtimes every 30 minutes (exponentially distributed) and takes 8 minutes (normally distributed with a standard deviation of 2 minutes) to repair. One maintenance person is always on duty to make repairs.
Find the total number of machines and personnel needed to meet daily production requirements. Document the assumptions and experimental procedure you went through to conduct the study.
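As a back-of-the-envelope starting point before simulating, the machines required at an operation can be estimated from total processing minutes over available minutes. The cycle time below is a made-up placeholder, since the case's operation-time table must be filled in:

```python
import math

# Hypothetical inputs for illustration only; the actual operation times
# and resources come from the case description's table.
daily_volume = {"A": 1000, "B": 1500, "C": 1800}
growth = 1.30                 # 30 percent demand increase
minutes_per_day = 8 * 60      # one 8-hour shift

def machines_required(cycle_time_min, volumes):
    # Required machines = total processing minutes / available minutes,
    # rounded up to a whole machine (ignores downtime and variability,
    # which is exactly what the simulation study must add).
    demand = sum(v * growth for v in volumes.values())
    return math.ceil(demand * cycle_time_min / minutes_per_day)

machines = machines_required(0.3, daily_volume)  # 0.3 min/part assumed
```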
Maria opened her authentic Mexican restaurant Mi Cazuela (a cazuela is a clay cooking bowl with a small handle on each side) in Pasadena, California, in the 1980s. It quickly became popular for the tasty food and use of fresh organic produce and all-natural meats. As her oldest child, you have been asked to run the restaurant. If you are able to gain her confidence, she will eventually hand over the restaurant to you.
You have definite ideas about increasing the profitability at Mi Cazuela. Lately, you
have observed a troubling trend in the restaurant. An increasing number of customers are
expressing dissatisfaction with the long wait, and you have also observed that some
people leave without being served.
Your initial analysis of the situation at Mi Cazuela indicates that one way to improve customer service is to reduce the waiting time in the restaurant. You also realize that by optimizing the process for the peak time in the restaurant, you will be able to increase the profit.
Customers arrive in groups that vary in size from one to four (uniformly distributed). Currently, there are four tables for four and three tables for two patrons in the dining area. One table for four can be replaced with two tables for two, or vice versa. Groups of one or two customers wait in one queue while groups of three or four customers wait in another queue. Each of these waiting lines can accommodate up to two groups only. One- or two-customer groups are directed to tables for two. Three- or four-customer groups are directed to tables for four.
There are two cooks in the kitchen and two waiters. The cooks are paid $100/day, and the waiters get $60/day. The cost of raw material (vegetables, meat, spices, and other food material) is $1 per customer. The overhead cost of the restaurant (rent, insurance, utilities, and so on) is $300/day. The bill for each customer varies uniformly from $10 to $16, or U(13, 3).
The restaurant remains open seven days a week from 5 P.M. till 11 P.M. The customer arrival pattern is as follows. The total number of customer groups visiting the restaurant each day varies uniformly between 30 and 50, or U(40, 10):
From To Percent
5 P.M. 6 P.M. 10
6 P.M. 7 P.M. 20
7 P.M. 9 P.M. 55
9 P.M. 10 P.M. 10
10 P.M. 11 P.M. 5
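The arrival-pattern table above is an empirical discrete distribution and can be sampled by cumulative percentages. A minimal sketch (the names are illustrative, not from the text):

```python
import random

# (period, percent of the day's arrivals) from the arrival-pattern table
ARRIVAL_PATTERN = [("5-6 P.M.", 10), ("6-7 P.M.", 20), ("7-9 P.M.", 55),
                   ("9-10 P.M.", 10), ("10-11 P.M.", 5)]

def sample_arrival_period():
    """Pick the period in which a customer group arrives, by cumulative percent."""
    r = random.uniform(0, 100)
    cumulative = 0
    for period, pct in ARRIVAL_PATTERN:
        cumulative += pct
        if r <= cumulative:
            return period
    return ARRIVAL_PATTERN[-1][0]
```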
Part A
Analyze and answer the following questions:
1. What is the range of profit (develop a ±3σ confidence interval) per day at Mi Cazuela?
2. On average, how many customers leave the restaurant (per day) without eating?
3. What is the range of time (develop a ±3σ confidence interval) a customer
group spends at the restaurant?
4. How much time (develop a ±3σ confidence interval) does a customer group
wait in line?
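One common reading of the "±3σ confidence interval" requested above is the sample mean across independent replications plus or minus three standard errors. A sketch under that assumption (the helper name is made up):

```python
import math
import statistics

def three_sigma_interval(replication_means):
    """Mean +/- 3 standard errors over independent simulation replications."""
    n = len(replication_means)
    mean = statistics.mean(replication_means)
    std_err = statistics.stdev(replication_means) / math.sqrt(n)
    return mean - 3 * std_err, mean + 3 * std_err
```

Each replication here is one simulated day of operation; the interval tightens as more replications are run.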
Part B
You would like to change the mix of four-seat tables and two-seat tables in the dining area
to increase profit and reduce the number of balking customers. You would also like to
investigate whether hiring additional waiters and/or cooks will improve the bottom line (profit).
Part C
You are thinking of using an automated handheld device for the waiters to take the
customer orders and transmit the information wirelessly to the kitchen. The order entry and
transmission (activities #2 and 3) is estimated to take N(1.5, 0.2) minutes. The rent for
each of these devices is $2/hour. Will using these devices improve profit? Reduce
customer time in the system? Should you invest in these handheld devices?
Part D
The area surrounding the mall is going through a construction boom. It is expected that
Mi Cazuela (and the mall) will soon see an increase in the number of patrons per day.
Soon the number of customer groups visiting the restaurant is expected to grow to 50–70
per day, or U(60, 10). You have been debating whether to take over the adjoining
coffee shop and expand the Mi Cazuela restaurant. The additional area will allow you to
add four more tables of four and three tables of two customers each. The overhead cost
of the additional area will be $200 per day. Should you expand your restaurant? Will it
increase profit?
How is your performance in managing Mi Cazuela? Do you think Mama Maria will be
proud and hand over the reins of the business to you?
CASE STUDY 3 JAI HIND CYCLES INC. PLANS NEW PRODUCTION FACILITY
Mr. Singh is the industrial engineering manager at Jai Hind Cycles, a producer of bicycles.
As part of the growth plan for the company, the management is planning to introduce a
new model of mountain bike strictly for the export market. Presently, JHC assembles
regular bikes for the domestic market. The company runs one shift every day. The
present facility has a process layout. Mr. Singh is considering replacing the existing
layout with a group technology cell layout. As JHC’s IE manager, Mr. Singh has been
asked to report on the impact that will be made by the addition of the mountain bike to
JHC’s current production capabilities.
Mr. Singh has collected the following data from the existing plant:
1. The present production rate is 200 regular bikes per day in one 480-minute
shift.
2. The following is the list of all the existing equipment in JHC’s production
facility:
Table 1 shows a detailed bill of materials of all the parts manufactured by JHC and the
machining requirements for both models of bikes. Only parts of the regular and the
mountain bikes that appear in this table are manufactured within the plant. The rest of
the parts either are purchased from the market or are subcontracted to vendors.
A job-shop floor plan of the existing facility is shown in Figure 1. The whole facility
is 500,000 square feet in covered area.
The figures for the last five years of the combined total market demand are as follows:
Year Demand
1998 75,000
1999 82,000
2000 80,000
2001 77,000
2002 79,000
At present, the shortages are met by importing the balance of the demand. However, this
is a costly option, and management thinks indigenously manufactured bikes of good
quality would be in great demand.
Tasks
1. Design a cellular layout for the manufacturing facility, incorporating group
technology principles.
2. Determine the amount of resources needed to satisfy the increased demand.
3. Suggest a possible material handling system for the new facility—conveyor(s),
forklift truck(s), AGV(s).
FIGURE 1
Floor plan for Jai Hind Cycles (raw material storage; cutting, molding, bending, and casting areas).
CASE STUDY 4 THE FSB COIN SYSTEM
Todd had a problem. First Security Bank had developed a consumer lending software
package to increase the capacity and speed with which auto loan applications could be
processed. The system consisted of faxed applications combined with online processing.
The goal had been to provide a 30-minute turnaround of an application from the time the
bank received the faxed application from the dealer to the time the loan was either
approved or disapproved. The system had recently been installed, and the results had not
been satisfactory. The question now was what to do next.
First Security Bank of Idaho is the second largest bank in the state of Idaho with
branches throughout the state. The bank is a full-service bank providing a broad range of
banking services. Consumer loans and, in particular, auto loans make up an important part
of these services. The bank is part of a larger system covering most of the intermountain
states, and its headquarters are in Salt Lake City.
The auto loan business is a highly competitive field with a number of players including
full-line banks, credit unions, and consumer finance companies. Because of the highly
competitive nature, interest rates tend to be similar and competition is based on other
factors. An important factor for the dealer is the time it takes to obtain loan approval. The
quicker the loan approval, the quicker a sale can be closed and merchandise moved. A
30-minute turnaround of loan applications would be an important factor to a dealer, who
has a significant impact on the consumer's decision on where to seek a loan.
The loan application process begins at the automobile dealership. It is there that an
application is completed for the purpose of borrowing money to purchase a car. The
application is then sent to the bank via a fax machine. Most fax transmissions are less
than two minutes in length, and there is a bank of eight receiving fax machines. All
machines are tied to the same 800 number. The plan is that eight machines should provide
sufficient capacity so that the sending machine never receives a busy signal.
Once the fax transmission is complete, the application is taken from the machine by a
runner and distributed to one of eight data entry clerks. The goal is that data entry should
take no longer than six minutes. The goal was also set that there should be no greater than
5 percent errors.
Once the data input is complete, the input clerk assigns the application to one of six
regions around the state. Each region has a group of specific dealers determined by
geographic distribution. The application, now electronic in form, is distributed to the regions
via the wide area network. The loan officer in the respective region will then process the
loan, make a decision, and fax that decision back to the dealer. The goal is that the loan
officer should complete this function within 20 minutes. This allows about another two
minutes to fax the application back to the dealer.
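The stage goals described above add up exactly to the 30-minute target, which a trivial sketch makes explicit (the function name is illustrative):

```python
def planned_turnaround_min():
    """Sum of the stage goals in the loan application process (minutes)."""
    fax_in = 2      # most fax transmissions are under two minutes
    data_entry = 6  # data entry goal
    decision = 20   # loan officer processing goal
    fax_back = 2    # fax the decision back to the dealer
    return fax_in + data_entry + decision + fax_back
```

Because the stage budgets sum to the target with no slack, any stage that runs over its goal pushes the whole application past 30 minutes.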
The system has been operating approximately six months and has failed to meet the
goal of 30 minutes. In addition, the error rate is running approximately 10 percent.
Summary data are provided here:
Region    Number of Applications    Average Time (min)    Loan Officers
1 6150 58.76 6
2 1485 37.22 2
3 2655 37.00 4
4 1680 51.07 2
5 1440 37.00 2
6 1590 37.01 3
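A load-weighted average of the regional times shows how far the system is running from the goal overall. This is a sketch; the dictionary layout is mine, the numbers are from the summary table:

```python
# region -> (number of applications, average processing time in minutes)
REGIONS = {1: (6150, 58.76), 2: (1485, 37.22), 3: (2655, 37.00),
           4: (1680, 51.07), 5: (1440, 37.00), 6: (1590, 37.01)}

def overall_average_time(regions):
    """Application-weighted mean processing time across all regions."""
    total_apps = sum(n for n, _ in regions.values())
    weighted_sum = sum(n * t for n, t in regions.values())
    return weighted_sum / total_apps
```

Across the 15,000 applications this works out to about 47.5 minutes, more than 50 percent over the 30-minute goal, with regions 1 and 4 pulling the average up.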
Information on data input indicates that this part of the process is taking almost twice
as long as originally planned. The time from when the runner delivers the document to
when it is entered is currently averaging 9.5 minutes. Also, it has been found that the
time to process an error averages six minutes. Errors are corrected at the region and add
to the region's processing time.
Todd needed to come up with some recommendations on how to solve the problem.
Staffing seemed to be an issue in some regions, and the performance of the data input
clerks was below expectations. The higher processing times and error rates needed to
be corrected. He thought that if he solved these two problems and increased the staff, he
could get the averages in all regions down to 30 minutes.
CASE STUDY 5 AUTOMATED WAREHOUSING AT ATHLETIC SHOE COMPANY
The centralized storage and distribution operation at Athletic Shoe Company (ASC) is
considering replacement of its conventional manual storage racking systems with an
elaborate automated storage and retrieval system (AS/RS). The objective of this case
study is to come up with the preliminary design of the storage and material handling
systems for ASC that will meet the needs of the company in timely distribution of its
products.
On average, between 100,000 and 150,000 pairs of shoes are shipped per day to
between 8,000 and 10,000 shipping destinations. In order to support this level of operations,
it is estimated that rack storage space of up to 3,000,000 pairs of shoes, consisting of
30,000 stock-keeping units (SKUs), is required.
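These requirements imply some simple ratios worth checking before any detailed design. The sketch below is a back-of-envelope calculation; the function name is assumed:

```python
def storage_ratios(capacity=3_000_000, skus=30_000,
                   ship_low=100_000, ship_high=150_000):
    """Average depth per SKU and days of stock implied by the requirements."""
    pairs_per_sku = capacity / skus   # average pairs stored per SKU
    days_low = capacity / ship_high   # days of stock at the fastest shipping rate
    days_high = capacity / ship_low   # days of stock at the slowest shipping rate
    return pairs_per_sku, days_low, days_high
```

That is, about 100 pairs per SKU on average and a 20- to 30-day supply at the stated shipping rates, which bounds how deep the rack lanes need to be.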
The area available for storage, as shown in Figure 1, is 500,000 square feet. The
height of the ceiling is 40 feet. A first-in, first-out (FIFO) inventory policy is adopted
in the warehouse.
FIGURE 1
Layout of the Athletic Shoe Company warehouse (areas: Receiving; Unpack and scan; Store; Sort, wrap, and pack; Shipping).
Receiving
1. Unload from truck.
2. Scan the incoming boxes/pallets.
3. Send to storage racks.
4. Store.
Shipping
1. Batch pick shipping orders.
2. Send to sortation system.
3. Wrap and pack.
4. Load in outgoing truck.
Tasks
1. Construct a simulation model of the warehouse and perform experiments using
the model to judge the effectiveness and efficiency of the design with respect to
parameters such as flows, capacity, operation, interfacing, and so on.
2. Write a detailed specification of the storage plan: the amount of rack storage space
included in the design (capacity), rack types, dimensions, rack configurations, and
aisles within the layout.
3. Design and specify the material handling equipment for all of the functions listed,
including the interfaces required to change handling methods between functions.
4. Design and specify the AS/R system. Compare a dedicated versus a shared
picker system.
CASE STUDY 6 CONCENTRATE LINE AT FLORIDA CITRUS COMPANY
The concentrate line stations and the flow of production are shown in Figure 1. The
concentrate line starts from the receiving area. Full pallet loads of 3600 empty cans in
10 layers arrive at the receiving area. The arrival conveyor transports these pallets to the
depalletizer (1). The cans are loaded onto the depalletizer, which is operated by Don.
FIGURE 1
Concentrate line stations for Florida Citrus Company (stations 1 through 7 and connecting conveyors 1a–4a, from the starting point at receiving to the ending point at the loading zone).
The depalletizer pushes out one layer of 360 cans at a time from the pallet and then
raises up one layer of empty cans onto the depalletizer conveyor belt (1a). Conveyor 1a
transports the layer of cans to the depalletizer dispenser. The dispenser separates each can
from the layer of cans. Individual empty cans travel on the empty can conveyor to the
Pfaudler bowl.
The Pfaudler bowl is a big circular container that stores the concentrate. Its 36 filling
devices are used to fill the cans with concentrate. Pamela operates the Pfaudler bowl.
Empty cans travel on the filler bowl conveyor (2b) and are filled with the appropriate
juice concentrate. Filled cans are sent to the lid stamping mechanism (2a) on the filler
bowl conveyor. The lid stamping closes the filled cans. As the closed cans come
through the lid stamping mechanism, they are transported by the prewash conveyor to
the washing machine to be flushed with water to wash away any leftover concentrate
on the can. Four closed cans are combined as a group. The group of cans is then
transported by the accumulate conveyor to the accumulator.
The accumulator combines six such groups (24 cans in all). The accumulated group
of 24 cans is then transported by the prepack conveyor to the Packmaster (3), operated
by Pat. Pat loads cardboard boxes onto the cardboard feeding machine (3b) next to the
Packmaster. Then the 24 cans are wrapped and packed into each cardboard box.
FIGURE 2
Process flow for Florida Citrus Company (full boxes travel by palletizer conveyor to the palletizer, where 90 full boxes are combined into a full box pallet; full box pallets travel by exit conveyor, and by the forklift 2 resource through the loading zone, to exit).
The glue mechanism inside the Packmaster glues all six sides of the box. The boxes are
then raised up to the palletizer conveyor (3c), which transports the boxes to the
palletizer (4).
The box organizer (7) mechanism loads three boxes at a time onto the pallet. A total of
90 boxes are loaded onto each pallet (10 levels, 9 boxes per level). The palletizer then
lowers the pallet onto the exit conveyor (4a) to be transported to the loading zone.
From the loading zone a forklift truck carries the pallets to the shipping dock. Figure 2
describes the process flow.
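The packaging counts in the process description chain together simply. A sketch of the conversion (the helper name is illustrative):

```python
def full_pallets_from_cans(cans):
    """Cans -> boxes -> outgoing pallets along the concentrate line."""
    cans_per_group = 4      # washer output is grouped four cans at a time
    groups_per_box = 6      # accumulator combines six groups (24 cans per box)
    boxes_per_pallet = 90   # palletizer loads 10 levels x 9 boxes
    boxes = cans // (cans_per_group * groups_per_box)
    return boxes // boxes_per_pallet
```

One incoming pallet of 3600 empty cans yields 150 full boxes, so incoming 3600-can pallets do not map one-to-one onto outgoing 90-box pallets; the model must track the two pallet types separately.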
A study conducted by a group of Cal Poly students revealed that the cause of most
downtime is located at the Packmaster. The Packmaster is supposed to pack a group of cans
into a cardboard box. However, if the cardboard is warped, the mechanism will stop the
operation. Another problem with the Packmaster is its glue operation. The glue heads
sometimes become clogged.
All these machines operate in an automatic manner. However, there are frequent
machine stoppages caused by the following factors: change of flavor, poor maintenance,
lack of communication between workers, lack of attention by the workers, inefficient
layout of the concentrate line, and bad machine design.
All the stations are arranged in the sequence of the manufacturing process. As such,
the production line cannot operate in a flexible or parallel manner. Also, the machines
depend on product being fed from upstream processes. An upstream machine stoppage
will cause eventual downstream machine stoppages.
Work Measurement
A detailed production study was conducted that brought out the following facts:
Packmaster
The production study also showed the label change time on the Packmaster as follows:
The Packmaster was observed for a total of 45,983 sec. Out of this time, the Packmaster
was working for a total of 24,027 sec, down for 13,108 sec, and being set up for change
of flavor for 8848 sec. The average flavor change time for the Packmaster is 19.24
percent of the total observed time. The number of cases produced during this observed
time was 11,590. The production rate is calculated to be (11,590/45,983) × 3600, or about
907 cases per hour.
It was also observed that the Packmaster was down because of flipped cans (8.6 percent),
sensor failure (43.9 percent), and miscellaneous other reasons (47.5 percent).
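The rate and downtime figures in this study all come from the same two formulas: cases divided by observed time scaled to an hour, and a portion of the observation window expressed as a fraction. A sketch (function names are mine):

```python
def cases_per_hour(cases, observed_sec):
    """Observed production rate scaled to cases per hour."""
    return cases / observed_sec * 3600

def fraction_of_time(portion_sec, observed_sec):
    """Share of the observation window (e.g. downtime or flavor-change time)."""
    return portion_sec / observed_sec
```

For example, cases_per_hour(11590, 46384) reproduces the roughly 900 cases/hour reported for the Pfaudler bowl, and fraction_of_time(8848, 46384) reproduces its 19.08 percent flavor-change share.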
The following information on the conveyors was obtained:
Arrival conveyor
Depalletizer conveyor 28.75 12.6
Empty-cans conveyor 120 130
Filler bowl conveyor 10 126
Prewash conveyor 23.6 255
Accumulate conveyor 38 48
Prepack conveyor 12 35
Palletizer conveyor 54.4 76
Exit conveyor
Pfaudler Bowl
The Pfaudler bowl was observed for a total of 46,384 sec. Out of this time, the bowl
was working for 27,258 sec, down for 10,278 sec, and being set up for change of flavor
for 8848 sec. The average flavor change time for the Pfaudler bowl is 19.08 percent of the
total observed time. The number of cases produced in this observed time was
11,590. The production rate is calculated to be (11,590/46,384) × 3600, or about 900 cases
per hour.
The flavor change time was observed as given in the following table:
Tasks
1. Build simulation models and figure out the production capacity of the concentrate
line at FCC (without considering any downtime).
2. What would be the capacity after considering the historical downtimes in the line?
3. What are the bottleneck operations in the whole process?
4. How can we reduce the level of inventory in the concentrate line? What would
be the magnitude of reduction in the levels of inventory?
5. If we address the bottleneck operations as found in task 3, what would be the
increase in capacity levels?
CASE STUDY 7 BALANCING THE PRODUCTION LINE AT SOUTHERN CALIFORNIA DOOR COMPANY
Southern California Door Company produces solid wooden doors of various designs for
new and existing homes. A layout of the production facility is shown in Figure 1. The
current production facility is not balanced well. This leads to frequent congestion and
stockouts on the production floor. The overall inventory (both raw material and work in
process) is also fairly high. Mr. Santoso, the industrial engineering manager for the
company, has been asked by management to smooth out the flow of production as well as
reduce the levels of inventory. The company is also expecting a growth in the volume of
sales. The production manager is asking Mr. Santoso to find the staffing level and
equipment resources needed for the current level of sales as well as 10, 25, 50, and 100
percent growth in sales volume.
A preliminary process flow study by Mr. Santoso reveals the production flow shown in
Figure 2.
Process Flow
Raw wood material is taken from the raw material storage to carriage 1. The raw material
is inspected for correct sizes and defects. Material that does not meet the specifications is
moved to carriage 1B. Raw wood from carriage 1 is fed into the rip saw machine.
In the rip saw machine, the raw wood is cut into rectangular cross sections. Cut wood
material coming out of the rip saw machine is placed on carriage 3. Waste material from
the cutting operation (rip saw) is placed in carriage 2.
Cut wood from carriage 3 is brought to the moulding shaper and grooved on one side.
Out of the moulding shaper, grooved wood material is placed on carriage 4. From
carriage 4, the grooved wood is stored in carriage 5 (if carriage 5 is full, carriage 6 or 7 is
used). Grooved wood is transported from carriages 5, 6, and 7 to the chop saw working
table.
One by one, the grooved wood material from the chop saw working table is fed into the
chop saw machine. The grooved wood material to be fed is inspected by the operator to
see if there are any defects in the wood. Usable chopped parts from the chop saw machine
are stored in the chop saw storage shelves. Wood material that has defects is chopped into
small blocks to cut out the defective surfaces using the chop saw, and the scrap is thrown
into carriage 8.
FIGURE 1
Floor plan for Southern California Door Company (raw material storage; carriages 1, 1B, and 2 through 8; rip saw (41); moulding shaper; moulding sanders (42); chop saw working table; sand finishing stations; glue trimming; auto door clamp; preassembly tables; and storage racks).
The chopped parts in the chop saw storage shelves are stacked into batches of a certain
number and then packed with tape. From the chop saw storage shelves, some of the batches
are transported to the double end tenoner (DET) storage, while the rest of the batches are
kept in the chop saw storage shelves.
FIGURE 2
Process sequences and present input/output flow for Southern California Door Company. The number in parentheses under each work center is its present total output capacity; an asterisk indicates that the work center supplies two others; the number on each arrow is the expected input of the work center to which the arrow points. Work centers shown include the rip saw (1,416), chop saw ×2 (9,840), moulding shaper (712), and sand finishing ×6 (576).
The present total output capacity of DET represents the number of units of a single product manufactured in an eight-hour
shift. The DET machine supplies parts for two other work centers, preassembly (1st op.) and sand finishing. In
reality, the DET machine has to balance the output between those two work centers; in other words, the
DET machine is shared by two different parts for two different work centers during an eight-hour shift.
The transported batches are unpacked in the DET storage and then fed into the DET
machine to be grooved on both sides. The parts coming out of the DET machine are
placed on a roller next to the machine.
The parts are rebatched. From the DET machine, the batches are transported to storage
racks and stored there until further processing. The batches stored in the chop saw storage
shelves are picked up and placed on the preassembly table, as are the batches stored in
the storage racks. The operator inspects to see if there is any defect in the wood.
Defective parts are then taken back from the preassembly table to the storage racks.
Case Study 7 Balancing the Production Line at Southern California Door Company 701
The rest of the parts are given to the second operator in the same workstation. The
second operator tries to match the color pattern of all the parts needed to assemble the
door (four frames and a center panel). The operator puts glue on both ends of all four
frame parts and preassembles the frame parts and center panel together.
The frame–panel preassembly is moved from the preassembly table to the auto door
clamp conveyor and pressed into the auto door clamp machine. The pressed assembly is
taken out of the auto door clamp machine and carried out by the auto door clamp
conveyor.
Next, the preassembly is picked up and placed on the glue trimming table. Under a
black light, the inspector looks for any excess glue coming out of the assembly parting
lines. Excess glue is trimmed using a specially designed cutter.
From the glue trimming table, the assembly is brought to a roller next to the triple
sanding machine (the auto cross grain sander and the auto drum sander). The operator
feeds the assembly into the triple sander. The assembly undergoes three sanding
processes: one through the auto cross grain sander and two through the auto drum
sander. After coming out of the triple sander machine, the sanded assembly is picked up
and placed on a roller between the DET and the triple sander machine. The sanded
assembly waits there for further processing. The operator feeds the sanded assembly into
the DET machine, where it is grooved on two of the sides. Out of the DET machine, the
assembly is taken by the second operator and placed temporarily on a roller next to the
DET machine. After finishing with all the assembly, the first operator gets the grooved
assembly and feeds it to the DET machine, where the assembly is grooved again on the
other two sides. Going out of the machine, the grooved assembly
is then placed on a roller between the DET machine and the triple sander machine.
The assembly is stored for further processing. From the roller conveyor, the grooved
assembly is picked up by the operators from the sand finishing station and placed on the
table. The operators finish the sanding process on the table using a handheld power
sander. After finishing the sanding, the assembly is placed on the table for temporary
storage. Finally, the sanded assembly is moved to a roller next to the storage racks to
wait for further processes.
Work Measurement
A detailed work measurement effort was undertaken by Santoso to collect data on various
manufacturing processes involved. Table 1 summarizes the results of all the time studies.
The current number of machines and/or workstations and their output capacities are as
follows:
Output Capacities
  2. (element description cut off): 194.4, 231.82, 213.34, 206.75, 223.62, 227.44
  3. Assy comes out of machine: 4.22, 5.69, 7.15, 5.78, 5.1, 4.75, 5.53, 5.1, 4.24, 4.84

Glue trimming — Trimming excess glue out of the assembly
  1. Remove assy from auto door clamp machine and inspect for excess glue:
     35.74, 17.96, 30.59, 17.39, 21.48, 10.15, 16.89, 10.87, 10.59, 10.26, 14.23, 11.92, 24.87, 10.91, 11.77, 15.48, 29.71, 10.86, 19.64
  2. Trim excess glue:
     58.53, 90.87, 67.93, 70.78, 70.53, 77.9, 85.88, 86.84, 78.9, 95.6, 78.5, 72.65, 72.44, 91.01, 86.12, 84.9, 72.56, 79.09, 77.75

Triple sander (46, 47, 48) — Sanding the assembly through three different sanding machines
  1. Get assy from stack and feed into sander: 2.45, 3.56, 3.18, 3.16, 3.32, 3.58, 4.22, 2.27, 4.76, 3.9
  2. Sand the assembly: 30.72, 32.75, 34.13, 35.66, 37, 36.31, 36.84, 37.03, 37.44, 38.54
  3. Remove sanded assy and stack: 3.31, 6.54, 5.03, 5.51, 5.22, 5.84, 5.38, 6.69, 4.22, 6.44

Double end tenoner (DET) (45) — Grooving sanded assembly
  1. Feed assy into DET: 5.99, 6.14, 6.49, 6.46, 6.42, 6.64, 3.21, 4.11, 3.71, 4.2
  2. Groove assy: 31.97, 32.93, 35.11, 33.67, 34.06, 33.21, 33.43, 35.23, 33.87, 33.72
  3. Remove assy and stack: 3.84, 3, 3.06, 2.93, 3.06, 2.85, 2.88, 3.22, 1.87, 2.41

Sand finishing — Sand finishing the assembly
  1. Get part and place on table: 3.49, 3.42, 3.47, 3.29, 3.36, 3.2, 5.73, 3.02, 3.39, 3.54, 3.71, 3.48
  2. Sand finish the part: 215.8, 207.57, 244.17, 254.28, 238.36, 218.76, 341.77, 247.59, 252.63, 308.06, 221.27, 233.66
  3. Stack parts: 2.26, 2.95, 2, 1.41, 3.79, 2.74, 4.7, 3.35, 3.09, 2.75, 2.59, 2.71
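From element times like those in Table 1, a station's capacity over a 480-minute shift can be estimated from the mean cycle time. This is a sketch only (a real analysis would fit distributions to these samples rather than use the mean):

```python
import statistics

def units_per_shift(element_times, shift_min=480):
    """Units per shift if each unit takes the sum of the mean element times (sec)."""
    cycle_sec = sum(statistics.mean(times) for times in element_times)
    return shift_min * 60 / cycle_sec
```

For the triple sander elements above, the mean cycle is roughly 3.4 + 35.6 + 5.4 ≈ 44 seconds, or around 650 assemblies per shift, before any allowance for downtime or operator interference.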
FIGURE 3
Groups of operators for Southern California Door Company.
Work Center/             Quantity   Minimum Number of     Utilization   Group of Operators   Notes
Machine                             Operators Required                  Working (Shift)
Sand finishing              6              6                 1.00            *
Triple sander               2              4                 0.51            ****
Glue trimming               3              3                 0.75            ***
Auto door clamp             6              0                 0.86
Preassembly (2nd op.)       1              1                 0.80            **
Preassembly (1st op.)       1              1                 0.21            *****
DET                         1              2                 0.31            ****              Grooving assembled doors
                            1              2                 0.08            ****              Grooving frames
Chop saw                    1              1                 0.47            ****
Moulding shaper             1              1                 0.54            *****
Rip saw                     1              1                 0.27            ****
Tasks
Build simulation models to analyze the following:
1. Find the manufacturing capacity of the overall facility. What are the current
bottlenecks of production?
2. How would you balance the flow of production? What improvements in
capacity will that make?
3. What would you suggest to reduce inventory?
4. How could you reduce the manufacturing flow time?
5. The production manager is asking Mr. Santoso to find out the staffing and equipment
resources needed for the current level of sales as well as 10, 25, 50, and 100 percent
growth in sales volume.
6. Develop layouts for the facility for various levels of production.
7. What kind of material handling equipment would you recommend? Develop the
specifications, amount, and cost.
CASE STUDY 8 MATERIAL HANDLING AT CALIFORNIA STEEL INDUSTRIES, INC.
FIGURE 1
Current coil handling layout for California Steel Industries (tin mill with cleaning, TM box annealing, and upender; 5-stand; CSM box annealing; #2 galvanizing; galvanized coils shipped).
A hauler transports coils that are to be moved around within a building. There are two 60-ton haulers
and two 40-ton haulers. Assume that one hauler will be down for maintenance at all times.
The following are the process times at each of the production units:
Annealing is a batched process in which groups of coils are treated at one time. The
annealing bases at the cold sheet mill allow for coils to be batched three at a time. Coils can
be batched 12 at a time at the tin mill annealing bases.
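Batch sizes matter when modeling annealing as a batched activity, since the number of annealing cycles falls sharply at the larger tin mill bases. A minimal sketch (the names are assumptions):

```python
import math

# Coils per annealing batch at each set of bases, from the case description
ANNEAL_BATCH = {"cold_sheet_mill": 3, "tin_mill": 12}

def anneal_cycles(n_coils, base):
    """Number of annealing batches needed to treat n_coils at the given bases."""
    return math.ceil(n_coils / ANNEAL_BATCH[base])
```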
Assume that each storage bay after a coil has been processed has infinite capacity.
Coils that are slated to be galvanized will go to either of the two galvanizing lines. The #1
continuous galvanizing line handles heavy-gauge coils, while the #2 galvanizing line
processes the light-gauge coils.
The proposed layout (see Figure 2) will be very much like the original layout. The
proposed material handling system that we are evaluating will utilize the railroads that
connect the three main buildings. The two rails will allow coils to be moved from the tin
mill to the cold sheet mill and the #1 galvanizing line. The top rail is the in-process rail,
which will move coils that need to be processed at the cold sheet mill or the #1
galvanizing line. The bottom rail will ship out full hard coils and coils from the #2
galvanizing line. The train coil cars will be able to carry 100 tons of coils.
FIGURE 2
Proposed coil handling layout for California Steel Industries (train coil car and coil skids 1L/1R, 2L/2R, 3L/3R; coil transfer car; cranes TM4, TM5, TM7, TM10, TM11, TM14, and TM15; 5-stand and in-process bays; finished-goods bays; #2 galvanizing line entry and exit bays; rail to #1 galvanizing line, CSM, and shipping).
In addition, a coil transfer car system will be installed near the #2 galvanizing line. The
car will consist of a smaller “baby” car that will be held inside the belly of a larger
“mother” car. The “mother” car will travel north–south and position itself at a coil skid.
The “baby” car, traveling east–west, will detach from the “mother” car, move underneath
the skid, lift the coil, and travel back to the belly of the “mother” car.
Crane TM 7 will move coils from the 5-stand to the 5-stand bay, as in the current
layout. The proposed system, however, will move coils to processing in the #2 galvanizing
line with the assistance of four main cranes, namely TM 5, TM 11, TM 14, and TM 15.
Crane TM 5 will carry coils to the coil skid at the north end of the rail. From there, the
car will carry coils to the south end of the rail and place them on the right coil skid to
wait to be picked up by TM 15 and stored in the #2 galvanizing line entry bay. This crane
will also assist the line operator to move coils into position to be processed. After a coil
is galvanized, crane TM 14 will move the coil to the #2 galvanizing line delivery bay.
Galvanized coils that are to be shipped will be put on the southernmost coil skid to be
transported by the coil car to the middle skids, where crane TM 11 will place them in
either the rail or truck shipping areas.
One facility change that will take place is the movement of all the box annealing
furnaces to the cold sheet mill. This change will prevent the back-and-forth movement of
coils between the tin mill and cold sheet mill.
Tasks
1. Build simulation models of the current and proposed systems.
2. Compare the two material handling systems in terms of throughput time of coils
and work-in-process inventory.
3. Experiment with the modernized model. Determine what will be the optimal
number of train coil cars on the in-process and finished-goods rails.