1 INTRODUCTION TO SIMULATION
1.1 Introduction
On March 19, 1999, the following story appeared in The Wall Street Journal:
Captain Chet Rivers knew that his 747-400 was loaded to the limit. The giant plane,
weighing almost 450,000 pounds by itself, was carrying a full load of passengers and
baggage, plus 400,000 pounds of fuel for the long flight from San Francisco to
Australia. As he revved his four engines for takeoff, Capt. Rivers noticed that San
Francisco's famous fog was creeping in, obscuring the hills to the north and west of
the airport.
At full throttle the plane began to roll ponderously down the runway, slowly at first
but building up to flight speed well within normal limits. Capt. Rivers pulled the throttle back and the airplane took to the air, heading northwest across the San Francisco
peninsula towards the ocean. It looked like the start of another routine flight. Suddenly
the plane began to shudder violently. Several loud explosions shook the craft and
smoke and flames, easily visible in the midnight sky, illuminated the right wing.
Although the plane was shaking so violently that it was hard to read the instruments,
Capt. Rivers was able to tell that the right inboard engine was malfunctioning, backfiring violently. He immediately shut down the engine, stopping the explosions and
shaking.
However, this introduced a new problem. With two engines on the left wing at full
power and only one on the right, the plane was pushed into a right turn, bringing it
directly towards San Bruno Mountain, located a few miles northwest of the airport.
Capt. Rivers instinctively turned his control wheel to the left to bring the plane back
on course. That action extended the ailerons (control surfaces on the trailing edges
of the wings) to tilt the plane back to the left. However, it also extended the
spoilers (panels on the tops of the wings), increasing drag and lowering lift. With
the nose still pointed up, the heavy jet began to slow. As the plane neared stall speed,
the control stick began to shake to warn the pilot to bring the nose down to gain air
speed. Capt. Rivers immediately did so, removing that danger, but now San Bruno
Mountain was directly ahead. Capt. Rivers was unable to see the mountain due to the
thick fog that had rolled in, but the plane's ground proximity sensor sounded an automatic warning, calling "terrain, terrain, pull up, pull up." Rivers frantically pulled
back on the stick to clear the peak, but with the spoilers up and the plane still in a
skidding right turn, it was too late. The plane and its full load of 100 tons of fuel
crashed with a sickening explosion into the hillside just above a densely populated
housing area.
"Hey Chet, that could ruin your whole day," said Capt. Rivers's supervisor, who
was sitting beside him watching the whole thing. "Let's rewind the tape and see what
you did wrong." "Sure, Mel," replied Chet as the two men stood up and stepped outside the 747 cockpit simulator. "I think I know my mistake already. I should have used
my rudder, not my wheel, to bring the plane back on course. Say, I need a breather
after that experience. I'm just glad that this wasn't the real thing."
The incident above was never reported in the nation's newspapers, even though
it would have been one of the most tragic disasters in aviation history, because it
never really happened. It took place in a cockpit simulator, a device which uses computer technology to predict and recreate an airplane's behavior with gut-wrenching
realism.
The relief you undoubtedly felt to discover that this disastrous incident was
just a simulation gives you a sense of the impact that simulation can have in averting real-world catastrophes. This story illustrates just one of the many ways simulation is being used to help minimize the risk of making costly and sometimes
fatal mistakes in real life. Simulation technology is finding its way into an increasing number of applications ranging from training for aircraft pilots to the
testing of new product prototypes. The one thing that these applications have in
common is that they all provide a virtual environment that helps prepare for real-life situations, resulting in significant savings in time, money, and even lives.
One area where simulation is finding increased application is in manufacturing and service system design and improvement. Its unique ability to accurately
predict the performance of complex systems makes it ideally suited for systems
planning. Just as a flight simulator reduces the risk of making costly errors in actual flight, system simulation reduces the risk of having systems that operate inefficiently or that fail to meet minimum performance requirements. While this may
not be life-threatening to an individual, it certainly places a company (not to mention careers) in jeopardy.
In this chapter we introduce the topic of simulation and answer the following
questions:
What is simulation?
Why is simulation used?
How is simulation performed?
When and where should simulation be used?
FIGURE 1.1 Simulation provides animation capability.
The power of simulation lies in the fact that it provides a method of analysis
that is not only formal and predictive, but is capable of accurately predicting the
performance of even the most complex systems. Deming (1989) states, "Management of a system is action based on prediction. Rational prediction requires systematic learning and comparisons of predictions of short-term and long-term
results from possible alternative courses of action." The key to sound management decisions lies in the ability to accurately predict the outcomes of alternative
courses of action. Simulation provides precisely that kind of foresight. By simulating alternative production schedules, operating policies, staffing levels, job
priorities, decision rules, and the like, a manager can more accurately predict outcomes and therefore make more informed and effective management decisions.
With the importance in today's competitive market of getting it right the first
time, the lesson is becoming clear: if at first you don't succeed, you probably
should have simulated it.
By using a computer to model a system before it is built or to test operating
policies before they are actually implemented, many of the pitfalls that are often
encountered in the start-up of a new system or the modification of an existing system can be avoided. Improvements that traditionally took months and even years
of fine-tuning to achieve can be attained in a matter of days or even hours. Because simulation runs in compressed time, weeks of system operation can be simulated in only a few minutes or even seconds. The characteristics of simulation
that make it such a powerful planning and decision-making tool can be summarized as follows:
FIGURE 1.2 Simulation provides a virtual method for doing system experimentation (concept, model, system).
FIGURE 1.3 The process of simulation experimentation: start, formulate a hypothesis, develop a simulation model, and run a simulation experiment; if the hypothesis proves incorrect, repeat the cycle, otherwise end.
Simulation is no longer considered a method of last resort, nor is it a technique reserved only for simulation experts. The availability of easy-to-use simulation software and the ubiquity of powerful desktop computers have made
simulation not only more accessible, but also more appealing to planners and
managers who tend to avoid any kind of solution that appears too complicated. A
solution tool is not of much use if it is more complicated than the problem that it
is intended to solve. With simple data entry tables and automatic output reporting
and graphing, simulation is becoming much easier to use and the reluctance to use
it is disappearing.
The primary use of simulation continues to be in the area of manufacturing.
Manufacturing systems, which include warehousing and distribution systems,
tend to have clearly defined relationships and formalized procedures that are well
suited to simulation modeling. They are also the systems that stand to benefit the
most from such an analysis tool since capital investments are so high and changes
are so disruptive. Recent trends to standardize and systematize other business
processes such as order processing, invoicing, and customer support are boosting
the application of simulation in these areas as well. It has been observed that
80 percent of all business processes are repetitive and can benefit from the same
analysis techniques used to improve manufacturing systems (Harrington 1991).
With this being the case, the use of simulation in designing and improving business processes of every kind will likely continue to grow.
While the primary use of simulation is in decision support, it is by no means
limited to applications requiring a decision. An increasing use of simulation is in
the area of communication and visualization. Modern simulation software incorporates visual animation that stimulates interest in the model and effectively
communicates complex system dynamics. A proposal for a new system design can
be sold much more easily if it can actually be shown how it will operate.
On a smaller scale, simulation is being used to provide interactive, computer-based training in which a management trainee is given the opportunity to practice
decision-making skills by interacting with the model during the simulation. It is
also being used in real-time control applications where the model interacts with
the real system to monitor progress and provide master control. The power of
simulation to capture system dynamics both visually and functionally opens up
numerous opportunities for its use in an integrated environment.
Since the primary use of simulation is in decision support, most of our discussion will focus on the use of simulation to make system design and operational
decisions. As a decision support tool, simulation has been used to help plan and improve operations in areas such as:
Work-flow planning.
Capacity planning.
Cycle time reduction.
Staff and resource planning.
Work prioritization.
Bottleneck analysis.
Quality improvement.
Cost reduction.
Inventory reduction.
Throughput analysis.
Productivity improvement.
Layout analysis.
Line balancing.
Batch size optimization.
Production scheduling.
Resource scheduling.
Maintenance scheduling.
Control system design.
This does not mean that there can be no uncertainty in the system. If random behavior can
be described using probability expressions and distributions, it can be simulated. It is only when it isn't even possible to make reasonable assumptions of
how a system operates (because either no information is available or behavior is
totally erratic) that simulation (or any other analysis tool for that matter) becomes
useless. Likewise, one-time projects or processes that are never repeated the same
way twice are poor candidates for simulation. If the scenario you are modeling is
likely never going to happen again, it is of little benefit to do a simulation.
Activities and events should be interdependent and variable. A system may
have lots of activities, but if they never interfere with each other or are deterministic (that is, they have no variation), then using simulation is probably unnecessary. It isn't the number of activities that makes a system difficult to analyze.
It is the number of interdependent, random activities. The effect of simple interdependencies is easy to predict if there is no variability in the activities. Determining
the flow rate for a system consisting of 10 processing activities is very straightforward if all activity times are constant and activities are never interrupted. Likewise,
random activities that operate independently of each other are usually easy to analyze. For example, 10 machines operating in isolation from each other can be expected to produce at a rate that is based on the average cycle time of each machine
less any anticipated downtime. It is the combination of interdependencies and random behavior that really produces the unpredictable results. Simpler analytical
methods such as mathematical calculations and spreadsheet software become less
adequate as the number of activities that are both interdependent and random increases. For this reason, simulation is primarily suited to systems involving both
interdependencies and variability.
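To make this distinction concrete, here is a minimal sketch (our own Python illustration, not from the text) of a serial line of 10 processing activities. The station count matches the example above; the job count and the 1.25-minute release interval (80 percent loading) are assumptions chosen for the example. With constant times the time in system is exactly the sum of the 10 activity times; with the same averages but exponential variation, the interdependence between stations roughly quintuples it.

import random

NUM_STATIONS = 10
NUM_JOBS = 20000
MEAN_SERVICE = 1.0      # average minutes per activity
MEAN_ARRIVAL = 1.25     # minutes between job releases (80% loading)

def average_flow_time(interarrivals, service):
    """Average time in system for jobs flowing through stations in series."""
    arrival = 0.0
    station_free = [0.0] * NUM_STATIONS   # when each station next frees up
    total_flow = 0.0
    for i in range(NUM_JOBS):
        arrival += interarrivals[i]
        t = arrival
        for j in range(NUM_STATIONS):
            start = max(t, station_free[j])        # wait if station is busy
            station_free[j] = start + service[i][j]
            t = station_free[j]                    # job moves downstream
        total_flow += t - arrival
    return total_flow / NUM_JOBS

random.seed(1)
det_ia = [MEAN_ARRIVAL] * NUM_JOBS
det_svc = [[MEAN_SERVICE] * NUM_STATIONS for _ in range(NUM_JOBS)]
exp_ia = [random.expovariate(1 / MEAN_ARRIVAL) for _ in range(NUM_JOBS)]
exp_svc = [[random.expovariate(1 / MEAN_SERVICE) for _ in range(NUM_STATIONS)]
           for _ in range(NUM_JOBS)]

print(f"Constant times:    {average_flow_time(det_ia, det_svc):5.1f} min")  # exactly 10
print(f"Exponential times: {average_flow_time(exp_ia, exp_svc):5.1f} min")  # roughly 50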
The cost impact of the decision should be greater than the cost of doing the
simulation. Sometimes the impact of the decision itself is so insignificant that it
doesn't warrant the time and effort to conduct a simulation. Suppose, for example,
you are trying to decide whether a worker should repair rejects as they occur or
wait until four or five accumulate before making repairs. If you are certain that the
next downstream activity is relatively insensitive to whether repairs are done
sooner rather than later, the decision becomes inconsequential and simulation is a
wasted effort.
The cost to experiment on the actual system should be greater than the cost of
simulation. While simulation avoids the time delay and cost associated with experimenting on the real system, in some situations it may actually be quicker and
more economical to experiment on the real system. For example, the decision in a
customer mailing process of whether to seal envelopes before or after they are
addressed can easily be made by simply trying each method and comparing the
results. The rule of thumb here is that if a question can be answered through direct
experimentation quickly, inexpensively, and with minimal impact to the current
operation, then don't use simulation. Experimenting on the actual system also
eliminates some of the drawbacks associated with simulation, such as proving
model validity.
There may be other situations where simulation is appropriate independent of
the criteria just listed (see Banks and Gibson 1997). This is certainly true in the
case of models built purely for visualization purposes. If you are trying to sell a
system design or simply communicate how a system works, a realistic animation
created using simulation can be very useful, even though nonbeneficial from an
analysis point of view.
Systems engineering.
Statistical analysis and design of experiments.
Modeling principles and concepts.
Basic programming and computer skills.
Training on one or more simulation products.
Familiarity with the system being investigated.
Experience has shown that some people learn simulation more rapidly and
become more adept at it than others. People who are good abstract thinkers yet
also pay close attention to detail seem to be the best suited for doing simulation.
Such individuals are able to see the forest while still keeping an eye on the trees
(these are people who tend to be good at putting together 1,000-piece puzzles).
They are able to quickly scope a project, gather the pertinent data, and get a useful model up and running without lots of starts and stops. A good modeler is somewhat of a sleuth, eager yet methodical and discriminating in piecing together all
of the evidence that will help put the model pieces together.
If short on time, talent, resources, or interest, the decision maker need not
despair. Plenty of consultants who are professionally trained and experienced can
provide simulation services. A competitive bid will help get the best price, but one
should be sure that the individual assigned to the project has good credentials.
If the use of simulation is only occasional, relying on a consultant may be the
preferred approach.
FIGURE 1.4 Cost of making changes at subsequent stages of system development (cost plotted against system stage: concept, design, installation, operation).

FIGURE 1.5 Comparison of cumulative system costs with and without simulation across the design, implementation, and operation phases.
While the short-term cost may be slightly higher due to the added labor and software
costs associated with simulation, the long-term costs associated with capital
investments and system operation are considerably lower due to better efficiencies
realized through simulation. Dismissing the use of simulation on the basis of
sticker price is myopic and shows a lack of understanding of the long-term savings that come from having well-designed, efficiently operating systems.
Many examples can be cited to show how simulation has been used to avoid
costly errors in the start-up of a new system. Simulation prevented an unnecessary
expenditure when a Fortune 500 company was designing a facility for producing
and storing subassemblies and needed to determine the number of containers required for holding the subassemblies. It was initially felt that 3,000 containers
were needed until a simulation study showed that throughput did not improve significantly when the number of containers was increased from 2,250 to 3,000. By
purchasing 2,250 containers instead of 3,000, a savings of $528,375 was expected
in the first year, with annual savings thereafter of over $200,000 due to the savings
in floor space and storage resulting from having 750 fewer containers (Law and
McComas 1988).
Even if dramatic savings are not realized each time a model is built, simulation at least inspires confidence that a particular system design is capable of meeting required performance objectives and thus minimizes the risk often associated
with new start-ups. The economic benefit associated with instilling confidence
was evidenced when an entrepreneur, who was attempting to secure bank financing to start a blanket factory, used a simulation model to show the feasibility of the
proposed factory. Based on the processing times and equipment lists supplied by
industry experts, the model showed that the output projections in the business
plan were well within the capability of the proposed facility. Although unfamiliar
with the blanket business, bank officials felt more secure in agreeing to support
the venture (Bateman et al. 1997).
Often simulation can help improve productivity by exposing ways of making
better use of existing assets. By looking at a system holistically, long-standing
problems such as bottlenecks, redundancies, and inefficiencies that previously
went unnoticed start to become more apparent and can be eliminated. "The trick is
to find waste, or muda," advises Shingo; "after all, the most damaging kind of waste
is the waste we don't recognize" (Shingo 1992). Consider the following actual
examples where simulation helped uncover and eliminate wasteful practices:
GE Nuclear Energy was seeking ways to improve productivity without
investing large amounts of capital. Through the use of simulation, the
company was able to increase the output of highly specialized reactor
parts by 80 percent. The cycle time required for production of each part
was reduced by an average of 50 percent. These results were obtained by
running a series of models, each one solving production problems
highlighted by the previous model (Bateman et al. 1997).
A large manufacturing company with stamping plants located throughout
the world produced stamped aluminum and brass parts on order according
to customer specications. Each plant had from 20 to 50 stamping presses
that were utilized anywhere from 20 to 85 percent. A simulation study
was conducted to experiment with possible ways of increasing capacity
utilization. As a result of the study, machine utilization improved from an
average of 37 to 60 percent (Hancock, Dissen, and Merten 1977).
A diagnostic radiology department in a community hospital was modeled
to evaluate patient and staff scheduling, and to assist in expansion
planning over the next five years. Analysis using the simulation model
enabled improvements to be discovered in operating procedures that
precluded the necessity for any major expansions in department size
(Perry and Baum 1976).
In each of these examples, significant productivity improvements were realized without the need for making major investments. The improvements came
through finding ways to operate more efficiently and utilize existing resources
more effectively. These capacity improvement opportunities were brought to light
through the use of simulation.
behavior such as the way entities arrive and their routings can be defined with little, if any, programming using the data entry tables that are provided. ProModel is
used by thousands of professionals in manufacturing and service-related industries and is taught in hundreds of institutions of higher learning.
Part III contains case study assignments that can be used for student projects to
apply the theory they have learned from Part I and to try out the skills they have acquired from doing the lab exercises (Part II). It is recommended that students be assigned at least one simulation project during the course. Preferably this is a project
performed for a nearby company or institution so it will be meaningful. If such a
project cannot be found, or as an additional practice exercise, the case studies provided should be useful. Student projects should be selected early in the course so
that data gathering can get started and the project completed within the allotted
time. The chapters in Part I are sequenced to parallel an actual simulation project.
1.11 Summary
Businesses today face the challenge of quickly designing and implementing complex production and service systems that are capable of meeting growing demands for quality, delivery, affordability, and service. With recent advances in
computing and software technology, simulation tools are now available to help
meet this challenge. Simulation is a powerful technology that is being used with
increasing frequency to improve system performance by providing a way to make
better design and management decisions. When used properly, simulation can reduce the risks associated with starting up a new operation or making improvements to existing operations.
Because simulation accounts for interdependencies and variability, it provides insights that cannot be obtained any other way. Where important system
decisions of an operational nature are being made, simulation is an invaluable
decision-making tool. Its usefulness increases as variability and interdependency
increase and the importance of the decision becomes greater.
Lastly, simulation actually makes designing systems fun! Not only can a designer try out new design concepts to see what works best, but the visualization
gives the design a realism that is like watching an actual system in operation.
Through simulation, decision makers can play "what if" games with a new system
or modified process before it actually gets implemented. This engaging process
stimulates creative thinking and results in good design decisions.
3. What are two specific questions that simulation might help answer in a
bank? In a manufacturing facility? In a dental office?
4. What are three advantages that simulation has over alternative
approaches to systems design?
5. Does simulation itself optimize a system design? Explain.
6. How does simulation follow the scientic method?
7. A restaurant gets extremely busy during lunch (11:00 A.M. to 2:00 P.M.)
and is trying to decide whether it should increase the number of
waitresses from two to three. What considerations would you look at to
determine whether simulation should be used to make this decision?
8. How would you develop an economic justication for using simulation?
9. Is a simulation exercise wasted if it exposes no problems in a system
design? Explain.
10. A simulation run was made showing that a modeled factory could
produce 130 parts per hour. What information would you want to know
about the simulation study before placing any confidence in the results?
11. A PC board manufacturer has high work-in-process (WIP) inventories,
yet machines and equipment seem underutilized. How could simulation
help solve this problem?
12. How important is a statistical background for doing simulation?
13. How can a programming background be useful in doing simulation?
14. Why are good project management and communication skills important
in simulation?
15. Why should the process owner be heavily involved in a simulation
project?
16. For which of the following problems would simulation likely be useful?
a. Increasing the throughput of a production line.
b. Increasing the pace of a worker on an assembly line.
c. Decreasing the time that patrons at an amusement park spend
waiting in line.
d. Determining the percentage defective from a particular machine.
e. Determining where to place inspection points in a process.
f. Finding the most efficient way to fill out an order form.
References
Auden, Wystan Hugh, and L. Kronenberger. The Faber Book of Aphorisms. London: Faber and Faber, 1964.
Banks, J., and R. Gibson. "10 Rules for Determining When Simulation Is Not Appropriate." IIE Solutions, September 1997, pp. 30-32.
Bateman, Robert E.; Royce O. Bowden; Thomas J. Gogg; Charles R. Harrell; and Jack R. A. Mott. System Improvement Using Simulation. Utah: PROMODEL Corp., 1997.
Deming, W. E. Foundation for Management of Quality in the Western World. Paper read at a meeting of the Institute of Management Sciences, Osaka, Japan, 24 July 1989.
Glenney, Neil E., and Gerald T. Mackulak. "Modeling & Simulation Provide Key to CIM Implementation Philosophy." Industrial Engineering, May 1985.
Hancock, Walton; R. Dissen; and A. Merten. "An Example of Simulation to Improve Plant Productivity." AIIE Transactions, March 1977, pp. 2-10.
Harrell, Charles R., and Donald Hicks. "Simulation Software Component Architecture for Simulation-Based Enterprise Applications." In Proceedings of the 1998 Winter Simulation Conference, ed. D. J. Medeiros, E. F. Watson, J. S. Carson, and M. S. Manivannan, pp. 1717-21. Piscataway, NJ: Institute of Electrical and Electronics Engineers.
Harrington, H. James. Business Process Improvement. New York: McGraw-Hill, 1991.
Hoover, Stewart V., and Ronald F. Perry. Simulation: A Problem-Solving Approach. Reading, MA: Addison-Wesley, 1989.
Law, A. M., and M. G. McComas. "How Simulation Pays Off." Manufacturing Engineering, February 1988, pp. 37-39.
Mott, Jack, and Kerim Tumay. "Developing a Strategy for Justifying Simulation." Industrial Engineering, July 1992, pp. 38-42.
Oxford American Dictionary. New York: Oxford University Press, 1980. Compiled by Eugene Ehrlich et al.
Perry, R. F., and R. F. Baum. "Resource Allocation and Scheduling for a Radiology Department." In Cost Control in Hospitals. Ann Arbor, MI: Health Administration Press, 1976.
Rohrer, Matt, and Jerry Banks. "Required Skills of a Simulation Analyst." IIE Solutions, May 1998, pp. 7-23.
Schriber, T. J. "The Nature and Role of Simulation in the Design of Manufacturing Systems." In Simulation in CIM and Artificial Intelligence Techniques, ed. J. Retti and K. E. Wichmann, pp. 5-8. San Diego, CA: Society for Computer Simulation, 1987.
Shannon, Robert E. "Introduction to the Art and Science of Simulation." In Proceedings of the 1998 Winter Simulation Conference, ed. D. J. Medeiros, E. F. Watson, J. S. Carson, and M. S. Manivannan, pp. 7-14. Piscataway, NJ: Institute of Electrical and Electronics Engineers.
Shingo, Shigeo. The Shingo Production Management System: Improving Process Functions. Trans. Andrew P. Dillon. Cambridge, MA: Productivity Press, 1992.
Solberg, James. In Design and Analysis of Integrated Manufacturing Systems, ed. W. Dale Compton. Washington, D.C.: National Academy Press, 1988, p. 4.
The Wall Street Journal, March 19, 1999. "United 747's Near Miss Sparks a Widespread Review of Pilot Skills," p. A1.
2 SYSTEM DYNAMICS
2.1 Introduction
Knowing how to do simulation doesn't make someone a good systems designer
any more than knowing how to use a CAD system makes one a good product designer. Simulation is a tool that is useful only if one understands the nature of the
problem to be solved. It is designed to help solve systemic problems that are operational in nature. Simulation exercises fail to produce useful results more often
because of a lack of understanding of system dynamics than a lack of knowing
how to use the simulation software. The challenge is in understanding how the
system operates, knowing what you want to achieve with the system, and being
able to identify key leverage points for best achieving desired objectives. To
illustrate the nature of this challenge, consider the following actual scenario:
The pipe mill for the XYZ Steel Corporation was an important profit center, turning
steel slabs selling for under $200/ton into a product with virtually unlimited demand
selling for well over $450/ton. The mill took coils of steel of the proper thickness and
width through a series of machines that trimmed the edges, bent the steel into a
cylinder, welded the seam, and cut the resulting pipe into appropriate lengths, all on a
continuously running line. The line was even designed to weld the end of one coil to
the beginning of the next one on the fly, allowing the line to run continually for days
on end.
Unfortunately the mill was able to run only about 50 percent of its theoretical capacity over the long term, costing the company tens of millions of dollars a year in lost
revenue. In an effort to improve the mill's productivity, management studied each step
in the process. It was fairly easy to find the slowest step in the line, but additional
study showed that only a small percentage of lost production was due to problems at
this bottleneck operation. Sometimes a step upstream from the bottleneck would
have a problem, causing the bottleneck to run out of work, or a downstream step
would go down temporarily, causing work to back up and stop the bottleneck. Sometimes the bottleneck would get so far behind that there was no place to put incoming,
newly made pipe. In this case the workers would stop the entire pipe-making process
until the bottleneck was able to catch up. Often the bottleneck would then be idle waiting until the newly started line was functioning properly again and the new pipe had a
chance to reach it. Sometimes problems at the bottleneck were actually caused by improper work at a previous location.
In short, there was no single cause for the poor productivity seen at this plant.
Rather, several separate causes all contributed to the problem in complex ways. Management was at a loss to know which of several possible improvements (additional or
faster capacity at the bottleneck operation, additional storage space between stations,
better rules for when to shut down and start up the pipe-forming section of the mill,
better quality control, or better training at certain critical locations) would have the
most impact for the least cost. Yet the poor performance of the mill was costing enormous amounts of money. Management was under pressure to do something, but what
should it be?
This example illustrates the nature and difficulty of the decisions that an
operations manager faces. Managers need to make decisions that are the best in
some sense. To do so, however, requires that they have clearly dened goals and
understand the system well enough to identify cause-and-effect relationships.
While every system is different, just as every product design is different,
the basic elements and types of relationships are the same. Knowing how the
elements of a system interact and how overall performance can be improved is
essential to the effective use of simulation. This chapter reviews basic system
dynamics and answers the following questions:
What is a system?
What are the elements of a system?
What makes systems so complex?
What are useful system metrics?
What is a systems approach to systems planning?
How do traditional systems analysis techniques compare with simulation?
provides prompt and professional service. The difference is between a system that
has been well designed and operates smoothly, and one that is poorly planned and
managed.
A system, as used here, is defined as a collection of elements that function
together to achieve a desired goal (Blanchard 1991). Key points in this definition
include the fact that (1) a system consists of multiple elements, (2) these elements
are interrelated and work in cooperation, and (3) a system exists for the purpose
of achieving specific objectives. Examples of systems are traffic systems, political
systems, economic systems, manufacturing systems, and service systems. Our
main focus will be on manufacturing and service systems that process materials,
information, and people.
Manufacturing systems can be small job shops and machining cells or large
production facilities and assembly lines. Warehousing and distribution as well as
entire supply chain systems will be included in our discussions of manufacturing
systems. Service systems cover a wide variety of systems including health care
facilities, call centers, amusement parks, public transportation systems, restaurants, banks, and so forth.
Both manufacturing and service systems may be termed processing systems
because they process items through a series of activities. In a manufacturing system, raw materials are transformed into finished products. For example, a bicycle
manufacturer starts with tube stock that is then cut, welded, and painted to produce bicycle frames. In service systems, customers enter with some service need
and depart as serviced (and, we hope, satisfied) customers. In a hospital emergency room, for example, nurses, doctors, and other staff personnel admit and
treat incoming patients who may undergo tests and possibly even surgical procedures before finally being released. Processing systems are artificial (they are
human-made), dynamic (elements interact over time), and usually stochastic (they
exhibit random behavior).
FIGURE 2.1 Elements of a system: incoming entities are processed through activities, supported by resources and governed by controls, and exit as outgoing entities.
2.3.1 Entities
Entities are the items processed through the system such as products, customers,
and documents. Different entities may have unique characteristics such as cost,
shape, priority, quality, or condition. Entities may be further subdivided into the
following types:
Human or animate (customers, patients, etc.).
Inanimate (parts, documents, bins, etc.).
Intangible (calls, electronic mail, etc.).
For most manufacturing and service systems, the entities are discrete items.
This is the case for discrete part manufacturing and is certainly the case for nearly
all service systems that process customers, documents, and others. For some production systems, called continuous systems, a nondiscrete substance is processed
rather than discrete entities. Examples of continuous systems are oil refineries and
paper mills.
2.3.2 Activities
Activities are the tasks performed in the system that are either directly or
indirectly involved in the processing of entities. Examples of activities include
servicing a customer, cutting a part on a machine, or repairing a piece of equipment. Activities usually consume time and often involve the use of resources.
Activities may be classified as
Entity processing (check-in, treatment, inspection, fabrication, etc.).
Entity and resource movement (forklift travel, riding in an elevator, etc.).
Resource adjustments, maintenance, and repairs (machine setups, copy
machine repair, etc.).
2.3.3 Resources
Resources are the means by which activities are performed. They provide the
supporting facilities, equipment, and personnel for carrying out activities. While
resources facilitate entity processing, inadequate resources can constrain processing by limiting the rate at which processing can take place. Resources have
characteristics such as capacity, speed, cycle time, and reliability. Like entities,
resources can be categorized as
Human or animate (operators, doctors, maintenance personnel, etc.).
Inanimate (equipment, tooling, floor space, etc.).
Intangible (information, electrical power, etc.).
2.3.4 Controls
Controls dictate how, when, and where activities are performed. Controls impose
order on the system. At the highest level, controls consist of schedules, plans, and
policies. At the lowest level, controls take the form of written procedures and machine control logic. At all levels, controls provide the information and decision
logic for how the system should operate. Examples of controls include
Routing sequences.
Production plans.
Work schedules.
Task prioritization.
Control software.
Instruction sheets.
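To tie the four building blocks together, here is a minimal sketch of how entities, activities, resources, and controls might be captured as data structures. This is our own Python illustration, not ProModel syntax; all names and fields are assumptions chosen for the example.

from dataclasses import dataclass, field

@dataclass
class Entity:                        # item processed through the system
    name: str                        # e.g. "part", "patient", "document"
    attributes: dict = field(default_factory=dict)   # cost, priority, ...

@dataclass
class Resource:                      # means by which activities are performed
    name: str                        # e.g. "operator", "machine", "doctor"
    capacity: int = 1

@dataclass
class Activity:                      # task that consumes time and resources
    name: str                        # e.g. "inspection", "treatment"
    duration: float                  # minutes (in practice, a distribution)
    resources: list = field(default_factory=list)

@dataclass
class Control:                       # logic dictating how/when/where
    name: str                        # e.g. "routing sequence", "work schedule"
    rule: str                        # e.g. "FIFO", "priority"

# Example: a one-step inspection process built from the four elements.
inspector = Resource("inspector", capacity=2)
inspect = Activity("inspect part", duration=3.5, resources=[inspector])
routing = Control("routing", rule="FIFO")
part = Entity("part", attributes={"priority": 1})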
While the sheer number of elements in a system can stagger the mind (the
number of different entities, activities, resources, and controls can easily exceed
100), the interactions of these elements are what make systems so complex and
difficult to analyze. System complexity is primarily a function of two factors:
Interdependencies
Variability
These two factors characterize virtually all human-made systems and make
system behavior difficult to analyze and predict. As shown in Figure 2.2, the degree of analytical difficulty increases exponentially as the number of interdependencies and random variables increases.
2.4.1 Interdependencies
Interdependencies cause the behavior of one element to affect other elements in
the system. For example, if a machine breaks down, repair personnel are put into
action while downstream operations become idle for lack of parts. Upstream
operations may even be forced to shut down due to a logjam in the entity flow
causing a blockage of activities. Another place where this chain reaction or
domino effect manifests itself is in situations where resources are shared between
FIGURE 2.2 Analytical difficulty as a function of the number of interdependencies and random variables.
two or more activities. A doctor treating one patient, for example, may be unable
to immediately respond to another patient needing his or her attention. This delay
in response may also set other forces in motion.
It should be clear that the complexity of a system has less to do with the
number of elements in the system than with the number of interdependent relationships. Even interdependent relationships can vary in degree, causing more or
less impact on overall system behavior. System interdependency may be either
tight or loose depending on how closely elements are linked. Elements that are
tightly coupled have a greater impact on system operation and performance than
elements that are only loosely connected. When an element such as a worker or
machine is delayed in a tightly coupled system, the impact is immediately felt by
other elements in the system and the entire process may be brought to a screeching halt.
In a loosely coupled system, activities have only a minor, and often delayed,
impact on other elements in the system. Systems guru Peter Senge (1990) notes
that for many systems, "Cause and effect are not closely related in time and
space." Sometimes the distance in time and space between cause-and-effect relationships becomes quite sizable. If enough reserve inventory has been stockpiled,
a trucker's strike cutting off the delivery of raw materials to a transmission plant
in one part of the world may not affect automobile assembly in another part of the
world for weeks. Cause-and-effect relationships are like a ripple of water that diminishes in impact as the distance in time and space increases.
Obviously, the preferred approach to dealing with interdependencies is to
eliminate them altogether. Unfortunately, this is not entirely possible for most
situations and actually defeats the purpose of having systems in the first place.
The whole idea of a system is to achieve a synergy that otherwise would be unattainable if every component were to function in complete isolation. Several
methods are used to decouple system elements or at least isolate their influence
so disruptions are not felt so easily. These include providing buffer inventories,
implementing redundant or backup measures, and dedicating resources to single tasks. The downside to these mitigating techniques is that they often lead to
excessive inventories and underutilized resources. The point to be made here
is that interdependencies, though they may be minimized somewhat, are simply a fact of life and are best dealt with through effective coordination and
management.
2.4.2 Variability
Variability is a characteristic inherent in any system involving humans and
machinery. Uncertainty in supplier deliveries, random equipment failures, unpredictable absenteeism, and uctuating demand all combine to create havoc in planning system operations. Variability compounds the already unpredictable effect of
interdependencies, making systems even more complex and unpredictable. Variability propagates in a system so that highly variable outputs from one workstation become highly variable inputs to another (Hopp and Spearman 2000).
Table 2.1 identifies the types of random variability that are typical of most manufacturing and service systems: activity times, decisions, quantities, event intervals, and attributes.
The tendency in systems planning is to ignore variability and calculate system capacity and performance based on average values. Many commercial scheduling packages such as MRP (material requirements planning) software work this
way. Ignoring variability distorts the true picture and leads to inaccurate performance predictions. Designing systems based on average requirements is like
deciding whether to wear a coat based on the average annual temperature or prescribing the same eyeglasses for everyone based on average eyesight. Adults have
been known to drown in water that was only four feet deep, on the average!
Wherever variability occurs, an attempt should be made to describe the nature or
pattern of the variability and assess the range of the impact that variability might
have on system performance.
Perhaps the most illustrative example of the impact that variability can have
on system behavior is the simple situation where entities enter into a single queue
to wait for a single server. An example of this might be customers lining up in
front of an ATM. Suppose that the time between customer arrivals is exponentially distributed with an average time of one minute and that they take an average
time of one minute, exponentially distributed, to transact their business. In queuing theory, this is called an M/M/1 queuing system. If we calculate system performance based solely on average time, there will never be any customers waiting
in the queue. Every minute that a customer arrives the previous customer finishes
his or her transaction. Now if we calculate the number of customers waiting in
line, taking into account the variation, we will discover that the waiting line grows
to infinity (the technical term is that the system "explodes"). Who would guess
that in a situation involving only one interdependent relationship, variation
alone would make the difference between zero items waiting in a queue and an infinite number in the queue?
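The ATM example is easy to verify for yourself. The sketch below is our own Python experiment (not from the text) that applies Lindley's recursion, a standard single-server waiting-time result, to exponential arrivals and service with one-minute means. Because the arrival and service rates are equal, there is no steady state: the average wait keeps climbing as the run is lengthened, instead of settling at the zero predicted by averages.

import random

def avg_wait(num_customers, mean_ia=1.0, mean_svc=1.0, seed=7):
    """Average wait in an M/M/1 queue, simulated customer by customer."""
    rng = random.Random(seed)
    wait = 0.0        # waiting time of the current customer
    prev_svc = 0.0    # service time of the previous customer
    total = 0.0
    for _ in range(num_customers):
        gap = rng.expovariate(1 / mean_ia)    # time since previous arrival
        # Lindley's recursion: this customer waits for the previous customer's
        # remaining wait plus service, less the gap between arrivals.
        wait = max(0.0, wait + prev_svc - gap)
        total += wait
        prev_svc = rng.expovariate(1 / mean_svc)
    return total / num_customers

for n in (1_000, 10_000, 100_000):
    print(f"{n:>7} customers: average wait = {avg_wait(n):7.1f} min")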
By all means, variability should be reduced and even eliminated wherever
possible. System planning is much easier if you don't have to contend with it.
Where it is inevitable, however, simulation can help predict the impact it will have
on system performance. Likewise, simulation can help identify the degree of
improvement that can be realized if variability is reduced or even eliminated. For
example, it can tell you how much reduction in overall flow time and flow time
variation can be achieved if operation time variation can be reduced by, say,
20 percent.
Variance: the degree of fluctuation that can and often does occur in any
of the preceding metrics. Variance introduces uncertainty, and therefore
risk, in achieving desired performance goals. Manufacturers and service
providers are often interested in reducing variance in delivery and service
times. For example, cycle times and throughput rates are going to have
some variance associated with them. Variance is reduced by controlling
activity times, improving resource reliability, and adhering to schedules.
These metrics can be given for the entire system, or they can be broken down by
individual resource, entity type, or some other characteristic. By relating these
metrics to other factors, additional meaningful metrics can be derived that are
useful for benchmarking or other comparative analysis. Typical relational metrics
include minimum theoretical flow time divided by actual flow time (flow time
efficiency), cost per unit produced (unit cost), annual inventory sold divided by
average inventory (inventory turns or turnover ratio), or units produced per cost or
labor input (productivity).
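As a quick worked illustration, the relational metrics named above reduce to simple ratios. The figures below are invented purely for the example.

# Hypothetical figures, invented solely to illustrate the relational metrics.
min_theoretical_flow_time = 2.5      # hours of pure value-added processing
actual_flow_time = 20.0              # hours from system entry to exit
total_cost = 180_000.0               # dollars for the period
units_produced = 4_500
annual_inventory_sold = 1_200_000.0  # dollars of inventory sold per year
average_inventory = 150_000.0        # average dollars of inventory on hand
labor_hours = 9_000.0

flow_time_efficiency = min_theoretical_flow_time / actual_flow_time
unit_cost = total_cost / units_produced
inventory_turns = annual_inventory_sold / average_inventory
productivity = units_produced / labor_hours

print(f"Flow time efficiency: {flow_time_efficiency:.1%}")        # 12.5%
print(f"Unit cost:            ${unit_cost:.2f}")                  # $40.00
print(f"Inventory turns:      {inventory_turns:.1f}")             # 8.0
print(f"Productivity:         {productivity:.2f} units/labor hr") # 0.50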
often based on whether the cost to implement a change produces a higher return
in performance.
FIGURE 2.3 Cost curves for determining the optimum number of resources: total cost is the sum of resource costs and waiting costs, plotted against the number of resources.
As shown in Figure 2.3, the number of resources at which the sum of the
resource costs and waiting costs is at a minimum is the optimum number of
resources to have. It also becomes the optimum acceptable waiting time.
In systems design, arriving at an optimum system design is not always
realistic, given the almost endless configurations that are sometimes possible and
limited time that is available. From a practical standpoint, the best that can be
expected is a near optimum solution that gets us close enough to our objective,
given the time constraints for making the decision.
Whether designing a new system or improving an existing system, it is important to follow sound design principles that take into account all relevant variables.
The activity of systems design and process improvement, also called systems
FIGURE 2.4 Four-step iterative approach to systems improvement: identify problems and opportunities, develop alternative solutions, evaluate the solutions, and select and implement the best solution.
identify possible areas of focus and leverage points for applying a solution.
Techniques such as cause-and-effect analysis and Pareto analysis are useful here.
Once a problem or opportunity has been identified and key decision variables
isolated, alternative solutions can be explored. This is where most of the design
and engineering expertise comes into play. Knowledge of best practices for common types of processes can also be helpful. The designer should be open to all
possible alternative feasible solutions so that the best possible solutions don't get
overlooked.
Generating alternative solutions requires creativity as well as organizational
and engineering skills. Brainstorming sessions, in which designers exhaust every
conceivably possible solution idea, are particularly useful. Designers should use
every stretch of the imagination and not be stifled by conventional solutions
alone. The best ideas come when system planners begin to think innovatively and
break from traditional ways of doing things. Simulation is particularly helpful in
this process in that it encourages thinking in radical new ways.
these techniques still can provide rough estimates but fall short in producing the
insights and accurate answers that simulation provides. Systems implemented
using these techniques usually require some adjustments after implementation to
compensate for inaccurate calculations. For example, if after implementing a system it is discovered that the number of resources initially calculated is insufficient
to meet processing requirements, additional resources are added. This adjustment
can create extensive delays and costly modifications if special personnel training
or custom equipment is involved. As a precautionary measure, a safety factor is
sometimes added to resource and space calculations to ensure they are adequate.
Overdesigning a system, however, also can be costly and wasteful.
As system interdependency and variability increase, not only does system
performance decrease, but the ability to accurately predict system performance
decreases as well (Lloyd and Melton 1997). Simulation enables a planner to accurately predict the expected performance of a system design and ultimately make
better design decisions.
Systems analysis tools, in addition to simulation, include simple calculations,
spreadsheets, operations research techniques (such as linear programming and
queuing theory), and special computerized tools for scheduling, layout, and so
forth. While these tools can provide quick and approximate solutions, they tend to
make oversimplifying assumptions, perform only static calculations, and are limited to narrow classes of problems. Additionally, they fail to fully account for
interdependencies and variability of complex systems and therefore are not as accurate as simulation in predicting complex system performance (see Figure 2.5).
They all lack the power, versatility, and visual appeal of simulation. They do provide quick solutions, however, and for certain situations produce adequate results.
They are important to cover here, not only because they sometimes provide a
good alternative to simulation, but also because they can complement simulation
by providing initial design estimates for input to the simulation model. They also
FIGURE 2.5 Simulation improves performance predictability. System predictability (0 to 100 percent) is plotted against system complexity, with and without simulation: without simulation, predictability falls off as complexity grows from low (call centers, doctor's offices, machining cells) through medium (banks, emergency rooms, production lines) to high (airports, hospitals, factories); with simulation it remains high.
can be useful to help validate the results of a simulation by comparing them with
results obtained using an analytic model.
2.9.2 Spreadsheets
Spreadsheet software comes in handy when calculations, sometimes involving
hundreds of values, need to be made. Manipulating rows and columns of numbers
on a computer is much easier than doing it on paper, even with a calculator handy.
Spreadsheets can be used to perform rough-cut analysis such as calculating
average throughput or estimating machine requirements. The drawback to spreadsheet software is the inability (or, at least, limited ability) to include variability in
activity times, arrival rates, and so on, and to account for the effects of interdependencies.
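For instance, a rough-cut machine-requirements estimate of the kind mentioned above is a one-line calculation. The sketch below uses invented numbers and, true to the limitation just described, works entirely from averages.

import math

# Rough-cut capacity estimate from averages only (all numbers invented).
demand_per_week = 9_000      # parts required per week
cycle_time_min = 2.4         # average minutes of machine time per part
availability = 0.85          # fraction of scheduled time the machine runs
scheduled_min = 5 * 16 * 60  # 5 days x 16 hours x 60 minutes per week

workload = demand_per_week * cycle_time_min          # minutes of work needed
capacity_per_machine = scheduled_min * availability  # minutes available
machines_needed = workload / capacity_per_machine    # about 5.29

print(f"Machines required: {machines_needed:.2f} "
      f"-> round up to {math.ceil(machines_needed)}")  # 6 machines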
What-if experiments can be run on spreadsheets based on expected values
(average customer arrivals, average activity times, mean time between equipment
failures) and simple interactions (activity A must be performed before activity B).
This type of spreadsheet simulation can be very useful for getting rough performance estimates. For some applications with little variability and component interaction, a spreadsheet simulation may be adequate. However, calculations based
on only average values and oversimplified interdependencies potentially can be
misleading and result in poor decisions. As one ProModel user reported, "We just
completed our final presentation of a simulation project and successfully saved
approximately $600,000. Our management was prepared to purchase an additional overhead crane based on spreadsheet analysis. We subsequently built a
ProModel simulation that demonstrated an additional crane will not be necessary.
The simulation also illustrated some potential problems that were not readily apparent with spreadsheet analysis."
Another weakness of spreadsheet modeling is the fact that all behavior is
assumed to be period-driven rather than event-driven. Perhaps you have tried to
figure out how your bank account balance fluctuated during a particular period when all you had to go on was your monthly statements. Using ending balances does not reflect changes as they occurred during the period. You can know the current state of the system at any point in time only by updating the state variables of the system each time an event or transaction occurs. When it comes to dynamic models, spreadsheet simulation suffers from the curse of dimensionality because the size of the model becomes unmanageable.
FIGURE 2.6 Queuing system configuration. [Diagram: arriving entities enter a queue, are processed by a server, and depart.]
A queuing system consists of one or more queues and one or more servers (see Figure 2.6). Entities, referred to
in queuing theory as the calling population, enter the queuing system and either
are immediately served if a server is available or wait in a queue until a server becomes available. Entities may be serviced using one of several queuing disciplines: first-in, first-out (FIFO); last-in, first-out (LIFO); priority; and others. The system capacity, or number of entities allowed in the system at any one time, may be either finite or, as is often the case, infinite. Several different entity queuing behaviors can be analyzed, such as balking (rejecting entry), reneging (abandoning the queue), or jockeying (switching queues). Different interarrival time distributions (such as constant or exponential) may also be analyzed, coming from either a finite or infinite population. Service times may also follow one of several distributions, such as exponential or constant.
Kendall (1953) devised a simple system for classifying queuing systems in
the form A/B/s, where A is the type of interarrival distribution, B is the type of
service time distribution, and s is the number of servers. Typical distribution types
for A and B are

M for a Markovian or exponential distribution
G for a general distribution
D for deterministic or constant times
An M/D/1 queuing system, for example, is a system in which interarrival times are
exponentially distributed, service times are constant, and there is a single server.
The arrival rate in a queuing system is usually represented by the Greek letter lambda (λ) and the service rate by the Greek letter mu (μ). The mean interarrival time then becomes 1/λ and the mean service time is 1/μ. A traffic intensity factor λ/μ is a parameter used in many of the queuing equations and is represented by the Greek letter rho (ρ).
Common performance measures of interest in a queuing system are based on
steady-state or long-term expected values and include
L = expected number of entities in the system (number in the queue and in service)
Lq = expected number of entities in the queue (queue length)
W = expected time an entity spends in the system
Wq = expected time an entity waits in the queue
For the M/M/1 queuing system (a single server with exponential interarrival and service times), these steady-state measures are computed as

L = ρ/(1 − ρ) = λ/(μ − λ)
Lq = L − ρ = ρ²/(1 − ρ)
W = 1/(μ − λ)
Wq = λ/(μ(μ − λ))
Pn = (1 − ρ)ρⁿ    for n = 0, 1, . . .

where Pn is the probability of having exactly n entities in the system.
If either the expected number of entities in the system or the expected waiting time is known, the other can be calculated easily using Little's law (1961):

L = λW

Little's law also can be applied to the queue length and waiting time:

Lq = λWq
Example: Suppose customers arrive to use an automatic teller machine (ATM) with an interarrival time of 3 minutes exponentially distributed and spend an average of 2.4 minutes exponentially distributed at the machine. What is the expected number of customers in the system and in the queue? What is the expected waiting time for customers in the system and in the queue?

λ = 20 per hour
μ = 25 per hour
ρ = λ/μ = 20/25 = 0.8

Solving for L:

L = λ/(μ − λ) = 20/(25 − 20) = 20/5 = 4
Solving for Lq:

Lq = ρ²/(1 − ρ) = (0.8)²/(1 − 0.8) = 0.64/0.2 = 3.2
Solving for W using Little's formula:

W = L/λ = 4/20 = 0.20 hour = 12 minutes

Solving for Wq using Little's formula:

Wq = Lq/λ = 3.2/20 = 0.16 hour = 9.6 minutes
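These formulas are easy to script for other arrival and service rates. The following Python sketch is our illustration (not from the text; the function name mm1_measures is ours) and reproduces the ATM example:

def mm1_measures(lam, mu):
    """Steady-state measures for an M/M/1 queue with arrival rate
    lam and service rate mu (requires lam < mu)."""
    rho = lam / mu           # traffic intensity
    L = lam / (mu - lam)     # expected number in system
    Lq = rho * L             # expected number in queue (Lq = L - rho)
    W = L / lam              # expected time in system (Little's law)
    Wq = Lq / lam            # expected time in queue
    return L, Lq, W, Wq

# ATM example: lambda = 20 per hour, mu = 25 per hour
L, Lq, W, Wq = mm1_measures(20, 25)
print(L, Lq, W, Wq)   # 4.0, 3.2, 0.2 hr (12 min), 0.16 hr (9.6 min)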
Descriptive OR techniques such as queuing theory are useful for the most
basic problems, but as systems become even moderately complex, the problems
get very complicated and quickly become mathematically intractable. In contrast,
simulation provides close estimates for even the most complex systems (assuming the model is valid). In addition, the statistical output of simulation is not
limited to only one or two metrics but instead provides information on all performance measures. Furthermore, while OR techniques give only average performance measures, simulation can generate detailed time-series data and
histograms providing a complete picture of performance over time.
2.10 Summary
An understanding of system dynamics is essential to using any tool for planning
system operations. Manufacturing and service systems consist of interrelated
elements (personnel, equipment, and so forth) that interactively function to produce a specified outcome (an end product, a satisfied customer, and so on).
Systems are made up of entities (the objects being processed), resources (the personnel, equipment, and facilities used to process the entities), activities (the
process steps), and controls (the rules specifying the who, what, where, when, and
how of entity processing).
The two characteristics of systems that make them so difficult to analyze are
interdependencies and variability. Interdependencies cause the behavior of one
element to affect other elements in the system either directly or indirectly. Variability compounds the effect of interdependencies in the system, making system
behavior nearly impossible to predict without the use of simulation.
The variables of interest in systems analysis are decision, response, and state
variables. Decision variables define how a system works; response variables
indicate how a system performs; and state variables indicate system conditions at
specific points in time. System performance metrics or response variables are generally time, utilization, inventory, quality, or cost related. Improving system performance requires the correct manipulation of decision variables. System optimization seeks to find the best overall setting of decision variable values that
maximizes or minimizes a particular response variable value.
Given the complex nature of system elements and the requirement to make
good design decisions in the shortest time possible, it is evident that simulation
can play a vital role in systems planning. Traditional systems analysis techniques
are effective in providing quick but often rough solutions to dynamic systems
problems. They generally fall short in their ability to deal with the complexity and
dynamically changing conditions in manufacturing and service systems. Simulation is capable of imitating complex systems of nearly any size and to nearly any
level of detail. It gives accurate estimates of multiple performance metrics and
leads designers toward good design decisions.
References
Blanchard, Benjamin S. System Engineering Management. New York: John Wiley & Sons,
1991.
Hopp, Wallace J., and M. Spearman. Factory Physics. New York: Irwin/McGraw-Hill,
2000, p. 282.
Kendall, D. G. "Stochastic Processes Occurring in the Theory of Queues and Their Analysis by the Method of Imbedded Markov Chains." Annals of Mathematical Statistics 24 (1953), pp. 338–54.
Kofman, Fred, and P. Senge. "Communities of Commitment: The Heart of Learning Organizations." In Sarita Chawla and John Renesch, eds. Portland, OR: Productivity Press, 1995.
Law, Averill M., and David W. Kelton. Simulation Modeling and Analysis. New York: McGraw-Hill, 2000.
Little, J. D. C. "A Proof for the Queuing Formula: L = λW." Operations Research 9, no. 3 (1961), pp. 383–87.
Lloyd, S., and K. Melton. "Using Statistical Process Control to Obtain More Precise Distribution Fitting Using Distribution Fitting Software." Simulators International XIV 29, no. 3 (April 1997), pp. 193–98.
Senge, Peter. The Fifth Discipline. New York: Doubleday, 1990.
Simon, Herbert A. Models of Man. New York: John Wiley & Sons, 1957, p. 198.
SIMULATION BASICS
3.1 Introduction
Simulation is much more meaningful when we understand what it is actually doing.
Understanding how simulation works helps us to know whether we are applying it
correctly and what the output results mean. Many books have been written that give
thorough and detailed discussions of the science of simulation (see Banks et al. 2001;
Hoover and Perry 1989; Law and Kelton 2000; Pooch and Wall 1993; Ross 1990;
Shannon 1975; Thesen and Travis 1992; and Widman, Loparo, and Nielsen 1989).
This chapter attempts to summarize the basic technical issues related to simulation
that are essential to understand in order to get the greatest benefit from the tool. The
chapter discusses the different types of simulation and how random behavior is simulated. A spreadsheet simulation example is given in this chapter to illustrate how
various techniques are combined to simulate the behavior of a common system.
We will look at what the first two characteristics mean in this chapter and focus on what the third characteristic means in Chapter 4.
FIGURE 3.1 Examples of (a) a deterministic simulation and (b) a stochastic simulation. [Diagram: in (a), constant inputs pass through the simulation to produce constant outputs; in (b), random inputs produce random outputs.]
FIGURE 3.2 Examples of (a) a discrete probability distribution and (b) a continuous probability distribution. [Plots: p(x) versus discrete values in (a), and f(x) versus continuous values in (b).]
FIGURE 3.3 The uniform(0,1) distribution of a random number generator.

f(x) = 1 for 0 ≤ x ≤ 1; 0 elsewhere
Mean: μ = 1/2    Variance: σ² = 1/12
Random numbers are the basis for simulating random events occurring in the simulated system, such as the arrival time of cars to a restaurant's drive-through window; the time it takes the driver to place an order; the number of hamburgers, drinks, and fries ordered; and the time it takes the restaurant to prepare the order. The input to the procedures used to generate these types of events is a stream of numbers that are uniformly distributed between zero and one (0 ≤ x ≤ 1). The random number generator is responsible for producing this stream of independent and uniformly distributed numbers
(Figure 3.3).
Before continuing, it should be pointed out that the numbers produced by a
random number generator are not random in the truest sense. For example, the
generator can reproduce the same sequence of numbers again and again, which is
not indicative of random behavior. Therefore, they are often referred to as pseudo-random number generators ("pseudo" comes from Greek and means false or fake).
Practically speaking, however, good pseudo-random number generators can
pump out long sequences of numbers that pass statistical tests for randomness (the
numbers are independent and uniformly distributed). Thus the numbers approximate real-world randomness for purposes of simulation, and the fact that they are
reproducible helps us in two ways. It would be difficult to debug a simulation
program if we could not regenerate the same sequence of random numbers to reproduce the conditions that exposed an error in our program. We will also learn in
Chapter 10 how reproducing the same sequence of random numbers is useful
when comparing different simulation models. For brevity, we will drop the
pseudo prex as we discuss how to design and keep our random number generator
healthy.
Linear Congruential Generators
There are many types of established random number generators, and researchers are actively pursuing the development of new ones (L'Ecuyer 1998). However, most simulation software is based on linear congruential generators (LCG). The LCG is efficient in that it quickly produces a sequence of random numbers without requiring a great deal of computational resources. Using the LCG, a sequence of integers Z1, Z2, Z3, . . . is defined by the recursive formula

Zi = (aZi−1 + c) mod m
where the constant a is called the multiplier, the constant c the increment, and the constant m the modulus (Law and Kelton 2000). The user must provide a seed or starting value, denoted Z0, to begin generating the sequence of integer values. Z0, a, c, and m are all nonnegative integers. The value of Zi is computed by dividing (aZi−1 + c) by m and setting Zi equal to the remainder part of the division, which is the result returned by the mod function. Therefore, the Zi values are bounded by 0 ≤ Zi ≤ m − 1 and are uniformly distributed in the discrete case. However, we desire the continuous version of the uniform distribution with values ranging between a low of zero and a high of one, which we will denote as Ui for i = 1, 2, 3, . . . . Accordingly, the value of Ui is computed by dividing Zi by m.
In a moment, we will consider some requirements for selecting the values for a, c, and m to ensure that the random number generator produces a long sequence of numbers before it begins to repeat them. For now, however, let's assign the following values: a = 21, c = 3, and m = 16, and generate a few pseudo-random numbers. Table 3.1 contains a sequence of 20 random numbers generated from the recursive formula

Zi = (21Zi−1 + 3) mod 16

An integer value of 13 was somewhat arbitrarily selected between 0 and m − 1 = 16 − 1 = 15 as the seed (Z0 = 13) to begin generating the sequence of random numbers shown in Table 3.1.
TABLE 3.1 Example LCG Zi = (21Zi−1 + 3) mod 16, with Z0 = 13

 i | 21Zi−1 + 3 | Zi | Ui = Zi/16
 0 |     -      | 13 |    -
 1 |    276     |  4 | 0.2500
 2 |     87     |  7 | 0.4375
 3 |    150     |  6 | 0.3750
 4 |    129     |  1 | 0.0625
 5 |     24     |  8 | 0.5000
 6 |    171     | 11 | 0.6875
 7 |    234     | 10 | 0.6250
 8 |    213     |  5 | 0.3125
 9 |    108     | 12 | 0.7500
10 |    255     | 15 | 0.9375
11 |    318     | 14 | 0.8750
12 |    297     |  9 | 0.5625
13 |    192     |  0 | 0.0000
14 |      3     |  3 | 0.1875
15 |     66     |  2 | 0.1250
16 |     45     | 13 | 0.8125
17 |    276     |  4 | 0.2500
18 |     87     |  7 | 0.4375
19 |    150     |  6 | 0.3750
20 |    129     |  1 | 0.0625
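The recursive formula is straightforward to program. The following Python sketch is our illustration (not from the text) and regenerates the Ui column of Table 3.1; note that the sequence repeats after 16 numbers:

def lcg(a, c, m, seed, n):
    """Generate n pseudo-random numbers from the linear congruential
    generator Zi = (a*Z(i-1) + c) mod m, returning Ui = Zi / m."""
    z = seed
    values = []
    for _ in range(n):
        z = (a * z + c) % m
        values.append(z / m)
    return values

# Reproduce Table 3.1: a = 21, c = 3, m = 16, Z0 = 13
print(lcg(21, 3, 16, 13, 20))
# [0.25, 0.4375, 0.375, 0.0625, 0.5, ...]  (repeats after 16 values)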
Following this guideline, the LCG can achieve a full cycle length of over 2.1 billion (2³¹, to be exact) random numbers.
Frequently, the long sequence of random numbers is subdivided into smaller
segments. These subsegments are referred to as streams. For example, Stream 1
could begin with the random number in the first position of the sequence and
continue down to the random number in the 200,000th position of the sequence.
Stream 2, then, would start with the random number in the 200,001st position of
the sequence and end at the 400,000th position, and so on. Using this approach,
each type of random event in the simulation model can be controlled by a unique
stream of random numbers. For example, Stream 1 could be used to generate the
arrival pattern of cars to a restaurant's drive-through window and Stream 2 could
be used to generate the time required for the driver of the car to place an order.
This assumes that no more than 200,000 random numbers are needed to simulate
each type of event. The practical and statistical advantages of assigning unique
streams to each type of event in the model are described in Chapter 10.
To subdivide the generator's sequence of random numbers into streams, you first need to decide how many random numbers to place in each stream. Next, you
begin generating the entire sequence of random numbers (cycle length) produced
by the generator and recording the Zi values that mark the beginning of each stream.
Therefore, each stream has its own starting or seed value. When using the random
number generator to drive different events in a simulation model, the previously
generated random number from a particular stream is used as input to the generator
to generate the next random number from that stream. For convenience, you may
want to think of each stream as a separate random number generator to be used in
different places in the model. For example, see Figure 10.5 in Chapter 10.
There are two types of linear congruential generators: the mixed congruential
generator and the multiplicative congruential generator. Mixed congruential generators are designed by assigning c > 0. Multiplicative congruential generators are
designed by assigning c = 0. The multiplicative generator is more efficient than the
mixed generator because it does not require the addition of c. The maximum cycle
length for a multiplicative generator can be set within one unit of the maximum
cycle length of the mixed generator by carefully selecting values for a and m. From
a practical standpoint, the difference in cycle length is insignificant considering that
both types of generators can boast cycle lengths of more than 2.1 billion.
ProModel uses the following multiplicative generator:

Zi = (630,360,016 Zi−1) mod (2³¹ − 1)

Specifically, it is a prime modulus multiplicative linear congruential generator (PMMLCG) with a = 630,360,016, c = 0, and m = 2³¹ − 1. It has been extensively tested and is known to be a reliable random number generator for simulation (Law and Kelton 2000). The ProModel implementation of this generator divides the cycle length of 2³¹ − 1 = 2,147,483,647 into 100 unique streams.
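As a rough Python sketch of the recursive formula above (our illustration, not ProModel's actual implementation):

MULTIPLIER = 630_360_016
MODULUS = 2**31 - 1          # 2,147,483,647, a prime number

def pmmlcg_next(z):
    """One step of the PMMLCG: maps the current integer state Z(i-1)
    to (Zi, Ui), where Ui = Zi / m is the uniform(0,1) variate."""
    z = (MULTIPLIER * z) % MODULUS
    return z, z / MODULUS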
Testing Random Number Generators
When faced with using a random number generator about which you know very
little, it is wise to verify that the numbers emanating from it satisfy the two
important properties defined at the beginning of this section. The numbers produced by the random number generator must be (1) independent and (2) uniformly distributed between zero and one (uniform(0,1)). To verify that the generator satisfies these properties, you first generate a sequence of random numbers U1, U2, U3, . . . and then subject them to an appropriate test of hypothesis.
The hypotheses for testing the independence property are

H0: Ui values from the generator are independent
H1: Ui values from the generator are not independent

Several statistical methods have been developed for testing these hypotheses at a specified significance level α. One of the most commonly used methods is the runs test. Banks et al. (2001) review three different versions of the runs test for conducting this independence test. Additionally, two runs tests are implemented in Stat::Fit: the Runs Above and Below the Median Test and the Runs Up and Runs Down Test. Chapter 6 contains additional material on tests for independence.
The hypotheses for testing the uniformity property are

H0: Ui values are uniform(0,1)
H1: Ui values are not uniform(0,1)

Several statistical methods have also been developed for testing these hypotheses at a specified significance level α. The Kolmogorov-Smirnov test and the chi-square test are perhaps the most frequently used tests. (See Chapter 6 for a description of the chi-square test.) The objective is to determine if the uniform(0,1) distribution fits or describes the sequence of random numbers produced by the random number generator. These tests are included in the Stat::Fit software and are further described in many introductory textbooks on probability and statistics (see, for example, Johnson 1994).
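A quick uniformity check can be coded in a few lines. The sketch below is ours (it uses SciPy's Kolmogorov-Smirnov test, not the specific procedure implemented in Stat::Fit):

from scipy import stats

# Numbers from the generator under test; here, one full cycle of the
# toy LCG from Table 3.1 (Zi = (21*Z + 3) mod 16, Z0 = 13).
z, u = 13, []
for _ in range(16):
    z = (21 * z + 3) % 16
    u.append(z / 16)

# Kolmogorov-Smirnov test of H0: the Ui values are uniform(0,1)
statistic, p_value = stats.kstest(u, "uniform")
print(statistic, p_value)   # reject H0 if p_value < alpha (say, 0.05)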
The inverse transformation method, which can be used to generate variates from both discrete and continuous distributions, is described next, starting first with the continuous case. For a review of the other methods, see Law and Kelton (2000).
Continuous Distributions
The application of the inverse transformation method to generate random variates from continuous distributions is straightforward and efficient for many continuous distributions. For a given probability density function f(x), find the cumulative distribution function of X. That is, F(x) = P(X ≤ x). Next, set U = F(x), where U is uniform(0,1), and solve for x. Solving for x yields x = F⁻¹(U). The equation x = F⁻¹(U) transforms U into a value for x that conforms to the given distribution f(x).
As an example, suppose that we need to generate variates from the exponential distribution with mean β. The probability density function f(x) and corresponding cumulative distribution function F(x) are

f(x) = (1/β) e^(−x/β) for x > 0; 0 elsewhere
F(x) = 1 − e^(−x/β) for x > 0; 0 elsewhere
Setting U = F(x) and solving for x yields

U = 1 − e^(−x/β)
e^(−x/β) = 1 − U
ln(e^(−x/β)) = ln(1 − U)    where ln is the natural logarithm
−x/β = ln(1 − U)
x = −β ln(1 − U)

The random variate x in the above equation is exponentially distributed with mean β provided U is uniform(0,1).
Suppose three observations of an exponentially distributed random variable with mean β = 2 are desired. The next three numbers generated by the random number generator are U1 = 0.27, U2 = 0.89, and U3 = 0.13. The three numbers are transformed into variates x1, x2, and x3 from the exponential distribution with mean β = 2 as follows:

x1 = −2 ln(1 − U1) = −2 ln(1 − 0.27) = 0.63
x2 = −2 ln(1 − U2) = −2 ln(1 − 0.89) = 4.41
x3 = −2 ln(1 − U3) = −2 ln(1 − 0.13) = 0.28
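The transformation amounts to one line of code. A short Python sketch (our illustration) reproduces the three variates:

import math

def exponential_variate(beta, u):
    """Inverse transformation method for the exponential distribution:
    x = -beta * ln(1 - u), where u is a uniform(0,1) random number."""
    return -beta * math.log(1 - u)

for u in (0.27, 0.89, 0.13):
    print(round(exponential_variate(2.0, u), 2))   # 0.63, 4.41, 0.28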
Figure 3.4 provides a graphical representation of the inverse transformation method in the context of this example. The first step is to generate U, where U is uniform(0,1). Next, locate U on the y axis and draw a horizontal line from that point to the cumulative distribution function [F(x) = 1 − e^(−x/2)]. From this point of intersection with F(x), a vertical line is dropped down to the x axis to obtain the corresponding value of the variate. This process is illustrated in Figure 3.4 for generating variates x1 and x2 given U1 = 0.27 and U2 = 0.89.

FIGURE 3.4 Graphical explanation of inverse transformation method for continuous variates. [Plot of F(x) = 1 − e^(−x/2): U1 = 0.27 maps to x1 = −2 ln(1 − 0.27) = 0.63, and U2 = 0.89 maps to x2 = −2 ln(1 − 0.89) = 4.41.]
Application of the inverse transformation method is straightforward as long
as there is a closed-form formula for the cumulative distribution function, which
is the case for many continuous distributions. However, the normal distribution is
one exception; its cumulative distribution function does not have a closed-form inverse. Thus it is not possible to solve for a simple equation to generate normally distributed variates. For these cases, there are other methods that can be
used to generate the random variates. See, for example, Law and Kelton (2000)
for a description of additional methods for generating random variates from
continuous distributions.
Discrete Distributions
The application of the inverse transformation method to generate variates from
discrete distributions is basically the same as for the continuous case. The difference is in how it is implemented. For example, consider the following probability
mass function:

p(x) = P(X = x) = 0.10 for x = 1; 0.30 for x = 2; 0.60 for x = 3
The random variate x has three possible values. The probability that x is equal to
1 is 0.10, P(X = 1) = 0.10; P(X = 2) = 0.30; and P(X = 3) = 0.60. The cumulative distribution function F(x) is shown in Figure 3.5. The random variable x
could be used in a simulation to represent the number of defective components on
a circuit board or the number of drinks ordered from a drive-through window, for
example.
Suppose that an observation from the above discrete distribution is desired. The first step is to generate U, where U is uniform(0,1). Using Figure 3.5, the value of U is located on the y axis and mapped to the x axis: x = 1 if U falls at or below 0.10, x = 2 if U falls above 0.10 and at or below 0.40, and x = 3 otherwise.
FIGURE 3.5 Graphical explanation of inverse transformation method for discrete variates. [Step plot of F(x) rising to 0.10 at x = 1, 0.40 at x = 2, and 1.00 at x = 3. U3 = 0.05 yields x3 = 1, U1 = 0.27 yields x1 = 2, and U2 = 0.89 yields x2 = 3.]
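In code, the discrete case reduces to comparing U against the cumulative probabilities. A short Python sketch (our illustration) for this example distribution:

def discrete_variate(u):
    """Inverse transformation method for the discrete distribution
    p(1) = 0.10, p(2) = 0.30, p(3) = 0.60, using the cumulative
    distribution values F(1) = 0.10, F(2) = 0.40, F(3) = 1.00."""
    for cumulative, x in [(0.10, 1), (0.40, 2), (1.00, 3)]:
        if u <= cumulative:
            return x

for u in (0.05, 0.27, 0.89):
    print(discrete_variate(u))   # 1, 2, 3, matching Figure 3.5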
[Figure: descriptive drawing of the ATM system. Customers (entities) arrive, wait in the ATM queue (FIFO), are served by the ATM server (resource), and depart. In the snapshot shown, the 6th customer arrives at 16.2 minutes and the 7th customer at 21.0 minutes, an interarrival time of 4.8 minutes.]
Xi = −β ln(1 − Ui)    for i = 1, 2, 3, . . . , 25
where Xi represents the ith value realized from the exponential distribution with mean β, and Ui is the ith random number drawn from a uniform(0,1) distribution. The i = 1, 2, 3, . . . , 25 indicates that we will compute 25 values from the transformation equation. However, we need two different versions of this equation to generate the two sets of 25 exponentially distributed random variates needed to simulate 25 customers, because the mean interarrival time of 3.0 minutes is different from the mean service time of 2.4 minutes. Let X1i denote the interarrival time and X2i denote the service time generated for the ith customer simulated in the system.
TABLE 3.2 Spreadsheet simulation of the ATM system (all times in minutes). The Arrivals to ATM section comprises the Stream 1 (Z1i), Random Number (U1i), Interarrival Time (X1i), and Arrival Time columns; the ATM Processing Time section comprises the Stream 2 (Z2i), Random Number (U2i), and Service Time (X2i) columns. Row i = 0 holds the seed values Z10 = 3 and Z20 = 122.

 i | Z1i | U1i   | X1i   | Arrival | Z2i | U2i   | X2i  | Begin Service | Departure | Time in Queue | Time in System
 0 |   3 |   -   |   -   |    -    | 122 |   -   |  -   |       -       |     -     |       -       |       -
 1 |  66 | 0.516 |  2.18 |   2.18  |   5 | 0.039 | 0.10 |      2.18     |    2.28   |      0.00     |      0.10
 2 | 109 | 0.852 |  5.73 |   7.91  | 108 | 0.844 | 4.46 |      7.91     |   12.37   |      0.00     |      4.46
 3 | 116 | 0.906 |  7.09 |  15.00  |  95 | 0.742 | 3.25 |     15.00     |   18.25   |      0.00     |      3.25
 4 |   7 | 0.055 |  0.17 |  15.17  |  78 | 0.609 | 2.25 |     18.25     |   20.50   |      3.08     |      5.33
 5 |  22 | 0.172 |  0.57 |  15.74  | 105 | 0.820 | 4.12 |     20.50     |   24.62   |      4.76     |      8.88
 6 |  81 | 0.633 |  3.01 |  18.75  |  32 | 0.250 | 0.69 |     24.62     |   25.31   |      5.87     |      6.56
 7 |  40 | 0.313 |  1.13 |  19.88  |  35 | 0.273 | 0.77 |     25.31     |   26.08   |      5.43     |      6.20
 8 |  75 | 0.586 |  2.65 |  22.53  |  98 | 0.766 | 3.49 |     26.08     |   29.57   |      3.55     |      7.04
 9 |  42 | 0.328 |  1.19 |  23.72  |  13 | 0.102 | 0.26 |     29.57     |   29.83   |      5.85     |      6.11
10 | 117 | 0.914 |  7.36 |  31.08  |  20 | 0.156 | 0.41 |     31.08     |   31.49   |      0.00     |      0.41
11 |  28 | 0.219 |  0.74 |  31.82  |  39 | 0.305 | 0.87 |     31.82     |   32.69   |      0.00     |      0.87
12 |  79 | 0.617 |  2.88 |  34.70  |  54 | 0.422 | 1.32 |     34.70     |   36.02   |      0.00     |      1.32
13 | 126 | 0.984 | 12.41 |  47.11  | 113 | 0.883 | 5.15 |     47.11     |   52.26   |      0.00     |      5.15
14 |  89 | 0.695 |  3.56 |  50.67  |  72 | 0.563 | 1.99 |     52.26     |   54.25   |      1.59     |      3.58
15 |  80 | 0.625 |  2.94 |  53.61  | 107 | 0.836 | 4.34 |     54.25     |   58.59   |      0.64     |      4.98
16 |  19 | 0.148 |  0.48 |  54.09  |  74 | 0.578 | 2.07 |     58.59     |   60.66   |      4.50     |      6.57
17 |  18 | 0.141 |  0.46 |  54.55  |  21 | 0.164 | 0.43 |     60.66     |   61.09   |      6.11     |      6.54
18 | 125 | 0.977 | 11.32 |  65.87  |  60 | 0.469 | 1.52 |     65.87     |   67.39   |      0.00     |      1.52
19 |  68 | 0.531 |  2.27 |  68.14  | 111 | 0.867 | 4.84 |     68.14     |   72.98   |      0.00     |      4.84
20 |  23 | 0.180 |  0.60 |  68.74  |  30 | 0.234 | 0.64 |     72.98     |   73.62   |      4.24     |      4.88
21 | 102 | 0.797 |  4.78 |  73.52  | 121 | 0.945 | 6.96 |     73.62     |   80.58   |      0.10     |      7.06
22 |  97 | 0.758 |  4.26 |  77.78  | 112 | 0.875 | 4.99 |     80.58     |   85.57   |      2.80     |      7.79
23 | 120 | 0.938 |  8.34 |  86.12  |  51 | 0.398 | 1.22 |     86.12     |   87.34   |      0.00     |      1.22
24 |  91 | 0.711 |  3.72 |  89.84  |  50 | 0.391 | 1.19 |     89.84     |   91.03   |      0.00     |      1.19
25 | 122 | 0.953 |  9.17 |  99.01  |  29 | 0.227 | 0.62 |     99.01     |   99.63   |      0.00     |      0.62

Average: Time in Queue = 1.94; Time in System = 4.26
The equation for transforming a random number into an interarrival time
observation from the exponential distribution with mean 3.0 minutes becomes
X1i = −3.0 ln(1 − U1i)    for i = 1, 2, 3, . . . , 25
where U1i denotes the ith value drawn from the random number generator using Stream 1. This equation is used in the Arrivals to ATM section of Table 3.2 under the Interarrival Time (X1i) column.
The equation for transforming a random number into an ATM service time observation from the exponential distribution with mean 2.4 minutes becomes
X2i = −2.4 ln(1 − U2i)    for i = 1, 2, 3, . . . , 25
where U2i denotes the ith value drawn from the random number generator using Stream 2. This equation is used in the ATM Processing Time section of Table 3.2 under the Service Time (X2i) column.
Let's produce the sequence of U1i values that feeds the transformation equation (X1i) for interarrival times using a linear congruential generator (LCG) similar to the one used in Table 3.1. The equations are

Z1i = (21Z1i−1 + 3) mod 128
U1i = Z1i/128    for i = 1, 2, 3, . . . , 25

The authors defined Stream 1's starting or seed value to be 3. So we will use Z10 = 3 to kick off this stream of 25 random numbers. These equations are used in the Arrivals to ATM section of Table 3.2 under the Stream 1 (Z1i) and Random Number (U1i) columns.
Likewise, we will produce the sequence of U2i values that feeds the transformation equation (X2i) for service times using

Z2i = (21Z2i−1 + 3) mod 128
U2i = Z2i/128    for i = 1, 2, 3, . . . , 25

and will specify a starting seed value of Z20 = 122, Stream 2's seed value, to kick off the second stream of 25 random numbers. These equations are used in the ATM Processing Time section of Table 3.2 under the Stream 2 (Z2i) and Random Number (U2i) columns.
The spreadsheet presented in Table 3.2 illustrates 25 random variates for both the interarrival time, column (X1i), and service time, column (X2i). All time values are given in minutes in Table 3.2. To be sure we pull this together correctly, let's compute a couple of interarrival times with mean 3.0 minutes and compare them to the values given in Table 3.2.
Given Z10 = 3:

Z11 = (21Z10 + 3) mod 128 = (21(3) + 3) mod 128 = 66 mod 128 = 66
U11 = Z11/128 = 66/128 = 0.516
X11 = −3.0 ln(1 − U11) = −3.0 ln(1 − 0.516) = 2.18 minutes
FIGURE 3.7 Microsoft Excel snapshot of the ATM spreadsheet illustrating the equations for the Arrivals to ATM section. [Screenshot not reproduced.]
The value of 2.18 minutes is the first value appearing under the column Interarrival Time (X1i). To compute the next interarrival time value X12, we start by using the value of Z11 to compute Z12.
Given Z11 = 66:

Z12 = (21Z11 + 3) mod 128 = (21(66) + 3) mod 128 = 109
U12 = Z12/128 = 109/128 = 0.852
X12 = −3.0 ln(1 − U12) = −3.0 ln(1 − 0.852) = 5.73 minutes

Figure 3.7 illustrates how the equations were programmed in Microsoft Excel for the Arrivals to ATM section of the spreadsheet. Note that the U1i and X1i values in Table 3.2 are rounded to three and two places to the right of the decimal, respectively. The same rounding rule is used for U2i and X2i.
It would be useful for you to verify a few of the service time values with mean 2.4 minutes appearing in Table 3.2 using

Z20 = 122
Z2i = (21Z2i−1 + 3) mod 128
U2i = Z2i/128
X2i = −2.4 ln(1 − U2i)    for i = 1, 2, 3, . . .
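For instance, the following Python sketch (our illustration) performs that verification for the first three service times:

import math

z = 122                          # Stream 2 seed, Z20
for i in range(1, 4):
    z = (21 * z + 3) % 128       # Z2i
    u = z / 128                  # U2i
    x = -2.4 * math.log(1 - u)   # service time X2i
    print(i, z, round(u, 3), round(x, 2))
# 1 5 0.039 0.1
# 2 108 0.844 4.46
# 3 95 0.742 3.25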
The equations started out looking a little difficult to manipulate but turned out not to be so bad when we put some numbers in them and organized them in a spreadsheet, though it was a bit tedious. The important thing to note here is that
although it is transparent to the user, ProModel uses a very similar method to
produce exponentially distributed random variates, and you now understand how
it is done.
The LCG just given has a maximum cycle length of 128 random numbers
(you may want to verify this), which is more than enough to generate 25 interarrival time values and 25 service time values for this simulation. However, it is a
poor random number generator compared to the one used by ProModel. It was
chosen because it is easy to program into a spreadsheet and to compute by hand to
facilitate our understanding. The biggest difference between it and the random
number generator in ProModel is that the ProModel random number generator
manipulates much larger numbers to pump out a much longer stream of numbers
that pass all statistical tests for randomness.
Before moving on, let's take a look at why we chose Z10 = 3 and Z20 = 122. Our goal was to make sure that we did not use the same uniform(0,1) random number to generate both an interarrival time and a service time. If you look carefully at Table 3.2, you will notice that the seed value Z20 = 122 is the Z1 value at i = 25, the last value of random number Stream 1. Stream 2 was merely defined to start where Stream 1 ended. Thus our spreadsheet used a unique random number to generate each interarrival and service time. Now let's add the necessary logic to our spreadsheet to conduct the simulation of the ATM system.
The Service Time column simply records the simulated amount of time
required for the customer to complete their transaction at the ATM. These values
are copies of the service time X2i values generated in the ATM Processing Time
section of the spreadsheet.
The Departure Time column records the moment in time at which a customer
departs the system after completing their transaction at the ATM. To compute the
time at which a customer departs the system, we take the time at which the customer gained access to the ATM to begin service, column (3), and add to that the
length of time the service required, column (4). For example, the first customer gained access to the ATM to begin service at 2.18 minutes, column (3). The service time for the customer was determined to be 0.10 minutes in column (4). So, the customer departs 0.10 minutes later, or at time 2.18 + 0.10 = 2.28 minutes. This customer's short service time must be because they forgot their PIN number and could not conduct their transaction.
The Time in Queue column records how long a customer waits in the queue
before gaining access to the ATM. To compute the time spent in the queue, we
take the time at which the ATM began serving the customer, column (3), and subtract from that the time at which the customer arrived to the system, column (2).
The fourth customer arrives to the system at time 15.17 minutes and begins getting service from the ATM at 18.25 minutes; thus, the fourth customer's time in the queue is 18.25 − 15.17 = 3.08 minutes.
The Time in System column records how long a customer was in the system. To compute the time spent in the system, we subtract the customer's arrival time, column (2), from the customer's departure time, column (5). The fifth customer arrives to the system at 15.74 minutes and departs the system at 24.62 minutes. Therefore, this customer spent 24.62 − 15.74 = 8.88 minutes in the system.
Now let's go back to the Begin Service Time column, which records the time at which a customer begins to be served by the ATM. The very first customer to arrive to the system when it opens for service advances directly to the ATM. There is no waiting time in the queue; thus the value recorded for the time that the first customer begins service at the ATM is the customer's arrival time. With the
exception of the first customer to arrive to the system, we have to capture the logic
that a customer cannot begin service at the ATM until the previous customer using
the ATM completes his or her transaction. One way to do this is with an IF statement as follows:
IF (Current Customer's Arrival Time < Previous Customer's Departure Time)
THEN (Current Customer's Begin Service Time = Previous Customer's Departure Time)
ELSE (Current Customer's Begin Service Time = Current Customer's Arrival Time)
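The same row-by-row logic ports directly to a small program. The following Python sketch is our illustration (not part of the text) and replays the spreadsheet simulation using the Table 3.2 variates:

def simulate_atm(interarrival_times, service_times):
    """Replays the spreadsheet logic: each customer begins service at
    the later of their arrival time and the previous departure time."""
    results, arrival, prev_departure = [], 0.0, 0.0
    for ia, svc in zip(interarrival_times, service_times):
        arrival += ia                          # column (2): arrival time
        begin = max(arrival, prev_departure)   # column (3): the IF logic
        departure = begin + svc                # column (5)
        results.append((round(arrival, 2), round(begin, 2),
                        round(departure, 2),
                        round(begin - arrival, 2),       # time in queue
                        round(departure - arrival, 2)))  # time in system
        prev_departure = departure
    return results

# First five customers from Table 3.2
rows = simulate_atm([2.18, 5.73, 7.09, 0.17, 0.57],
                    [0.10, 4.46, 3.25, 2.25, 4.12])
print(rows[3])   # (15.17, 18.25, 20.5, 3.08, 5.33): the fourth customer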
FIGURE 3.8 Microsoft Excel snapshot of the ATM spreadsheet illustrating the IF statement for the Begin Service Time column. [Screenshot not reproduced.]
The Excel spreadsheet cell L10 (column L, row 10) in Figure 3.8 is the Begin Service Time for the second customer to arrive to the system and is programmed with =IF(K10<N9,N9,K10). Since the second customer's arrival time (Excel cell K10) is not less than the first customer's departure time (Excel cell N9), the logical test evaluates to false and the second customer's time to begin service is set to his or her arrival time (Excel cell K10). The fourth customer shown in Figure 3.8 provides an example of when the logical test evaluates to true, which results in the fourth customer beginning service when the third customer departs the ATM.
[Summary of the ATM spreadsheet simulation results by replication:]

Replication | Average Time in Queue | Average Time in System
1           | 1.94 minutes          | 4.26 minutes
2           | 0.84 minutes          | 2.36 minutes
Average     | 1.39 minutes          | 3.31 minutes
3.6 Summary
Modeling random behavior begins with transforming the output produced by a
random number generator into observations (random variates) from an appropriate statistical distribution. The values of the random variates are combined with
logical operators in a computer program to compute output that mimics the performance behavior of stochastic systems. Performance estimates for stochastic systems produced this way are themselves subject to random variation, which is why results from multiple replications are averaged.
a. Probability density function:

f(x) = 1/(β − α)  for α ≤ x ≤ β;  0 elsewhere

where β = 7 and α = 4.

b. Probability mass function:

p(x) = P(X = x) = x/15  for x = 1, 2, 3, 4, 5;  0 elsewhere
References
Banks, Jerry; John S. Carson II; Barry L. Nelson; and David M. Nicol. Discrete-Event
System Simulation. Englewood Cliffs, NJ: Prentice Hall, 2001.
Hoover, Stewart V., and Ronald F. Perry. Simulation: A Problem-Solving Approach.
Reading, MA: Addison-Wesley, 1989.
Johnson, R. A. Miller and Freund's Probability and Statistics for Engineers. 5th ed.
Englewood Cliffs, NJ: Prentice Hall, 1994.
Law, Averill M., and David W. Kelton. Simulation Modeling and Analysis. New York:
McGraw-Hill, 2000.
L'Ecuyer, P. "Random Number Generation." In Handbook of Simulation: Principles, Methodology, Advances, Applications, and Practice, ed. J. Banks, pp. 93–137. New York: John Wiley & Sons, 1998.
Pooch, Udo W., and James A. Wall. Discrete Event Simulation: A Practical Approach.
Boca Raton, FL: CRC Press, 1993.
Pritsker, A. A. B. Introduction to Simulation and SLAM II. 4th ed. New York: John Wiley
& Sons, 1995.
Ross, Sheldon M. A Course in Simulation. New York: Macmillan, 1990.
Shannon, Robert E. System Simulation: The Art and Science. Englewood Cliffs, NJ:
Prentice Hall, 1975.
Thesen, Arne, and Laurel E. Travis. Simulation for Decision Making. Minneapolis, MN:
West Publishing, 1992.
Widman, Lawrence E.; Kenneth A. Loparo; and Norman R. Nielsen. Artificial Intelligence,
Simulation, and Modeling. New York: John Wiley & Sons, 1989.
DISCRETE-EVENT SIMULATION
When the only tool you have is a hammer, every problem begins to resemble a
nail.
Abraham Maslow
4.1 Introduction
Building on the foundation provided by Chapter 3 on how random numbers and
random variates are used to simulate stochastic systems, the focus of this chapter
is on discrete-event simulation, which is the main topic of this book. A discrete-event simulation is one in which changes in the state of the simulation model
occur at discrete points in time as triggered by events. The events in the automatic
teller machine (ATM) simulation example of Chapter 3 that occur at discrete
points in time are the arrivals of customers to the ATM queue and the completion
of their transactions at the ATM. However, you will learn in this chapter that the
spreadsheet simulation of the ATM system in Chapter 3 was not technically executed as a discrete-event simulation.
This chapter first defines what a discrete-event simulation is compared to a continuous simulation. Next the chapter summarizes the basic technical issues related to discrete-event simulation to facilitate your understanding of how to effectively use the tool. Questions that will be answered include these:
State changes in a model occur when some event happens. The state of the model
becomes the collective state of all the elements in the model at a particular point
in time. State variables in a discrete-event simulation are referred to as discrete-change state variables. A restaurant simulation is an example of a discrete-event
simulation because all of the state variables in the model, such as the number of
customers in the restaurant, are discrete-change state variables (see Figure 4.1).
Most manufacturing and service systems are typically modeled using discrete-event simulation.
In continuous simulation, state variables change continuously with respect to
time and are therefore referred to as continuous-change state variables. An example of a continuous-change state variable is the level of oil in an oil tanker that is
being either loaded or unloaded, or the temperature of a building that is controlled
by a heating and cooling system. Figure 4.2 compares a discrete-change state
variable and a continuous-change state variable as they vary over time.
FIGURE 4.1 Discrete events cause discrete state changes. [Plot: number of customers in the restaurant versus time, from the start of the simulation; the count steps up at Event 1 (customer arrives) and Event 2 (customer arrives) and steps down at Event 3 (customer departs).]
FIGURE 4.2 Comparison of a discrete-change state variable and a continuous-change state variable. [Plot: value versus time; the discrete-change state variable changes in steps while the continuous-change state variable varies smoothly.]
Continuous simulation products use either differential equations or difference equations to define the rates of change in state variables over time.
Batch processing in which fluids are pumped into and out of tanks can often be
modeled using difference equations.
1.2 minutes. At the start of the activity, a normal random variate is generated
based on these parameters, say 4.2 minutes, and an activity completion event is
scheduled for that time into the future. Scheduled events are inserted chronologically into an event calendar to await the time of their occurrence. Events that
occur at predefined intervals theoretically all could be determined in advance and therefore be scheduled at the beginning of the simulation. For example, entities arriving every five minutes into the model could all be scheduled easily at the start
of the simulation. Rather than preschedule all events at once that occur at a set frequency, however, they are scheduled only when the next occurrence must be determined. In the case of a periodic arrival, the next arrival would not be scheduled
until the current scheduled arrival is actually pulled from the event calendar for
processing. This postponement until the latest possible moment minimizes the
size of the event calendar and eliminates the necessity of knowing in advance how
many events to schedule when the length of the simulation may be unknown.
Conditional events are triggered by a condition being met rather than by the
passage of time. An example of a conditional event might be the capturing of a
resource that is predicated on the resource being available. Another example
would be an order waiting for all of the individual items making up the order to be
assembled. In these situations, the event time cannot be known beforehand, so
the pending event is simply placed into a waiting list until the conditions can be satisfied. Often multiple pending events in a list are waiting for the same condition. For example, multiple entities might be waiting to use the same resource
when it becomes available. Internally, the resource would have a waiting list for all
items currently waiting to use it. While in most cases events in a waiting list are processed first-in, first-out (FIFO), items could be inserted and removed using a
number of different criteria. For example, items may be inserted according to item
priority but be removed according to earliest due date.
Events, whether scheduled or conditional, trigger the execution of logic that is
associated with that event. For example, when an entity frees a resource, the state
and statistical variables for the resource are updated, the graphical animation is updated, and the input waiting list for the resource is examined to see what activity to
respond to next. Any new events resulting from the processing of the current event
are inserted into either the event calendar or another appropriate waiting list.
In real life, events can occur simultaneously so that multiple entities can be
doing things at the same instant in time. In computer simulation, however, especially when running on a single processor, events can be processed only one at a time
even though it is the same instant in simulated time. As a consequence, a method or
rule must be established for processing events that occur at the exact same simulated
time. For some special cases, the order in which events are processed at the current
simulation time might be significant. For example, an entity that frees a resource and
then tries to immediately get the same resource might have an unfair advantage over
other entities that might have been waiting for that particular resource.
FIGURE 4.3 [Flowchart of the discrete-event simulation cycle: after the start, the clock advances to the next event time. If it is the termination event, statistics are updated, an output report is generated, and the simulation stops. Otherwise the event is processed and any new events are scheduled; statistics, state variables, and the animation are updated; any conditional events that can now occur are processed; and the clock advances again.]

In ProModel, the entity, downtime, or other item that is currently being processed is allowed to continue processing as far as it can at the current simulation time. That means it continues processing until it reaches either a conditional event that cannot be satisfied or a timed delay that causes a future event to be scheduled. It is also possible that the object simply finishes all of the processing defined for it and, in the case of an entity, exits the system. As an object is being processed, any resources that are freed or other entities that might have been created as byproducts are placed in an action list and are processed one at a time in a similar fashion after the current object reaches a stopping point. To deliberately
suspend the current object in order to allow items in the action list to be processed,
a zero delay time can be specied for the current object. This puts the current item
into the future events list (event calendar) for later processing, even though it is
still processed at the current simulation time.
When all scheduled and conditional events that are possible at the current simulation time have been processed, the clock advances to the next scheduled
event and the process continues. When a termination event occurs, the simulation
ends and statistical reports are generated. The ongoing cycle of processing scheduled and conditional events, updating state and statistical variables, and creating
new events constitutes the essence of discrete-event simulation (see Figure 4.3).
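A heavily simplified Python sketch of that cycle (our illustration; it assumes each event object provides a process(clock) method returning newly scheduled events, and real engines such as ProModel add waiting lists, conditional-event checks, and statistics collection):

import heapq
from itertools import count

def run(event_calendar, end_time):
    """Minimal discrete-event loop. event_calendar is a heap of
    (time, seq, event) entries kept in chronological order."""
    seq = count()                    # breaks ties at equal event times
    while event_calendar:
        clock, _, event = heapq.heappop(event_calendar)
        if clock >= end_time:        # termination event reached
            break                    # update statistics, write report
        for when, new_event in event.process(clock):
            heapq.heappush(event_calendar, (when, next(seq), new_event))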
[Figure: the ATM system, showing the ATM queue (FIFO), the ATM server (resource), and departing customers (entities).]
Entity attributes are characteristics of the entity that are maintained for that entity until
the entity exits the system. For example, to compute the amount of time an entity
waited in a queue location, an attribute is needed to remember when the entity entered the location. For the ATM simulation, one entity attribute is used to remember the customer's time of arrival to the system. This entity attribute is called the
Arrival Time attribute. The simulation program computes how long each customer entity waited in the queue by subtracting the time that the customer entity
arrived to the queue from the value of the simulation clock when the customer entity gained access to the ATM.
State Variables
Two discrete-change state variables are needed to track how the status (state) of the
system changes as customer entities arrive in and depart from the ATM system.
Number of Entities in Queue at time step i, NQi.
ATM Statusi to denote if the ATM is busy or idle at time step i.
Statistical Accumulators
The objective of the example manual simulation is to estimate the expected
amount of time customers wait in the queue and the expected number of customers waiting in the queue. The average time customers wait in queue is a simple
average. Computing this requires that we record how many customers passed
through the queue and the amount of time each customer waited in the queue. The
average number of customers in the queue is a time-weighted average, which is
usually called a time average in simulation. Computing this requires that we not
only observe the queue's contents during the simulation but that we also measure
the amount of time that the queue maintained each of the observed values. We
record each observed value after it has been multiplied (weighted) by the amount
of time it was maintained.
Here's what the simulation needs to tally at each simulation time step i to compute the two performance measures at the end of the simulation (a code sketch follows this list).

Simple-average time in queue.
Record the number of customer entities processed through the queue, Total Processed. Note that the simulation may end before all customer entities in the queue get a turn at the ATM. This accumulator keeps track of how many customers actually made it through the queue.
For a customer processed through the queue, record the time that it waited in the queue. This is computed by subtracting the value of the simulation clock time when the entity enters the queue (stored in the entity attribute array Arrival Time) from the value of the simulation clock time when the entity leaves the queue, ti − Arrival Time.

Time-average number of customers in the queue.
For the duration of the last time step, which is ti − ti−1, and the number of customer entities in the queue during the last time step, which is NQi−1, record the product of ti − ti−1 and NQi−1. Call the product (ti − ti−1)NQi−1 the Time-Weighted Number of Entities in the Queue.
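A minimal Python sketch of these accumulators (our illustration; the class and method names are ours):

class QueueStats:
    """Accumulators for the two ATM performance measures."""
    def __init__(self):
        self.total_processed = 0    # customers that got through the queue
        self.sum_queue_time = 0.0   # running sum of (ti - Arrival Time)
        self.sum_weighted_nq = 0.0  # running sum of (ti - t(i-1)) * NQ(i-1)

    def record_begin_service(self, t_i, arrival_time):
        """Call when a customer leaves the queue and starts service."""
        self.total_processed += 1
        self.sum_queue_time += t_i - arrival_time

    def record_time_step(self, t_i, t_prev, nq_prev):
        """Call at every event to weight the previous queue length
        by how long it was held."""
        self.sum_weighted_nq += (t_i - t_prev) * nq_prev

    def averages(self, total_time):
        return (self.sum_queue_time / self.total_processed,  # simple average
                self.sum_weighted_nq / total_time)           # time average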
Events
There are two types of recurring scheduled events that change the state of the system: arrival events and departure events. An arrival event occurs when a customer
entity arrives to the queue. A departure event occurs when a customer entity completes its transaction at the ATM. Each processing of a customer entity's arrival
to the queue includes scheduling the future arrival of the next customer entity to
the ATM queue. Each time an entity gains access to the ATM, its future departure
from the system is scheduled based on its expected service time at the ATM. We
actually need a third event to end the simulation. This event is usually called the
termination event.
To schedule the time at which the next entity arrives to the system, the simulation needs to generate an interarrival time and add it to the current simulation
clock time, ti . The interarrival time is exponentially distributed with a mean of
3.0 minutes for our example ATM system. Assume that the function E(3.0) returns an exponentially distributed random variate with a mean of 3.0 minutes. The
future arrival time of the next customer entity can then be scheduled by using the
equation ti + E(3.0).
The customer service time at the ATM is exponentially distributed with a
mean of 2.4 minutes. The future departure time of an entity gaining access to the
ATM is scheduled by the equation ti + E(2.4).
Event Calendar
The event calendar maintains the list of active events (events that have been
scheduled and are waiting to be processed) in chronological order. The simulation
progresses by removing the rst event listed on the event calendar, setting the
simulation clock, ti , equal to the time at which the event is scheduled to occur, and
processing the event.
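In code, an event calendar is conveniently represented as a priority queue ordered by event time. The following Python sketch is our illustration; E() is an exponential-variate generator like the one described in Chapter 3:

import heapq, math, random

def E(mean):
    """Exponentially distributed variate via the inverse transformation."""
    return -mean * math.log(1 - random.random())

calendar = []                                    # the event calendar
t = 0.0                                          # simulation clock, ti

heapq.heappush(calendar, (t + E(3.0), "arrive"))  # next customer arrival
heapq.heappush(calendar, (22.0, "end"))           # termination event

t, event = heapq.heappop(calendar)   # advance clock to the first event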
The same interarrival and service time values are used so that we can compare the results of the manual simulation with those produced by the spreadsheet simulation.
Notice that Table 3.2 contains a subscript i in the leftmost column. This subscript denotes the customer entity number as opposed to the simulation time step.
We wanted to point this out to avoid any confusion because of the different uses
of the subscript. In fact, you can ignore the subscript in Table 3.2 as you pick values from the Service Time and Interarrival Time columns.
A discrete-event simulation logic diagram for the ATM system is shown in
Figure 4.5 to help us carry out the manual simulation. Table 4.1 presents the results of the manual simulation after processing 12 events using the simulation
logic diagram presented in Figure 4.5. The table tracks the creation and scheduling of events on the event calendar as well as how the state of the system changes
and how the values of the statistical accumulators change as events are processed
from the event calendar. Although Table 4.1 is completely filled in, it was initially blank until the instructions presented in the simulation logic diagram were executed. As you work through the simulation logic diagram, you should process the information in Table 4.1 from the first row down to the last row, one row at a time (completely filling in a row before going down to the next row). A dash (-) in a cell in Table 4.1 signifies that the simulation logic diagram does not require you to update that particular cell at the current simulation time step. An arrow (↑) in a cell in the table also signifies that the simulation logic diagram does not require you to update that cell at the current time step. However, the arrows serve as a reminder to look up one or more rows above your current position in the table to determine the state of the ATM system. Arrows appear under the Number of Entities in Queue, NQi, column and the ATM Statusi column. The only exception to the use of dashes or arrows is that we keep a running total in the two Cumulative subcolumns in the table for each time step. Let's get the manual simulation started.
i = 0, t0 = 0. As shown in Figure 4.5, the first block after the start position indicates that the model is initialized to its starting conditions. The simulation time step begins at i = 0. The initial value of the simulation clock is zero, t0 = 0. The system state variables are set to ATM Status0 = Idle; Number of Entities in Queue, NQ0 = 0; and the Entity Attribute Array is cleared. This reflects the initial conditions of no customer entities in the queue and an idle ATM. The statistical accumulator Total Processed is set to zero. There are two different Cumulative variables in Table 4.1: one to accumulate the time in queue values of ti − Arrival Time, and the other to accumulate the values of the time-weighted number of entities in the queue, (ti − ti−1)NQi−1. Recall that ti − Arrival Time is the amount of time that entities, which gained access to the ATM, waited in queue. Both Cumulative variables, Σ(ti − Arrival Time) and Σ(ti − ti−1)NQi−1, are initialized to zero. Next, an initial arrival event and termination event are scheduled and placed under the Scheduled Future Events column. The listing of an event is formatted as (Entity Number, Event, Event Time). Entity Number denotes the customer number that the event pertains to (such as the first, second, or third customer). Event is the type of event: a customer arrives, a customer departs, or the simulation ends. Event Time is the future time that the event is to occur.
FIGURE 4.5
Discrete-event simulation logic diagram for the ATM system. The diagram reads: Start; set i = 0; initialize variables and schedule the initial arrival event and termination event (Scheduled Future Events). Then repeat, incrementing i = i + 1 on each pass: update the event calendar by inserting the Scheduled Future Events in chronological order, advance the clock, ti, to the time of the first event on the calendar, and process that event according to its type. An Arrive event asks "Is ATM idle?"; a Depart event asks "Any customers in queue?"; an End event stops the simulation.
TABLE 4.1
Manual Discrete-Event Simulation of the ATM System. For each simulation time step i the table records: the Event Calendar; the Processed Event (Clock ti, Event, Entity Number); the System State (ATM Statusi; the Entity Attribute Array of (Entity Number, Arrival Time) pairs, with the entity at the ATM in position 1 and entities waiting in queue in array positions 2, 3, . . .; and the Number of Entities in Queue, NQi); the Statistical Accumulators (Entities Processed through Queue: Total Processed, Time in Queue ti − Arrival Time, and Cumulative Σ(ti − Arrival Time); Time-Weighted Number of Entities in Queue: (ti − ti−1)NQi−1 and Cumulative Σ(ti − ti−1)NQi−1); and the Scheduled Future Events created while processing the event. The 12 events processed, in chronological order, are Arrive (entity 1) at 2.18; Depart (1) at 2.28; Arrive (2) at 7.91; Depart (2) at 12.37; Arrive (3) at 15.00; Arrive (4) at 15.17; Arrive (5) at 15.74; Depart (3) at 18.25; Arrive (6) at 18.75; Arrive (7) at 19.88; Depart (4) at 20.50; and End at 22.00. By the final row the accumulators read Total Processed = 5, Σ(ti − Arrival Time) = 7.84, and Σ(ti − ti−1)NQi−1 = 13.21.
The event (1, Arrive, 2.18) under the Scheduled Future Events column prescribes that the first customer entity is scheduled to arrive at time 2.18 minutes. The arrival time was generated using the equation t0 + E(3.0). To obtain the value returned from the function E(3.0), we went to Table 3.2, read the first random variate from the Interarrival Time column (a value of 2.18 minutes), and added it to the current value of the simulation clock, t0 = 0. The simulation is to be terminated after 22 minutes. Note the (_, End, 22.00) under the Scheduled Future Events column. For the termination event, no value is assigned to Entity Number because it is not relevant.
i = 1, t1 = 2.18. After the initialization step, the list of scheduled future events is added to the event calendar in chronological order in preparation for the next simulation time step i = 1. The simulation clock is fast forwarded to the time that the next event is scheduled to occur, which is t1 = 2.18 (the arrival time of the first customer to the ATM queue), and then the event is processed. Following the simulation logic diagram, arrival events are processed by first scheduling the future arrival event for the next customer entity using the equation t1 + E(3.0) = 2.18 + 5.73 = 7.91 minutes. Note that the value of 5.73 returned by the function E(3.0) is the second random variate listed under the Interarrival Time column of Table 3.2. This future event is placed under the Scheduled Future Events column in Table 4.1 as (2, Arrive, 7.91). Checking the status of the ATM from the previous simulation time step reveals that the ATM is idle (ATM Status0 = Idle). Therefore, the arriving customer entity immediately flows through the queue to the ATM to conduct its transaction. The future departure event of this entity from the ATM is scheduled using the equation t1 + E(2.4) = 2.18 + 0.10 = 2.28 minutes. See (1, Depart, 2.28) under the Scheduled Future Events column, denoting that the first customer entity is scheduled to depart the ATM at time 2.28 minutes. Note that the value of 0.10 returned by the function E(2.4) is the first random variate listed under the Service Time column of Table 3.2. The arriving customer entity's arrival time is then stored in the first position of the Entity Attribute Array to signify that it is being served by the ATM. ATM Status1 is set to Busy, and the statistical accumulators for Entities Processed through Queue are updated. Add 1 to Total Processed and, since this entity entered the queue and immediately advanced to the idle ATM for processing, record zero minutes in the Time in Queue, t1 − Arrival Time, subcolumn and update this statistic's cumulative value. The statistical accumulators for Time-Weighted Number of Entities in the Queue are updated next. Record zero for (t1 − t0)NQ0 since there were no entities in queue during the previous time step, NQ0 = 0, and update this statistic's cumulative value. Note the arrow entered under the Number of Entities in Queue, NQ1, column. Recall that the arrow is placed there to signify that the number of entities waiting in the queue has not changed from its previous value.
i = 2, t2 = 2.28. Following the loop back around to the top of the simulation logic diagram, we place the two new future events onto the event calendar in chronological order in preparation for the next simulation time step i = 2. The simulation clock is fast forwarded to t2 = 2.28, and the departure event for the first customer entity arriving to the system is processed. Given that there are no customers in the queue from the previous time step, NQ1 = 0 (follow the arrows up to get this value), we simply remove the departed customer from the first position of the Entity Attribute Array and change the status of the ATM to idle, ATM Status2 = Idle. The statistical accumulators do not require updating because there are no customer entities waiting in the queue or leaving the queue. The dashes (-) entered under the statistical accumulator columns indicate that updates are not required. No new future events are scheduled.
As before, we follow the loop back to the top of the simulation logic diagram,
and place any new events, of which there are none at the end of time step i = 2,
onto the event calendar in chronological order in preparation for the next simulation time step i = 3. The simulation clock is fast forwarded to t3 = 7.91, and the
arrival of the second customer entity to the ATM queue is processed. Complete the
processing of this event and continue the manual simulation until the termination
event (__, End, 22.00) reaches the top of the event calendar.
As you work through the simulation logic diagram to complete the manual simulation, you will see that the fourth customer arriving to the system requires that you use logic from a different path in the diagram. When the fourth customer entity arrives to the ATM queue, at simulation time step i = 6, the ATM is busy (see ATM Status5) processing customer entity 3's transaction. Therefore, the fourth customer entity joins the queue and waits to use the ATM. (Don't forget that its arrival invoked the creation of the fifth customer's arrival event.) The fourth entity's arrival time of 15.17 minutes is stored in the last position of the Entity Attribute Array in keeping with the first-in, first-out (FIFO) rule. The Number of Entities in Queue, NQ6, is incremented to 1. Further, the Time-Weighted Number of Entities in the Queue statistical accumulators are updated by first computing (t6 − t5)NQ5 = (15.17 − 15.00)0 = 0 and then recording the result. Next, this statistic's cumulative value is updated. Customers 5, 6, and 7 also arrive to the system finding the ATM busy and therefore take their place in the queue to wait for the ATM.
The fourth customer waited a total of 3.08 minutes in the queue (see the ti − Arrival Time subcolumn) before it gained access to the ATM in time step i = 8 as the third customer departed. The value of 3.08 minutes in the queue for the fourth customer is computed in time step i = 8 as t8 − Arrival Time = 18.25 − 15.17 = 3.08 minutes. Note that Arrival Time is the time that the fourth customer arrived to the queue, and that the value is stored in the Entity Attribute Array.
At time t12 = 22.00 minutes the simulation terminates and tallies the final values for the statistical accumulators, indicating that a total of five customer entities were processed through the queue. The total amount of time that these five
customers waited in the queue is 7.84 minutes. The final cumulative value for Time-Weighted Number of Entities in the Queue is 13.21. Note that at the end of the simulation, two customers are in the queue (customers 6 and 7) and one is at the ATM (customer 5). A few quick observations are worth considering before we discuss how the accumulated values are used to calculate summary statistics for a simulation output report.
This simple and brief (while tedious) manual simulation is relatively easy to follow. But imagine a system with dozens of processes and dozens of factors influencing behavior, such as downtimes, mixed routings, and resource contention. You can see how essential computers are for performing a simulation of any magnitude. Computers have no difficulty tracking the many relationships and updating the numerous statistics that are present in most simulations. Equally important, computers are not error prone and can perform millions of instructions per second with absolute accuracy. We also want to point out that the simulation logic diagram (Figure 4.5) and Table 4.1 were designed to convey the essence of what happens inside a discrete-event simulation program. When you view a trace report of a ProModel simulation in Lab Chapter 8, you will see the similarities between the trace report and Table 4.1. Although the basic process presented is sound, its efficiency could be improved. For example, there is no need to keep both a scheduled future events list and an event calendar. Instead, future events are inserted directly onto the event calendar as they are created. We separated them to facilitate our describing the flow of information in the discrete-event framework.
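To make that flow of information concrete, here is the entire loop in one place: a minimal, hypothetical Python sketch of the ATM model (our own code, not ProModel's and not the book's), with future events pushed straight onto a heap-based event calendar as just suggested. The accumulators mirror the columns of Table 4.1.

    import heapq
    import random

    def simulate_atm(mean_iat=3.0, mean_svc=2.4, end_time=22.0, seed=None):
        # Event-driven simulation of the ATM system, following Figure 4.5.
        rng = random.Random(seed)
        E = lambda mean: rng.expovariate(1.0 / mean)   # exponential variate

        calendar = [(E(mean_iat), "Arrive", 1), (end_time, "End", 0)]
        heapq.heapify(calendar)

        t_prev = 0.0
        queue = []             # arrival times of waiting entities (FIFO)
        atm_busy = False
        next_entity = 2        # number of the next customer to arrive

        total_processed = 0    # entities that gained access to the ATM
        cum_wait = 0.0         # cumulative (ti - Arrival Time)
        cum_area = 0.0         # cumulative (ti - ti-1) * NQi-1

        while calendar:
            t, kind, entity = heapq.heappop(calendar)
            cum_area += (t - t_prev) * len(queue)  # time-weighted queue tally
            t_prev = t
            if kind == "Arrive":
                # Every arrival schedules the next customer's arrival.
                heapq.heappush(calendar, (t + E(mean_iat), "Arrive", next_entity))
                next_entity += 1
                if atm_busy:
                    queue.append(t)                # join the FIFO queue
                else:                              # seize the idle ATM
                    atm_busy = True
                    total_processed += 1           # waited zero minutes
                    heapq.heappush(calendar, (t + E(mean_svc), "Depart", entity))
            elif kind == "Depart":
                if queue:                          # first in queue starts service
                    cum_wait += t - queue.pop(0)
                    total_processed += 1
                    heapq.heappush(calendar, (t + E(mean_svc), "Depart", 0))
                else:
                    atm_busy = False
            else:                                  # "End": terminate the run
                break
        return total_processed, cum_wait, cum_area

    n, wait, area = simulate_atm(seed=1)
    print(n, wait / max(n, 1), area / 22.0)

Because the sketch draws fresh random variates rather than replaying the Table 3.2 stream, a single run will not reproduce Table 4.1 exactly; averaging many independent runs approaches the long-run behavior discussed in Section 4.4.5.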
Simple-Average Statistic
The average time that customer entities waited in the queue for their turn on the ATM during the manual simulation reported in Table 4.1 is a simple-average statistic. Recall that the simulation processed five customers through the queue. Let xi denote the amount of time that the ith customer processed spent in the queue. The average waiting time in queue based on the n = 5 observations is

Average time in queue = (Σ_{i=1}^{5} xi) / 5 = (0 + 0 + 0 + 3.08 + 4.76) / 5 = 7.84 minutes / 5 = 1.57 minutes

The values necessary for computing this average are accumulated under the Entities Processed through Queue columns of the manual simulation table (see the last row of Table 4.1 for the cumulative value Σ(ti − Arrival Time) = 7.84 and Total Processed = 5).
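The same computation is a one-liner in code; a small Python check using the five waiting times read from Table 4.1:

    # Waiting times (minutes) of the five customers processed through
    # the queue in Table 4.1; the first three waited zero minutes.
    waits = [0.0, 0.0, 0.0, 3.08, 4.76]
    print(sum(waits) / len(waits))   # 7.84 / 5 = 1.568, about 1.57 minutes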
Time-Average Statistic
A time-average statistic, sometimes called a time-weighted average, reports the
average value of a response variable weighted by the time duration for each
observed value of the variable:
Time average = Σ_{i=1}^{n} (Ti xi) / T

where xi denotes the value of the ith observation, Ti denotes the time duration of the ith observation (the weighting factor), and T denotes the total duration over
which the observations were collected. Example time-average statistics include
the average number of entities in a system, the average number of entities at a location, and the average utilization of a resource. An average of a time-weighted
response variable in ProModel is computed as a time average.
The average number of customer entities waiting in the queue location for
their turn on the ATM during the manual simulation is a time-average statistic.
Figure 4.6 is a plot of the number of customer entities in the queue during the manual simulation recorded in Table 4.1. The 12 discrete events manually simulated in Table 4.1 are labeled t1, t2, t3, . . . , t11, t12 on the plot. Recall that ti denotes the value of the simulation clock at time step i in Table 4.1, and that its initial value is zero, t0 = 0.
Using the notation from the time-average equation just given, the total simulation time illustrated in Figure 4.6 is T = 22 minutes. Ti denotes the duration of time step i (the distance between adjacent discrete events in Figure 4.6); that is, Ti = ti − ti−1 for i = 1, 2, 3, . . . , 12. And xi denotes the queue's contents (number of customer entities in the queue) during each Ti time interval; therefore, xi = NQi−1 for i = 1, 2, 3, . . . , 12 (recall that in Table 4.1, NQi−1 denotes the number of customer entities in the queue from ti−1 to ti).
FIGURE 4.6
Number of customer entities in the queue during the manual simulation recorded in Table 4.1. The plot is piecewise constant, changing only at the event times t1 through t12 over a total simulation time of T = 22 minutes; interval durations shown on the plot include T1 = 2.18, T2 = 0.1, T5 = 2.63, T6 = 0.17, T7 = 0.57, T8 = 2.51, and T12 = 1.50.
The time-average number of customer entities in the queue is

Average NQ = Σ_{i=1}^{12} (Ti xi) / T = Σ_{i=1}^{12} (ti − ti−1)NQi−1 / T = 13.21 / 22 = 0.60 customers

You may recognize that the numerator of this equation, Σ_{i=1}^{12} (ti − ti−1)NQi−1, calculates the area under the plot of the queue's contents during the simulation (Figure 4.6). The values necessary for computing this area are accumulated under the Time-Weighted Number of Entities in Queue column of Table 4.1 (see the Cumulative value of 13.21 in the table's last row).
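The same area can be accumulated directly from the queue-change times recorded in Table 4.1. The short Python sketch below is our own illustration; it lists each time the queue contents change and sums the rectangles under the plot:

    # (clock time, queue contents from that time on), read from Table 4.1;
    # the queue is empty until the fourth customer arrives at 15.17.
    changes = [(0.0, 0), (15.17, 1), (15.74, 2), (18.25, 1),
               (18.75, 2), (19.88, 3), (20.50, 2)]
    T = 22.0

    area = 0.0
    for (t0, nq), (t1, _) in zip(changes, changes[1:] + [(T, 0)]):
        area += (t1 - t0) * nq               # rectangle under the NQ plot

    print(round(area, 2), round(area / T, 2))   # 13.21 and 0.6 customers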
4.4.5 Issues
Even though this example is a simple and somewhat crude simulation, it provides a good illustration of basic simulation issues that need to be addressed when conducting a simulation study. First, note that the simulation start-up conditions can bias the output statistics. Since the system started out empty, queue content statistics are slightly less than what they might be if we began the simulation with customers already in the system. Second, note that we ran the simulation for only 22 minutes before calculating the results. Had we run longer, it is very likely that the long-run average time in the queue would have been somewhat different (most likely greater) than the short-run value because the simulation did not have a chance to reach a steady state.
These are the kinds of issues that should be addressed whenever running a simulation. The modeler must carefully analyze the output and understand the significance of the results that are given. This example also points to the need for considering beforehand just how long a simulation should be run. These issues are addressed in Chapters 9 and 10.
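One common remedy for start-up bias, sketched below as our own illustration rather than the book's method, is to discard a warm-up period before accumulating statistics (the ProModel report in Figure 4.10 later in the chapter shows exactly such a warm-up time). Recomputing the time-average queue size while ignoring the first five minutes of the 22-minute run shows how much the empty start pulls the statistic down:

    changes = [(0.0, 0), (15.17, 1), (15.74, 2), (18.25, 1),
               (18.75, 2), (19.88, 3), (20.50, 2)]   # from Table 4.1

    def time_average(changes, t_start, t_end):
        # Time-weighted average of a piecewise-constant statistic over
        # [t_start, t_end]; `changes` holds (time, new value) pairs.
        area, prev_t, prev_v = 0.0, t_start, 0
        for t, v in changes:
            if t <= t_start:
                prev_v = v            # value in effect when the window opens
                continue
            if t >= t_end:
                break
            area += (t - prev_t) * prev_v
            prev_t, prev_v = t, v
        return (area + (t_end - prev_t) * prev_v) / (t_end - t_start)

    print(time_average(changes, 0.0, 22.0))   # 0.60: the full run
    print(time_average(changes, 5.0, 22.0))   # about 0.78: warm-up excluded

Excluding the empty start raises the average, consistent with the observation that beginning with an empty system biases the queue statistics low.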
FIGURE 4.7
Typical components of simulation software: a modeling interface and modeling processor that produce model data; a simulation interface and simulation processor that turn model data into simulation data; and an output interface and output processor that produce output data.
The modeling interface is used for entering and editing model information. External files used in the simulation are specified here, as well as run-time options (number of replications and so on).
Through the simulation interface the user can, for example, request snapshot reports, pan or zoom the layout, and so forth. If visual interactive capability is provided, the user is even permitted to make changes dynamically to model variables with immediate visual feedback of the effects of such changes.
The animation speed can be adjusted and animation can even be disabled by
the user during the simulation. When unconstrained, a simulation is capable of
running as fast as the computer can process all of the events that occur within the
simulated time. The simulation clock advances instantly to each scheduled event;
the only central processing unit (CPU) time of the computer that is used is what is
necessary for processing the event logic. This is how simulation is able to run in
compressed time. It is also the reason why large models with millions of events
take so long to simulate. Ironically, in real life activities take time while events
take no time. In simulation, events take time while activities take no time. To slow
down a simulation, delay loops or system timers are used to create pauses between
events. These techniques give the appearance of elapsing time in an animation. In
some applications, it may even be desirable to run a simulation at the same rate as
a real clock. These real-time simulations are achieved by synchronizing the simulation clock with the computer's internal system clock. Human-in-the-loop simulations (such as operator training simulators) and hardware-in-the-loop simulations (testing of new equipment and control systems) are examples of real-time simulations.
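The delay-loop idea can be sketched in a few lines. The following Python fragment is purely illustrative (commercial products handle this internally); it paces a list of scheduled events against the computer's wall clock:

    import heapq
    import time

    def run_real_time(calendar, speed=1.0):
        # `calendar` is a min-heap of (simulated minutes, event) pairs.
        # speed=1.0 plays events back in real time; speed=60.0 runs one
        # simulated hour per real minute.
        start = time.monotonic()
        while calendar:
            sim_time, event = heapq.heappop(calendar)
            due = start + (sim_time * 60.0) / speed   # minutes -> seconds
            delay = due - time.monotonic()
            if delay > 0:
                time.sleep(delay)        # idle until the event is "due"
            print(f"t = {sim_time:.2f} min: {event}")

    run_real_time([(0.5, "Arrive"), (1.0, "Depart")], speed=10.0)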
Animation is usually displayed during the simulation itself, although some simulation products create an animation file that can be played back at the end of the simulation. In addition to animated figures, dynamic graphs and history plots can be displayed during the simulation.
Animation and dynamically updated displays and graphs provide a visual representation of what is happening in the model while the simulation is running. Animation comes in varying degrees of realism, from three-dimensional animation to simple animated flowcharts. Often, the only output from the simulation that is of interest is what is displayed in the animation. This is particularly true when simulation is used for facilitating conceptualization or for communication purposes.
A lot can be learned about model behavior by watching the animation (a picture is worth a thousand words, and an animation is worth a thousand pictures). Animation can range from simple circles moving from box to box to detailed, realistic graphical representations. The strategic use of graphics should be planned in advance to make the best use of them. While insufficient animation can weaken the message, excessive use of graphics can distract from the central point to be made. It is always good to dress up the simulation graphics for the final presentation; however, such embellishments should be deferred at least until after the model has been debugged.
For most simulations where statistical analysis is required, animation is no substitute for the postsimulation summary, which gives a quantitative overview of the entire system performance. Basing decisions on the animation alone reflects shallow thinking and can even result in unwarranted conclusions.
Typical measures reported in the postsimulation summary include
Resource utilization.
Queue sizes.
Waiting times.
Processing rates.
FIGURE 4.9
ProModel animation provides useful feedback.
The animation is shown against a static background where the action takes place. This background might be a CAD layout imported into the model. The dynamic animation objects that move around on the background during the simulation include entities (parts, customers, and so on) and resources (people, fork trucks, and so forth). Animation also includes dynamically updated counters, indicators, gauges, and graphs that display count, status, and statistical information (see Figure 4.9).
FIGURE 4.10
Summary report of simulation activity.

--------------------------------------------------------------------------------
General Report
Output from C:\ProMod4\models\demos\Mfg_cost.mod [Manufacturing Costing Optimization]
Date: Feb/27/2003    Time: 06:50:05 PM
--------------------------------------------------------------------------------
Scenario        : Model Parameters
Replication     : 1 of 1
Warmup Time     : 5 hr
Simulation Time : 15 hr
--------------------------------------------------------------------------------

LOCATIONS

                                           Average
Location     Scheduled             Total   Minutes    Average    Maximum   Current
Name         Hours      Capacity   Entries Per Entry  Contents   Contents  Contents
-----------  ---------  --------   ------- ---------  --------   --------  --------
Receive      10         2          21      57.1428    2          2         2
NC Lathe 1   10         1          57      10.1164    0.961065   1         1
NC Lathe 2   10         1          57      9.8918     0.939725   1         1
Degrease     10         2          114     10.1889    1.9359     2         2
Inspect      10         1          113     4.6900     0.883293   1         1
Bearing Que  10         100        90      34.5174    5.17762    13        11
Loc1         10         5          117     25.6410    5          5         5

RESOURCES

                             Number    Average    Average   Average
Resource         Scheduled   Of Times  Minutes    Minutes   Minutes   % Blocked
Name      Units  Hours       Used      Per Usage  Travel    Travel    In Travel  % Util
                                                  To Use    To Park
--------  -----  ---------   --------  ---------  --------  --------  ---------  ------
CellOp.1  1      10          122       2.7376     0.1038    0.0000    0.00       57.76
CellOp.2  1      10          118       2.7265     0.1062    0.0000    0.00       55.71
CellOp.3  1      10          115       2.5416     0.1020    0.0000    0.00       50.67
CellOp    3      30          355       2.6704     0.1040    0.0000    0.00       54.71

ENTITY ACTIVITY

                    Current    Average    Average   Average    Average    Average
Entity    Total     Quantity   Minutes    Minutes   Minutes    Minutes    Minutes
Name      Exits     In System  In System  Moving    Wait for   In         Blocked
                                                    Res, etc.  Operation
-------   -----     ---------  ---------  -------   ---------  ---------  -------
Pallet    19        2          63.1657    0.0000    31.6055    1.0000     30.5602
Blank     0         7          -          -         -          -          -
Cog       79        3          52.5925    0.8492    3.2269     33.5332    14.9831
Reject    33        0          49.5600    0.8536    2.4885     33.0656    13.1522
Bearing   78        12         42.1855    0.0500    35.5899    0.0000     6.5455
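A useful consistency check on such a report: for each location, Average Contents should equal Total Entries times Average Minutes Per Entry divided by the scheduled minutes (600 here), a flow-balance identity known as Little's law. A quick hypothetical Python check against three rows of the LOCATIONS table above:

    # entries * minutes-per-entry / scheduled minutes ~ average contents
    rows = [("Receive", 21, 57.1428, 2.0),
            ("NC Lathe 1", 57, 10.1164, 0.961065),
            ("Bearing Que", 90, 34.5174, 5.17762)]
    for name, entries, min_per_entry, reported in rows:
        print(name, round(entries * min_per_entry / 600.0, 4), reported)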
FIGURE 4.11
Time-series graph
showing changes in
queue size over time.
FIGURE 4.12
Histogram of queue
contents.
Simulation products range from simulators to simulation languages, differing chiefly in ease of use and flexibility.
FIGURE 4.14
Ease of use versus flexibility of simulation products. The axes run from hard to easy (ease of use) and from low to high (flexibility): early simulators were easy to use but offered little flexibility, early languages were flexible but hard to use, and current best-of-breed products score high on both dimensions.
Simulation is a technology that will continue to evolve as related technologies improve and more time is devoted to the development of the software. Products will become easier to use, with more intelligence being incorporated into the software itself. Evidence of this trend can already be seen in the optimization and other time-saving utilities that are appearing in simulation products. Animation and other graphical visualization techniques will continue to play an important role in simulation. As 3-D and other graphic technologies advance, these features will also be incorporated into simulation products.
Simulation products targeted at vertical markets are on the rise. This trend is driven by efforts to make simulation easier to use and more solution oriented. Specific areas where dedicated simulators have been developed include call center management, supply chain management, and high-speed processing. At the same time that many simulation applications are becoming more narrowly focused, others are becoming more global, looking at the entire enterprise or value chain in a hierarchical fashion from top to bottom.
Perhaps the most dramatic change in simulation will be in the area of software interoperability and technology integration. Historically, simulation has been viewed as a stand-alone, project-based technology. Simulation models were built to support an analysis project, to predict the performance of complex systems, and to select the best alternative from a few well-defined alternatives. Typically these projects were time-consuming and expensive, and relied heavily on the expertise of a simulation analyst or consultant. The models produced were generally single-use models that were discarded after the project.
In recent years, the simulation industry has seen increasing interest in extending the useful life of simulation models by using them on an ongoing basis (Harrell and Hicks 1998). Front-end spreadsheets and push-button user interfaces are making such models more accessible to decision makers. In these flexible simulation models, controlled changes can be made to models throughout the system life cycle. This trend is growing to include dynamic links to databases and other data sources, enabling entire models actually to be built and run in the background using data already available from other enterprise applications.
The trend to integrate simulation as an embedded component in enterprise applications is part of a larger development of software components that can be distributed over the Internet. This movement is being fueled by three emerging information technologies: (1) component technology that delivers true object orientation; (2) the Internet or World Wide Web, which connects business communities and industries; and (3) Web service technologies such as J2EE and Microsoft's .NET. These technologies promise to enable parallel and distributed model execution and provide a mechanism for maintaining distributed model repositories that can be shared by many modelers (Fishwick 1997). The interest in Web-based simulation, like all other Web-based applications, continues to grow.
4.9 Summary
Most manufacturing and service systems are modeled using dynamic, stochastic, discrete-event simulation. Discrete-event simulation works by converting all activities to events and consequent reactions. Events are either time-triggered or condition-triggered, and are therefore processed either chronologically or when a satisfying condition has been met.
Simulation models are generally defined using commercial simulation software that provides convenient modeling constructs and analysis tools. Simulation
software consists of several modules with which the user interacts. Internally,
model data are converted to simulation data, which are processed during the
simulation. At the end of the simulation, statistics are summarized in an output
database that can be tabulated or graphed in various forms. The future of simulation is promising and will continue to incorporate exciting new technologies.
References
Bowden, Royce. "The Spectrum of Simulation Software." IIE Solutions, May 1998, pp. 44-46.
Fishwick, Paul A. "Web-Based Simulation." In Proceedings of the 1997 Winter Simulation Conference, ed. S. Andradottir, K. J. Healy, D. H. Withers, and B. L. Nelson. Piscataway, NJ: Institute of Electrical and Electronics Engineers, 1997, pp. 100-109.
Gottfried, Byron S. Elements of Stochastic Process Simulation. Englewood Cliffs, NJ: Prentice Hall, 1984, p. 8.
Haider, S. W., and J. Banks. "Simulation Software Products for Analyzing Manufacturing Systems." Industrial Engineering, July 1986, p. 98.
Harrell, Charles R., and Don Hicks. "Simulation Software Component Architecture for Simulation-Based Enterprise Applications." In Proceedings of the 1998 Winter Simulation Conference, ed. D. J. Medeiros, E. F. Watson, J. S. Carson, and M. S. Manivannan. Piscataway, NJ: Institute of Electrical and Electronics Engineers, 1998, pp. 1717-21.
GETTING STARTED
For which of you, intending to build a tower, sitteth not down first, and counteth the cost, whether he have sufficient to finish it? Lest haply, after he hath laid the foundation, and is not able to finish it, all that behold it begin to mock him, Saying, This man began to build, and was not able to finish.
Luke 14:28-30
5.1 Introduction
In this chapter we look at how to begin a simulation project. Specifically, we discuss how to select a project and set up a plan for successfully completing it. Simulation is not something you do simply because you have a tool and a process to which it can be applied. Nor should you begin a simulation without forethought and preparation. A simulation project should be carefully planned following basic project management principles and practices. The questions answered in this chapter center on how to select a simulation project and how to plan it for successful completion.
While specific tasks may vary from project to project, the basic procedure for doing simulation is essentially the same. Much as in building a house, you are better off following a time-proven methodology than approaching it haphazardly. In this chapter, we present the preliminary activities for preparing to conduct a simulation study. We then cover the steps for successfully completing a simulation project. Subsequent chapters elaborate on these steps. Here we focus primarily
on the first step: defining the objective, scope, and requirements of the study. Poor planning, ill-defined objectives, unrealistic expectations, and unanticipated costs can turn a simulation project sour. For a simulation project to succeed, the objectives and scope should be clearly defined, and requirements identified and quantified for conducting the project.
Many vendors offer guarantees on their products so they may be returned after some trial period. This allows you to try out the software to see how well it fits your needs.
The services provided by the software provider can be a lifesaver. If working late on a project, it may be urgent to get immediate help with a modeling or software problem. Basic and advanced training classes, good documentation, and lots of example models can provide invaluable resources for becoming proficient in the use of the software.
When selecting simulation software, it is important to assess the total cost of ownership. There often tends to be an overemphasis on the purchase price of the software with little regard for the cost associated with learning and using the software. It has been recommended that simulation software be purchased on the basis of productivity rather than price (Banks and Gibson 1997). The purchase price of the software can sometimes be only a small fraction of the cost in time and labor that results from having a tool that is difficult to use or inadequate for the application.
Other considerations that may come into play when selecting a product include quality of the documentation, hardware requirements (for example, is a
graphics accelerator card required?), and available consulting services.
are refined and sometimes redefined with each iteration. The decision to push toward further refinement should be dictated by the objectives and constraints of the study as well as by sensitivity analysis, which determines whether additional refinement will yield meaningful results. Even after the results are presented, there are often requests to conduct additional experiments. Describing this iterative process, Pritsker and Pegden (1979) observe:
The stages of simulation are rarely performed in a structured sequence beginning with problem definition and ending with documentation. A simulation project may involve false starts, erroneous assumptions which must later be abandoned, reformulation of the problem objectives, and repeated evaluation and redesign of the model. If properly done, however, this iterative process should result in a simulation model which properly assesses alternatives and enhances the decision making process.
The steps of a simulation study, each of which may be revisited as the project iterates, are:
1. Define objective, scope, and requirements.
2. Collect and analyze system data.
3. Build model.
4. Validate model.
5. Conduct experiments.
6. Present results.
The remainder of this chapter focuses on the first step of defining the objective, scope, and requirements of the study. The remaining steps will be discussed in later chapters.
1. What is the best way to route material, customers, or calls through the
system?
2. What is the best way to allocate personnel for a particular set of tasks?
3. What is the best schedule for preventive maintenance?
4. How much preventive maintenance should be performed?
5. What is the best priority rule for selecting jobs and tasks?
Data-gathering responsibilities.
Experimentation.
Form of results.
FIGURE 5.2
Confining the model to impacting activities. Of the chain of activities A through E, the scope of the model includes only the subset of activities that impacts the performance measures under study.
FIGURE 5.3
Effect of level of detail on model development time. Development time increases as the level of detail moves from the minimum required toward a one-to-one correspondence with the system.
At one extreme is a white box model that is very detailed and produces a one-to-one correspondence between the model and the system.
Determining the appropriate level of detail is an important decision. Too much detail makes it difficult and time-consuming to develop and debug the model. Too little detail may make the model unrealistic by oversimplifying the process. Figure 5.3 illustrates how the time to develop a model is affected by the level of detail. It also highlights the importance of including only enough detail to meet the objectives of the study.
The level of detail is determined largely by the degree of accuracy required in the results. If only a rough estimate is being sought, it may be sufficient to model just the flow sequence and processing times. If, on the other hand, a close answer is needed, all of the elements that drive system behavior should be precisely modeled.
Dates should be set for completing the data-gathering phase because, left uncontrolled, it could go on indefinitely. One lesson you learn quickly is that good data are elusive, and you can always spend more time trying to refine the data. Nearly all models are based partially on assumptions, simply because complete and accurate information is usually lacking. The project team, in cooperation with stakeholders in the system, will need to agree on the assumptions to be made in the model.
Presentation graphics are most effective when they focus attention on the area of interest. If lots of color and detail are added to the animation, it may detract from the key issues. Usually the best approach is to keep stationary or background graphics simple, perhaps displaying only a schematic of the layout using neutral colors. Entities or other dynamic graphics can then be displayed more colorfully to make them stand out. Sometimes the most effective presentation is a realistic 3-D animation. Other times the flow of entities along a flowchart consisting of simple boxes and arrows may be more effective.
Another effective use of animation in presenting the results is to run two or more scenarios side by side, displaying a scoreboard that shows how they compare on one or two key performance measures. The scoreboard may even include a bar graph or other chart that is dynamically updated to compare results.
Most decision makers such as managers need to have only a few key items of information for making the decision. It should be remembered that people, not the model, make the final decision. With this in mind, every effort should be made to help the decision maker clearly understand the options and their associated consequences. The use of charts helps managers visualize and focus their attention on key decision factors. Charts are attention grabbers and are much more effective in making a point than plowing through a written document or sheets of computer printout.
Common pitfalls that can derail a simulation project include
Unclear objectives.
Unskilled modelers.
Unavailable data.
Unmanaged expectations.
Unsupportive management.
Underestimated requirements.
Uninvolved process owner(s).
If the right procedure is followed, and the necessary time and resources are committed to the project, simulation is always going to provide some benefit to the decision-making process. The best way to ensure success is to make sure that everyone involved is educated in the process and understands the benefits and limitations of simulation.
5.8 Summary
Simulation projects are almost certain to fail if there is little or no planning. Doing simulation requires some preliminary work so that the appropriate resources and personnel are in place. Beginning a simulation project requires selecting the right application, defining objectives, acquiring the necessary tools and resources, and planning the work to be performed. Applications should be selected that hold the greatest promise for achieving company goals.
Simulation is most effective when it follows a logical procedure. Objectives should be clearly stated and a plan developed for completing the project. Data gathering should focus on defining the system and formulating a conceptual model. A simulation should then be built that accurately yet minimally captures the system definition. The model should be verified and validated to ensure that the results can be relied upon. Experiments should be run that are oriented toward meeting the original objectives. Finally, the results should be presented in a way
that clearly represents the findings of the study. Simulation is an iterative process that requires redefinition and fine-tuning at all stages in the process.
Objectives should be clearly defined and agreed upon to avoid wasted efforts. They should be documented and followed to avoid scope creep. A concisely stated objective and a written scope of work can help keep a simulation study on track.
Specific items to address when planning a study include defining the model scope, describing the level of detail required, assigning data-gathering responsibilities, specifying the types of experiments, and deciding on the form of the results. Simulation objectives together with time, resources, and budget constraints drive the rest of the decisions that are made in completing a simulation project. Finally, it is people, not the model, who ultimately make the decision.
The importance of involvement of process owners and stakeholders throughout the project cannot be overemphasized. Management support throughout the project is vital. Ultimately, it is only when expectations are met or exceeded that a simulation can be deemed a success.
CASE STUDY A

The Problem
AST Research Inc., founded in 1980, has become a multibillion-dollar PC manufacturer. We assemble personal computers and servers in Fort Worth, Texas, and offshore. For a long time, we optimized (planned) our assembly procedures using traditional methods: gathering time-and-motion data via stopwatches and videotape, performing simple arithmetic calculations to obtain information about the operation and performance of the assembly line, and using seat-of-the-pants guesstimates to optimize assembly line output and labor utilization.
The Model
In December 1994 a new vice president joined AST. Management had long been committed to increasing the plant's efficiency and output, and our new vice president had experience in using simulation to improve production. We began using ProModel simulation as a tool for optimizing our assembly lines and to improve our confidence that changes proposed in the assembly process would work out in practice. The results have been significant. Now when we implement changes, we are confident that those changes in fact will improve things.
The first thing we did was learn how to use the simulation software. Then we attempted to construct a model of our current operations. This would serve two important functions: it would tell us whether we really understood how to use the software, and it would validate our understanding of our own assembly lines. If we could not construct a model of our assembly line that agreed with the real one, that would mean that there was a major flaw either in our ability to model the system or in our understanding of our own operations.
Building a model of our own operations sounded simple enough. After all, we had an exhaustive catalog of all the steps needed to assemble a computer, and (from previous data collection efforts) information on how long each operation took. But building a model that agreed with our real-world situation turned out to be a challenging yet tremendously educational activity.
For example, one of our early models showed that we were producing several thousand units in a few hours. Since we were not quite that good (we were off by at least a factor of 10), we concluded that we had a major problem in the model we had built, so we went back to study things in more detail. In every case, our early models failed because we had overlooked or misunderstood how things actually worked on our assembly lines.
Eventually, we built a model that worked and agreed reasonably well with our real-world system out in the factory. To make use of the model, we generated ideas that we thought would cut down our assembly time and then simulated them in our model.
We examined a number of changes proposed by our engineers and others, and then simulated the ones that looked most promising. Some proposed changes were counterproductive according to our simulation results. We also did a detailed investigation of our testing stations to determine whether it was more efficient to move computers to be tested into
the testing station via FIFO (first-in, first-out) or LIFO (last-in, first-out). Modeling showed us that FIFO was more efficient. When we implemented that change, we realized the gain we had predicted.
Simulation helped us avoid buying more expensive equipment. Some of our material-handling specialists predicted, based on their experience, that if we increased throughput by 30 percent, we would have to add some additional, special equipment to the assembly floor or risk some serious blockages. Simulation showed us that was not true, and in practice the simulation turned out to be correct.
We determined that we could move material faster if we gave material movers a specific pattern to follow instead of just doing things sequentially. For example, in moving certain items from our testing area, we determined that the most time-efficient way would be to move shelf 1 first, followed by shelf 4, then shelf 3, and so on.
After our first round of making serious changes to our operation and simulating them, our actual production was within a few percentage points of our predicted production. Also, by combining some tasks, we were able to reduce our head count on each assembly line significantly.
We have completed several rounds of changes, and today, encouraged by the experience of our new investor, Samsung, we have made a significant advance that we call Vision 5. The idea of Vision 5 is to have only five people in each cell assembling computers. Although there was initially some skepticism about whether this concept would work, our simulations showed that it would, so today we have converted one of our focused factories to this concept and have experienced additional benefits. Seeing the benefits from that effort has caused our management to increase its commitment to simulation.
The Results
Simulation has proven its effectiveness at AST Research. We have achieved a number of useful, measurable goals, although for competitive reasons specific numbers cannot be provided.
Other benefits included increased ability to explain and justify proposed changes to management through the use of the graphic animation. Simulation helped us make fewer missteps in terms of implementing changes that could have impaired our output. We were able to try multiple scenarios in our efforts to improve productivity and efficiency at comparatively low cost and risk. We also learned that the best simulation efforts invite participation by more disciplines in the factory, which helps in terms of team building. All of these benefits were accomplished at minimal cost. These gains have also caused a cultural shift at AST, and because we have a tool that facilitates production changes, the company is now committed to continuous improvement of our assembly practices.
Our use of simulation has convinced us that it produces real, measurable results and, equally important, has helped us avoid making changes that we thought made common sense but that, when simulated, turned out to be ineffective. Because of that demonstrable payoff, simulation has become a key element of our toolkit in optimizing production.
Questions
1. What were the objectives for using simulation at AST?
2. Why was simulation better than the traditional methods they were using to achieve
these objectives?
3. What common-sense solution was disproved by using simulation?
4. What were some of the unexpected side benets from using simulation?
5. What insights on the use of simulation did you gain from this case study?
CASE STUDY B
DURHAM REGIONAL HOSPITAL SAVES $150,000
ANNUALLY USING SIMULATION TOOLS
Bonnie Lowder
Management Engineer, Premier
Durham Regional Hospital, a 450-bed facility located in Durham, North Carolina, has been
serving Durham County for 25 years. This public, full-service, acute-care facility is facing
the same competition that is now a part of the entire health care industry. With that in mind,
Durham Regional Hospital is making a conscious effort to provide the highest quality of
care while also controlling costs.
To assist with cost control efforts, Durham Regional Hospital uses Premier's Customer-Based Management Engineering program. Premier's management engineers are very
involved with the hospital's reengineering and work redesign projects. Simulation is one of the tools the management engineers use to assist in the redesign of hospital services and processes. Since the hospital was preparing to add an outpatient services area that was to open in May 1997, a MedModel simulation project was requested by Durham Regional Hospital to see how this Express Services area would impact its other outpatient areas.
The project involved the addition of an outpatient Express Services area. The Express Services area is made up of two radiology rooms, four phlebotomy lab stations, a patient interview room, and an EKG room. The model was set up to examine which kinds of patients would best be serviced in that area, what hours the clinic would operate, and what staffing levels would be necessary to provide optimum care.
The Model
Data were collected from each department with potential Express Services patients. The
new Express Services area would eliminate the current reception desk in the main radiology department; all radiology outpatients would have their order entry at Express Services.
In fiscal year 1996, the radiology department registered 21,159 outpatients. Of those, one-third could have had their procedure performed in Express Services. An average of 18 outpatient surgery patients are seen each week for their preadmission testing. All these patients
could have their preadmission tests performed in the Express Services area. The laboratory
sees approximately 14 walk-in phlebotomy patients per week. Of those, 10 patients are
simple collections and 4 are considered complex. The simple collections can be performed
by anyone trained in phlebotomy. The complex collections should be done by skilled lab
personnel. The collections for all of these patients could be performed in Express Services.
Based on the data, 25 patients a day from the Convenient Care Clinic will need simple
X rays and could also use the Express Services area. Procedure times for each patient were
determined from previous data collection and observation.
The model was built in two months. Durham Regional Hospital had used simulation in the past for both Emergency Department and Ambulatory Care Unit redesign projects, and thus the management team was convinced of its efficacy. After the model was completed, it was presented to department managers from all affected areas. The model was presented to the assembled group in order to validate the data and assumptions. To test for validity, the model was run for 30 replications, with each replication lasting a period of two weeks. The results were measured against known values.
The Results
The model showed that routing all Convenient Care Clinic patients through Express
Services would create a bottleneck in the imaging rooms. This would create unacceptable
wait times for the radiology patients and the Convenient Care patients. Creating a model
scenario where Convenient Care patients were accepted only after 5:00 P.M. showed that
the anticipated problem could be eliminated. The model also showed that the weekend volume would be very low. Even at minimum staffing levels, the radiology technicians and clerks would be underutilized. The recommendation was made to close the Express Services area on the weekends. Finally, the model showed that the staffing levels could be
lower than had been planned. For example, the workload for the outpatient lab tech drops
off after 6:00 P.M. The recommendation was to eliminate outpatient lab techs after 6:00 P.M.
Further savings could be achieved by cross-training the radiology technicians and possibly
the clerks to perform simple phlebotomy. This would also provide for backup during busy
times. The savings for the simulation efforts were projected to be $148,762 annually. These savings were identified from the difference in staffing levels initially requested for Express Services and the levels that were validated after the simulation model results were analyzed, as well as the closing of the clinic on weekends. This model would also be used in the future to test possible changes to Express Services. Durham Regional Hospital would be able to make minor adjustments to the area and visualize the outcome before implementation. Since this was a new area, they would also be able to test minor changes before the area was opened.
The results of the model allowed the hospital to avoid potential bottlenecks in the radiology department, reduce the proposed staffing levels in the Express Services area, and validate that the clinic should be closed on weekends. As stated by Dottie Hughes, director of Radiology Services: "The simulation model allowed us to see what changes needed to be made before the area is opened. By making these changes now, we anticipate a shorter wait time for our patients than if the simulation had not been used." The simulation results were able to show that an annual savings of $148,762 could be expected by altering some of the preconceived Express Services plan.
Future Applications
Larry Suitt, senior vice president, explains, "Simulation has proved to be a valuable tool for our hospital. It has allowed us to evaluate changes in processes before money is spent on construction. It has also helped us to redesign existing services to better meet the needs of our patients." Durham Regional Hospital will continue to use simulation in new projects to
improve its health care processes. The hospital is responsible for the ambulance service for
the entire county. After the 911 call is received, the hospital's ambulance service picks up
the patient and takes him or her to the nearest hospital. Durham Regional Hospital is planning to use simulation to evaluate how relocating some of the ambulances to other stations
will affect the response time to the 911 calls.
References
Banks, Jerry, and Randall R. Gibson. "Selecting Simulation Software." IIE Solutions, May 1997, pp. 29–32.
Kelton, W. D. "Statistical Issues in Simulation." In Proceedings of the 1996 Winter Simulation Conference, ed. J. Charnes, D. Morrice, D. Brunner, and J. Swain, 1996, pp. 47–54.
Pritsker, Alan B., and Claude Dennis Pegden. Introduction to Simulation and SLAM. New York: John Wiley & Sons, 1979.
Schrage, Michael. Serious Play: How the World's Best Companies Simulate to Innovate. Cambridge, MA: Harvard Business School Press, 1999.
DATA COLLECTION
AND ANALYSIS
6.1 Introduction
In the previous chapter, we discussed the importance of having clearly defined objectives and a well-organized plan for conducting a simulation study. In this chapter, we look at the data-gathering phase of a simulation project that defines the system being modeled. The result of the data-gathering effort is a conceptual or mental model of how the system is configured and how it operates. This conceptual model may take the form of a written description, a flow diagram, or even a simple sketch on the back of an envelope. It becomes the basis for the simulation model that will be created.
Data collection is the most challenging and time-consuming task in simulation. For new systems, information is usually very sketchy and only roughly estimated. For existing systems, there may be years of raw, unorganized data to sort through. Information is seldom available in a form that is directly usable in building a simulation model. It nearly always needs to be filtered and massaged to get it into the right format and to reflect the projected conditions under which the system is to be analyzed. Many data-gathering efforts end up with lots of data but little useful information. Data should be gathered purposefully to avoid wasting not only the modeler's time but also the time of individuals who are supplying the data.
This chapter presents guidelines and procedures for gathering data. Statistical
techniques for analyzing data and fitting probability distributions to data are also
discussed. The following questions are answered:
What is the best procedure to follow when gathering data?
What types of data should be gathered?
records of repair times, for example, often lump together the time spent
waiting for repair personnel to become available and the actual time
spent performing the repair. What you would like to do is separate the
waiting time from the actual repair time because the waiting time is
a function of the availability of the repair person, which may vary
depending on the system operation.
4. Look for common groupings. When dealing with lots of variety in a simulation, such as hundreds of part types or customer profiles, it helps to look for common groupings or patterns. If, for example, you are modeling a process that has 300 entity types, it may be difficult to get information on the exact mix and all of the varied routings that can occur. Having such detailed information is usually too cumbersome to work with even if you did have it. The solution is to reduce the data to common behaviors and patterns. One way to group common data is to first identify general categories into which all data can be assigned. Then the percentage of cases that fall within each category is calculated or estimated. It is not uncommon for beginning modelers to attempt to use actual logged input streams such as customer arrivals or material shipments when building a model. After struggling to define hundreds of individual arrivals or routings, it begins to dawn on them that they can group this information into a few categories and assign probabilities that any given instance will fall within a particular category. This allows dozens and sometimes hundreds of unique instances to be described in a few brief commands (see the sketch following this list). The secret to identifying common groupings is to think probabilistically.
5. Focus on essence rather than substance. A system definition for modeling purposes should capture the cause-and-effect relationships and ignore the meaningless (to simulation) details. This is called system abstraction and seeks to define the essence of system behavior rather than the substance. A system should be abstracted to the highest level possible while still preserving the essence of the system operation. Using this black box approach to system definition, we are not concerned about the nature of the activity being performed, such as milling, grinding, or inspection. We are interested only in the impact that the activity has on the use of resources and the delay of entity flow. A proficient modeler is constantly thinking abstractly about the system operation and avoids getting caught up in the mechanics of the process.
6. Separate input variables from response variables. First-time modelers often confuse input variables that define the operation of the system with response variables that report system performance. Input variables define how the system works (activity times, routing sequences, and the like) and should be the focus of the data gathering. Response variables describe how the system responds to a given set of input variables (amount of work in process, resource utilization, throughput times, and so on).
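As a minimal sketch of this probabilistic grouping, the snippet below draws entity types from assumed category percentages; the category names and weights are illustrative, not taken from the text:

```python
import random

# Illustrative categories and their estimated shares of the overall mix;
# in practice the percentages come from grouping the logged data.
routing_mix = {"standard": 0.70, "rework": 0.20, "expedite": 0.10}

def next_entity_type():
    """Draw an entity type according to the category probabilities."""
    categories = list(routing_mix)
    weights = list(routing_mix.values())
    return random.choices(categories, weights=weights, k=1)[0]

# A few hundred arrivals reproduce the mix without enumerating each one.
sample = [next_entity_type() for _ in range(1000)]
for category in routing_mix:
    print(category, sample.count(category) / len(sample))
```

In this way a handful of categories and probabilities stands in for hundreds of individually logged arrivals or routings.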
Of course, most of these steps overlap, and some iteration will occur as
objectives change and assumptions are refined. The balance of this chapter is
devoted to providing recommendations and examples of how to perform each of
these steps.
operational information is easy to define. If, on the other hand, the process has evolved into an informal operation with no set rules, it can be very difficult to define. For a system to be simulated, operating policies that are undefined and ambiguous must be codified into defined procedures and rules. If decisions and outcomes vary, it is important to at least define this variability statistically using
probability expressions or distributions.
[FIGURE 6.1: Entity flow diagram. Product A flows from Station 1 to Station 2 and then branches to Station 3A or Station 3B.]
[FIGURE 6.2: Entity flow diagram for patient processing at Dr. Brown's office. Patients flow from the check-in counter to the waiting room, to one of three exam rooms, and then to the checkout counter.]
go and defines what happens to the entity, not where it happens. An entity flow diagram, on the other hand, is more of a routing chart that shows the physical movement of entities through the system from location to location. An entity flow diagram should depict any branching that may occur in the flow such as routings to alternative work centers or rework loops. The purpose of the entity flow diagram is to document the overall flow of entities in the system and to provide a visual aid for communicating the entity flow to others. A flow diagram is easy to understand and gets everyone thinking in the same way about the system. It can easily be expanded as additional information is gathered to show activity times, where resources are used, and so forth.
Using the entity flow diagram and information provided, a summary description of operation is created using a table format, as shown in Table 6.1.
TABLE 6.1  Summary Description of Operation for Dr. Brown's Office

Location          Activity Time  Activity Resource  Next Location     Move Trigger            Move Time  Move Resource
Check-in counter  N(1,.2) min.   Secretary          Waiting room      None                    0.2 min.   None
Waiting room      None           None               Exam room         When room is available  0.8 min.*  Nurse
Exam room         N(15,4) min.   Doctor             Checkout counter  None                    0.2 min.   None
Checkout counter  N(3,.5) min.   Secretary          Exit              None                    None       None
Notice that the description of operation really provides the details of the entity flow diagram. This detail is needed for defining the simulation model. The times associated with activities and movements can be just estimates at this stage. The important thing to accomplish at this point is simply to describe how entities are processed through the system.
The entity flow diagram, together with the description of operation, provides a good data document that can be expanded as the project progresses. At this point, it is a good idea to conduct a structured walk-through of the operation using the entity flow diagram as the focal point. Individuals who are familiar with the operation should be involved in this review to ensure that the description of operation is accurate and complete.
Based on this description of operation, a first cut at building the model can begin. Using ProModel, a model for simulating Dr. Brown's practice can be built in a matter of just a few minutes. Little translation is needed as both the diagram (Figure 6.2) and the data table (Table 6.1) can be entered pretty much in the same way they are shown. The only additional modeling information required to build a running model is the interarrival time of patients.
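The table translates almost directly into simulation software. As a language-neutral illustration (not the ProModel implementation), here is a minimal sketch of the same flow in Python with the SimPy library. The patient interarrival time is not given in the text, so an exponential distribution with an assumed 6-minute mean is used, and the nurse escort is folded into the 0.8 min. move time:

```python
import random
import simpy

def patient(env, name, secretary, exam_rooms, doctor):
    # Check-in counter: secretary, N(1, .2) min., then a 0.2 min. move.
    with secretary.request() as req:
        yield req
        yield env.timeout(max(0.0, random.normalvariate(1, 0.2)))
    yield env.timeout(0.2)
    # Waiting room: hold until one of the three exam rooms is available,
    # then a 0.8 min. move (the nurse escort is not modeled separately).
    with exam_rooms.request() as room:
        yield room
        yield env.timeout(0.8)
        # Exam room: doctor, N(15, 4) min.
        with doctor.request() as req:
            yield req
            yield env.timeout(max(0.0, random.normalvariate(15, 4)))
    yield env.timeout(0.2)
    # Checkout counter: secretary, N(3, .5) min., then exit.
    with secretary.request() as req:
        yield req
        yield env.timeout(max(0.0, random.normalvariate(3, 0.5)))
    print(f"{name} exits at t = {env.now:.1f} min.")

def arrivals(env, secretary, exam_rooms, doctor):
    i = 0
    while True:
        # Interarrival time is an assumption (exponential, 6 min. mean).
        yield env.timeout(random.expovariate(1 / 6.0))
        i += 1
        env.process(patient(env, f"patient {i}", secretary, exam_rooms, doctor))

env = simpy.Environment()
secretary = simpy.Resource(env, capacity=1)   # shared by check-in and checkout
exam_rooms = simpy.Resource(env, capacity=3)  # Exam room (3)
doctor = simpy.Resource(env, capacity=1)
env.process(arrivals(env, secretary, exam_rooms, doctor))
env.run(until=480)  # one 8-hour day, in minutes
```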
Getting a basic model up and running early in a simulation project helps hold the interest of stakeholders. It also helps identify missing information and motivates team members to try to fill in these information gaps. Additional questions about the operation of the system are usually raised once a basic model is running. Some of the questions that begin to be asked are "Have all of the routings been accounted for?" and "Have any entities been overlooked?" In essence, modeling the system actually helps define and validate system data.
At this point, any numerical values such as activity times, arrival rates, and
others should also be firmed up. Having a running model enables estimates and
other assumptions to be tested to see if it is necessary to spend additional time getting more accurate information. For existing systems, obtaining more accurate
data is usually accomplished by conducting time studies of the activity or event
under investigation. A sample is gathered to represent all conditions under which
the activity or event occurs. Any biases that do not represent normal operating
conditions are eliminated. The sample size should be large enough to provide
an accurate picture yet not so large that it becomes costly without really adding
additional information.
For comparative studies in which two design alternatives are evaluated, the fact that assumptions are made is less significant because we are evaluating relative performance, not absolute performance. For example, if we are trying to determine whether on-time deliveries can be improved by assigning tasks to a resource by due date rather than by first-come, first-served, a simulation can provide useful information to make this decision without necessarily having completely accurate data. Because both models use the same assumptions, it may be
possible to compare relative performance. We may not know what the absolute
performance of the best option is, but we should be able to assess fairly accurately
how much better one option is than another.
Some assumptions will naturally have a greater influence on the validity of a model than others. For example, in a system with large processing times compared to move times, a move time that is off by 20 percent may make little or no difference in system throughput. On the other hand, an activity time that is off by 20 percent could make a 20 percent difference in throughput. One way to assess the influence of an assumption on the validity of a model is through sensitivity analysis. Sensitivity analysis, in which a range of values is tested for potential impact on model performance, can indicate just how accurate an assumption needs to be. A decision can then be made to firm up the assumption or to leave it as is.
If, for example, the degree of variation in a particular activity time has little or no
impact on system performance, then a constant activity time may be used. At the
other extreme, it may be found that even the type of distribution has a noticeable
impact on model behavior and therefore needs to be selected carefully.
A simple approach to sensitivity analysis for a particular assumption is to run three different scenarios showing (1) a "best" or most optimistic case, (2) a "worst" or most pessimistic case, and (3) a "most likely" or best-guess case. These runs will help determine the extent to which the assumption influences model behavior. They will also help assess the risk of relying on the particular assumption.
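A minimal sketch of the three-scenario approach, using a hypothetical run_simulation stand-in for an actual model run (the scenario values are illustrative):

```python
# Hypothetical stand-in for a full model run; a real version would execute
# the simulation with the given assumption and return a measured response.
def run_simulation(mean_activity_time):
    return int(480 / mean_activity_time)  # rough jobs-per-day placeholder

scenarios = {
    "optimistic (best case)": 0.8,     # assumed activity times, in minutes
    "most likely (best guess)": 1.0,
    "pessimistic (worst case)": 1.3,
}

# If the response barely changes across the three runs, the assumption
# needs no further refinement; a large spread signals real risk.
for label, activity_time in scenarios.items():
    print(f"{label}: {activity_time} min -> {run_simulation(activity_time)} jobs")
```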
[FIGURE 6.3: Descriptive statistics for a sample data set of 100 observations.]

TABLE 6.2  Observed Inspection Times (minutes)

0.41  0.67  0.98  1.70  0.51  0.78  0.49  1.30  1.60  0.65
0.89  0.64  0.89  1.40  0.72  0.49  0.92  1.30  1.20  0.82
0.59  0.88  0.62  1.00  0.76  1.10  1.50  1.40  0.49  0.52
0.98  0.57  0.97  1.00  0.61  0.74  1.10  1.30  0.35  0.52
0.47  0.87  1.30  0.88  0.37  0.97  0.64  0.96  0.41  0.80
0.70  0.43  1.20  0.52  0.66  0.93  0.96  0.95  0.54  0.72
0.94  0.97  1.10  1.30  0.75  0.76  0.87  1.60  0.83  1.20
0.39  1.20  1.00  0.59  1.10  0.66  1.10  0.58  1.20  0.59
0.92  1.50  0.44  0.35  0.76  0.57  0.50  1.10  0.99  1.60
distribution), and stationarity (the distribution of the data doesn't change with time) should be determined. Using data analysis software such as Stat::Fit, data sets can be automatically analyzed, tested for usefulness in a simulation, and matched to the best-fitting underlying distribution. Stat::Fit is bundled with ProModel and can be accessed from the Tools menu after opening ProModel. To illustrate how data are analyzed and converted to a form for use in simulation, let's take a look at a data set containing 100 observations of an inspection operation time, shown in Table 6.2.
By entering these data or importing them from a file into Stat::Fit, a descriptive analysis of the data set can be performed. The summary of this analysis is displayed in Figure 6.3. These parameters describe the entire sample collection. Because reference will be made later to some of these parameters, a brief definition of each is given below.
Mean: the average value of the data.
Median: the value of the middle observation when the data are sorted in ascending order.
TABLE 6.3  100 Outdoor Temperature Readings from 8:00 A.M. to 8:00 P.M.

57  63  70  75  80  83  79  75  71  67
57  63  71  75  81  83  79  74  70  67
58  64  72  76  81  83  78  73  70  66
59  64  72  77  82  82  77  73  70  66
59  65  73  78  83  82  77  72  70  66
60  66  74  78  83  81  76  72  69  65
60  66  73  79  84  81  76  72  69  66
62  68  74  80  84  80  75  71  68  65
62  68  75  80  83  81  74  71  68  65
62  69  75  81  84  80  75  71  68  64
The autocorrelation at lag j is estimated by

\[
\hat{\rho}_j = \frac{\sum_{i=1}^{n-j} (x_i - \bar{x})(x_{i+j} - \bar{x})}{\sigma^2 (n - j)}
\]
FIGURE 6.4
Scatter plot showing
uncorrelated data.
FIGURE 6.5
Scatter plot showing
correlated temperature
data.
where j is the lag or distance between data points; σ is the standard deviation of the population, approximated by the standard deviation of the sample; and x̄ is the sample mean. The calculation is carried out to 1/5 of the length of the data set, where diminishing pairs start to make the calculation unreliable.
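The estimator above translates directly into code. A minimal sketch in Python (Stat::Fit performs this automatically; the short temperature series here is only illustrative):

```python
import statistics

def autocorrelation(x, max_lag=None):
    """Sample autocorrelation of series x for lags 1..max_lag (default n/5)."""
    n = len(x)
    if max_lag is None:
        max_lag = n // 5  # beyond n/5 the diminishing pairs become unreliable
    mean = statistics.fmean(x)
    var = statistics.pvariance(x)  # sample-based approximation of sigma^2
    return [
        sum((x[i] - mean) * (x[i + j] - mean) for i in range(n - j))
        / (var * (n - j))
        for j in range(1, max_lag + 1)
    ]

# A strongly trending series shows autocorrelation near 1 at small lags.
temps = [57, 58, 60, 63, 66, 70, 74, 78, 81, 83, 84, 83, 80, 76, 71, 67]
print([round(r, 2) for r in autocorrelation(temps)])
```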
This calculation of autocorrelation assumes that the data are taken from a stationary process; that is, the data would appear to come from the same distribution
regardless of when the data were sampled (that is, the data are time invariant). In
the case of a time series, this implies that the time origin may be shifted without
FIGURE 6.6
Autocorrelation
graph showing
noncorrelation.
FIGURE 6.7
Autocorrelation graph
showing correlation.
affecting the statistical characteristics of the series. Thus the variance for the whole
sample can be used to represent the variance of any subset. If the process being
studied is not stationary, the calculation of autocorrelation is more complex.
The autocorrelation value varies between −1 and 1 (that is, between positive
and negative correlation). If the autocorrelation is near either extreme, the data are
autocorrelated. Figure 6.6 shows an autocorrelation plot for the 100 inspection
time observations from Table 6.2. Notice that the values are near zero, indicating
FIGURE 6.8
Runs test based on
points above and
below the median and
number of turning
points.
little or no correlation. The numbers in parentheses below the x axis are the maximum autocorrelation in both the positive and negative directions.
Figure 6.7 is an autocorrelation plot for the sampled temperatures in
Table 6.3. The graph shows a broad autocorrelation.
Runs Tests
The runs test looks for runs in the data that might indicate data correlation. A run
in a series of observations is the occurrence of an uninterrupted sequence of numbers showing the same trend. For instance, a consecutive set of increasing or
decreasing numbers is said to provide "runs up" or "runs down," respectively. Two
types of runs tests that can be made are the median test and the turning point test.
Both of these tests can be conducted automatically on data using Stat::Fit. The runs
test for the 100 sample inspection times in Table 6.2 is summarized in Figure 6.8.
The result of each test is either do not reject the hypothesis that the series is random
or reject that hypothesis with the level of significance given. The level of significance is the probability that a rejected hypothesis is actually true; that is, that the test rejects the randomness of the series when the series is actually random.
The median test measures the number of runs (that is, sequences of numbers) above and below the median. The run can be a single number above or below
the median if the numbers adjacent to it are in the opposite direction. If there are
too many or too few runs, the randomness of the series is rejected. This median
runs test uses a normal approximation for acceptance or rejection that requires
that the number of data points above or below the median be greater than 10.
The turning point test measures the number of times the series changes direction (see Johnson, Kotz, and Kemp 1992). Again, if there are too many turning
points or too few, the randomness of the series is rejected. This turning point runs
test uses a normal approximation for acceptance or rejection that requires more
than 12 data points.
While there are many other runs tests for randomness, some of the most sensitive require larger data sets, in excess of 4,000 numbers (see Knuth 1981).
The number of runs in a series of observations indicates the randomness of
those observations. A few runs indicate strong correlation, point to point. Several
runs may indicate cyclic behavior.
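A minimal sketch of the two run counts just described; the full tests then compare these counts against their normal approximations, which Stat::Fit automates:

```python
import statistics

def median_runs(x):
    """Count runs of values above/below the median (values equal to the
    median are ignored, one common convention)."""
    med = statistics.median(x)
    signs = [v > med for v in x if v != med]
    return 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)

def turning_points(x):
    """Count points where the series changes direction (local max or min)."""
    return sum(
        1
        for prev, cur, nxt in zip(x, x[1:], x[2:])
        if (cur - prev) * (nxt - cur) < 0
    )

# First ten inspection times from Table 6.2, purely for illustration.
data = [0.41, 0.67, 0.98, 1.70, 0.51, 0.78, 0.49, 1.30, 1.60, 0.65]
print("runs about the median:", median_runs(data))
print("turning points:", turning_points(data))
```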
bimodal, as shown in Figure 6.9. The fact that there are two clusters of data indicates that there are at least two distinct causes of downtimes, each producing
different distributions for repair times. Perhaps after examining the cause of the
downtimes, it is discovered that some were due to part jams that were quickly
fixed while others were due to mechanical failures that took longer to repair.
One type of nonhomogeneous data occurs when the distribution changes over
time. This is different from two or more distributions manifesting themselves over
the same time period such as that caused by mixed types of downtimes. An example of a time-changing distribution might result from an operator who works 20 percent faster during the second hour of a shift than during the first hour. Over long periods of time, a learning curve phenomenon occurs where workers perform at a
faster rate as they gain experience on the job. Such distributions are called nonstationary or time variant because of their time-changing nature. A common example
of a distribution that changes with time is the arrival rate of customers to a service
facility. Customer arrivals to a bank or store, for example, tend to occur at a rate that
fluctuates throughout the day. Nonstationary distributions can be detected by plotting subgroups of data that occur within successive time intervals. For example,
sampled arrivals between 8 A.M. and 9 A.M. can be plotted separately from arrivals
between 9 A.M. and 10 A.M., and so on. If the distribution is of a different type or is
the same distribution but shifted up or down such that the mean changes value over
time, the distribution is nonstationary. This fact will need to be taken into account
when dening the model behavior. Figure 6.10 is a plot of customer arrival rates for
a department store occurring by half-hour interval between 10 A.M. and 6 P.M.
[FIGURE 6.9: Bimodal distribution of downtimes indicating multiple causes; frequency of occurrence versus repair time, with separate clusters for part jams and mechanical failures.]
[FIGURE 6.10: Change in rate of customer arrivals between 10 A.M. and 6 P.M.; rate of arrival versus time of day (10:00 A.M. to 6:00 P.M.).]
Note that while the type of distribution (Poisson) is the same for each period, the
rate (and hence the mean interarrival time) changes every half hour.
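A minimal sketch of this subgrouping check, using assumed arrival timestamps (generated here only so there is something to bin) and half-hour intervals:

```python
import random
from collections import Counter

# Assumed example: arrival timestamps (minutes after 10 A.M.) drawn with a
# rate that rises over the day, standing in for collected field data.
random.seed(1)
arrivals, t = [], 0.0
while t < 480:                      # 10 A.M. to 6 P.M. = 480 minutes
    rate = 0.5 + t / 480            # arrivals per minute, increasing with time
    t += random.expovariate(rate)
    arrivals.append(t)

# Bin by half-hour interval; a clear trend in the counts across bins is
# evidence that the arrival distribution is nonstationary.
counts = Counter(int(a // 30) for a in arrivals)
for interval in sorted(counts):
    start = 10 + interval / 2       # clock hour at the start of the interval
    print(f"{start:5.1f}h: {counts[interval]} arrivals")
```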
The second case we look at is where two sets of data have been gathered and we
desire to know whether they come from the same population or are identically distributed. Situations where this type of testing is useful include the following:
Interarrival times have been gathered for different days and you want to
know if the data collected for each day come from the same distribution.
Activity times for two different operators were collected and you want to
know if the same distribution can be used for both operators.
Time to failure has been gathered on four similar machines and you are
interested in knowing if they are all identically distributed.
One easy way to tell whether two sets of data have the same distribution is to run Stat::Fit and see what distribution best fits each data set. If the same distribution fits both sets of data, you can assume that they come from the same population. If in doubt, they can simply be modeled as separate distributions.
Several formal tests exist for determining whether two or more data sets can be assumed to come from identical populations. Some of them apply to specific families of distributions, such as analysis of variance (ANOVA) tests for normally distributed data. Other tests are distribution independent and can be applied to compare data sets having any distribution, such as the Kolmogorov-Smirnov two-sample test and the chi-square multisample test (see Hoover and Perry 1990). The Kruskal-Wallis test is another nonparametric test because no assumption is made about the distribution of the data (see Law and Kelton 2000).
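Where SciPy is available (an assumption; the text itself works through Stat::Fit), the two distribution-independent tests mentioned above are one-liners. The operator samples below are illustrative:

```python
from scipy import stats

# Hypothetical activity times collected for two different operators.
operator_a = [0.41, 0.67, 0.98, 0.51, 0.78, 0.49, 0.89, 0.64, 0.72, 0.92]
operator_b = [0.59, 0.88, 0.62, 1.00, 0.76, 1.10, 0.57, 0.97, 0.61, 0.74]

# Kolmogorov-Smirnov two-sample test: same underlying distribution?
ks_stat, ks_p = stats.ks_2samp(operator_a, operator_b)

# Kruskal-Wallis test (nonparametric; handles two or more samples).
kw_stat, kw_p = stats.kruskal(operator_a, operator_b)

# Large p-values mean we cannot reject that the samples are identically
# distributed, so one distribution could serve for both operators.
print(f"K-S: D={ks_stat:.3f}, p={ks_p:.3f}")
print(f"Kruskal-Wallis: H={kw_stat:.3f}, p={kw_p:.3f}")
```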
Using an empirical distribution in ProModel requires that the data be converted to either a continuous or discrete frequency distribution. A continuous frequency distribution summarizes the percentage of values that fall within given intervals. In the case of a discrete frequency distribution, it is the percentage of times a particular value occurs. For continuous frequency distributions the intervals need not be of equal width. During the simulation, random variates are generated using a continuous, piecewise-linear empirical distribution function based on the grouped data (see Law and Kelton 2000). The drawbacks to using an empirical distribution as input to a simulation are twofold. First, an insufficient sample size may create an artificial bias or choppiness in the distribution that does not represent the true underlying distribution. Second, empirical distributions based on a limited sample size often fail to capture rare extreme values that may exist in the population from which they were sampled. As a general rule, empirical distributions should be used only for rough-cut modeling or when the shape is very irregular and doesn't permit a good distribution fit.
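A minimal sketch of generating random variates from a continuous, piecewise-linear empirical distribution of the kind described above; the interval edges and densities are assumed example values, not taken from the text:

```python
import bisect
import random

# Assumed continuous frequency distribution: interval edges and the
# fraction of observations falling within each interval.
breakpoints = [0.3, 0.6, 0.9, 1.2, 1.5, 1.8]   # interval edges
densities   = [0.25, 0.40, 0.20, 0.10, 0.05]   # must sum to 1

# Build the cumulative distribution at each breakpoint.
cdf = [0.0]
for d in densities:
    cdf.append(cdf[-1] + d)

def empirical_variate(rng=random):
    """Inverse-CDF sampling with linear interpolation within an interval."""
    u = rng.random()
    i = bisect.bisect_right(cdf, u) - 1          # interval containing u
    i = min(i, len(densities) - 1)               # guard the upper edge
    frac = (u - cdf[i]) / (cdf[i + 1] - cdf[i])  # position within interval
    return breakpoints[i] + frac * (breakpoints[i + 1] - breakpoints[i])

print([round(empirical_variate(), 3) for _ in range(5)])
```

Note that no value outside [0.3, 1.8] can ever be generated, which is exactly the rare-extreme-value limitation described above.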
Representing the data using a theoretical distribution involves fitting a theoretical distribution to the data. During the simulation, random variates are generated from the probability distribution to provide the simulated random values. Fitting a theoretical distribution to sample data smooths artificial irregularities in the data and ensures that extreme values are included. For these reasons, it is best to use theoretical distributions if a reasonably good fit exists. Most popular simulation software provides utilities for fitting distributions to numerical data, thus relieving the modeler from performing this complicated procedure. A modeler should be careful when using a theoretical distribution to ensure that if an unbounded distribution is used, the extreme values that can be generated are realistic. Techniques for controlling the range of unbounded distributions in a simulation model are presented in Chapter 7.
TABLE 6.4  Discrete Frequency Distribution of Arrivals per Five-Minute Interval

Arrivals per 5-Minute Interval   Frequency
 0                               15
 1                               11
 2                               19
 3                               16
 4                                8
 5                                8
 6                                7
 7                                4
 8                                5
 9                                3
10                                3
11                                1
Frequency distributions can be graphically shown using a histogram. A histogram depicting the discrete frequency distribution in Table 6.4 is shown in
Figure 6.11.
Continuous Frequency Distributions
A continuous frequency distribution defines ranges of values within which sample values fall. Going back to our inspection time sample consisting of 100 observations, we can construct a continuous distribution for these data because values can take on any value within the interval specified. A frequency distribution for the
data has been constructed using Stat::Fit (Figure 6.12).
Note that a relative frequency or density is shown (third column) as well as
cumulative (ascending and descending) frequencies. All of the relative densities
add up to 1, which is verified by the last value in the ascending cumulative frequency column.
A histogram based on this frequency distribution was also created in Stat::Fit
and is shown in Figure 6.13.
Note that the frequency distribution and histogram for our sample inspection times are based on dividing the data into six even intervals or cells. While there are guidelines for determining the best interval or cell size, the most important thing is to make sure that enough cells are defined to show a gradual transition in values, yet not so many cells that groupings become obscured. The number of intervals should be based on the total number of observations and the variance in the data. One rule of thumb is to set the number of intervals to the cube root of twice the number of samples, that is, (2N)^{1/3}. This is the default method used in Stat::Fit. The goal is to use the minimum number of intervals possible without losing information about the spread of the data.
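A quick check of this rule of thumb for our 100-observation sample:

```python
def interval_count(n):
    """Stat::Fit's default rule: cube root of twice the number of samples."""
    return round((2 * n) ** (1 / 3))

print(interval_count(100))  # (2 * 100) ** (1/3) is about 5.85 -> 6 intervals
```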
[FIGURE 6.11: Histogram showing arrival count per five-minute interval; frequency (0 to 20) versus arrival count.]
[FIGURE 6.12: Frequency distribution for 100 observed inspection times.]
FIGURE 6.13
Histogram distribution
for 100 observed
inspection times.
generate random variates in the simulation rather than relying only on a sampling of observations from the population. Defining the theoretical distribution that best fits sample data is called distribution fitting. Before discussing how theoretical distributions are fit to data, it is helpful to have a basic understanding of at least the most common theoretical distributions.
There are about 12 statistical distributions that are commonly used in simulation (Banks and Gibson 1997). Theoretical distributions can be defined by a simple set of parameters usually defining dispersion and density. A normal distribution, for example, is defined by a mean value and a standard deviation value. Theoretical distributions are either discrete or continuous, depending on whether a finite set of values within the range or an infinite continuum of possible values within a range can occur. Discrete distributions are seldom used in manufacturing and service system simulations because they can usually be defined by simple probability expressions. Below is a description of a few theoretical distributions sometimes used in simulation. These particular ones are presented here purely because of their familiarity and ease of understanding. Beginners to simulation usually feel most comfortable using these distributions, although the precautions given for their use should be noted. An extensive list of theoretical distributions and their applications is given in Appendix A.
Binomial Distribution
The binomial distribution is a discrete distribution that expresses the probability (p) that a particular condition or outcome can occur in n trials. We call an occurrence of the outcome of interest a "success" and its nonoccurrence a "failure." For a binomial distribution to apply, each trial must be a Bernoulli trial: it must
be independent and have only two possible outcomes (success or failure), and the probability of a success must remain constant from trial to trial. The mean of a binomial distribution is given by np, where n is the number of trials and p is the probability of success on any given trial. The variance is given by np(1 − p).
A common application of the binomial distribution in simulation is to test for the number of defective items in a lot or the number of customers of a particular type in a group of customers. Suppose, for example, it is known that the probability of a part coming out of an operation being defective is .1 and we inspect the parts in batch sizes of 10. The number of defectives for any given sample can be determined by generating a binomial random variate. The probability mass function for the binomial distribution in this example is shown in Figure 6.14.
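A minimal sketch of generating binomial random variates for this inspection example by direct Bernoulli trials (a library routine would serve equally well):

```python
import random

def binomial_variate(n, p):
    """Number of successes in n independent Bernoulli trials."""
    return sum(random.random() < p for _ in range(n))

# Defectives found in each of five inspected batches of 10 parts, p = .1.
print([binomial_variate(10, 0.1) for _ in range(5)])

# The long-run average approaches the theoretical mean np = 1.0
# (and the variance approaches np(1 - p) = 0.9).
sample = [binomial_variate(10, 0.1) for _ in range(10_000)]
print(sum(sample) / len(sample))
```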
Uniform Distribution
A uniform or rectangular distribution is used to describe a process in which the outcome is equally likely to fall between the values of a and b. In a uniform distribution, the mean is (a + b)/2. The variance is expressed by (b − a)^2/12. The probability density function for the uniform distribution is shown in Figure 6.15.
[FIGURE 6.14: The probability mass function of a binomial distribution (n = 10, p = .1); p(x) from 0 to 0.6 over x = 0 to 10.]
[FIGURE 6.15: The probability density function of a uniform distribution; f(x) is constant between a and b.]
[FIGURE 6.16: The probability density function of a triangular distribution.]
The uniform distribution is often used in the early stages of simulation projects because it is a convenient and well-understood source of random variation. In the real world, it is extremely rare to find an activity time that is uniformly distributed because nearly all activity times have a central tendency or mode. Sometimes a uniform distribution is used to represent a worst-case test for variation when doing sensitivity analysis.
Triangular Distribution
A triangular distribution is a good approximation to use in the absence of data, especially if a minimum, maximum, and most likely value (mode) can be estimated. These are the three parameters of the triangular distribution. If a, m, and b represent the minimum, mode, and maximum values respectively of a triangular distribution, then the mean of a triangular distribution is (a + m + b)/3. The variance is defined by (a^2 + m^2 + b^2 − am − ab − mb)/18. The probability density function for the triangular distribution is shown in Figure 6.16.
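A quick empirical check of these formulas using Python's built-in triangular sampler:

```python
import random
import statistics

a, m, b = 2, 5, 15  # minimum, mode, maximum of a triangular distribution

sample = [random.triangular(a, b, m) for _ in range(100_000)]
print(statistics.fmean(sample))      # approaches (a + m + b) / 3 = 7.33
print(statistics.pvariance(sample))  # approaches the variance formula below

print((a + m + b) / 3)
print((a**2 + m**2 + b**2 - a*m - a*b - m*b) / 18)  # 139/18, about 7.72
```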
The weakness of the triangular distribution is that values in real activity times
rarely taper linearly, which means that the triangular distribution will probably
create more variation than the true distribution. Also, extreme values that may be
rare are not captured by a triangular distribution. This means that the full range of
values of the true distribution of the population may not be represented by the triangular distribution.
Normal Distribution
The normal distribution (sometimes called the Gaussian distribution) describes phenomena that vary symmetrically above and below the mean (hence the bell-shaped curve). While the normal distribution is often selected for defining activity times, in practice manual activity times are rarely ever normally distributed. They are nearly always skewed to the right (the ending tail of the distribution is longer than the beginning tail). This is because humans can sometimes take significantly longer than the mean time, but usually not much less than the mean
[FIGURE 6.17: The probability density function for a normal distribution.]
[FIGURE 6.18: The probability density function for an exponential distribution.]
For an exponential distribution, the standard deviation is the same as the mean. The probability density function of the exponential distribution is shown in Figure 6.18.
FIGURE 6.19
Ranking distributions by goodness of fit for inspection time data set.
8.2   8.6   7.4  11.1  12.3  16.8  15.2   8.3  14.5  10.7
13.8  15.2  13.5   9.2  16.3  14.9  12.9  14.3   7.5  15.1
10.3   9.6  11.1  11.8   9.5  16.3  12.4  16.9  13.2  10.7
Interval   Observed         H0 Probability   H0 Expected       (oi − ei)^2/ei
           Frequency (oi)   (pi)             Frequency (ei)
7–9         7               .20              8                 0.125
9–11        7               .20              8                 0.125
11–13      10               .20              8                 0.50
13–15       8               .20              8                 0
15–17       8               .20              8                 0
\[
p(7 \le x < 9) = \int_{7}^{9} f(x)\,dx = \int_{7}^{9} \frac{1}{10}\,dx = \left[\frac{x}{10}\right]_{7}^{9} = \frac{9}{10} - \frac{7}{10} = \frac{2}{10} = .20.
\]
For a uniform distribution the probabilities for all intervals are equal, so the
remaining intervals also have a hypothesized probability of .20.
Step 3: Calculate the expected frequency for each cell (ei). The expected
frequency (e) for each cell (i ) is the expected number of observations that
would fall into each interval if the null hypothesis were true. It is calculated
by multiplying the total number of observations (n) by the probability ( p)
that an observation would fall within each cell. So for each cell, the
expected frequency (ei ) equals npi .
In our example, since the hypothesized probability (p) of each cell is the same, the expected frequency for every cell is ei = npi = 40 × .2 = 8.
Step 4: Adjust cells if necessary so that all expected frequencies are at
least 5. If the expected frequency of any cell is less than 5, the cells must
be adjusted. This "rule of five" is a conservative rule that provides
satisfactory validity of the chi-square test. When adjusting cells, the easiest
approach is to simply consolidate adjacent cells. After any consolidation,
the total number of cells should be at least 3; otherwise you no longer have
a meaningful differentiation of the data and, therefore, will need to gather
additional data. If you merge any cells as the result of this step, you will
need to adjust the observed frequency, hypothesized probability, and
expected frequency of those cells accordingly.
In our example, the expected frequency of each cell is 8, which meets
the minimum requirement of 5, so no adjustment is necessary.
Step 5: Calculate the chi-square statistic. The equation for calculating the chi-square statistic is

\[
\chi^2_{\text{calc}} = \sum_{i=1}^{k} \frac{(o_i - e_i)^2}{e_i}.
\]

If the fit is good, the chi-square statistic will be small. For our example,

\[
\chi^2_{\text{calc}} = \sum_{i=1}^{5} \frac{(o_i - e_i)^2}{e_i} = .125 + .125 + .50 + 0 + 0 = 0.75.
\]
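The same computation in code, with the critical value looked up via SciPy rather than a chi-square table (note that the degrees of freedom depend on how many distribution parameters, if any, were estimated from the data):

```python
from scipy.stats import chi2

observed = [7, 7, 10, 8, 8]
expected = [8, 8, 8, 8, 8]

chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(chi_sq)  # 0.75

# Degrees of freedom are k - s - 1, where k is the number of cells and s is
# the number of parameters estimated from the data.
for s in (0, 2):
    df = len(observed) - s - 1
    print(df, chi2.ppf(0.95, df))  # .05 critical values: 9.49 (df=4), 5.99 (df=2)

# 0.75 falls well below either critical value, so the hypothesized
# distribution is not rejected.
```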
FIGURE 6.20
Visual comparison
between beta
distribution and a
histogram of the 100
sample inspection time
values.
[FIGURE 6.21: Normal distribution with mean = 1 and standard deviation = .25; x axis from .25 to 1.75.]
[FIGURE 6.22: A triangular distribution with minimum = 2, mode = 5, and maximum = 15.]
random variable X will have values generated for X where min ≤ X < max. Thus values will never be equal to the maximum value (in this case 20). Because generated values are automatically truncated when used in a context requiring an integer, only integer values that are evenly distributed from 1 to 19 will occur (this is effectively a discrete uniform distribution).
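A quick demonstration of this truncation effect, drawing a continuous uniform value on [1, 20) and converting it to an integer:

```python
import random
from collections import Counter

# A continuous uniform draw on [1, 20), truncated to an integer, yields the
# values 1 through 19 with equal probability; 20 itself can never occur.
draws = [int(1 + 19 * random.random()) for _ in range(100_000)]
counts = Counter(draws)
print(min(counts), max(counts))   # 1 19
print(counts[20])                 # 0
```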
Objective
The objective of the study is to determine station utilization and throughput of the
system.
Entity Flow Diagram
[FIGURE: Entity flow diagram for the monitor assembly system. Monitors flow through Station 1 and Station 2 to Inspection; rejected monitors are routed back as rework, and 25" monitors continue on to Station 3 (see the processing sequence below).]
Entities
19" monitor
21" monitor
25" monitor
Workstation Information

Workstation   Buffer Capacity   Defective Rate
Station 1     5                 5%
Station 2     8                 8%
Station 3     5                 0%
Inspection    5                 0%
Processing Sequence

Entity        Station      Operation Time (min.)
19" monitor   Station 1    0.8, 1, 1.5
              Station 2    0.9, 1.2, 1.8
              Inspection   1.8, 2.2, 3
21" monitor   Station 1    0.8, 1, 1.5
              Station 2    1.1, 1.3, 1.9
              Inspection   1.8, 2.2, 3
25" monitor   Station 1    0.9, 1.1, 1.6
              Station 2    1.2, 1.4, 2
              Inspection   1.8, 2.3, 3.2
              Station 3    0.5, 0.7, 1
Monitor Size   Probability
19"            .6
21"            .3
25"            .1
Move Times
All movement is on an accumulation conveyor with the following times:

From         To           Time (seconds)
Station 1    Station 2    12
Station 2    Inspection   15
Inspection   Station 3    12
Inspection   Station 1    20
Inspection   Station 2    14
Station 1    Inspection   18
Move Triggers
Entities move from one location to the next based on available capacity of the
input buffer at the next location.
Work Schedule
Stations are scheduled to operate eight hours a day.
Assumption List
No downtimes (downtimes occur too infrequently).
Dedicated operators at each workstation are always available during the
scheduled work time.
Rework times are half of the normal operation times.
6.13 Summary
Data for building a model should be collected systematically with a view of how the data are going to be used in the model. Data are of three types: structural, operational, and numerical. Structural data consist of the physical objects that make up the system. Operational data define how the elements behave. Numerical data quantify attributes and behavioral parameters.
When gathering data, primary sources should be used first, such as historical records or specifications. Developing a questionnaire is a good way to request information when conducting personal interviews. Data gathering should start with structural data, then operational data, and finally numerical data. The first piece of the puzzle to be put together is the routing sequence because everything else hinges on the entity flow.
Numerical data for random variables should be analyzed to test for independence and homogeneity. Also, a theoretical distribution should be fit to the data if there is an acceptable fit. Some data are best represented using an empirical distribution. Theoretical distributions should be used wherever possible.
Data should be documented, reviewed, and approved by concerned individuals. This data document becomes the basis for building the simulation model and provides a baseline for later modification or for future studies.
19. While doing your homework one afternoon, you notice that you are
frequently interrupted by friends. You decide to record the times
between interruptions to see if they might be exponentially distributed.
Here are 30 observed times (in minutes) that you have recorded;
conduct a goodness-of-t test to see if the data are exponentially
distributed. (Hint: Use the data average as an estimate of the mean. For
the range, assume a range between 0 and infinity. Divide the cells based
on equal probabilities ( pi ) for each cell rather than equal cell intervals.)
2.08   2.96  16.17  14.57   2.79   2.15
6.86   0.91   2.11   0.29  11.69   0.96
4.86   2.13   2.38   2.73  18.29   6.28
2.55   2.20   0.83   0.73   5.25   0.94
5.94   1.40   2.81   1.76   7.42  13.76
[FIGURE: Layout of Harry's restaurant showing the drive-through order station and pickup window, the kitchen, the walk-in order and pickup counters, and the table area.]
40 percent of the time, two 30 percent of the time, three 18 percent of the time, four 10 percent of the time, and five 2 percent of the time. Eating time is normally distributed with a mean of 15 minutes and a standard deviation of 2 minutes. If a walk-in customer enters and sees that more than 15 customers are waiting to place their orders, the customer will balk (that is, leave).
Harry's is especially popular as a drive-through restaurant. Cars enter at a rate of 10 per hour during peak times, place their orders, and then pull forward to pick up their orders. No more than five cars can be in the pickup queue at a time. One person is dedicated to taking orders. If over seven cars are at the order station, arriving cars will drive on.
The time to take orders is uniformly distributed between 0.5 minute and 1.2 minutes including payment. Orders take an average of 8.2 minutes to fill with a standard deviation of 1.2 minutes (normal distribution). These times are the same for both walk-in and drive-through customers.
The objective of the simulation is to analyze performance during peak periods to see
how long customers spend waiting in line, how long lines become, how often customers
balk (pass by), and what the utilization of the table area is.
Problem
Summarize these data in table form and list any assumptions that need to be made in
order to conduct a meaningful simulation study based on the data given and objectives
specied.
References
Banks, Jerry; John S. Carson II; Barry L. Nelson; and David M. Nicol. Discrete-Event System Simulation. Englewood Cliffs, NJ: Prentice Hall, 2001.
Banks, Jerry, and Randall R. Gibson. "Selecting Simulation Software." IIE Solutions, May 1997, pp. 29–32.
Geer Mountain Software Corporation. Stat::Fit. South Kent, CT, 1996.
Breiman, Leo. Statistics: With a View toward Applications. New York: Houghton Mifflin, 1973.
Brunk, H. D. An Introduction to Mathematical Statistics. 2nd ed. New York: Blaisdell Publishing Co., 1965.
Carson, John S. "Convincing Users of Model's Validity Is Challenging Aspect of Modeler's Job." Industrial Engineering, June 1986, p. 77.
Hoover, S. V., and R. F. Perry. Simulation: A Problem Solving Approach. Reading, MA: Addison-Wesley, 1990.
Johnson, Norman L.; Samuel Kotz; and Adrienne W. Kemp. Univariate Discrete Distributions. New York: John Wiley & Sons, 1992, p. 425.
Knuth, Donald E. Seminumerical Algorithms. Reading, MA: Addison-Wesley, 1981.
Law, Averill M., and W. David Kelton. Simulation Modeling & Analysis. New York: McGraw-Hill, 2000.
Stuart, Alan, and J. Keith Ord. Kendall's Advanced Theory of Statistics. Vol. 2. New York: Oxford University Press, 1991.
MODEL BUILDING
Every theory [model] should be stated [built] as simply as possible, but not
simpler.
Albert Einstein
7.1 Introduction
In this chapter we look at how to translate a conceptual model of a system into a
simulation model. The focus is on elements common to both manufacturing and
service systems such as entity flow and resource allocation. Modeling issues more specific to either manufacturing or service systems will be covered in later
chapters.
Modeling is more than knowing how to use a simulation software tool.
Learning to use modern, easy-to-use software is one of the least difficult aspects
of modeling. Indeed, current simulation software makes poor and inaccurate models easier to create than ever before. Unfortunately, software cannot make decisions about how the elements of a particular system operate and how they should
interact with each other. This is the role of the modeler.
Modeling is considered an art or craft as much as a science. Knowing the
theory behind simulation and understanding the statistical issues are the science
part. But knowing how to effectively and efficiently represent a system using a
simulation tool is the artistic part of simulation. It takes a special knack to be able
to look at a system in the abstract and then creatively construct a representative
logical model using a simulation tool. If three different people were to model the
same system, chances are three different modeling approaches would be taken.
Modelers tend to use the techniques with which they are most familiar. So the best
way to develop good modeling skills is to look at lots of good examples and, most
of all, practice, practice, practice! Skilled simulation analysts are able to quickly
translate a process into a simulation model and begin conducting experiments.
terms of the same structural and operational elements that were described in
Chapter 6. This is essentially how models are defined using ProModel.
[FIGURE 7.1: Relationship between model complexity and model utility (also known as the Laffer curve). Utility rises with complexity up to an optimum level of model complexity and then declines.]
Not all simulation products provide the same set or classification of modeling elements. Even these elements could be further subdivided to provide greater differentiation. Locations, for example, could be subdivided into workstations, buffers, queues, and storage areas. From a system dynamics perspective, however, these are all still simply places to which entities are routed and where operations or activities may be performed. For this reason, it is easier just to think of them all in the generic sense as locations. The object classification used by ProModel is simple yet broad enough to encompass virtually any object encountered in most manufacturing and service systems.
These model elements or objects have behavior associated with them
(discussed in Section 7.4, Operational Elements) and attributes. Most common
behaviors and attributes are selectable from menus, which reduces the time required to build models. The user may also predefine behaviors and attributes that
are imported into a model.
7.3.1 Entities
Entities are the objects processed in the model that represent the inputs and outputs of the system. Entities in a system may have special characteristics such as
speed, size, condition, and so on. Entities follow one or more different routings in
a system and have processes performed on them. They may arrive from outside
the system or be created within the system. Usually, entities exit the system after
visiting a defined sequence of locations.
Simulation models often make extensive use of entity attributes. For example, an entity may have an attribute called Condition that may have a value of 1
for defective or 0 for nondefective. The value of this attribute may determine
where the entity gets routed in the system. Attributes are also frequently used to
gather information during the course of the simulation. For example, a modeler
may define an attribute called ValueAddedTime to track the amount of value-added time an entity spends in the system.
The statistics of interest that are generally collected for entities include time in
the system (flow time), quantity processed (output), value-added time, time spent
waiting to be serviced, and the average number of entities in the system.
Entities to Include
When deciding what entities to include in a model, it is best to look at every kind
of entity that has a bearing on the problem being addressed. For example, if a
component part is assembled to a base item at an assembly station, and the station
is always stocked with the component part, it is probably unnecessary to model
the component part. In this case, what is essential to simulate is just the time delay
to perform the assembly. If, however, the component part may not always be
available due to delays, then it might be necessary to simulate the ow of component parts as well as the base items. The rule is that if you can adequately capture
the dynamics of the system without including the entity, don't include it.
Entity Aggregating
It is not uncommon for some manufacturing systems to have hundreds of part
types or for a service system to have hundreds of different customer types. Modeling each one of these entity types individually would be a painstaking task that
would yield little, if any, benet. A better approach is to treat entity types in the
aggregate whenever possible (see Figure 7.2). This works especially well when all
entities have the same processing sequence. Even if a slight difference in processing exists, it often can be handled through use of attributes or by using probabilities. If statistics by entity type are not required and differences in treatment can be defined using attributes or probabilities, it makes sense to aggregate entity types into a single generic entity and perhaps call it "part" or "customer."
Entity Resolution
Each individual item or person in the system need not always be represented by a
corresponding model entity. Sometimes a group of items or people can be represented by a single entity (see Figure 7.3). For example, a single entity might be
used to represent a batch of parts processed as a single unit or a party of people
eating together in a restaurant. If a group of entities is processed as a group and
moved as a group, there is no need to model them individually. Activity times or
statistics that are a function of the size of the group can be handled using an
attribute that keeps track of the items represented by the single entity.
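A minimal sketch of this idea, with illustrative names: a single entity object carries a quantity attribute, which any size-dependent activity time or statistic can use:

```python
from dataclasses import dataclass

@dataclass
class Entity:
    kind: str
    quantity: int = 1   # how many real items/people this entity represents

def seating_time(party: Entity, per_person_minutes: float = 0.5) -> float:
    """Example of an activity time that scales with the represented group."""
    return party.quantity * per_person_minutes

party = Entity(kind="restaurant party", quantity=4)
print(seating_time(party))  # 2.0 minutes for a party of four
```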
[FIGURE 7.2: Entity aggregating: entity types A, B, and C are treated as a single entity type X.]
[FIGURE 7.3: Entity resolution: 9 entities are represented by 1 entity.]
7.3.2 Locations
Locations are places in the system that entities visit for processing, waiting, or decision making. A location might be a treatment room, workstation, check-in point,
queue, or storage area. Locations have a holding capacity and may have certain
times that they are available. They may also have special input and output such as
input based on highest priority or output based on first-in, first-out (FIFO).
In simulation, we are often interested in the average contents of a location
such as the average number of customers in a queue or the average number of
parts in a storage rack. We might also be interested in how much time entities
spend at a particular location for processing. There are also location state statistics
that are of interest such as utilization, downtime, or idle time.
Locations to Include
Deciding what to model as a route location depends largely on what happens at the location. If an entity merely passes through a location en route to another without spending any time, it probably isn't necessary to include the location. For example, a water spray station through which parts pass without pausing probably doesn't need to be included in a model. In considering what to define as a location, any point in the flow of an entity where one or more of the following actions take place may be a candidate:
Place where an entity is detained for a specified period of time while undergoing an activity (such as fabrication, inspection, or cleaning).
Place where an entity waits until some condition is satised (like the
availability of a resource or the accumulation of multiple entities).
Place or point where some action takes place or logic gets executed, even
though no time is required (splitting or destroying an entity, sending a
signal, incrementing an attribute or variable).
FIGURE 7.4 Example of combining three parallel stations into a single station. (Before: Operation 10, capacity 1, feeds three parallel Operation 20 stations, capacity 1 and 1 min. each, which feed Operation 30, capacity 1. After: the three parallel stations are replaced by a single Operation 20 location with capacity 3 and a 1 min. time.)
FIGURE 7.5 Example of combining three serial stations into a single station. (Before: Stations 1 through 5 in series, capacity 1 and 1 min. each. After: Stations 2-4 are replaced by a single location with capacity 3 and a 3 min. time, between Station 1 and Station 5 at 1 min. each.)
might be a synchronous transfer line that has multiple serial stations. All of them
could be represented as a single location with a capacity equal to the number of
stations (see Figure 7.5). Parts enter the location, spend an amount of time equal
to the sum of all the station times, and then exit the location. The behavior may
not be exactly the same as having individual stations, such as when the location
becomes blocked and up to three parts may be finished and waiting to move to
station 5. The modeler must decide if the representation is a good enough approximation for the intended purpose of the simulation.
7.3.3 Resources
Resources are the agents used to process entities in the system. Resources may be
either static or dynamic depending on whether they are stationary (like a copy machine) or move about in the system (like an operator). Dynamic resources behave
much like entities in that they both move about in the system. Like entities, resources may be either animate (living beings) or inanimate (a tool or machine). The
primary difference between entities and resources is that entities enter the system, have a defined processing sequence, and, in most cases, finally leave the system. Resources, however, usually don't have a defined flow sequence and remain in the
system (except for off-duty times). Resources often respond to requests for their
use, whereas entities are usually the objects requiring the use of resources.
In simulation, we are interested in how resources are utilized, how many resources are needed, and how entity processing is affected by resource availability.
The response time for acquiring a resource may also be of interest.
Resources to Include
The decision as to whether a resource should be included in a model depends
largely on what impact it has on the behavior of the system. If the resource is
dedicated to a particular workstation, for example, there may be little benefit in including it in the model since entities never have to wait for the resource to become
available before using it. You simply assign the processing time to the workstation.
If, on the other hand, the resource may not always be available (it experiences
downtime) or is a shared resource (multiple activities compete for the same
resource), it should probably be included. Once again, the consideration is how
much the resource is likely to affect system behavior.
Resource Travel Time
One consideration when modeling the use of resources is the travel time associated with mobile resources. A modeler must ask whether a resource is immediately
accessible when available, or if there is some travel time involved. For example, a
special piece of test equipment may be transported around to several locations in
a facility as needed. If the test equipment is available when needed at some location, but it takes 10 minutes to get the test equipment to the requesting location,
that time should be accounted for in the model. The time for the resource to move
to a location may also be a function of the distance it must travel.
Consumable Resources
Depending on the purpose of the simulation and degree of influence on system
behavior, it may be desirable to model consumable resources. Consumable
resources are used up during the simulation and may include
Services such as electricity or compressed air.
Supplies such as staples or tooling.
Consumable resources are usually modeled either as a function of time or as
a step function associated with some event such as the completion of an operation.
This can be done by defining a variable or attribute that changes value with time
or by event. A variable representing the consumption of packaging materials, for
example, might be based on the number of entities processed at a packaging
station.
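A minimal sketch of the event-based approach, assuming a global variable named PackagingUsed has been defined (the variable name and time are illustrative): the operation logic at the packaging station updates consumption as each entity is processed.

Inc PackagingUsed   // count one unit of packaging material consumed
wait 1.5 min        // packaging operation time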
Transport Resources
Transport resources are resources used to move entities within the system.
Examples of transport resources are lift trucks, elevators, cranes, buses, and airplanes. These resources are dynamic and often are capable of carrying multiple
entities. Sometimes there are multiple pickup and drop-off points to deal with.
The transporter may even have a prescribed route it follows, similar to an entity
routing. A common example of this is a bus route.
In advanced manufacturing systems, the most complex element to model is often the transport or material handling system. This is because of the complex operation associated with these computer-controlled systems.
7.3.4 Paths
Paths define the course of travel for entities and resources. Paths may be isolated, or they may be connected to other paths to create a path network. In ProModel simple paths are automatically created when a routing path is defined. A routing path connecting two locations becomes the default path of travel if no explicitly defined path or path network connects the locations.
Paths linked together to form path networks are common in manufacturing
and service systems. In manufacturing, aisles are connected to create travel ways
for lift trucks and other material handlers. An AGVS sometimes has complex path
networks that allow controlled traffic flow of the vehicles in the system. In service systems, office complexes have hallways connecting other hallways that connect to offices. Transportation systems use roadways, tracks, and so on that are often
interconnected.
When using path networks, there can sometimes be hundreds of routes to take
to get from one location to another. ProModel is able to automatically navigate entities and resources along the shortest path sequence between two locations. Optionally, you can explicitly define the path sequence to take to get from one point
to any other point in the network.
7.4.1 Routings
Routings define the sequence of flow for entities from location to location. When entities complete their activity at a location, the routing defines where the entity goes next and specifies the criterion for selecting from among multiple possible locations.
Frequently entities may be routed to more than one possible location. When choosing from among multiple alternative locations, a rule or criterion must be defined for making the selection. A few typical rules that might be used for selecting the next location in a routing decision include
Probabilistic: entities are routed to one of several locations according to a frequency distribution (see the sketch after this list).
First available: entities go to the first available location in the order they are listed.
By turn: the selection rotates through the locations in the list.
Most available capacity: entities select the location that has the most available capacity.
Until full: entities continue to go to a single location until it is full and then switch to another location, where they continue to go until it is full, and so on.
Random: entities choose randomly from among a list of locations.
User condition: entities choose from among a list of locations based on a condition defined by the user.
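Most of these rules are chosen directly when the routing is defined. As a rough sketch of how the probabilistic rule could alternatively be expressed in operation logic, assume two routing blocks have been defined and that the route statement selects a routing block by number (an assumption about usage, not taken from the text; the percentages are illustrative):

if rand() <= .70
then route 1   // 70 percent of entities take routing block 1
else route 2   // the remaining 30 percent take routing block 2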
Recirculation
Sometimes entities revisit or pass through the same location multiple times. The
best approach to modeling this situation is to use an entity attribute to keep track
of the number of passes through the location and determine the operation or routing accordingly. When using an entity attribute, the attribute is incremented either
on entry to or on exit from a location and tested before making the particular operation or routing decision to see which pass the entity is currently on. Based on
the value of the attribute, a different operation or routing may be executed.
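A minimal sketch of this pattern, assuming an entity attribute named Pass has been defined and is incremented on entry (the name and times are illustrative):

Inc Pass          // count this pass through the location
if Pass = 1
then wait 5 min   // first pass: full operation
else wait 2 min   // later passes (rework) take less time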
Unordered Routings
Certain systems may not require a specific sequence for visiting a set of locations but allow activities to be performed in any order as long as they all eventually get performed. An example is a document requiring signatures from several departments. The sequence in which the signatures are obtained may be unimportant as long as all signatures are obtained.
In unordered routing situations, it is important to keep track of which locations have or haven't been visited. Entity attributes are usually the most practical way of tracking this information. An attribute may be defined for each possible location and then set to 1 whenever that location is visited. The routing is then based on which of the defined attributes are still set to zero.
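As a sketch, suppose signatures are needed from two departments and attributes named VisitedA and VisitedB (hypothetical names) have been defined with initial values of zero. The operation logic at department A would then simply be

VisitedA = 1   // flag department A as visited
wait 3 min     // signing time (illustrative)

and a user-condition routing could send the entity only to departments whose attribute is still zero, then on to the exit once both flags are set.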
in terms of the time required, the resources used, and any other logic that impacts system performance. For operations requiring more than a time and resource designation, detailed logic may need to be defined using if-then statements, variable assignment statements, or some other type of statement (see Section 7.4.8, Use of Programming Logic).
An entity operation is one of several different types of activities that take
place in a system. As with any other activity in the system, the decision to include
an entity operation in a model should be based on whether the operation impacts
entity flow in some way. For example, if a labeling activity is performed on entities in motion on a conveyor, the activity need not be modeled unless there are situations where the labeler experiences frequent interruptions.
Consolidation of Entities
Entities often undergo operations where they are consolidated or become either
physically or logically connected with other entities. Examples of entity consolidation include batching and stacking. In such situations, entities are allowed to
simply accumulate until a specified quantity has been gathered, and then they are
grouped together into a single unit. Entity consolidation may be temporary, allowing them to later be separated, or permanent, in which case the consolidated
entities no longer retain their individual identities. Figure 7.6 illustrates these two
types of consolidation.
Examples of consolidating multiple entities to a single entity include
Accumulating multiple items to fill a container.
Gathering people together into groups of five for a ride at an amusement park.
Grouping items to load them into an oven for heating.
In ProModel, entities are consolidated permanently using the COMBINE command. Entities may be consolidated temporarily using the GROUP command.
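For example, the operation logic at a location where container loads of 10 items are formed might read as follows (a minimal sketch; the quantity is illustrative):

COMBINE 10   // 10 accumulated entities are permanently consolidated into one

Using GROUP 10 instead would preserve the individual entities so that they could be separated again later.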
Attachment of Entities
In addition to consolidating accumulated entities at a location, entities can also be
attached to a specific entity at a location. Examples of attaching entities might be
FIGURE 7.6 Consolidation of entities into a single entity. In (a) permanent consolidation, batched entities get destroyed. In (b) temporary consolidation, batched entities are preserved for later unbatching.
FIGURE 7.7 Attachment of one or more entities to another entity. In (a) permanent attachment, the attached entities get destroyed. In (b) temporary attachment, the attached entities are preserved for later detachment.
FIGURE 7.8 Multiple entities created from a single entity. Either (a) the entity splits into multiple entities (the original entity is destroyed) or (b) the entity creates one or more entities (the original entity continues).
Examples of entities being split or creating new entities from a single entity
include
A container or pallet load being broken down into the individual items
comprising the load.
Driving in and leaving a car at an automotive service center.
Separating a form from a multiform document.
A customer placing an order that is processed while the customer waits.
A length of bar stock being cut into smaller pieces.
In ProModel, entities are split using a SPLIT statement. New entities are created
from an existing entity using a CREATE statement. Alternatively, entities can be conveniently split or created using the routing options provided in ProModel.
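For instance, the bar stock and customer order examples above might be sketched with one statement each (the entity names and quantity are illustrative, not from the text):

SPLIT 5 AS Piece    // the bar stock entity is destroyed; five Piece entities continue
CREATE 1 AS Order   // the customer entity continues; a new Order entity is created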
distributed with a mean of 1.6 minutes and a standard deviation of 0.2 minute. Examples of periodic arrivals include
Parts arriving from an upstream operation that is not included in the model.
Customers arriving to use a copy machine.
Phone calls for customer service during a particular part of the day.
Periodic arrivals are defined in ProModel by using the arrivals table.
Scheduled Arrivals
Scheduled arrivals occur when entities arrive at specified times with possibly some defined variation (that is, a percentage will arrive early or late). Scheduled arrivals may occur in quantities greater than one, such as a shuttle bus transporting guests at a scheduled time. It is often desirable to be able to read in a schedule from an external file, especially when the number of scheduled arrivals is large and the schedule may change from run to run. Examples of scheduled arrivals include
Customer appointments to receive a professional service such as counseling.
Patients scheduled for lab work.
Production release times created through an MRP (material requirements
planning) system.
Scheduled arrivals sometimes occur at intervals, such as appointments that occur at 15-minute intervals with some variation. This may sound like a periodic arrival; however, periodic arrivals are autocorrelated in that the absolute time of each arrival depends on the time of the previous arrival. In scheduled arrival intervals, each arrival occurs independently of the previous arrival. If one appointment arrives early or late, it will not affect when the next appointment arrives.
ProModel provides a straightforward way for defining scheduled arrivals using the arrivals table. A variation may be assigned to a scheduled arrival to simulate early or late arrivals for appointments.
Fluctuating Arrivals
Sometimes entities arrive at a rate that fluctuates with time. For example, the rate at which customers arrive at a bank usually varies throughout the day with peak and lull times. This pattern may be repeated each day (see Figure 7.9). Examples of fluctuating arrivals include
Customers arriving at a restaurant.
Arriving flights at an international airport.
Arriving phone calls for customer service.
In ProModel, fluctuating arrivals are specified by defining an arrival cycle pattern for a time period that may be repeated as often as desired.
Event-Triggered Arrivals
In many situations, entities are introduced to the system by some internal trigger
such as the completion of an operation or the lowering of an inventory level to a
FIGURE 7.9 Fluctuating arrival pattern by hour of day (9 A.M. through 6 P.M.).
times, one solution is to try to synchronize the arrivals with the work schedule.
This usually complicates the way arrivals are defined. Another solution, and usually an easier one, is to have the arrivals enter a preliminary location where they
test whether the facility is closed and, if so, exit the system. In ProModel, if a
location where entities are scheduled to arrive is unavailable at the time of an
arrival, the arriving entities are simply discarded.
FIGURE 7.10 Resource downtime occurring every 20 minutes based on total elapsed time. (Timeline in minutes with alternating idle, busy, and down periods; interrupts occur every 20 minutes of clock time regardless of the resource's state.)
FIGURE 7.11 Resource downtime occurring every 20 minutes, based on operating time. (Timeline in minutes with idle and busy periods; interrupts occur after every 20 minutes of accumulated busy time, so idle time does not count toward the interval.)
mean of 2 minutes, the time between failures should be defined as x_last + E(10), where x_last is the last repair time generated using E(2) minutes.
Downtimes Based on Time in Use
Most equipment and machine failures occur only when the resource is in use. A
mechanical or tool failure, for example, generally happens only when a machine
is running, not while a machine is idle. In this situation, the interval between
downtimes would be dened relative to actual machine operation time. A machine
that goes down every 20 minutes of operating time for a three-minute repair is
illustrated in Figure 7.11. Note that any idle times and downtimes are not included
in determining when the next downtime occurs. The only time counted is the actual operating time.
Because downtimes usually occur randomly, the time to failure is most accurately defined as a probability distribution. Studies have shown, for example, that
the operating time to failure is often exponentially distributed.
Downtimes Based on the Number of Times Used
The last type of downtime occurs based on the number of times a location was
used. For example, a tool on a machine may need to be replaced every 50 cycles
due to tool wear, or a copy machine may need paper added after a mean of 200
copies with a standard deviation of 25 copies. ProModel permits downtimes to be
defined in this manner by selecting ENTRY as the type of downtime and then specifying the number of entity entries between downtimes.
Downtime Resolution
Unfortunately, data are rarely available on equipment downtime. When they are
available, they are often recorded as overall downtime and seldom broken down
into number of times down and time between failures. Depending on the nature of
the downtime information and degree of resolution required for the simulation,
downtimes can be treated in the following ways:
Ignore the downtime.
Simply increase processing times to adjust for downtime.
Use average values for mean time between failures (MTBF) and mean
time to repair (MTTR).
Use statistical distributions for time between failures and time to repair.
Ignoring Downtime. There are several situations where it might make sense to
ignore downtimes in building a simulation model. Obviously, one situation is
where absolutely no data are available on downtimes. If there is no knowledge
time. It also implies that during periods of high equipment utilization, the same
amount of downtime occurs as during low utilization periods. Equipment failures
should generally be based on operating time and not on elapsed time because
elapsed time includes operating time, idle time, and downtime. It should be left to
the simulation to determine how idle time and downtime affect the overall elapsed
time between failures.
To illustrate the difference this can make, let's assume that the following
times were logged for a given operation:
Status        Time (Hours)
In use        20
Down           5
Idle          15
Total time    40
The last option is the easiest way to handle downtimes and, in fact, may be
adequate in situations where either the processing times or the downtimes are relatively short. In such circumstances, the delay in entity flow is still going to
closely approximate what would happen in the actual system.
If the entity resumes processing later using either the same or another resource, a decision must be made as to whether only the remaining process time is
used or if additional time must be added. By default, ProModel suspends processing until the location or resource returns to operation. Alternatively, other logic may be defined.
A similar situation occurs with activity times that have more than one distribution. For example, when a machine goes down, 30 percent of the time it takes Triangular(0.2, 1.5, 3) minutes to repair and 70 percent of the time it takes Triangular(3, 7.5, 15) minutes to repair. The logic for the downtime definition might be
if rand() <= .30
then wait T(.2, 1.5, 3) min   // 30 percent of repairs: minor repair time
else wait T(3, 7.5, 15) min   // 70 percent of repairs: major repair time
     Probability of Occurrence    Time
A    .20                          E(5)
B    .50                          E(8)
C    .30                          E(12)
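If the three cases in this table were selected in logic the same way as the two-way repair example above, a single random draw could be tested against cumulative thresholds (.20, then .20 + .50 = .70). A minimal sketch, again assuming rand() returns a value between 0 and 1:

real r = rand()       // one draw decides which case applies
if r <= .20
then wait E(5) min    // case A, 20 percent
else if r <= .70
then wait E(8) min    // case B, 50 percent
else wait E(12) min   // case C, 30 percent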
module. To do this within operation logic (or in any other logic), you would enter something like the following, where Count is defined as a local variable:
int Count = 1              // local counter, starting at 1
while Count < 11 do
{
  NumOfBins[Count] = 4     // set each of the 10 array elements to 4
  Inc Count                // advance to the next element
}
The braces { and } are the ProModel notation (also used in C++ and Java)
for starting and ending a block of logic. In this case it is the block of statements to be
executed repeatedly by an object as long as the local variable Count is less than 11.
model. When faced with building a supermodel, it is always a good idea to partition the model into several submodels and tackle the problem on a smaller scale first. Once each of the submodels has been built and validated, they can be merged
into a larger composite model. This composite model can be structured either as a
single monolithic model or as a hierarchical model in which the details of each
submodel are hidden unless explicitly opened for viewing. Several ways have
been described for merging individual submodels into a composite model
(Jayaraman and Agarwal 1996). Three of the most common ways that might be
considered for integrating submodels are
Option 1: Integrate all of the submodels just as they have been built. This
approach preserves all of the detail and therefore accuracy of the individual
submodels. However, the resulting composite model may be enormous and
cause lengthy execution times. The composite model may be structured as a flat model or, to reduce complexity, as a hierarchical model.
Option 2: Use only the recorded output from one or more of the submodels.
By simulating and recording the time at which each entity exits the model
for a single submodel, these exit times can be used in place of the
submodel for determining the arrival times for the larger model. This
eliminates the need to include the overhead of the individual submodel in
the composite model. This technique, while drastically reducing the
complexity of the composite model, may not be possible if the interaction
with the submodel is two-way. For submodels representing subsystems
that simply feed into a larger system (in which case the subsystem operates
fairly independently of downstream activities), this technique is valid. An
example is an assembly facility in which fabricated components or even
subassemblies feed into a nal assembly line. Basically, each feeder line is
viewed as a black box whose output is read from a le.
Option 3: Represent the output of one or more of the submodels as
statistical distributions. This approach is the same as option 2, but instead
of using the recorded output times from the submodel in the composite
model, a statistical distribution is fit to the output times and used to
generate the input to the composite model. This technique eliminates the
need for using data files that, depending on the submodel, may be quite
large. Theoretically, it should also be more accurate because the true
underlying distribution is used instead of just a sample unless there are
discontinuities in the output. Multiple sample streams can also be
generated for running multiple replications.
One is to include cost factors in the model itself and dynamically update cost collection variables during the simulation. ProModel includes a cost module for assigning costs to different factors in the simulation such as entity cost, waiting cost, and operation cost. The alternative approach is to run a cost analysis after the simulation, applying cost factors to collected cost drivers such as resource utilization or time spent in storage. The first method is best when it is difficult to summarize cost drivers. For example, the cost per unit of production may be based on the types of resources used and the time for using each type. This may be a lot of information for each entity to carry using attributes. It is much easier to simply update the entity's cost attribute dynamically whenever a particular resource has been used. Dynamic cost tracking suffers, however, from requiring cost factors to be considered during the modeling stage rather than the analysis stage. For some models, it may be difficult to dynamically track costs during a simulation, especially when relationships become very complex.
The preferred way to analyze costs, whenever possible, is to do a postsimulation analysis and to treat cost modeling as a follow-on activity to system modeling rather than as a concurrent activity (see Lenz and Neitzel 1995). There are
several advantages to separating the logic model from the cost model. First, the
model is not encumbered with tracking information that does not directly affect
how the model operates. Second, and perhaps more importantly, post analysis of
costs gives more flexibility for doing what-if scenarios with the cost model. For
example, different cost scenarios can be run based on varying labor rates in a
matter of seconds when applied to simulation output data that are immediately
available. If modeled during the simulation, a separate simulation would have to
be run applying each labor rate.
7.6 Summary
Model building is a process that takes a conceptual model and converts it to a simulation model. This requires a knowledge of the modeling paradigm of the particular simulation software being used and a familiarity with the different modeling
constructs that are provided in the software. Building a model involves knowing
what elements to include in the model and how to best express those elements in
the model. The principle of parsimony should always be followed, which results in the simplest model possible that still achieves the simulation objectives.
Finally, the keys to successful modeling are seeing lots of examples and practice,
practice, practice!
18. What is the problem with modeling downtimes in terms of mean time
between failures (MTBF) and mean time to repair (MTTR)?
19. Why should unplanned downtimes or failures be defined as a function of usage time rather than total elapsed time on the clock?
20. In modeling repair times, how should the time spent waiting for a
repairperson be modeled?
21. What is preemption? What activities or events might preempt other
activities in a simulation?
22. A boring machine experiences downtimes every five hours (exponentially distributed). It also requires routine preventive maintenance (PM) after every eight hours (fixed) of operation. If a
downtime occurs within two hours of the next scheduled PM, the PM
is performed as part of the repair time (no added time is needed) and,
after completing the repair coupled with the PM, the next PM is set
for eight hours away. Conceptually, how would you model this
situation?
23. A real estate agent schedules six customers (potential buyers) each day,
one every 1.5 hours, starting at 8 A.M. Customers are expected to arrive
for their appointments at the scheduled times. However, past experience
shows that customer arrival times are normally distributed with a mean
equal to the scheduled time and a standard deviation of five minutes.
The time the agent spends with each customer is normally distributed
with a mean of 1.4 hours and a standard deviation of .2 hours. Develop a
simulation model to calculate the expected waiting time for customers.
References
Jayaraman, Arun, and Arun Agarwal. "Simulating an Engine Plant." Manufacturing Engineering, November 1996, pp. 60-68.
Law, A. M. "Introduction to Simulation: A Powerful Tool for Analyzing Complex Manufacturing Systems." Industrial Engineering 18, no. 5 (1986), pp. 57-58.
Lenz, John, and Ray Neitzel. "Cost Modeling: An Effective Means to Compare Alternatives." Industrial Engineering, January 1995, pp. 18-20.
Shannon, Robert E. "Introduction to the Art and Science of Simulation." In Proceedings of the 1998 Winter Simulation Conference, ed. D. J. Medeiros, E. F. Watson, J. S. Carson, and M. S. Manivannan. Piscataway, NJ: Institute of Electrical and Electronics Engineers, 1998.
Thompson, Michael B. "Expanding Simulation beyond Planning and Design." Industrial Engineering, October 1994, pp. 64-66.
MODEL VERIFICATION AND VALIDATION
8.1 Introduction
Building a simulation model is much like developing an architectural plan for a house. A good architect will review the plan and specifications with the client or owner of the house to ensure that the design meets the client's expectations. The architect will also carefully check all dimensions and other specifications shown on the plan for accuracy. Once the architect is reasonably satisfied that the right information has been accurately represented, a contractor can be given the plan to begin building the house. In a similar way the simulation analyst should examine the validity and correctness of the model before using it to make implementation decisions.
In this chapter we cover the importance and challenges associated with model verification and validation. We also present techniques for verifying and validating models. Balci (1997) provides a taxonomy of more than 77 techniques for model verification and validation. In this chapter we give only a few of the more common and practical methods used. The greatest problem with verification and validation is not one of failing to use the right techniques but failing to use any technique. Questions addressed in this chapter include the following:
Two case studies are presented at the end of the chapter showing how verification and validation techniques have been used in actual simulation projects.
FIGURE: Two-step translation process to convert a real-world system to a simulation model (System → Concept → Model, with the system-to-concept step checked by validation and the concept-to-model step checked by verification).
Time and budget pressures can be overcome through better planning and increased proficiency in the validation process. Because validation doesn't have to be performed to complete a simulation study, it is the activity that is most likely to get shortchanged when pressure is being felt to complete the project on time or within budget. Not only does the modeler become pressed for time and resources, but often others on whom the modeler relies for feedback on model validity also become too busy to get involved the way they should.
Laziness is a bit more difficult to deal with because it is characteristic of
human nature and is not easily overcome. Discipline and patience must be developed before one is willing to painstakingly go back over a model once it has been
built and is running. Model building is a creative activity that can actually be quite
fun. Model verification and validation, on the other hand, are a laborious effort
that is too often dreaded.
The problem of overconfidence can be dealt with only by developing a more critical and even skeptical attitude toward one's own work. Too often it is assumed that if the simulation runs, it must be okay. A surprising number of decisions are based on invalid simulations simply because the model runs and produces results. Computer output can give an aura of validity to results that may be completely in error. This false sense of security in computer output is a bit like saying, "It must be true, it came out of the computer." Having such a naive mentality can be dangerously misleading.
The final problem is simply a lack of knowledge of verification and validation procedures. This is a common problem particularly among newcomers to
simulation. This seems to be one of the last areas in which formal training is received because it is presumed to be nonessential to model building and output
analysis. Another misconception of validation is that it is just another phase in a
simulation project rather than a continuous activity that should be performed
throughout the entire project. The intent of this chapter is to dispel common misconceptions, help shake attitudes of indifference, and provide insight into this important activity.
the tangled mess and figure out what the model creator had in mind become almost futile. It is especially discouraging when attempting to use a poorly constructed model for future experimentation. Trying to figure out what changes need to be made to model new scenarios becomes difficult if not impossible.
The solution to creating models that ease the difficulty of verification and validation is first to reduce the complexity of the model. Frequently the most complex models are built by amateurs who do not have sense enough to know how to abstract system information. They code way too much detail into the model. Once a model has been simplified as much as possible, it needs to be coded so it is easily readable and understandable. Using object-oriented techniques such
so it is easily readable and understandable. Using object-oriented techniques such
as encapsulation can help organize model data. The right simulation software can
also help keep model data organized and readable by providing table entries and
intelligent, parameterized constructs rather than requiring lots of low-level programming. Finally, model data and logic code should be thoroughly and clearly
documented. This means that every subroutine used should have an explanation
of what it does, where it is invoked, what the parameters represent, and how to
change the subroutine for modifications that may be anticipated.
In the top-down approach, the verification testing begins with the main module and moves down gradually to lower modules. At the top level, you are more
interested that the outputs of modules are as expected given the inputs. If discrepancies arise, lower-level code analysis is conducted.
In both approaches sample test data are used to test the program code. The
bottom-up approach typically requires a smaller set of data to begin with. After
exercising the model using test data, the model is stress tested using extreme input
values. With careful selection, the results of the simulation under extreme conditions can be predicted fairly well and compared with the test results.
Checking for Reasonable Output
In any simulation model, there are operational relationships and quantitative values that are predictable during the simulation. An example of an operational relationship is a routing that occurs whenever a particular operation is completed. A
predictable quantitative value might be the number of boxes on a conveyor always
being between zero and the conveyor capacity. Often the software itself will flag
values that are out of an acceptable range, such as an attempt to free more resources than were captured.
For simple models, one way to help determine reasonableness of the output is
to replace random times and probabilistic outcomes with constant times and deterministic outcomes in the model. This allows you to predict precisely what the
results will be because the analytically determined results should match the results of the simulation.
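For example, an operation time defined with a distribution could temporarily be replaced by its mean so the expected results can be computed by hand (a sketch; the distribution would be restored for production runs):

// verification run: deterministic stand-in for: wait E(2) min
wait 2 min   // a constant time makes the output analytically predictable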
Watching the Animation
Animation can be used to visually verify whether the simulation operates the way
you think it should. Errors are detected visually that can otherwise go unnoticed.
The animation can be made to run slowly enough for the analyst to follow along
visually. However, the amount of time required to observe an entire simulation
run can be extremely long. If the animation is sped up, the run time will be shorter, but inconsistent behavior will be more difficult to detect.
To provide the most meaningful information, it helps to have interactive
simulation capability, which allows the modeler to examine data associated with
simulation objects during runtime. For example, during the simulation, the user
can view the current status of a workstation or resource to see if state variables are
being set correctly.
Animation is usually more helpful in identifying a problem than in discovering the cause of a problem. The following are some common symptoms apparent
from the animation that reveal underlying problems in the model:
The simulation runs fine for hours and even days and then suddenly freezes.
A terminating simulation that should have emptied the system at the end
of the simulation leaves some entities stranded at one or more locations.
A resource sits idle when there is plenty of work waiting to be done.
A particular routing never gets executed.
Trace messages describe chronologically what happens during the simulation, event by event. A typical trace message might be the time that an entity
enters a particular location or the time that a specific resource is freed. Trace messaging can be turned on or off, and trace messages can usually be either displayed directly on the screen or written to a file for later analysis. In ProModel, tracing
can also be turned on and off programmatically by entering a Trace command at
appropriate places within the model logic. A segment of an example of a trace
message listing is shown in Figure 8.2. Notice that the simulation time is shown
in the left column followed by a description of what is happening at that time in
the right column.
Another way in which behavior can be tracked during a simulation run is
through the use of a debugger. Anyone who has used a modern commercial compiler to do programming is familiar with debuggers. A simulation debugger is a
utility that displays and steps through the actual logic entered by the user to define
the model. As logic gets executed, windows can be opened to display variable and
FIGURE 8.2 Fragment of a trace listing.
FIGURE 8.3 ProModel debugger window.
state values as they dynamically change. Like trace messaging, debugging can be
turned on either interactively or programmatically. An example of a debugger
window is shown in Figure 8.3.
Experienced modelers make extensive use of trace and debugging capabilities. Animation and output reports are good for detecting problems in a simulation, but trace and debug messages help uncover why problems occur.
Using trace and debugging features, event occurrences and state variables
can be examined and compared with hand calculations to see if the program is operating as it should. For example, a trace list might show when a particular operation began and ended. This can be compared against the input operation time to
see if they match.
One type of error that a trace or debugger is useful in diagnosing is gridlock.
This situation is caused when there is a circular dependency where one action depends on another action, which, in turn, is dependent on the first action. You may
have experienced this situation at a busy traffic intersection or when trying to
leave a crowded parking lot after a football game. It leaves you with a sense of
utter helplessness and frustration. An example of gridlock in simulation sometimes occurs when a resource attempts to move a part from one station to another,
but the second station is unavailable because an entity there is waiting for the
same resource. The usual symptom of this situation is that the model appears to
freeze up and no more activity takes place. Meanwhile the simulation clock races
rapidly forward. A trace of events can help detect why the entity at the second station is stuck waiting for the resource.
system. The modeler should have at least an intuitive idea of how the
model will react to a given change. It should be obvious, for example, that
doubling the number of resources for a bottleneck operation should
increase, though not necessarily double, throughput.
Running traces. An entity or sequence of events can be traced through the
model processing logic to see if it follows the behavior that would occur
in the actual system.
Conducting Turing tests. People who are knowledgeable about the
operations of a system are asked if they can discriminate between system
and model outputs. If they are unable to detect which outputs are the
model outputs and which are the actual system outputs, this is another
piece of evidence to use in favor of the model being valid.
A common method of validating a model of an existing system is to compare the model performance with that of the actual system. This approach requires that an "as-is" simulation be built that corresponds to the current system. This helps calibrate the model so that it can be used for simulating variations of the same model. After running the as-is simulation, the performance data are compared to those of the real-world system. If sufficient data are available on a performance measure of the real system, a statistical test called the Student's t test can be applied to determine whether the sampled data sets from both the model and the
actual system come from the same distribution. An F test can be performed to test the equality of variances of the real system and the simulation model (see the formulas following the list below). Some of the problems associated with statistical comparisons include these:
Simulation model performance is based on very long periods of time. On
the other hand, real system performances are often based on much shorter
periods and therefore may not be representative of the long-term
statistical average.
The initial conditions of the real system are usually unknown and are therefore difficult to replicate in the model.
The performance of the real system is affected by many factors that may
be excluded from the simulation model, such as abnormal downtimes or
defective materials.
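For reference, a common form of the two statistics mentioned above, with sample means \bar{x}, sample variances s^2, and sample sizes n for the real system (subscript 1) and the model (subscript 2), is

t = \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{s_1^2/n_1 + s_2^2/n_2}}, \qquad F = \frac{s_1^2}{s_2^2}

(the t form shown is the unpooled, Welch-style version; a pooled-variance form is also common).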
The problem of validation becomes more challenging when the real system
doesn't exist yet. In this case, the simulation analyst works with one or more
experts who have a good understanding of how the real system should operate.
They team up to see whether the simulation model behaves reasonably in a variety of situations. Animation is frequently used to watch the behavior of the simulation and spot any nonsensical or unrealistic behavior.
The purpose of validation is to mitigate the risk associated with making decisions based on the model. As in any testing effort, there is an optimum amount
of validation beyond which the return may be less valuable than the investment
(see Figure 8.4). The optimum effort to expend on validation should be based on
minimizing the total cost of the validation effort plus the risk cost associated
FIGURE 8.4 Optimum level of validation looks at the trade-off between validation cost and risk cost. (Cost versus validation effort: validation cost rises with effort while risk cost falls; the total cost curve is minimized at the optimum effort.)
informal and intuitive approach to validation, while the HP case study relies on
more formal validation techniques.
St. John Hospital and Medical Center Obstetrical Unit
At St. John Hospital and Medical Center, a simulation study was conducted by the Management Engineering Department to plan a renovation of the obstetrical unit to accommodate modern family demands to provide a more private, homelike atmosphere with a family-oriented approach to patient care during a mother's hospital stay. The current layout did not support this approach and had encountered inefficiencies when trying to accommodate it (running newborns back and forth and the like).
The renovation project included provision for labor, delivery, recovery, and
postpartum (LDRP) rooms with babies remaining with their mothers during the
postpartum stay. The new LDRPs would be used for both low- and high-risk
mothers. The hospital administration wanted to ensure that there would be enough
room for future growth while still maintaining high utilization of every bed.
The purpose of the simulation was to determine the appropriate number of
beds needed in the new LDRPs as well as in other related areas of the hospital.
A secondary objective was to determine what rules should be used to process patients through the new system to make the best utilization of resources. Because
this was a radically new configuration, an as-is model was not built. This, of
course, made model validation more challenging. Data were gathered based on
actual operating records, and several assumptions were made such as the length of
stay for different patient classifications and the treatment sequence of patients.
The validation process was actually a responsibility given to a team that was
assembled to work with the simulation group. The team consisted of the nursing
managers responsible for each OB area and the Women's Health program director. The first phase of the review was to receive team approval for the assumptions document and flowcharts of the processes. This review was completed prior to developing the model. This assumptions document was continually revised and reviewed throughout the duration of the project.
The next phase of the review came after all data had been collected and analyzed. These reviews not only helped ensure the data were valid but established
trust and cooperation between the modeling group and the review team. This
began instilling confidence in the simulation even before the results were
obtained.
Model verification was performed during model building and again at model completion. To help verify the model, patients, modeled as entities, were traced through the system to ensure that they followed the same processing sequence shown on the flowcharts with the correct percentages of patients following each
probabilistic branch. In addition, time displays were placed in the model to verify
the length of stay at each step in the process.
Model validation was performed with both the nursing staff and physician
representatives. This included comparing actual data collected with model results
where comparisons made sense (remember, this was a new configuration being
modeled). The animation that showed the actual movement of patients and use of
resources was a valuable tool to use during the validation process and increased
the credibility of the model.
The use of a team approach throughout the data-gathering and model development stages proved invaluable. It both gave the modeler immediate feedback regarding incorrect assumptions and boosted the confidence of the team members in the simulation results. The frequent reviews also allowed the group to bounce ideas off one another. As the result of conducting a sound and convincing simulation study, the hospital's administration was persuaded by the simulation results and implemented the recommendations in the new construction.
HP Surface Mount Assembly Line
At HP a simulation study was undertaken to evaluate alternative production layouts and batching methods for a surface mount printed circuit assembly line based
on a projected growth in product volume/mix. As is typical in a simulation project, only about 20 percent of the time was spent building the model. The bulk of
the time was spent gathering data and validating the model. It was recognized that
the level of confidence placed in the simulation results was dependent on the
degree to which model validity could be established.
Surface mount printed circuit board assembly involves placing electronic
components onto raw printed circuit boards. The process begins with solder paste
being stenciled onto the raw circuit board panel. Then the panel is transported to
sequential pick-and-place machines, where the components are placed on the
panel. The component feeders on the pick-and-place machines must be changed
depending on the components required for a particular board. This adds setup time
to the process. A third of the boards require manual placement of certain components. Once all components are placed on the board and solder pasted, the board
proceeds into an oven where the solder paste is cured, bonding the components to
the panel. Finally, the panel is cleaned and sent to testing. Inspection steps occur
at several places in the line.
The HP surface mount line was designed for high-mix, low-volume production. More than 100 different board types are produced on the line, with about 10 different types produced per day. The batch size for a product is typically less than five panels, with the typical panel containing five boards.
A simulation expert built the model working closely with the process owners, that is, the managers, engineers, and production workers. Like many simulations, the model was first built as an as-is model to facilitate model validation. The model was then expanded to reflect projected requirements and proposed configurations.
The validation process began early in the data-gathering phase with meetings
being held with the process owners to agree on objectives, review the assumptions, and determine data requirements. Early involvement of the process owners
gave them motivation to see that the project was successful. The first step was to
clarify the objectives of the simulation that would guide the data-gathering and
modeling efforts. Next a rough model framework was developed, and the initial
output of the real process. The initial model predicted 25 shifts to complete a specific sequence of panels, while the process actually took 35 shifts. This led to further process investigations of interruptions or other processing delays that weren't being accounted for in the model. It was discovered that feeder replacement times were underestimated in the model. A discrepancy between the model
and the actual system was also discovered in the utilization of the pick-and-place
machines. After tracking down the causes of these discrepancies and making
appropriate adjustments to the model, the simulation results were closer to the real
process. The challenge at this stage was not to yield to the temptation of making
arbitrary changes to the model just to get the desired results. Then the model would
lose its integrity and become nothing more than a sham for the real system.
The final step was to conduct sensitivity analysis to determine how model
performance was affected in response to changes in model assumptions. By
changing the input in such a way that the impact was somewhat predictable, the
change in simulation results could be compared with intuitive guesses. Any
bizarre results such as a decrease in work in process when increasing the arrival
rate would raise an immediate flag. Input parameters were systematically changed
based on a knowledge of both the process behavior and the model operation.
Knowing just how to stress the model in the right places to test its robustness was
an art in itself. After everyone was satisfied that the model accurately reflected the actual operation and that it seemed to respond as would be expected to specific
changes in input, the model was ready for experimental use.
8.5 Summary
For a simulation model to be of greatest value, it must be an accurate representation of the system being modeled. Verification and validation are two activities that should be performed with simulation models to establish their credibility.
Model verification is basically the process of debugging the model to ensure that it accurately represents the conceptual model and that the simulation runs correctly. Verification involves mainly the modeler without the need for much input from the customer. Verification is not an exact science, although several proven techniques can be used.
Validating a model begins at the data-gathering stage and may not end until the system is finally implemented and the actual system can be compared to the model. Validation involves the customer and other stakeholders who are in a position to provide informed feedback on whether the model accurately reflects the real system. Absolute validation is philosophically impossible, although a high degree of face or functional validity can be established.
While formal methods may not always be used to validate a model, at least some time devoted to careful review should be given. The secret to validation is not so much the technique as it is the attitude. One should be as skeptical as can be tolerated, challenging every input and questioning every output. In the end, the decision maker should be confident in the simulation results, not because he or
she trusts the software or the modeler, but because he or she trusts the input data
and knows how the model was built.
9 SIMULATION OUTPUT ANALYSIS
9.1 Introduction
In analyzing the output from a simulation model, there is room both for rough
analysis using judgmental procedures and for statistically based procedures for
more detailed analysis. The appropriateness of using judgmental procedures or
statistically based procedures to analyze a simulation's output depends largely on
the nature of the problem, the importance of the decision, and the validity of the
input data. If you are doing a go/no-go type of analysis in which you are trying to
find out whether a system is capable of meeting a minimum performance level,
then a simple judgmental approach may be adequate. Whether a single machine or a single service agent is adequate to handle a given workload may
be easily determined by a few runs unless it looks like a close call. Even if it is a
close call, if the decision is not that important (perhaps there is a backup worker
who can easily fill in during peak periods), then more detailed analysis may not be
needed. In cases where the model relies heavily on assumptions, it is of little value
to be extremely precise in estimating model performance. It does little good to get
six decimal places of precision for an output response if the input data warrant
only precision to the nearest tens. Suppose, for example, that the arrival rate of
customers to a bank is roughly estimated to be 30 plus or minus 10 per hour. In
this situation, it is probably meaningless to try to obtain a precise estimate of teller
utilization in the facility. The precision of the input data simply doesn't warrant
any more than a rough estimate for the output.
These examples of rough estimates are in no way intended to minimize the
importance of conducting statistically responsible experiments, but rather to emphasize the fact that the average analyst or manager can gainfully use simulation
FIGURE 9.1 Example of a cycling pseudo-random number stream produced by a random number generator with a very short cycle length. [Figure: a random number generator seeded with Z0 = 17 produces a short stream of values (.52, .80, .31, .07, .95, .60, .25, .66) that then repeats.]
experiment but does not give us an independent replication. On the other hand, if a
different seed value is appropriately selected to initialize the random number generator, the simulation will produce different results because it will be driven by a
different segment of numbers from the random number stream. This is how the simulation experiment is replicated to collect statistically independent observations of
the simulation model's output response. Recall that the random number generator
in ProModel can produce over 2.1 billion different values before it cycles.
To replicate a simulation experiment, then, the simulation model is initialized
to its starting conditions, all statistical variables for the output measures are reset,
and a new seed value is appropriately selected to start the random number generator. Each time an appropriate seed value is used to start the random number generator, the simulation produces a unique output response. Repeating the process
with several appropriate seed values produces a set of statistically independent
observations of a model's output response. With most simulation software, users
need only specify how many times they wish to replicate the experiment; the software handles the details of initializing variables and ensuring that each simulation
run is driven by nonoverlapping segments of the random number cycle.
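To make the seeding idea concrete, the following sketch (in Python, not ProModel) runs a toy single-server queue once per seed value; the model itself is hypothetical, and only the one-seed-per-replication pattern mirrors the discussion above.

import numpy as np

def run_once(seed, n_customers=1000):
    rng = np.random.default_rng(seed)           # independent stream per seed
    service = rng.exponential(0.9, n_customers)
    arrivals = np.cumsum(rng.exponential(1.0, n_customers))
    finish = 0.0
    waits = []
    for a, s in zip(arrivals, service):
        start = max(a, finish)                  # wait if the server is busy
        waits.append(start - a)
        finish = start + s
    return np.mean(waits)                       # one output observation

# Different seeds give statistically independent observations of the response.
observations = [run_once(seed) for seed in (11, 23, 37, 51, 68)]
print(observations)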
Point Estimates
A point estimate is a single value estimate of a parameter of interest. Point estimates are calculated for the mean and standard deviation of the population. To
estimate the mean of the population (denoted as μ), we simply calculate the average of the sample values (denoted as x̄):
$$\bar{x} = \frac{\sum_{i=1}^{n} x_i}{n}$$
where n is the sample size (number of observations) and xi is the value of the ith
observation. The sample mean x̄ estimates the population mean μ.
The standard deviation for the population (denoted as σ), which is a measure
of the spread of data values in the population, is similarly estimated by calculating the standard deviation of the sample of values (denoted as s):
$$s = \sqrt{\frac{\sum_{i=1}^{n} [x_i - \bar{x}]^2}{n - 1}}$$
The sample variance s², used to estimate the variance of the population σ², is obtained by squaring the sample standard deviation.
Let's suppose, for example, that we are interested in determining the mean or
average number of customers getting a haircut at Buddy's Style Shop on Saturday
morning. Buddy opens his barbershop at 8:00 A.M. and closes at noon on Saturday. In
order to determine the exact value of the true average number of customers getting
a haircut on Saturday morning (μ), we would have to compute the average based on
the number of haircuts given on all Saturday mornings that Buddy's Style Shop has
been and will be open (that is, the complete population of observations). Not wanting to work that hard, we decide to get an estimate of the true mean by spending the
next 12 Saturday mornings watching TV at Buddy's and recording the number of
customers that get a haircut between 8:00 A.M. and 12:00 noon (Table 9.1).
TABLE 9.1 Number of Haircuts Given on 12 Saturdays

Replication (i)    Number of Haircuts xi
1                  21
2                  16
3                  8
4                  11
5                  17
6                  16
7                  6
8                  14
9                  15
10                 16
11                 14
12                 10

Sample mean x̄ = 13.67
Sample standard deviation s = 4.21
We have replicated the experiment 12 times and have 12 sample observations of the number of customers getting haircuts on Saturday morning
(x1, x2, ..., x12). Note that there are many different values for the random variable
(number of haircuts) in the sample of 12 observations. So if only one replication
had been conducted and the single observation was used to estimate the true but
unknown mean number of haircuts given, the estimate could potentially be way
off. Using the observations in Table 9.1, we calculate the sample mean as follows:
$$\bar{x} = \frac{\sum_{i=1}^{12} x_i}{12} = \frac{21 + 16 + 8 + \cdots + 10}{12} = 13.67 \text{ haircuts}$$
The sample mean of 13.67 haircuts estimates the unknown true mean value μ
for the number of haircuts given on Saturday at the barbershop. Taking additional
samples generally gives a more accurate estimate of the unknown μ. However, the
experiment would need to be replicated on each Saturday that Buddy's has been
and will be open to determine the true mean or expected number of haircuts given.
In addition to estimating the population mean, we can also calculate an
estimate for the population standard deviation based on the sample size of 12
observations and a sample mean of 13.67 haircuts. This is done by calculating the
standard deviation s of the observations in Table 9.1 as follows:
$$s = \sqrt{\frac{\sum_{i=1}^{12} [x_i - 13.67]^2}{12 - 1}} = 4.21 \text{ haircuts}$$
The sample standard deviation s provides only an estimate of the true but
unknown population standard deviation σ. x̄ and s are single values; thus they are
referred to as point estimators of the population parameters μ and σ. Also, note
that x̄ and s are random variables. As such, they will have different values if based
on another set of 12 independent observations from the barbershop.
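As a rough check of these calculations, a short Python sketch over the 12 observations from Table 9.1 might look like this (numpy's ddof=1 option gives the n − 1 divisor used above):

import numpy as np

x = np.array([21, 16, 8, 11, 17, 16, 6, 14, 15, 16, 14, 10])

x_bar = x.mean()              # sample mean, estimates mu
s = x.std(ddof=1)             # sample standard deviation, estimates sigma
s2 = x.var(ddof=1)            # sample variance, estimates sigma squared

print(round(x_bar, 2), round(s, 2), round(s2, 2))   # 13.67, 4.21, 17.7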
Interval Estimates
A point estimate, by itself, gives little information about how accurately it estimates the true value of the unknown parameter. Interval estimates constructed
using x̄ and s, on the other hand, provide information about how far off the point
estimate x̄ might be from the true mean μ. The method used to determine this is
referred to as confidence interval estimation.
A confidence interval is a range within which we can have a certain level
of confidence that the true mean falls. The interval is symmetric about x̄, and the
distance that each endpoint is from x̄ is called the half-width (hw). A confidence
interval, then, is expressed as the probability P that the unknown true mean μ lies
within the interval x̄ ± hw. The probability P is called the confidence level.
If the sample observations used to compute x̄ and s are independent and normally distributed, the following equation can be used to calculate the half-width
of a confidence interval for a given level of confidence:
$$hw = \frac{t_{n-1,\alpha/2}\, s}{\sqrt{n}}$$
where t(n−1,α/2) is a factor that can be obtained from the Student's t table in Appendix B. The values are identified in the Student's t table according to the value
of α/2 and the degrees of freedom (n − 1). The term α is the complement of P.
That is, α = 1 − P, and it is referred to as the significance level. The significance
level may be thought of as the risk level or probability that μ will fall outside
the confidence interval. Therefore, the probability that μ will fall within the
confidence interval is 1 − α. Thus confidence intervals are often stated as
$$P(\bar{x} - hw \le \mu \le \bar{x} + hw) = 1 - \alpha$$
and are read as "the probability that the true but unknown mean falls within the interval (x̄ − hw) to (x̄ + hw) is equal to 1 − α." The confidence interval is traditionally referred to as a 100(1 − α) percent
confidence interval.
Assuming the data from the barbershop example are independent and normally distributed (this assumption is discussed in Section 9.3), a 95 percent confidence interval is constructed as follows:
Given: P = confidence level = 0.95
α = significance level = 1 − P = 1 − 0.95 = 0.05
n = sample size = 12
x̄ = 13.67 haircuts
s = 4.21 haircuts
From the Student's t table in Appendix B, we find t(n−1,α/2) = t(11,0.025) = 2.201. The
half-width is computed as follows:
$$hw = \frac{(t_{11,0.025})\, s}{\sqrt{n}} = \frac{(2.201)\, 4.21}{\sqrt{12}} = 2.67 \text{ haircuts}$$
The lower and upper limits of the 95 percent confidence interval are calculated as
follows:
Lower limit = x̄ − hw = 13.67 − 2.67 = 11.00 haircuts
Upper limit = x̄ + hw = 13.67 + 2.67 = 16.34 haircuts
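A sketch of the same interval in Python, with scipy's Student's t quantile standing in for the Appendix B table lookup:

import numpy as np
from scipy import stats

x = np.array([21, 16, 8, 11, 17, 16, 6, 14, 15, 16, 14, 10])
n, x_bar, s = len(x), x.mean(), x.std(ddof=1)

alpha = 0.05
t = stats.t.ppf(1 - alpha / 2, n - 1)          # t_(11, 0.025) = 2.201
hw = t * s / np.sqrt(n)                        # half-width, about 2.67

print(round(x_bar - hw, 2), round(x_bar + hw, 2))   # roughly 11.00 to 16.34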
Actually, when dealing with the output from a simulation model, techniques
can sometimes reduce the variability of the output from the model without changing the expected value of the output. These are called variance reduction techniques and are covered in Chapter 10.
Setting the half-width hw equal to a desired absolute error amount e and solving for the sample size gives
$$n = \left(\frac{t_{n-1,\alpha/2}\, s}{e}\right)^2$$
However, this equation cannot be solved directly because n appears on each side.
So, to estimate the number of replications needed, we replace t(n−1,α/2) with Z(α/2),
which depends only on α. The Z(α/2) is from the standard normal distribution, and
its value can be found in the last row of the Student's t table in Appendix B. Note
that Z(α/2) = t(∞,α/2), where ∞ denotes infinity. The revised equation is
$$n' = \left(\frac{Z_{\alpha/2}\, s}{e}\right)^2$$
where n′ is a rough approximation for the number of replications that will provide
an adequate sample size for meeting the desired absolute error amount e and significance level α.
Before this equation can be used, an initial sample of observations must be collected to compute the sample standard deviation s of the output from the system.
Actually, when using the Z(α/2) value, an assumption is made that the true standard
deviation σ for the complete population of observations is known. Of course, σ is
not known. However, it is reasonable to approximate σ with the sample standard deviation s to get a rough approximation for the required number of replications.
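A quick sketch of this approximation in Python, assuming the pilot statistics from the barbershop example and a target absolute error of e = 2.0 haircuts:

import math
from scipy import stats

s = 4.21          # pilot sample standard deviation (12 Saturdays)
e = 2.0           # desired absolute error (haircuts)
alpha = 0.05

z = stats.norm.ppf(1 - alpha / 2)      # Z_(0.025) = 1.96
n_prime = (z * s / e) ** 2             # about 17.02
print(math.ceil(n_prime))              # round up -> 18 replications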
After collecting six additional observations beyond the original 12 (18 Saturdays in total), the extended data set and its summary statistics are as follows:

Replication (i)    Number of Haircuts xi
1                  21
2                  16
3                  8
4                  11
5                  17
6                  16
7                  6
8                  14
9                  15
10                 16
11                 14
12                 10
13                 7
14                 9
15                 18
16                 13
17                 16
18                 8

Sample mean x̄ = 13.06
Sample standard deviation s = 4.28
The new half-width based on the 18 observations is
$$hw = \frac{(t_{17,0.025})\, s}{\sqrt{n}} = \frac{(2.11)\, 4.28}{\sqrt{18}} = 2.13 \text{ haircuts}$$
The lower and upper limits for the new 95 percent confidence interval are calculated as follows:
Lower limit = x̄ − hw = 13.06 − 2.13 = 10.93 haircuts
Upper limit = x̄ + hw = 13.06 + 2.13 = 15.19 haircuts
It can now be asserted with 95 percent confidence that the true but unknown mean
falls between 10.93 haircuts and 15.19 haircuts (10.93 ≤ μ ≤ 15.19).
Note that with the additional observations, the half-width of the confidence
interval has indeed decreased. However, the half-width is larger than the absolute
error value planned for (e = 2.0). This is just luck, or bad luck, because the new
half-width could just as easily have been smaller. Why? First, 18 replications was
only a rough estimate of the number of observations needed. Second, the number
of haircuts given on a Saturday morning at the barbershop is a random variable.
Each collection of observations of the random variable will likely differ from previous collections. Therefore, the values computed for x̄, s, and hw will also differ.
This is the nature of statistics and why we deal only in estimates.
We have been expressing our target amount of error e in our point estimate x̄
as an absolute value (hw = e). In the barbershop example, we selected an absolute
value of e = 2.00 haircuts as our target value. However, it is sometimes more convenient to work in terms of a relative error (re) value (hw = re|μ|). This allows us
to talk about the percentage error in our point estimate in place of the absolute
error. Percentage error is the relative error multiplied by 100 (that is, 100re percent). To approximate the number of replications needed to obtain a point estimate
x̄ with a certain percentage error, we need only change the denominator of the n′
equation used earlier. The relative error version of the equation becomes
$$n' = \left(\frac{Z_{\alpha/2}\, s}{\left(\frac{re}{1+re}\right)\bar{x}}\right)^2$$
where re denotes the relative error. The re/(1 + re) part of the denominator is an
adjustment needed to realize the desired re value because we use x̄ to estimate μ
(see Chapter 9 of Law and Kelton 2000 for details). The appeal of this approach
is that we can select a desired percentage error without prior knowledge of the
magnitude of the value of μ.
As an example, say that after recording the number of haircuts given at the
barbershop on 12 Saturdays (n = 12 replications of the experiment), we wish to
determine the approximate number of replications needed to estimate the mean
number of haircuts given per day with an error percentage of 17.14 percent and a
confidence level of 95 percent. We apply our equation using the sample mean and
sample standard deviation from Table 9.1.
Given: P = confidence level = 0.95
α = significance level = 1 − P = 1 − 0.95 = 0.05
Z(α/2) = Z(0.025) = 1.96 from Appendix B
re = 0.1714
x̄ = 13.67
s = 4.21
$$n' = \left(\frac{(Z_{0.025})\, s}{\left(\frac{re}{1+re}\right)\bar{x}}\right)^2 = \left(\frac{(1.96)\, 4.21}{\left(\frac{0.1714}{1+0.1714}\right)13.67}\right)^2 = 17.02 \text{ observations}$$
Thus n′ ≈ 18 observations after rounding up. This is the same result computed earlier
for an absolute error of e = 2.00 haircuts. This occurred because we purposely
selected re = 0.1714 to produce a value of 2.00 for the equation's denominator in
order to demonstrate the equivalency of the different methods for approximating
the number of replications needed to achieve a desired level of precision in the
point estimate x̄. The SimRunner software uses the relative error methodology to
provide estimates for the number of replications needed to satisfy the specified
level of precision for a given significance level α.
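The same computation for the relative-error form, sketched in Python with the pilot values assumed above:

import math
from scipy import stats

x_bar, s = 13.67, 4.21      # from the 12 pilot observations
re = 0.1714                 # target 17.14 percent relative error
alpha = 0.05

z = stats.norm.ppf(1 - alpha / 2)
n_prime = (z * s / ((re / (1 + re)) * x_bar)) ** 2
print(round(n_prime, 2), math.ceil(n_prime))   # 17.02 -> 18 replications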
In general, the accuracy of the estimates improves as the number of replications increases. However, after a point, only modest improvements in accuracy
(reductions in the half-width of the confidence interval) are made through conducting additional replications of the experiment. Therefore, it is sometimes necessary to compromise on a desired level of accuracy because of the time required
to run a large number of replications of the model.
The ProModel simulation software automatically computes confidence intervals. Therefore, there is really no need to estimate the sample size required for a
desired half-width using the method given in this section. Instead, the experiment
could be replicated, say, 10 times, and the half-width of the resulting confidence
interval checked. If the desired half-width is not achieved, additional replications
of the experiment are made until it is. The only real advantage of estimating the
number of replications in advance is that it may save time over the trial-and-error
approach of repeatedly checking the half-width and running additional replications until the desired confidence interval is achieved.
TABLE 9.3 Within-Run Observations and Replication Averages

Replication (i)    Within Run Observations       Average xi
1                  y11, y12, ..., y1m            x1
2                  y21, y22, ..., y2m            x2
3                  y31, y32, ..., y3m            x3
4                  y41, y42, ..., y4m            x4
...                ...                           ...
n                  yn1, yn2, ..., ynm            xn
simulated to estimate the mean time that entities wait in queues in that system.
The experiment is replicated several times to collect n independent sample observations, as was done in the barbershop example. As an entity exits the system, the
time that the entity spent waiting in queues is recorded. The waiting time is denoted by yij in Table 9.3, where the subscript i denotes the replication from which
the observation came and the subscript j denotes the value of the counter used to
count entities as they exit the system. For example, y32 is the waiting time for the
second entity that was processed through the system in the third replication. These
values are recorded during a particular run (replication) of the model and are
listed under the column labeled "Within Run Observations" in Table 9.3.
Note that the within-run observations for a particular replication, say the ith
replication, are not usually independent because of the correlation between
consecutive observations. For example, when the simulation starts and the first
entity begins processing, there is no waiting time in the queue. Obviously, the
more congested the system becomes at various times throughout the simulation,
the longer entities will wait in queues. If the waiting time observed for one entity
is long, it is highly likely that the waiting time for the next entity observed is
going to be long, and vice versa. Observations exhibiting this correlation between
consecutive observations are said to be autocorrelated. Furthermore, the within-run
observations for a particular replication are often nonstationary in that they
do not follow an identical distribution throughout the simulation run. Therefore,
they cannot be directly used as observations for statistical methods that require independent and identically distributed observations, such as those used in this
chapter.
At this point, it seems that we are a long way from getting a usable set of
observations. However, let's focus our attention on the last column in Table 9.3,
labeled "Average," which contains the x1 through xn values. xi denotes the average
waiting time of the entities processed during the ith simulation run (replication)
and is computed as follows:
$$x_i = \frac{\sum_{j=1}^{m} y_{ij}}{m}$$
where m is the number of entities processed through the system and yij is the time
that the jth entity processed through the system waited in queues during the ith
replication. Although not as formally stated, you used this equation in the last row
of the ATM spreadsheet simulation of Chapter 3 to compute the average waiting
time of the m = 25 customers (Table 3.2). An xi value for a particular replication
represents only one possible value for the mean time an entity waits in queues,
and it would be risky to make a statement about the true waiting time from a
single observation. However, Table 9.3 contains an xi value for each of the n
independent replications of the simulation. These xi values are statistically
independent if different seed values are used to initialize the random number generator for each replication (as discussed in Section 9.2.1). The xi values are often
identically distributed as well. Therefore, the sample of xi values can be used for
statistical methods requiring independent and identically distributed observations.
Thus we can use the xi values to estimate the true but unknown average time
Thus we can use the xi values to estimate the true but unknown average time
an entity waits in queues, with a confidence interval computed by
$$\bar{x} \pm \frac{t_{n-1,\alpha/2}\, s}{\sqrt{n}}$$
where t(n−1,α/2) is a value from the Student's t table in Appendix B for an α level of
significance.
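A small sketch of this method end to end, assuming a hypothetical matrix of within-run waiting times (the numbers below are made up for illustration): row means give the xi observations of Table 9.3, which then feed the t-based interval.

import numpy as np
from scipy import stats

# Hypothetical data: n = 5 replications (rows), m = 4 waiting times each.
y = np.array([[1.2, 3.4, 2.2, 4.1],
              [0.8, 2.9, 3.5, 2.6],
              [2.1, 4.0, 1.9, 3.3],
              [1.5, 2.2, 2.8, 3.9],
              [0.9, 3.1, 2.4, 2.7]])

x = y.mean(axis=1)                 # x_i: one observation per replication
n = len(x)
hw = stats.t.ppf(0.975, n - 1) * x.std(ddof=1) / np.sqrt(n)
print(x.mean() - hw, x.mean() + hw)   # 95 percent CI for the mean wait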
ProModel automatically computes point estimates and confidence intervals
using this method. Figure 9.2 presents the output produced from running the
ProModel version of the ATM simulation of Lab Chapter 3 for five replications.
The column under the Locations section of the output report labeled "Average
Minutes Per Entry" displays the average amount of time that customers waited in
the queue during each of the five replications. These values correspond to the xi
values in Table 9.3. Note that in addition to the sample mean and standard deviation, ProModel also provides a 95 percent confidence interval.
Sometimes the output measure being evaluated is not based on the mean, or
sum, of a collection of random values. For example, the output measure may be
the maximum number of entities that simultaneously wait in a queue. (See the
Maximum Contents column in Figure 9.2.) In such cases, the output measure
may not be normally distributed, and because it is not a sum, the central limit
FIGURE 9.2
Replication technique used on ATM simulation of Lab Chapter 3.
------------------------------------------------------------------
General Report
Output from C:\Bowden\2nd Edition\ATM System Ch9.MOD [ATM System]
Date: Nov/26/2002    Time: 04:24:27 PM
------------------------------------------------------------------
Scenario        : Normal Run
Replication     : All
Period          : Final Report (1000 hr to 1500 hr Elapsed: 500 hr)
Warmup Time     : 1000
Simulation Time : 1500 hr
------------------------------------------------------------------
LOCATIONS

                                           Average
Location   Scheduled            Total      Minutes    Average   Maximum   Current
Name       Hours      Capacity  Entries    Per Entry  Contents  Contents  Contents  % Util
---------  ---------  --------  ---------  ---------  --------  --------  --------  ------
ATM Queue  500        999999    9903       9.457      3.122     26        18        0.0     (Rep 1)
ATM Queue  500        999999    9866       8.912      2.930     24        0         0.0     (Rep 2)
ATM Queue  500        999999    9977       11.195     3.723     39        4         0.0     (Rep 3)
ATM Queue  500        999999    10006      8.697      2.900     26        3         0.0     (Rep 4)
ATM Queue  500        999999    10187      10.841     3.681     32        0         0.0     (Rep 5)
ATM Queue  500        999999    9987.8     9.820      3.271     29.4      5         0.0     (Average)
ATM Queue  0          0         124.654    1.134      0.402     6.148     7.483     0.0     (Std. Dev.)
ATM Queue  500        999999    9833.05    8.412      2.772     21.767    -4.290    0.0     (95% C.I. Low)
ATM Queue  500        999999    10142.6    11.229     3.771     37.032    14.290    0.0     (95% C.I. High)
runs two shifts with an hour break during each shift in which everything momentarily stops. Break and third-shift times are excluded from the model because work
always continues exactly as it left off before the break or end of shift. The length
of the simulation is determined by how long it takes to get a representative steady-state
reading of the model behavior.
Nonterminating simulations can, and often do, change operating characteristics after a period of time, but usually only after enough time has elapsed to establish a steady-state condition. Take, for example, a production system that runs
10,000 units per week for 5 weeks and then increases to 15,000 units per week for
the next 10 weeks. The system would have two different steady-state periods. Oil
and gas refineries and distribution centers are additional examples of nonterminating systems.
Contrary to what one might think, a steady-state condition is not one in which
the observations are all the same, or even one for which the variation in observations is any less than during a transient condition. It means only that all observations throughout the steady-state period will have approximately the same
distribution. Once in a steady state, if the operating rules change or the rate at
which entities arrive changes, the system reverts again to a transient state until the
system has had time to start reflecting the long-term behavior of the new operating circumstances. Nonterminating systems begin with a warm-up (or transient)
state and gradually move to a steady state. Once the initial transient phase has
diminished to the point where the impact of the initial condition on the system's
response is negligible, we consider it to have reached steady state.
the system, the utilization of resources) over the period simulated, as discussed
in Section 9.3.
The answer to the question of how many replications are necessary is usually
based on the analyst's desired half-width of a confidence interval. As a general
guideline, begin by making 10 independent replications of the simulation and add
more replications until the desired confidence interval half-width is reached. For
the barbershop example, 18 independent replications of the simulation were required to achieve the desired confidence interval for the expected number of haircuts given on Saturday mornings.
FIGURE 9.3 Behavior of the model's output response as it reaches steady state. [Plot of the averaged output response measure (ȳ) against simulation time: a transient state precedes the point where the warm-up ends, after which the distribution of the output response remains stable through the steady state.]
from the beginning of the run and use the remaining observations to estimate the
true mean response of the model.
While several methods have been developed for estimating the warm-up
time, the easiest and most straightforward approach is to run a preliminary simulation of the system, preferably with several (5 to 10) replications, average the
output values at each time step across replications, and observe at what time the
system reaches statistical stability. This usually occurs when the averaged output
response begins to flatten out or a repeating pattern emerges. Plotting each data
point (averaged output response) and connecting them with a line usually helps to
identify when the averaged output response begins to flatten out or begins a repeating pattern. Sometimes, however, the variation of the output response is so
large that it makes the plot erratic, and it becomes difficult to visually identify the
end of the warm-up period. Such a case is illustrated in Figure 9.4. The raw data
plot in Figure 9.4 was produced by recording a model's output response for a
queue's average contents during 50-hour time periods (time slices) and averaging
the output values from each time period across five replications. In this case, the
model was initialized with several entities in the queue. Therefore, we need to
eliminate this apparent upward bias before recording observations of the queue's
average contents. Table 9.4 shows this model's output response for the 20 time
periods (50 hours each) for each of the five replications. The raw data plot in Figure 9.4 was constructed using the 20 values under the ȳi column in Table 9.4.
When the model's output response is erratic, as in Figure 9.4, it is useful
to smooth it with a moving average. A moving average is constructed by
calculating the arithmetic average of the w most recent data points (averaged
output responses) in the data set. You have to select the value of w, which is called
the moving-average window. As you increase the value of w, you increase the
smoothness of the moving average plot. An indicator for the end of the warm-up
FIGURE 9.4
SimRunner uses the Welch moving-average method to help identify the end of the warm-up period
that occurs around the third or fourth period (150 to 200 hours) for this model.
time is when the moving average plot appears to flatten out. Thus the routine is to
begin with a small value of w and increase it until the resulting moving average plot
begins to flatten out.
The moving-average plot in Figure 9.4 with a window of six (w = 6) helps
to identify the end of the warm-up period, which occurs when the moving average plot appears
to flatten out at around the third or fourth period (150 to 200 hours). Therefore,
we ignore the observations up to the 200th hour and record only those after the
200th hour when we run the simulation. The moving-average plot in Figure 9.4
was constructed using the 14 values under the ȳi(6) column in Table 9.4 that were
computed using
$$\bar{y}_i(w) = \begin{cases} \dfrac{\sum_{s=-w}^{w} \bar{y}_{i+s}}{2w+1} & \text{if } i = w+1, \ldots, m-w \\[2mm] \dfrac{\sum_{s=-(i-1)}^{i-1} \bar{y}_{i+s}}{2i-1} & \text{if } i = 1, \ldots, w \end{cases}$$
where m denotes the total number of periods and w denotes the window of the
moving average (m = 20 and w = 6 in this example).
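A sketch of this two-case equation in Python; applied to the ȳi column of Table 9.4 with w = 6, it reproduces the ȳi(6) column:

import numpy as np

def welch_moving_average(y, w):
    y = np.asarray(y, dtype=float)
    m = len(y)
    out = []
    for i in range(1, m - w + 1):          # i = 1, ..., m - w
        if i <= w:
            window = y[0:2 * i - 1]        # 2i - 1 points centered at i
        else:
            window = y[i - w - 1:i + w]    # 2w + 1 points centered at i
        out.append(window.mean())
    return out

yi = [5.65, 3.08, 3.12, 3.75, 3.05, 4.30, 3.90, 3.65, 4.92, 3.60,
      3.04, 4.13, 4.03, 5.26, 3.19, 4.20, 3.28, 4.41, 4.27, 3.55]
print([round(v, 2) for v in welch_moving_average(yi, w=6)])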
TABLE 9.4 Welch Moving Average Based on Five Replications and 20 Periods

                   Average Contents During Period                Total      Average
                                                                 Contents   Contents
Period                                                           for        per Period,      Welch Moving
(i)     Time   Rep 1   Rep 2   Rep 3   Rep 4   Rep 5             Period,    ȳi =             Average,
                                                                 Totali     Totali/5         ȳi(6)
1       50     4.41    4.06    6.37    11.72   1.71              28.27      5.65             5.65
2       100    3.15    2.95    2.94    3.52    2.86              15.42      3.08             3.95
3       150    2.58    2.50    3.07    3.14    4.32              15.61      3.12             3.73
4       200    2.92    3.07    4.48    4.79    3.47              18.73      3.75             3.84
5       250    3.13    3.28    2.34    3.32    3.19              15.26      3.05             3.94
6       300    2.51    3.07    5.45    5.15    5.34              21.52      4.30             3.82
7       350    5.09    3.77    4.44    3.58    2.63              19.51      3.90             3.86
8       400    3.15    2.89    3.63    4.43    4.15              18.25      3.65             3.83
9       450    2.79    3.40    9.78    4.13    4.48              24.58      4.92             3.84
10      500    4.07    3.62    4.50    2.38    3.42              17.99      3.60             3.93
11      550    3.42    3.74    2.46    3.08    2.49              15.19      3.04             3.89
12      600    3.90    1.47    5.75    5.34    4.19              20.65      4.13             3.99
13      650    3.63    3.77    2.14    4.24    6.36              20.14      4.03             3.99
14      700    9.12    4.25    3.83    3.19    5.91              26.30      5.26             3.96
15      750    3.88    3.54    3.08    2.94    2.51              15.95      3.19
16      800    3.16    6.91    2.70    5.64    2.57              20.98      4.20
17      850    5.34    3.17    2.47    2.73    2.68              16.39      3.28
18      900    2.84    3.54    2.33    10.14   3.20              22.05      4.41
19      950    2.65    4.64    5.18    4.97    3.93              21.37      4.27
20      1000   3.27    4.68    3.39    2.95    3.46              17.75      3.55
$$\bar{y}_1(6) = \frac{\bar{y}_1}{1} = \frac{5.65}{1} = 5.65$$
$$\bar{y}_2(6) = \frac{\bar{y}_1 + \bar{y}_2 + \bar{y}_3}{3} = \frac{5.65 + 3.08 + 3.12}{3} = 3.95$$
$$\bar{y}_3(6) = \frac{\bar{y}_1 + \bar{y}_2 + \bar{y}_3 + \bar{y}_4 + \bar{y}_5}{5} = \frac{5.65 + 3.08 + 3.12 + 3.75 + 3.05}{5} = 3.73$$
$$\vdots$$
$$\bar{y}_6(6) = \frac{\bar{y}_1 + \bar{y}_2 + \bar{y}_3 + \bar{y}_4 + \bar{y}_5 + \bar{y}_6 + \bar{y}_7 + \bar{y}_8 + \bar{y}_9 + \bar{y}_{10} + \bar{y}_{11}}{11} = 3.82$$
Notice the pattern that as i increases, we average more data points together,
with the ith data point appearing in the middle of the sum in the numerator (an
equal number of data points are on each side of the center data point). This continues until we reach the (w + 1)th moving average, when we switch to the top
part of the ȳi(w) equation. For our example, the switch occurs at the 7th moving
average because w = 6. From this point forward, we average the 2w + 1 closest
data points. For our example, we average the 2(6) + 1 = 13 closest data points
(the ith data point plus the w = 6 closest data points above it and the w = 6 closest data points below it) in Table 9.4 as follows:
$$\bar{y}_7(6) = \frac{\bar{y}_1 + \bar{y}_2 + \bar{y}_3 + \bar{y}_4 + \bar{y}_5 + \bar{y}_6 + \bar{y}_7 + \bar{y}_8 + \bar{y}_9 + \bar{y}_{10} + \bar{y}_{11} + \bar{y}_{12} + \bar{y}_{13}}{13} = 3.86$$
$$\vdots$$
$$\bar{y}_{14}(6) = \frac{\bar{y}_8 + \bar{y}_9 + \bar{y}_{10} + \bar{y}_{11} + \bar{y}_{12} + \bar{y}_{13} + \bar{y}_{14} + \bar{y}_{15} + \bar{y}_{16} + \bar{y}_{17} + \bar{y}_{18} + \bar{y}_{19} + \bar{y}_{20}}{13} = 3.96$$
Eventually we run out of data and have to stop. The stopping point occurs when
i = m − w. In our case, with m = 20 periods and w = 6, we stopped when i =
20 − 6 = 14.
The development of this graphical method for estimating the end of the
warm-up time is attributed to Welch (Law and Kelton 2000). This method is
sometimes referred to as the Welch moving-average method and is implemented
in SimRunner. Note that when applying the method, the length of each replication should be relatively long, and the replications should allow even rarely
occurring events, such as infrequent downtimes, to occur many times. Law and
Kelton (2000) recommend that w not exceed about m/4. To
determine a satisfactory warm-up time using Welch's method, one or more key
output response variables, such as the average number of entities in a queue or the
average utilization of a resource, should be monitored for successive periods.
Once these variables begin to exhibit steady state, a good practice to follow would
be to extend the warm-up period by 20 to 30 percent. This approach is simple,
conservative, and usually satisfactory. The danger is in underestimating the
warm-up period, not overestimating it.
FIGURE 9.5 Individual statistics on the output measure are computed for each batch interval. [Plot of the output response measure (y) against simulation time: after the warm-up ends, the run is divided into batch intervals 1 through 5.]
TABLE 9.5 Batch Intervals and Their Average Output Response

Batch Interval (i)    Within Batch Observations     Average xi
1                     y11, y12, ..., y1m            x1
2                     y21, y22, ..., y2m            x2
3                     y31, y32, ..., y3m            x3
4                     y41, y42, ..., y4m            x4
5                     y51, y52, ..., y5m            x5
...                   ...                           ...
n                     yn1, yn2, ..., ynm            xn
should be based on time. If the output measure is based on observations (like the
waiting time of entities in a queue), then the batch interval is typically based on the
number of observations.
Table 9.5 details how a single, long simulation run (one replication), like the
one shown in Figure 9.5, is partitioned into batch intervals for the purpose of obtaining (approximately) independent and identically distributed observations of a
simulation model's output response. Note the similarity between Table 9.3 of
Section 9.3 and Table 9.5. As in Table 9.3, the observations represent the time an
entity waited in queues during the simulation. The waiting time for an entity is
denoted by yij, where the subscript i denotes the interval of time (batch interval)
from which the observation came and the subscript j denotes the value of the
counter used to count entities as they exit the system during a particular batch interval. For example, y23 is the waiting time for the third entity that was processed
through the system during the second batch interval of time.
Because these observations are all from a single run (replication), they are
not statistically independent. For example, the waiting time of the mth entity exiting the system during the first batch interval, denoted y1m, is correlated with the
waiting time of the first entity to exit the system during the second batch interval,
denoted y21. This is because if the waiting time observed for one entity is long, it
is likely that the waiting time for the next entity observed is going to be long, and
vice versa. Therefore, adjacent observations in Table 9.5 will usually be autocorrelated. However, the value of y14 can be somewhat uncorrelated with the
value of y24 if they are spaced far enough apart that the conditions which resulted in the waiting time value of y14 occurred so long ago that they have little or
no influence on the waiting time value of y24. Therefore, most of the observations
within the interior of one batch interval can become relatively uncorrelated with
most of the observations in the interior of other batch intervals, provided they are
spaced sufficiently far apart. Thus the goal is to extend the batch interval length
until there is very little correlation (you cannot totally eliminate it) between the
observations appearing in different batch intervals. When this occurs, it is reasonable to assume that observations within one batch interval are independent of the
observations within other batch intervals.
With the independence assumption in hand, the observations within a batch
interval can be treated in the same manner as we treated observations within a
replication in Table 9.3. Therefore, the values in the Average column in Table 9.5,
which represent the average amount of time the entities processed during the
ith batch interval waited in queues, are computed as follows:
$$x_i = \frac{\sum_{j=1}^{m} y_{ij}}{m}$$
where m is the number of entities processed through the system during the batch interval of time and yij is the time that the jth entity processed through the system
waited in queues during the ith batch interval. The xi values in the Average column
are approximately independent and identically distributed and can be used to compute a sample mean for estimating the true average time an entity waits in queues:
$$\bar{x} = \frac{\sum_{i=1}^{n} x_i}{n}$$
The sample standard deviation is also computed as before:
$$s = \sqrt{\frac{\sum_{i=1}^{n} [x_i - \bar{x}]^2}{n - 1}}$$
And, if we assume that the observations (x1 through xn) are normally distributed (or
at least approximately normally distributed), a confidence interval is computed by
$$\bar{x} \pm \frac{t_{n-1,\alpha/2}\, s}{\sqrt{n}}$$
where t(n−1,α/2) is a value from the Student's t table in Appendix B for an α level of
significance.
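A batch-means sketch in Python; the waiting-time series below is synthetic (gamma-distributed draws) and stands in for one long post-warm-up run:

import numpy as np
from scipy import stats

def batch_means_ci(series, n_batches, alpha=0.05):
    m = len(series) // n_batches                     # observations per batch
    trimmed = np.asarray(series[:m * n_batches], dtype=float)
    x = trimmed.reshape(n_batches, m).mean(axis=1)   # x_i per batch interval
    hw = (stats.t.ppf(1 - alpha / 2, n_batches - 1)
          * x.std(ddof=1) / np.sqrt(n_batches))
    return x.mean() - hw, x.mean() + hw

# Hypothetical example: 5,000 recorded waits split into 10 batch intervals.
rng = np.random.default_rng(7)
waits = rng.gamma(2.0, 3.0, size=5000)
print(batch_means_ci(waits, n_batches=10))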
ProModel automatically computes point estimates and confidence intervals
using the above method. Figure 9.6 is a ProModel output report from a single,
FIGURE 9.6
Batch means technique used on ATM simulation in Lab Chapter 3. Warm-up period is from 0 to 1,000 hours. Each
batch interval is 500 hours in length.
----------------------------------------------------------------
General Report
Output from C:\Bowden\2nd Edition\ATM System Ch9.MOD [ATM System]
Date: Nov/26/2002    Time: 04:44:24 PM
----------------------------------------------------------------
Scenario        : Normal Run
Replication     : 1 of 1
Period          : All
Warmup Time     : 1000 hr
Simulation Time : 3500 hr
----------------------------------------------------------------
LOCATIONS

                                           Average
Location   Scheduled            Total      Minutes    Average   Maximum   Current
Name       Hours      Capacity  Entries    Per Entry  Contents  Contents  Contents  % Util
---------  ---------  --------  ---------  ---------  --------  --------  --------  ------
ATM Queue  500        999999    9903       9.457      3.122     26        18        0.0     (Batch 1)
ATM Queue  500        999999    10065      9.789      3.284     24        0         0.0     (Batch 2)
ATM Queue  500        999999    9815       8.630      2.823     23        16        0.0     (Batch 3)
ATM Queue  500        999999    9868       8.894      2.925     24        1         0.0     (Batch 4)
ATM Queue  500        999999    10090      12.615     4.242     34        0         0.0     (Batch 5)
ATM Queue  500        999999    9948.2     9.877      3.279     26.2      7         0.0     (Average)
ATM Queue  0          0         122.441    1.596      0.567     4.494     9.165     0.0     (Std. Dev.)
ATM Queue  500        999999    9796.19    7.895      2.575     20.620    -4.378    0.0     (95% C.I. Low)
ATM Queue  500        999999    10100.2    11.860     3.983     31.779    18.378    0.0     (95% C.I. High)
long run of the ATM simulation in Lab Chapter 3 with the output divided into five
batch intervals of 500 hours each after a warm-up time of 1,000 hours. The column under the Locations section of the output report labeled "Average Minutes
Per Entry" displays the average amount of time that customer entities waited in
the queue during each of the five batch intervals. These values correspond to the
xi values of Table 9.5. Note that the 95 percent confidence interval automatically
computed by ProModel in Figure 9.6 is comparable, though not identical, to the
95 percent confidence interval in Figure 9.2 for the same output statistic.
Establishing the batch interval length such that the observations x1, x2, ..., xn
of Table 9.5 (assembled after the simulation reaches steady state) are approximately independent is difficult and time-consuming. There is no foolproof method
for doing this. However, if you can generate a large number of observations, say
n ≥ 100, you can gain some insight about the independence of the observations
by estimating the autocorrelation between adjacent observations (lag-1 autocorrelation). See Chapter 6 for an introduction to tests for independence and autocorrelation plots. The observations are treated as being independent if their lag-1
autocorrelation is zero. Unfortunately, current methods available for estimating the
lag-1 autocorrelation are not very accurate. Thus we may be persuaded that our observations are almost independent if the estimated lag-1 autocorrelation computed
from a large number of observations falls between −0.20 and +0.20. The word
almost is emphasized because there is really no such thing as almost independent
(the observations are independent or they are not). What we are indicating here is
that we are leaning toward calling the observations independent when the estimate
of the lag-1 autocorrelation is between −0.20 and +0.20. Recall that autocorrelation values fall between −1 and +1. Before we elaborate on this idea for determining if an
acceptable batch interval length has been used to derive the observations, let's talk
about positive autocorrelation versus negative autocorrelation.
Positive autocorrelation is a bigger enemy to us than negative autocorrelation
because our sample standard deviation s will be biased low if the observations
from which it is computed are positively correlated. This would result in a falsely
narrow confidence interval (smaller half-width), leading us to believe that we have
a better estimate of the mean than we actually have. Negatively correlated observations have the opposite effect, producing a falsely wide confidence interval
(larger half-width), leading us to believe that we have a worse estimate of the mean
than we actually have. A negative autocorrelation may lead us to waste time collecting additional observations to get a more precise (narrow) confidence interval
but will not result in a hasty decision. Therefore, an emphasis is placed on guarding against positive autocorrelation. We present the following procedure, adapted
from Banks et al. (2001), for estimating an acceptable batch interval length.
For the observations x1, x2, ..., xn derived from n batches of data assembled
after the simulation has reached steady state, as outlined in Table 9.5, compute
an estimate of the lag-1 autocorrelation ρ̂1 using
$$\hat{\rho}_1 = \frac{\sum_{i=1}^{n-1} (x_i - \bar{x})(x_{i+1} - \bar{x})}{s^2 (n - 1)}$$
where s² is the sample variance (sample standard deviation squared) and x̄ is the
sample mean of the n observations. Recall the recommendation that n should be
at least 100 observations. If the estimated lag-1 autocorrelation is not between
−0.20 and +0.20, then extend the original batch interval length by 50 to 100 percent, rerun the simulation to collect a new set of n observations, and check the
lag-1 autocorrelation between the new x1 to xn observations. This would be repeated until the estimated lag-1 autocorrelation falls between −0.20 and +0.20
(or you give up trying to get within the range). Note that falling below −0.20 is
less worrisome than exceeding +0.20.
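The estimator translates directly to code; the ten batch means below are hypothetical, and in practice the estimate should rest on 100 or more observations:

import numpy as np

def lag1_autocorrelation(x):
    x = np.asarray(x, dtype=float)
    n = len(x)
    x_bar = x.mean()
    s2 = x.var(ddof=1)                 # sample variance
    num = np.sum((x[:-1] - x_bar) * (x[1:] - x_bar))
    return num / (s2 * (n - 1))

# Hypothetical batch means; use far more observations in practice.
x = [9.4, 9.8, 8.6, 8.9, 12.6, 10.1, 9.2, 11.0, 9.7, 10.4]
print(round(lag1_autocorrelation(x), 3))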
Achieving −0.20 ≤ ρ̂1 ≤ +0.20: Upon achieving an estimated lag-1
autocorrelation within the desired range, rebatch the data by combining
adjacent batch intervals of data into a larger batch. This produces a smaller
number of observations that are based on a larger batch interval length. The
lag-1 autocorrelation for this new set of observations will likely be closer to
zero than the lag-1 autocorrelation of the previous set of observations
because the new set of observations is based on a larger batch interval
length. Note that at this point, you are not rerunning the simulation with the
new, longer batch interval but are rebatching the output data from the last
run. For example, if a batch interval length of 125 hours produced 100
observations (x1 to x100) with an estimated lag-1 autocorrelation ρ̂1 = 0.15,
you might combine four contiguous batches of data into one batch with a
length of 500 hours (125 hours × 4 = 500 hours). Rebatching the data
with the 500-hour batch interval length would leave you 25 observations (x1
to x25) from which to construct the confidence interval. We recommend that
you rebatch the data into 10 to 30 batch intervals.
Not Achieving −0.20 ≤ ρ̂1 ≤ +0.20: If obtaining an estimated lag-1
autocorrelation within the desired range becomes impossible because you
cannot continue extending the length of the simulation run, then rebatch the
data from your last run into no more than about 10 batch intervals and
construct the confidence interval. Interpret the confidence interval with the
apprehension that the observations may be significantly correlated.
Lab Chapter 9 provides an opportunity for you to gain experience applying
these criteria to a ProModel simulation experiment. However, remember that
there is no universally accepted method for determining an appropriate batch interval length. See Banks et al. (2001) for additional details and variations on this
approach, such as a concluding hypothesis test for independence on the final set
of observations. The danger is in setting the batch interval length too short, not too
long. This is the reason for extending the length of the batch interval in the last
step. A starting point for setting the initial batch interval length from which to
begin the process of evaluating the lag-1 autocorrelation estimates is provided in
Section 9.6.3.
In summary, the statistical methods in this chapter are applied to the data
compiled from batch intervals in the same manner they were applied to the data
compiled from replications. And, as before, the accuracy of point and interval estimates generally improves as the sample size (number of batch intervals) increases. In this case we increase the number of batch intervals by extending the
length of the simulation run. As a general guideline, the simulation should be run
long enough to create around 10 batch intervals and possibly more, depending on
the desired confidence interval half-width. If a trade-off must be made between
the number of batch intervals and the batch interval length, then err on the side of
increasing the batch interval length. It is better to have a few independent observations than to have several autocorrelated observations when using the statistical
methods presented in this text.
FIGURE 9.7 Batch intervals compared with replications. [Diagram contrasting three independent replications (Replication 1, 2, 3), each run separately along simulation time, with a single long run divided into batch intervals 1, 2, and 3.]
of sampling method (replication or interval batching) used. If running independent replications, it is usually a good idea to run the simulation long enough past
the warm-up point to let every type of event (including rare ones) happen many
times and, if practical, several hundred times. Remember, the longer the model is
run, the more confident you can become that the results represent the steady-state
behavior. A guideline for determining the initial run length for the interval batching method is much the same as for the replication method. Essentially, the starting point for selecting the length of each batch interval is the length of time chosen to run the simulation past the warm-up period when using the replication
method. The total run length is the sum of the time for each batch interval plus the
initial warm-up time. This guideline is illustrated in Figure 9.7.
The choice between running independent replications or batch intervals can
be made based on how long it takes the simulation to reach steady state. If the
warm-up period is short (that is, it doesn't take long to simulate through the warm-up
period), then running replications is preferred over batch intervals because running replications guarantees that the observations are independent. However, if
the warm-up period is long, then the batch interval method is preferred because
the time required to get through the warm-up period is incurred only once, which
reduces the amount of time needed to collect the necessary number of observations for a
desired confidence interval. Also, because a single, long run is made, chances are
that a better estimate of the model's true steady-state behavior is obtained.
9.7 Summary
Simulation experiments can range from a few replications (runs) to a large number of replications. While ballpark decisions require little analysis, more precise
decision making requires more careful analysis and extensive experimentation.
15. Given the five batch means for the Average Minutes Per Entry of
customer entities at the ATM queue location in Figure 9.6, estimate the
lag-1 autocorrelation ρ̂1. Note that five observations are woefully
inadequate for obtaining an accurate estimate of the lag-1
autocorrelation. Normally you will want to base the estimate on at least
100 observations. The question is designed to give you some experience
using the ρ̂1 equation so that you will understand what the Stat::Fit
software is doing when you use it in Lab Chapter 9 to crunch through
an example with 100 observations.
16. Construct a Welch moving average with a window of 2 (w = 2) using
the data in Table 9.4 and compare it to the Welch moving average with
a window of 6 (w = 6) presented in Table 9.4.
References
Banks, Jerry; John S. Carson, II; Barry L. Nelson; and David M. Nicol. Discrete-Event
System Simulation. New Jersey: Prentice-Hall, 2001.
Bateman, Robert E.; Royce O. Bowden; Thomas J. Gogg; Charles R. Harrell; and Jack
R. A. Mott. System Improvement Using Simulation. Orem, UT: PROMODEL Corp.,
1997.
Hines, William W., and Douglas C. Montgomery. Probability and Statistics in Engineering
and Management Science. New York: John Wiley and Sons, 1990.
Law, Averill M., and David W. Kelton. Simulation Modeling and Analysis. New York:
McGraw-Hill, 2000.
Montgomery, Douglas C. Design and Analysis of Experiments. New York: John Wiley &
Sons, 1991.
Petersen, Roger G. Design and Analysis of Experiments. New York: Marcel Dekker, 1985.
10 COMPARING SYSTEMS
The method that proceeds without analysis is like the groping of a blind man.
Socrates
10.1 Introduction
In many cases, simulations are conducted to compare two or more alternative designs of a system with the goal of identifying the superior system relative to some
performance measure. Comparing alternative system designs requires careful
analysis to ensure that differences being observed are attributable to actual differences in performance and not to statistical variation. This is where running either
multiple replications or batches is required. Suppose, for example, that method A
for deploying resources yields a throughput of 100 entities for a given time period
while method B results in 110 entities for the same period. Is it valid to conclude
that method B is better than method A, or might additional replications actually
lead to the opposite conclusion?
You can evaluate alternative congurations or operating policies by performing several replications of each alternative and comparing the average results
from the replications. Statistical methods for making these types of comparisons
are called hypotheses tests. For these tests, a hypothesis is rst formulated (for
example, that methods A and B both result in the same throughput) and then a test
is made to see whether the results of the simulation lead us to reject the hypothesis. The outcome of the simulation runs may cause us to reject the hypothesis that
methods A and B both result in equal throughput capabilities and conclude that the
throughput does indeed depend on which method is used.
This chapter extends the material presented in Chapter 9 by providing statistical methods that can be used to compare the output of different simulation models that represent competing designs of a system. The concepts behind hypothesis
testing are introduced in Section 10.2. Section 10.3 addresses the case when two
alternative system designs are to be compared, and Section 10.4 considers the
case when more than two alternative system designs are to be compared. Additionally, a technique called common random numbers is described in Section 10.5
that can sometimes improve the accuracy of the comparisons.
Suppose that Strategy 1 and Strategy 2 are the two buffer allocation strategies proposed by the production control staff. We wish to identify the strategy that maximizes the throughput of the production system (number of parts completed per hour). Of course, the possibility exists that there is no significant difference in the performance of the two candidate strategies. That is to say, the mean throughputs of the two proposed strategies may be equal. A starting point for our problem is to formulate our hypotheses concerning the mean throughput for the production system under the two buffer allocation strategies. Next we work out the details of setting up our experiments with the simulation models built to evaluate each strategy. For example, we may decide to estimate the true mean performance of each strategy (μ1 and μ2) by simulating each strategy for 16 days (24 hours per day) past the warm-up period and replicating the simulation 10 times. After we run the experiments, we would use the simulation output to evaluate the hypotheses concerning the mean throughput for the production system under the two buffer allocation strategies.
In general, a null hypothesis, denoted H0, is drafted to state that the value of μ1 is not significantly different from the value of μ2 at the α level of significance. An alternate hypothesis, denoted H1, is drafted to oppose the null hypothesis H0. For example, H1 could state that μ1 and μ2 are different at the α level of significance. Stated more formally:

    H0: μ1 = μ2   or equivalently   H0: μ1 − μ2 = 0
    H1: μ1 ≠ μ2   or equivalently   H1: μ1 − μ2 ≠ 0
In the context of the example problem, the null hypothesis H0 states that the mean throughputs of the system due to Strategy 1 and Strategy 2 do not differ. The alternate hypothesis H1 states that the mean throughputs of the system due to Strategy 1 and Strategy 2 do differ. Hypothesis testing methods are designed such that the burden of proof is on us to demonstrate that H0 is not true. Therefore, if our analysis of the data from our experiments leads us to reject H0, we can be confident that there is a significant difference between the two population means. In our example problem, the output from the simulation model for Strategy 1 represents possible throughput observations from one population, and the output from the simulation model for Strategy 2 represents possible throughput observations from another population.
The α level of significance in these hypotheses refers to the probability of making a Type I error. A Type I error occurs when we reject H0 in favor of H1 when in fact H0 is true. Typically α is set at a value of 0.05 or 0.01. However, the choice is yours, and it depends on how small you want the probability of making a Type I error to be. A Type II error occurs when we fail to reject H0 in favor of H1 when in fact H1 is true. The probability of making a Type II error is denoted as β. Hypothesis testing methods are designed such that the probability of making a Type II error, β, is as small as possible for a given value of α. The relationship between α and β is that β increases as α decreases. Therefore, we should be careful not to make α too small.
We will test these hypotheses using a confidence interval approach to determine if we should reject or fail to reject the null hypothesis in favor of the
alternative hypothesis. The reason for using the confidence interval method is that it is equivalent to conducting a two-tailed test of hypothesis with the added benefit of indicating the magnitude of the difference between μ1 and μ2 if they are in fact significantly different. The first step of this procedure is to construct a confidence interval to estimate the difference between the two means (μ1 − μ2). This can be done in different ways depending on how the simulation experiments are conducted (we will discuss this later). For now, let's express the confidence interval on the difference between the two means as

    P[(x̄1 − x̄2) − hw ≤ μ1 − μ2 ≤ (x̄1 − x̄2) + hw] = 1 − α

where hw denotes the half-width of the confidence interval. Notice the similarities between this confidence interval expression and the one given on page 227 in Chapter 9. Here we have replaced x̄ with x̄1 − x̄2 and μ with μ1 − μ2.
If the two population means are the same, then μ1 − μ2 = 0, which is our null hypothesis H0. If H0 is true, our confidence interval should include zero with a probability of 1 − α. This leads to the following rule for deciding whether to reject or fail to reject H0. If the confidence interval includes zero, we fail to reject H0 and conclude that the value of μ1 is not significantly different from the value of μ2 at the α level of significance (the mean throughput of Strategy 1 is not significantly different from the mean throughput of Strategy 2). However, if the confidence interval does not include zero, we reject H0 and conclude that the value of μ1 is significantly different from the value of μ2 at the α level of significance (throughput values for Strategy 1 and Strategy 2 are significantly different).

Figure 10.2(a) illustrates the case when the confidence interval contains zero, leading us to fail to reject the null hypothesis H0 and conclude that there is no significant difference between μ1 and μ2. The failure to obtain sufficient evidence to pick one alternative over another may be due to the fact that there really is no difference, or it may be a result of the variance in the observed outcomes being too high to be conclusive. At this point, either additional replications may be run or one of several variance reduction techniques might be employed (see Section 10.5). Figure 10.2(b) illustrates the case when the confidence interval is completely to the
[FIGURE 10.2 Three possible positions of a confidence interval relative to zero: (a) the interval contains zero, so we fail to reject H0; (b) the interval lies completely to the left of zero, so we reject H0; (c) the interval lies completely to the right of zero, so we also reject H0.]
left of zero, leading us to reject H0. This case suggests that μ1 − μ2 < 0 or, equivalently, μ1 < μ2. Figure 10.2(c) illustrates the case when the confidence interval is completely to the right of zero, leading us to also reject H0. This case suggests that μ1 − μ2 > 0 or, equivalently, μ1 > μ2. These rules are commonly used in practice to make statements about how the population means differ (μ1 > μ2 or μ1 < μ2) when the confidence interval does not include zero (Banks et al. 2001; Hoover and Perry 1989).
TABLE 10.1 Comparison of the Two Buffer Allocation Strategies

    Replication (j)    (B) Strategy 1     (C) Strategy 2
                       Throughput x1j     Throughput x2j
     1                 54.48              56.01
     2                 57.36              54.08
     3                 54.81              52.14
     4                 56.20              53.49
     5                 54.83              55.49
     6                 57.69              55.00
     7                 58.33              54.88
     8                 57.19              54.47
     9                 56.84              54.93
    10                 55.29              55.84

    Sample mean x̄i                 56.30     54.63
    Sample standard deviation si    1.37      1.17
    Sample variance si²             1.89      1.36
because a unique segment (stream) of random numbers from the random number generator was used for each replication. The same is true for the 10 observations in column C (Strategy 2 Throughput). The use of random number streams is discussed in Chapters 3 and 9 and later in this chapter. At this point we are assuming that the observations are also normally distributed. The reasonableness of assuming that the output produced by our simulation models is normally distributed is discussed at length in Chapter 9. For this data set, we should also point out that two different sets of random numbers were used to simulate the 10 replications of each strategy. Therefore, the observations in column B are independent of the observations in column C. Stated another way, the two columns of observations are not correlated. Therefore, the observations are independent within a population (strategy) and between populations (strategies). This is an important distinction and will be employed later to help us choose between different methods for computing the confidence intervals used to compare the two strategies.
From the observations in Table 10.1 of the throughput produced by each strategy, it is not obvious which strategy yields the higher throughput. Inspection of the summary statistics indicates that Strategy 1 produced a higher mean throughput for the system; however, the sample variance for Strategy 1 was higher than for Strategy 2. Recall that the variance provides a measure of the variability of the data and is obtained by squaring the standard deviation. Equations for computing the sample mean x̄, sample variance s², and sample standard deviation s are given in Chapter 9. Because of this variation, we should be careful when making conclusions about the populations of throughput values (μ1 and μ2) by only inspecting the point estimates (x̄1 and x̄2). We will avoid the temptation and use the output from the 10 replications of each simulation model along with a confidence interval to make a more informed decision.
We will use an α = 0.05 level of significance to compare the two candidate strategies using the following hypotheses:

    H0: μ1 − μ2 = 0
    H1: μ1 − μ2 ≠ 0

where the subscripts 1 and 2 denote Strategy 1 and Strategy 2, respectively. As stated earlier, there are two common methods for constructing a confidence interval for evaluating hypotheses. The first method is referred to as the Welch confidence interval (Law and Kelton 2000; Miller 1986) and is a modified two-sample-t confidence interval. The second method is the paired-t confidence interval (Miller et al. 1990). We've chosen to present these two methods because their statistical assumptions are more easily satisfied than are the assumptions for other confidence interval methods.
The Welch method assumes that the observations drawn from each population, such as those in Table 10.1, are independent and normally distributed. However, the Welch confidence interval method does not require that the number of samples drawn from one population (n1) equal the number of samples drawn from the other population (n2) as we did in the buffer allocation example. Therefore, if you have more observations for one candidate system than for the other candidate system, then by all means use them. Additionally, this approach does not require that the two populations have equal variances (σ1² = σ2² = σ²) as do other approaches. This is useful because we seldom know the true value of the variance of a population. Thus we are not required to judge the equality of the variances based on the sample variances we compute for each population (s1² and s2²) before using the Welch confidence interval method.
The Welch confidence interval for an α level of significance is

    P[(x̄1 − x̄2) − hw ≤ μ1 − μ2 ≤ (x̄1 − x̄2) + hw] = 1 − α

where x̄1 and x̄2 represent the sample means used to estimate the population means μ1 and μ2; hw denotes the half-width of the confidence interval and is computed by

    hw = t_(df, α/2) √(s1²/n1 + s2²/n2)
where df (degrees of freedom) is estimated by

    df ≈ (s1²/n1 + s2²/n2)² / [(s1²/n1)²/(n1 − 1) + (s2²/n2)²/(n2 − 1)]
and t_(df, α/2) is a factor obtained from the Student's t table in Appendix B based on the value of α/2 and the estimated degrees of freedom. Note that the degrees of freedom term in the Student's t table is an integer value. Given that the estimated degrees of freedom will seldom be an integer value, you will have to use interpolation to compute the t_(df, α/2) value.
For the example buffer allocation problem with an α = 0.05 level of significance, we use these equations and data from Table 10.1 to compute

    df ≈ [1.89/10 + 1.36/10]² / ([1.89/10]²/(10 − 1) + [1.36/10]²/(10 − 1)) ≈ 17.5

and

    hw = t_(17.5, 0.025) √(1.89/10 + 1.36/10) = 2.106 √0.325 = 1.20 parts per hour

where t_(df, α/2) = t_(17.5, 0.025) = 2.106 is determined from the Student's t table in Appendix B by interpolation. Now the 95 percent confidence interval is

    (x̄1 − x̄2) − hw ≤ μ1 − μ2 ≤ (x̄1 − x̄2) + hw
    (56.30 − 54.63) − 1.20 ≤ μ1 − μ2 ≤ (56.30 − 54.63) + 1.20
    0.47 ≤ μ1 − μ2 ≤ 2.87
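The arithmetic above is easy to verify numerically. The following is a minimal sketch (ours, not part of the text), assuming numpy and scipy are available; because scipy evaluates the t quantile for fractional degrees of freedom directly, no table interpolation is needed.

    import numpy as np
    from scipy import stats

    # Table 10.1 throughput observations (parts per hour)
    strategy1 = np.array([54.48, 57.36, 54.81, 56.20, 54.83,
                          57.69, 58.33, 57.19, 56.84, 55.29])
    strategy2 = np.array([56.01, 54.08, 52.14, 53.49, 55.49,
                          55.00, 54.88, 54.47, 54.93, 55.84])

    alpha = 0.05
    n1, n2 = len(strategy1), len(strategy2)
    v1, v2 = strategy1.var(ddof=1), strategy2.var(ddof=1)  # s1^2, s2^2

    # estimated (Satterthwaite) degrees of freedom -- seldom an integer
    df = (v1/n1 + v2/n2)**2 / ((v1/n1)**2/(n1-1) + (v2/n2)**2/(n2-1))

    # half-width and confidence interval on mu1 - mu2
    hw = stats.t.ppf(1 - alpha/2, df) * np.sqrt(v1/n1 + v2/n2)
    diff = strategy1.mean() - strategy2.mean()
    print(f"df = {df:.1f}, hw = {hw:.2f}")            # df = 17.5, hw = 1.20
    print(f"CI: [{diff - hw:.2f}, {diff + hw:.2f}]")  # about [0.47, 2.87]

Because the interval does not contain zero, H0 is rejected at the α = 0.05 level of significance.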
10.3.2 Paired-t Confidence Interval

For the paired-t confidence interval method, the two columns of observations are paired by replication: the difference for replication j is x(1−2)j = x1j − x2j. The sample mean and sample variance of the n differences are

    x̄(1−2) = Σ(j=1 to n) x(1−2)j / n

    s²(1−2) = Σ(j=1 to n) [x(1−2)j − x̄(1−2)]² / (n − 1)
The half-width of the paired-t confidence interval is computed by

    hw = (t_(n−1, α/2)) s(1−2) / √n

where t_(n−1, α/2) is a factor that can be obtained from the Student's t table in Appendix B based on the value of α/2 and the degrees of freedom (n − 1). Thus the paired-t confidence interval for an α level of significance is

    P[x̄(1−2) − hw ≤ μ(1−2) ≤ x̄(1−2) + hw] = 1 − α

Notice that this is basically the same confidence interval expression presented in Chapter 9 with x̄(1−2) replacing x̄ and μ(1−2) replacing μ.
Let's create Table 10.2 by restructuring Table 10.1 to conform to our new paired notation and paired-t method before computing the confidence interval necessary for testing the hypotheses

    H0: μ(1−2) = 0
    H1: μ(1−2) ≠ 0
TABLE 10.2 Paired Differences for the Two Buffer Allocation Strategies

    Replication (j)    (B) Strategy 1     (C) Strategy 2     (D) Throughput Difference (B − C)
                       Throughput x1j     Throughput x2j     x(1−2)j = x1j − x2j
     1                 54.48              56.01              −1.53
     2                 57.36              54.08               3.28
     3                 54.81              52.14               2.67
     4                 56.20              53.49               2.71
     5                 54.83              55.49              −0.66
     6                 57.69              55.00               2.69
     7                 58.33              54.88               3.45
     8                 57.19              54.47               2.72
     9                 56.84              54.93               1.91
    10                 55.29              55.84              −0.55

    Sample mean x̄(1−2)                                        1.67
    Sample standard deviation s(1−2)                           1.85
    Sample variance s²(1−2)                                    3.42
Using the summary statistics in Table 10.2, the half-width of the 95 percent paired-t confidence interval is

    hw = (t_(9, 0.025)) s(1−2) / √n = (2.262)(1.85) / √10 = 1.32 parts per hour

so the interval is 1.67 − 1.32 ≤ μ(1−2) ≤ 1.67 + 1.32, or 0.35 ≤ μ(1−2) ≤ 2.99. Because the interval excludes zero, we again reject H0 and conclude that the mean throughputs of the two strategies differ.
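As a companion sketch (again ours, not from the text), the paired-t interval can be computed directly from the column D differences of Table 10.2:

    import numpy as np
    from scipy import stats

    # Table 10.2 paired differences x(1-2)j = x1j - x2j
    diffs = np.array([-1.53, 3.28, 2.67, 2.71, -0.66,
                      2.69, 3.45, 2.72, 1.91, -0.55])

    alpha, n = 0.05, len(diffs)
    hw = stats.t.ppf(1 - alpha/2, n - 1) * diffs.std(ddof=1) / np.sqrt(n)
    lo, hi = diffs.mean() - hw, diffs.mean() + hw
    print(f"hw = {hw:.2f}, CI: [{lo:.2f}, {hi:.2f}]")  # hw = 1.32, about [0.35, 2.99]

Note that only the differences are needed; the paired-t method collapses the two samples into a single sample.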
where i and i′ are between 1 and K and i < i′. The null hypothesis H0 states that the means from the K populations (mean output of the K different simulation models) are not different, and the alternative hypothesis H1 states that at least one pair of the means are different.
The Bonferroni approach is very similar to the two confidence interval methods presented in Section 10.3 in that it is based on computing confidence intervals to determine if the true mean performance of one system (μi) is significantly different from the true mean performance of another system (μi′). In fact, either the paired-t confidence interval or the Welch confidence interval can be used with the Bonferroni approach. However, we will describe it in the context of using paired-t confidence intervals, noting that the paired-t confidence interval method can be used when the observations across populations are either independent or correlated.

The Bonferroni method is implemented by constructing a series of confidence intervals to compare all system designs to each other (all pairwise comparisons).
our conclusions leaves much to be desired. To combat this, we simply lower the values of the individual significance levels (α1 = α2 = α3 = ··· = αm) so their sum is not so large. However, this does not come without a price, as we shall see later. One way to assign values to the individual significance levels is to first establish an overall level of significance α and then divide it by the number of pairwise comparisons. That is,

    αi = α / [K(K − 1)/2]   for i = 1, 2, 3, . . . , K(K − 1)/2

Note, however, that it is not required that the individual significance levels be assigned the same value. This is useful in cases where the decision maker wants to place different levels of significance on certain comparisons.
Practically speaking, the Bonferroni inequality limits the number of system designs that can be reasonably compared to about five designs or less. This is because controlling the overall significance level α for the test requires the assignment of small values to the individual significance levels (α1 = α2 = α3 = ··· = αm) if more than five designs are compared. This presents a problem because the width of a confidence interval quickly increases as the level of significance is reduced. Recall that the width of a confidence interval provides a measure of the accuracy of the estimate. Therefore, we pay for gains in the overall confidence of our test by reducing the accuracy of our individual estimates (wide confidence intervals). When accurate estimates (tight confidence intervals) are desired, we recommend not using the Bonferroni approach when comparing more than five system designs. For comparing more than five system designs, we recommend that the analysis of variance technique be used in conjunction with perhaps Fisher's least significant difference test. These methods are presented in Section 10.4.2.
Let's return to the buffer allocation example from the previous section and apply the Bonferroni approach using paired-t confidence intervals. In this case, the production control staff has devised three buffer allocation strategies to compare. And, as before, we wish to determine if there are significant differences between the throughput levels (number of parts completed per hour) achieved by the strategies. Although we will be working with individual confidence intervals, the hypotheses for the overall α level of significance are

    H0: μ1 = μ2 = μ3
    H1: μ1 ≠ μ2 or μ1 ≠ μ3 or μ2 ≠ μ3

where the subscripts 1, 2, and 3 denote Strategy 1, Strategy 2, and Strategy 3, respectively.
To evaluate these hypotheses, we estimated the performance of the three
strategies by simulating the use of each strategy for 16 days (24 hours per day)
past the warm-up period. And, as before, the simulation was replicated 10 times
for each strategy. The average hourly throughput achieved by each strategy is
shown in Table 10.3.
The evaluation of the three buffer allocation strategies (K = 3) requires that three [3(3 − 1)/2] pairwise comparisons be made. The three pairwise comparisons are shown in columns E, F, and G of Table 10.3. Also shown in Table 10.3 are the sample means x̄(i−i′) and sample standard deviations s(i−i′) for each pairwise comparison.
TABLE 10.3 Pairwise Comparisons for the Three Buffer Allocation Strategies

    Replication (j)  (B) Strategy 1  (C) Strategy 2  (D) Strategy 3  (E) Difference (B − C)  (F) Difference (B − D)  (G) Difference (C − D)
                     Throughput x1j  Throughput x2j  Throughput x3j  x(1−2)j                 x(1−3)j                 x(2−3)j
     1               54.48           56.01           57.22           −1.53                   −2.74                   −1.21
     2               57.36           54.08           56.95            3.28                    0.41                   −2.87
     3               54.81           52.14           58.30            2.67                   −3.49                   −6.16
     4               56.20           53.49           56.11            2.71                    0.09                   −2.62
     5               54.83           55.49           57.00           −0.66                   −2.17                   −1.51
     6               57.69           55.00           57.83            2.69                   −0.14                   −2.83
     7               58.33           54.88           56.99            3.45                    1.34                   −2.11
     8               57.19           54.47           57.64            2.72                   −0.45                   −3.17
     9               56.84           54.93           58.07            1.91                   −1.23                   −3.14
    10               55.29           55.84           57.81           −0.55                   −2.52                   −1.97

    Sample mean x̄(i−i′)                                               1.67                   −1.09                   −2.76
    Sample standard deviation s(i−i′)                                  1.85                    1.58                    1.37
Let's say that we wish to use an overall significance level of α = 0.06 to evaluate our hypotheses. For the individual levels of significance, let's set α1 = α2 = α3 = 0.02 by using the equation

    αi = α / [K(K − 1)/2] = 0.06/3 = 0.02   for i = 1, 2, 3

The computation of the three paired-t confidence intervals using the method outlined in Section 10.3.2 and data from Table 10.3 follows:
Comparing μ(1−2), with α1 = 0.02:

    t_(n−1, α1/2) = t_(9, 0.01) = 2.821 from Appendix B
    hw = (t_(9, 0.01)) s(1−2) / √n = (2.821)(1.85) / √10 = 1.65 parts per hour
Comparing μ(1−3), with α2 = 0.02:

    hw = (t_(9, 0.01)) s(1−3) / √n = (2.821)(1.58) / √10 = 1.41 parts per hour
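The same paired-t machinery, with the individual significance level lowered to αi = 0.02, handles all three Bonferroni comparisons. A minimal sketch (ours, not from the text) using the Table 10.3 data:

    import numpy as np
    from scipy import stats

    x = {1: np.array([54.48, 57.36, 54.81, 56.20, 54.83,
                      57.69, 58.33, 57.19, 56.84, 55.29]),
         2: np.array([56.01, 54.08, 52.14, 53.49, 55.49,
                      55.00, 54.88, 54.47, 54.93, 55.84]),
         3: np.array([57.22, 56.95, 58.30, 56.11, 57.00,
                      57.83, 56.99, 57.64, 58.07, 57.81])}

    pairs = [(1, 2), (1, 3), (2, 3)]
    alpha_i = 0.06 / len(pairs)                  # 0.02 per comparison
    for i, j in pairs:
        d = x[i] - x[j]                          # paired differences
        n = len(d)
        hw = stats.t.ppf(1 - alpha_i/2, n - 1) * d.std(ddof=1) / np.sqrt(n)
        lo, hi = d.mean() - hw, d.mean() + hw
        verdict = "reject H0" if lo > 0 or hi < 0 else "fail to reject H0"
        print(f"mu{i} - mu{j}: [{lo:.2f}, {hi:.2f}] -> {verdict}")

Running this shows that the interval for μ1 − μ3 contains zero while the other two intervals do not, which matches the Bonferroni results discussed later in the chapter.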
where i and i′ are between 1 and K and i < i′. After defining some new terminology, we will formulate the hypotheses differently to conform to the statistical model used in this section.
An experimental unit is the system to which treatments are applied. The simulation model of the production system is the experimental unit for the buffer allocation example. A treatment is a generic term for a variable of interest, and a factor is a category of the treatment. We will consider only the single-factor case with K levels. Each factor level corresponds to a different system design. For the buffer allocation example, there are three factor levels: Strategy 1, Strategy 2, and Strategy 3. Treatments are applied to the experimental unit by running the simulation model with a specified factor level (strategy).
An experimental design is a plan that causes a systematic and efficient application of treatments to an experimental unit. We will consider the completely randomized (CR) design, the simplest experimental design. The primary assumption required for the CR design is that experimental units (simulation models) are homogeneous with respect to the response (the model's output) before the treatment is applied. For simulation experiments, this is usually the case because a model's logic should remain constant except to change the level of the factor under investigation. We first specify a test of hypothesis and significance level, say an α value of 0.05, before running experiments. The null hypothesis for the buffer allocation problem would be that the mean throughputs due to the application of treatments (Strategies 1, 2, and 3) do not differ. The alternate hypothesis states that the mean throughputs due to the application of treatments (Strategies 1, 2, and 3) differ among at least one pair of strategies.
TABLE 10.4 Experimental Results and Summary Statistics for the Three Buffer Allocation Strategies

    Replication (j)   Strategy 1         Strategy 2         Strategy 3
                      Throughput (x1j)   Throughput (x2j)   Throughput (x3j)
     1                54.48              56.01              57.22
     2                57.36              54.08              56.95
     3                54.81              52.14              58.30
     4                56.20              53.49              56.11
     5                54.83              55.49              57.00
     6                57.69              55.00              57.83
     7                58.33              54.88              56.99
     8                57.19              54.47              57.64
     9                56.84              54.93              58.07
    10                55.29              55.84              57.81

    Sum xi· = Σ(j=1 to 10) xij           563.02             546.33             573.92
    Sample mean x̄i = xi·/10              56.30              54.63              57.39

where xi· = Σ(j=1 to n) xij and x̄i = xi·/n, for i = 1, 2, 3.
We will use a balanced CR design to help us conduct this test of hypothesis.
In a balanced design, the same number of observations are collected for each factor level. Therefore, we executed 10 simulation runs to produce 10 observations
of throughput for each strategy. Table 10.4 presents the experimental results and
summary statistics for this problem. The response variable (xij) is the observed throughput for the treatment (strategy). The subscript i refers to the factor level (Strategy 1, 2, or 3) and j refers to an observation (output from replication j) for that factor level. For example, the mean throughput response of the simulation model for the seventh replication of Strategy 2 is 54.88 in Table 10.4. Parameters for this balanced CR design are

    Number of factor levels = number of alternative system designs = K = 3
    Number of observations for each factor level = n = 10
    Total number of observations = N = nK = (10)(3) = 30
Inspection of the summary statistics presented in Table 10.4 indicates that
Strategy 3 produced the highest mean throughput and Strategy 2 the lowest.
Again, we should not jump to conclusions without a careful analysis of the
experimental data. Therefore, we will use analysis of variance (ANOVA) in conjunction with a multiple comparison test to guide our decision.
Analysis of Variance
Analysis of variance (ANOVA) allows us to partition the total variation in the output response from the simulated system into two components: variation due to
the effect of the treatments and variation due to experimental error (the inherent variability in the simulated system). For this problem, we are interested in knowing if the variation due to the treatment is sufficient to conclude that the performance of one strategy is significantly different from another with respect to the mean throughput of the system. We assume that the observations are drawn from normally distributed populations and that they are independent within a strategy and between strategies. Therefore, the variance reduction technique based on common random numbers (CRN) presented in Section 10.5 cannot be used with this method.
The fixed-effects model is the underlying linear statistical model used for the analysis because the levels of the factor are fixed and we will consider each possible factor level. The fixed-effects model is written as

    xij = μ + τi + εij   for i = 1, 2, 3, . . . , K and j = 1, 2, 3, . . . , n

where τi is the effect of the ith treatment (ith strategy in our example) as a deviation from the overall (common to all treatments) population mean μ, and εij is the error associated with this observation. In the context of simulation, the εij term represents the random variation of the response xij that occurred during the jth replication of the ith treatment. Assumptions for the fixed-effects model are that the sum of all τi equals zero and that the error terms εij are independent and normally distributed with a mean of zero and common variance. There are methods for testing the reasonableness of the normality and common variance assumptions. However, the procedure presented in this section is reported to be somewhat insensitive to small violations of these assumptions (Miller et al. 1990). Specifically, for the buffer allocation example, we are testing the equality of three
treatment effects (Strategies 1, 2, and 3) to determine if there are statistically significant differences among them. Therefore, our hypotheses are written as

    H0: τ1 = τ2 = τ3 = 0
    H1: τi ≠ 0 for at least one i, for i = 1, 2, 3

Basically, the previous null hypothesis that the K population means are all equal (μ1 = μ2 = μ3 = ··· = μK = μ) is replaced by the null hypothesis τ1 = τ2 = τ3 = ··· = τK = 0 for the fixed-effects model. Likewise, the alternative hypothesis that at least two of the population means are unequal is replaced by τi ≠ 0 for at least one i. Because only one factor is considered in this problem,
a simple one-way analysis of variance is used to determine FCALC, the test statistic that will be used for the hypothesis test. If the computed FCALC value exceeds
a threshold value called the critical value, denoted FCRITICAL, we shall reject the
null hypothesis that states that the treatment effects do not differ and conclude that
there are statistically signicant differences among the treatments (strategies in
our example problem).
To help us with the hypothesis test, let's summarize the experimental results shown in Table 10.4 for the example problem. The first summary statistic that we will compute is called the sum of squares (SSi) and is calculated for the ANOVA for each factor level (Strategies 1, 2, and 3 in this case). In a balanced design
where the number of observations n for each factor level is a constant, the sum of squares is calculated using the formula

    SSi = Σ(j=1 to n) x²ij − [Σ(j=1 to n) xij]² / n   for i = 1, 2, 3, . . . , K

For this example, the sums of squares are

    SS1 = Σ(j=1 to 10) x²1j − [Σ(j=1 to 10) x1j]² / 10
    SS1 = [(54.48)² + (57.36)² + ··· + (55.29)²] − (563.02)²/10 = 16.98
    SS2 = 12.23
    SS3 = 3.90
The grand total of the N observations (N = nK) collected from the output response of the simulated system is computed by

    Grand total = x·· = Σ(i=1 to K) Σ(j=1 to n) xij = Σ(i=1 to K) xi·
The overall mean of the N observations collected from the output response of the simulated system is computed by

    Overall mean = x̄·· = x·· / N = Σ(i=1 to K) Σ(j=1 to n) xij / N
Using the data in Table 10.4 for the buffer allocation example, these statistics are

    Grand total = x·· = Σ(i=1 to 3) xi· = 563.02 + 546.33 + 573.92 = 1,683.27
    Overall mean = x̄·· = x··/N = 1,683.27/30 = 56.11
Our analysis is simplified because a balanced design was used (equal observations for each factor level). We are now ready to define the computational formulas for the ANOVA table elements (for a balanced design) needed to conduct the hypothesis test. As we do, we will construct the ANOVA table for the buffer allocation example. The computational formulas for the ANOVA table elements are

    Degrees of freedom total (corrected) = df(total corrected) = N − 1 = 30 − 1 = 29
    Degrees of freedom treatment = df(treatment) = K − 1 = 3 − 1 = 2
    Degrees of freedom error = df(error) = N − K = 30 − 3 = 27
and

    Sum of squares error = SSE = Σ(i=1 to K) SSi = 16.98 + 12.23 + 3.90 = 33.11

    Sum of squares treatment = SST = [Σ(i=1 to K) x²i·] / n − x··² / N
        = [(563.02)² + (546.33)² + (573.92)²] / 10 − (1,683.27)²/30 = 38.62

    Mean square treatment = MST = SST / df(treatment) = 38.62/2 = 19.31

    Mean square error = MSE = SSE / df(error) = 33.11/27 = 1.23

and finally

    Calculated F statistic = FCALC = MST/MSE = 19.31/1.23 = 15.70
Table 10.5 presents the ANOVA table for this problem. We will compare the value of FCALC with a value from the F table in Appendix C to determine whether to reject or fail to reject the null hypothesis H0: τ1 = τ2 = τ3 = 0. The values obtained from the F table in Appendix C are referred to as critical values and are determined by F(df(treatment), df(error); α). For this problem, F(2, 27; 0.05) = 3.35 = FCRITICAL, using a significance level (α) of 0.05. Therefore, we will reject H0 since FCALC > FCRITICAL at the α = 0.05 level of significance. If we believe the data in Table 10.4 satisfy the assumptions of the fixed-effects model, then we would conclude that the buffer allocation strategy (treatment) significantly affects the mean
TABLE 10.5 ANOVA Table for the Buffer Allocation Example

    Source                   Degrees of Freedom   Sum of Squares   Mean Square   FCALC
    Total (corrected)        N − 1 = 29           SSTC = 71.73
    Treatment (strategies)   K − 1 = 2            SST = 38.62      MST = 19.31   15.70
    Error                    N − K = 27           SSE = 33.11      MSE = 1.23
throughput of the system. We now have evidence that at least one strategy produces better results than the other strategies. Next, a multiple comparison test will be conducted to determine which strategy (or strategies) causes the significance.
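For reference, the entire ANOVA computation collapses to a few lines of Python; the sketch below (ours, not from the text) applies scipy's one-way ANOVA routine to the Table 10.4 data.

    import numpy as np
    from scipy import stats

    s1 = np.array([54.48, 57.36, 54.81, 56.20, 54.83,
                   57.69, 58.33, 57.19, 56.84, 55.29])
    s2 = np.array([56.01, 54.08, 52.14, 53.49, 55.49,
                   55.00, 54.88, 54.47, 54.93, 55.84])
    s3 = np.array([57.22, 56.95, 58.30, 56.11, 57.00,
                   57.83, 56.99, 57.64, 58.07, 57.81])

    # single-factor (one-way) fixed-effects ANOVA
    f_calc, p_value = stats.f_oneway(s1, s2, s3)
    f_critical = stats.f.ppf(0.95, dfn=2, dfd=27)   # F(2, 27; 0.05)
    print(f"FCALC = {f_calc:.2f} vs FCRITICAL = {f_critical:.2f}, p = {p_value:.5f}")

FCALC (about 15.7) exceeds FCRITICAL (about 3.35), confirming the rejection of H0 obtained by hand above.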
Multiple Comparison Test
Our final task is to conduct a multiple comparison test. The hypothesis test suggested that not all strategies are the same with respect to throughput, but it did not identify which strategies performed differently. We will use Fisher's least significant difference (LSD) test to identify which strategies performed differently. It is generally recommended to conduct a hypothesis test prior to the LSD test to determine if one or more pairs of treatments are significantly different. If the hypothesis test failed to reject the null hypothesis, suggesting that all τi were the same, then the LSD test would not be performed. Likewise, if we reject the null hypothesis, we should then perform the LSD test. Because we first performed a hypothesis test, the subsequent LSD test is often called a protected LSD test.
The LSD test requires the calculation of a test statistic used to evaluate all pairwise comparisons of the sample mean from each population (x̄1, x̄2, x̄3, . . . , x̄K). In our example buffer allocation problem, we are dealing with the sample mean throughput computed from the output of our simulation models for the three strategies (x̄1, x̄2, x̄3). Therefore, we will make three pairwise comparisons of the sample means for our example, recalling that the number of pairwise comparisons for K candidate designs is computed by K(K − 1)/2. The LSD test statistic is calculated as

    LSD(α) = t_(df(error), α/2) √(2(MSE)/n)
The decision rule states that if the difference in the sample mean response values exceeds the LSD test statistic, then the population mean response values are significantly different at a given level of significance. Mathematically, the decision rule is written as

    If |x̄i − x̄i′| > LSD(α), then μi and μi′ are significantly different at the α level of significance.
For this problem, the LSD test statistic is determined at the α = 0.05 level of significance:

    LSD(0.05) = t_(27, 0.025) √(2(MSE)/n) = 2.052 √(2(1.23)/10) = 1.02
Table 10.6 presents the results of the three pairwise comparisons for the LSD analysis. With 95 percent confidence, we conclude that each pair of means is different (μ1 ≠ μ2, μ1 ≠ μ3, and μ2 ≠ μ3). We may be inclined to believe that the best strategy is Strategy 3, the second best strategy is Strategy 1, and the worst strategy is Strategy 2.

TABLE 10.6 LSD Analysis: Pairwise Comparisons of the Sample Mean Throughputs

    Comparison                  Difference   Conclusion (LSD(0.05) = 1.02)
    |x̄1 − x̄2| (56.30, 54.63)    1.67         Significant (1.67 > 1.02)
    |x̄1 − x̄3| (56.30, 57.39)    1.09         Significant (1.09 > 1.02)
    |x̄2 − x̄3| (54.63, 57.39)    2.76         Significant (2.76 > 1.02)
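The LSD comparisons are equally mechanical to verify. A short sketch (ours, not from the text) reproduces Table 10.6 from the summary statistics alone:

    from itertools import combinations
    import numpy as np
    from scipy import stats

    means = {1: 56.30, 2: 54.63, 3: 57.39}   # sample means from Table 10.4
    mse, df_error, n, alpha = 1.23, 27, 10, 0.05

    # Fisher's LSD test statistic
    lsd = stats.t.ppf(1 - alpha/2, df_error) * np.sqrt(2 * mse / n)
    print(f"LSD({alpha}) = {lsd:.2f}")        # about 1.02

    for i, j in combinations(means, 2):
        gap = abs(means[i] - means[j])
        verdict = "significant" if gap > lsd else "not significant"
        print(f"|x{i} - x{j}| = {gap:.2f} -> {verdict}")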
Recall that the Bonferroni approach in Section 10.4.1 did not detect a significant difference between Strategy 1 (μ1) and Strategy 3 (μ3). One possible
explanation is that the LSD test is considered to be more liberal in that it will indicate a difference before the more conservative Bonferroni approach. Perhaps if the paired-t confidence intervals had been used in conjunction with common random numbers (which is perfectly acceptable because the paired-t method does not require that observations be independent between populations), then the Bonferroni approach would have also indicated a difference. We are not suggesting here that the Bonferroni approach is in error (or that the LSD test is in error). It could be that there really is no difference between the performances of Strategy 1 and Strategy 3 or that we have not collected enough observations to be conclusive.
There are several multiple comparison tests from which to choose. Other tests include Tukey's honestly significant difference (HSD) test, Bayes LSD (BLSD) test, and a test by Scheffé. The LSD and BLSD tests are considered to be liberal in that they will indicate a difference between μi and μi′ before the more conservative Scheffé test. A book by Petersen (1985) provides more information on multiple comparison tests.
[FIGURE 10.3 Relationship between factors (decision variables) and output responses: the factors are inputs to the simulation model, which produces the output responses.]
[FIGURE 10.4 Unique seed value assigned for each replication: a single random number stream is reseeded at the start of each independent replication (replication 1 starts with seed 9, replication 2 with seed 5, and replication 3 with seed 3).]
The goal is to use the exact random number from the stream for the exact purpose in each simulated system. To help achieve this goal, the random number stream can be seeded at the beginning of each independent replication to keep it synchronized across simulations of each system. For example, in Figure 10.4, the first replication starts with a seed value of 9, the second replication starts with a seed value of 5, the third with 3, and so on. If the same seed values for each replication are used to simulate each alternative system, then the same stream of random numbers will drive each of the systems. This seems simple enough. However, care has to be taken not to pick a seed value that places us in a location on the stream that has already been used to drive the simulation in a previous replication. If this were to happen, the results from replicating the simulation of a system would not be independent because segments of the random number stream would have been shared between replications, and this cannot be tolerated. Therefore, some simulation software provides a CRN option that, when selected,
[FIGURE 10.5 Unique random number stream assigned to each stochastic element in the system: Streams 1, 2, 5, and 7 drive the service time distributions of Machines 1, 2, 3, and 4, respectively, and each stream is reseeded for each replication (for example, Stream 1 uses seed 99 for replication 1, seed 75 for replication 2, and seed 3 for replication 3).]
If you do not specify an initial seed value for a stream that is used, ProModel will use the same seed number as the stream number (stream 3 uses the third seed). A detailed explanation of how random number generators work and how they produce unique streams of random numbers is provided in Chapter 3.
Complete synchronization of the random numbers across different models is sometimes difficult to achieve. Therefore, we often settle for partial synchronization. At the very least, it is a good idea to set up two streams with one stream of random numbers used to generate an entity's arrival pattern and the other stream of random numbers used to generate all other activities in the model. That way, activities added to the model will not inadvertently alter the arrival pattern because they do not affect the sample values generated from the arrival distribution.
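To make the synchronization idea concrete, here is a toy sketch (ours; the queueing model, seed values, and the simulate function are hypothetical, and this is ordinary Python rather than ProModel syntax). Two dedicated streams, one for arrivals and one for service times, are reseeded identically for each replication so that both candidate designs see exactly the same random draws.

    import numpy as np

    def simulate(mean_service, arrival_rng, service_rng, n_parts=1000):
        # toy FIFO single-server model: returns average time in system
        arrivals = np.cumsum(arrival_rng.exponential(1.0, n_parts))
        services = service_rng.exponential(mean_service, n_parts)
        finish, total = 0.0, 0.0
        for a, s in zip(arrivals, services):
            finish = max(finish, a) + s      # start when free and part present
            total += finish - a
        return total / n_parts

    # same seed pair per replication -> common random numbers across designs
    for rep, (seed_arr, seed_svc) in enumerate([(9, 5), (3, 7), (11, 13)], 1):
        results = []
        for mean_service in (0.80, 0.85):    # two competing designs
            results.append(simulate(mean_service,
                                    np.random.default_rng(seed_arr),
                                    np.random.default_rng(seed_svc)))
        print(f"rep {rep}: A = {results[0]:.3f}, B = {results[1]:.3f}")

Because each design consumes identical arrival and service draws, the observed differences between designs A and B reflect the policy change rather than sampling noise, which is exactly what the paired-t method exploits.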
TABLE 10.7 Comparison of the Two Buffer Allocation Strategies Using Common Random Numbers

    Replication (j)    (B) Strategy 1     (C) Strategy 2     (D) Throughput Difference (B − C)
                       Throughput x1j     Throughput x2j     x(1−2)j = x1j − x2j
     1                 79.05              75.09              3.96
     2                 54.96              51.09              3.87
     3                 51.23              49.09              2.14
     4                 88.74              88.01              0.73
     5                 56.43              53.34              3.09
     6                 70.42              67.54              2.88
     7                 35.71              34.87              0.84
     8                 58.12              54.24              3.88
     9                 57.77              55.03              2.74
    10                 45.08              42.55              2.53

    Sample mean x̄(1−2)                                        2.67
    Sample standard deviation s(1−2)                           1.16
    Sample variance s²(1−2)                                    1.35
and

    hw = (t_(9, 0.025)) s(1−2) / √n = (2.262)(1.16) / √10 = 0.83 parts per hour

This half-width is noticeably smaller than the 1.32 parts per hour obtained without common random numbers, reflecting the variance reduction that CRN provides.
10.6 Summary
An important point to make here is that simulation, by itself, does not solve a
problem. Simulation merely provides a means to evaluate proposed solutions by
estimating how they behave. The user of the simulation model has the responsibility to generate candidate solutions either manually or by use of automatic
optimization techniques and to correctly measure the utility of the solutions based
on the output from the simulation. This chapter presented several statistical
methods for comparing the output produced by simulation models representing
candidate solutions or designs.
When comparing two candidate system designs, we recommend using either the Welch confidence interval method or the paired-t confidence interval. Also, a
    Replication   Design 1   Design 2   Design 3   Design 4
     1            53.9872    58.1365    58.5438    60.1208
     2            58.4636    57.6060    57.3973    59.6515
     3            55.5300    58.5968    57.1040    60.5279
     4            56.3602    55.9631    58.7105    58.1981
     5            53.8864    58.3555    58.0406    60.3144
     6            57.2620    57.0748    56.9654    59.1815
     7            56.9196    56.0899    57.2882    58.3103
     8            55.7004    59.8942    57.3548    61.6756
     9            55.3685    57.5491    58.2188    59.6011
    10            56.9589    58.0945    59.5975    60.0836
    11            55.0892    59.2632    60.5354    61.1175
    12            55.4580    57.4509    57.9982    59.5142
References
Banks, Jerry; John S. Carson; Barry L. Nelson; and David M. Nicol. Discrete-Event System Simulation. Englewood Cliffs, NJ: Prentice Hall, 2001.
Bateman, Robert E.; Royce O. Bowden; Thomas J. Gogg; Charles R. Harrell; and Jack R. A. Mott. System Improvement Using Simulation. Orem, UT: PROMODEL Corp., 1997.
Goldsman, David, and Barry L. Nelson. "Comparing Systems via Simulation." Chapter 8 in Handbook of Simulation. New York: John Wiley & Sons, 1998.
Hines, William W., and Douglas C. Montgomery. Probability and Statistics in Engineering and Management Science. New York: John Wiley & Sons, 1990.
Hoover, Stewart V., and Ronald F. Perry. Simulation: A Problem-Solving Approach. Reading, MA: Addison-Wesley, 1989.
Law, Averill M., and W. David Kelton. Simulation Modeling and Analysis. New York: McGraw-Hill, 2000.
Miller, Irwin R.; John E. Freund; and Richard Johnson. Probability and Statistics for Engineers. Englewood Cliffs, NJ: Prentice Hall, 1990.
Miller, Rupert G. Beyond ANOVA: Basics of Applied Statistics. New York: Wiley, 1986.
Montgomery, Douglas C. Design and Analysis of Experiments. New York: John Wiley & Sons, 1991.
Petersen, Roger G. Design and Analysis of Experiments. New York: Marcel Dekker, 1985.
11 SIMULATION
OPTIMIZATION
Man is a goal seeking animal. His life only has meaning if he is reaching out
and striving for his goals.
Aristotle
11.1 Introduction
Simulation models of systems are built for many reasons. Some models are built to gain a better understanding of a system, to forecast the output of a system, or to compare one system to another. If the reason for building simulation models is to find answers to questions like "What are the optimal settings for ____ to minimize (or maximize) ____?" then optimization is the appropriate technology to combine with simulation. Optimization is the process of trying different combinations of values for the variables that can be controlled to seek the combination of values that provides the most desirable output from the simulation model.
For convenience, let us think of the simulation model as a black box that imitates the actual system. When inputs are presented to the black box, it produces output that estimates how the actual system responds. In our question, the first blank represents the inputs to the simulation model that are controllable by the decision maker. These inputs are often called decision variables or factors. The second blank represents the performance measures of interest that are computed from the stochastic output of the simulation model when the decision variables are set to specific values (Figure 11.1). In the question, "What is the optimal number of material handling devices needed to minimize the time that workstations are starved for material?" the decision variable is the number of material handling devices and the performance measure computed from the output of the simulation model is the amount of time that workstations are starved. The objective, then, is to seek the optimal value for each decision variable that minimizes, or maximizes, the expected value of the performance measure(s) of interest. The performance
[FIGURE 11.1 Relationship between the optimization algorithm and the simulation model: the optimization algorithm supplies decision variable values to the simulation model and receives the output responses it generates.]
measure is traditionally called the objective function. Note that the expected value of the objective function is estimated by averaging the model's output over multiple replications or batch intervals. The simulation optimization problem is more formally stated as

    Min or Max E[f(X1, X2, . . . , Xn)]
    Subject to  Lower Boundi ≤ Xi ≤ Upper Boundi   for i = 1, 2, . . . , n
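As a concrete and deliberately simplified sketch of this formulation (ours, not SimRunner's), the code below treats a stand-in stochastic function as the black-box model, averages replications to estimate the expected response, and applies a naive random search within the decision variable bounds; the function simulation_output is hypothetical.

    import numpy as np

    rng = np.random.default_rng(42)
    bounds = [(1, 15), (1, 15), (1, 15)]   # lower/upper bound per decision variable

    def simulation_output(x):
        # stand-in for one replication of a stochastic simulation (hypothetical)
        return -sum((xi - 8) ** 2 for xi in x) + rng.normal(0, 2)

    def expected_response(x, replications=5):
        # estimate E[f(X1, ..., Xn)] by averaging over replications
        return np.mean([simulation_output(x) for _ in range(replications)])

    best_x, best_y = None, -np.inf
    for _ in range(200):                    # naive random search over the bounds
        x = [int(rng.integers(lo, hi + 1)) for lo, hi in bounds]
        y = expected_response(x)
        if y > best_y:
            best_x, best_y = x, y
    print("best solution:", best_x, "estimated response:", round(best_y, 2))

Real optimizers replace the random search loop with a guided algorithm (an evolutionary algorithm in SimRunner's case), but the interface is the same: propose values for the decision variables, run replications, and read back the averaged objective.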
[FIGURE 11.2 SimRunner plots the output responses generated by a ProModel simulation model as it seeks the optimal solution, which occurred at the highest peak.]
optimization module for the GASP IV simulation software package. Pegden and Gately (1980) later developed another optimization module for use with the SLAM simulation software package. Their optimization packages were based on a variant of a direct search method developed by Hooke and Jeeves (1961). After solving several problems, Pegden and Gately concluded that their packages extended the capabilities of the simulation language by providing for automatic optimization of decisions.
The direct search algorithms available today for simulation optimization are
much better than those available in the late 1970s. Using these newer algorithms,
the SimRunner simulation optimization tool was developed in 1995. Following
SimRunner, two other simulation software vendors soon added an optimization
feature to their products. These products are OptQuest96, which was introduced
in 1996 to be used with simulation models built with Micro Saint software, and
WITNESS Optimizer, which was introduced in 1997 to be used with simulation
models built with Witness software. The optimization module in OptQuest96 is
based on scatter search, which has links to Tabu Search and the popular evolutionary algorithm called the Genetic Algorithm (Glover 1994; Glover et al. 1996).
WITNESS Optimizer is based on a search algorithm called Simulated Annealing
(Markt and Mayer 1997). Today most major simulation software packages include
an optimization feature.
SimRunner has an optimization module and a module for determining the required sample size (replications) and a model's warm-up period (in the case of a steady-state analysis). The optimization module can optimize integer and real decision variables. The design of the optimization module in SimRunner was influenced by optima-seeking techniques such as Tabu Search (Glover 1990) and evolutionary algorithms (Fogel 1992; Goldberg 1989; Schwefel 1981), though it most closely resembles an evolutionary algorithm (SimRunner 1996b).
and Mollaghasemi (1991) suggest that GAs and Simulated Annealing are the algorithms of choice when dealing with a large number of decision variables. Tompkins and Azadivar (1995) recommend using GAs when the optimization problem involves qualitative (logical) decision variables. The authors have extensively researched the use of genetic algorithms, evolutionary programming, and evolution strategies for solving manufacturing simulation optimization problems and simulation-based machine learning problems (Bowden 1995; Bowden, Neppalli, and Calvert 1995; Bowden, Hall, and Usher 1996; Hall, Bowden, and Usher 1996).

Reports have also appeared in the trade journal literature on how the EA-based optimizer in SimRunner helped to solve real-world problems. For example, IBM; Sverdrup Facilities, Inc.; and Baystate Health Systems report benefits from using SimRunner as a decision support tool (Akbay 1996). The simulation group at Lockheed Martin used SimRunner to help determine the most efficient lot sizes for parts and when the parts should be released to the system to meet schedules (Anonymous 1996).
[FIGURE 11.3 Ackley's function with noise and the ES's progress over eight generations: measured response (0 to 20) plotted against the decision variable X (−10 to 10), with the two best solutions from each of generations 1 through 8 marked on the response surface.]
it by sampling from a uniform(−1, 1) distribution. This simulates the variation that can occur in the output from a stochastic simulation model. The function is shown with a single decision variable X that takes on real values between −10 and 10. The response surface has a minimum expected value of zero, occurring when X is equal to zero, and a maximum expected value of 19.37 when X is equal to −9.54 or +9.54. Ackley's function is multimodal and thus has several locally optimal solutions (optima) that occur at each of the low points (assuming a minimization problem) on the response surface. However, the local optimum that occurs when X is equal to zero is the global optimum (the lowest possible response). This is a useful test function because search techniques can prematurely converge and end their search at one of the many optima before finding the global optimum.
According to Step 1 of the four-step process outlined in Section 11.4, an initial population of solutions to the problem is generated by distributing them throughout the solution space. Using a variant of Schwefel's (1981) Evolution Strategy (ES) with two parent solutions and 10 offspring solutions, 10 different values for the decision variable between −10 and 10 are randomly picked to represent an initial offspring population of 10 solutions. However, to make the search for the optimal solution more challenging, the 10 solutions in the initial offspring population are placed far from the global optimum to see if the algorithm can avoid being trapped by one of the many local optima. Therefore, the 10 solutions for the first generation were randomly picked between −10 and −8. So the test is to see if the population of 10 solutions can evolve from one generation to the next to find the global optimal solution without being trapped by one of the many local optima.
Figure 11.3 illustrates the progress that the ES made by following the four-step process from Section 11.4. To avoid complicating the graph, only the responses for the two best solutions (parents) in each generation are plotted on the response surface. Clearly, the process of selecting the best solutions and applying idealized genetic operators allows the algorithm to focus its search toward the optimal solution from one generation to the next. Although the ES samples many of the local optima, it quickly identifies the region of the global optimum and is beginning to home in on the optimal solution by the eighth generation. Notice that in the sixth generation, the ES has placed solutions on the right side of the search space (X > 0) even though it was forced to start its search at the far left side of the solution space. This ability of an evolutionary algorithm allows it to conduct a more globally oriented search.
When a search for the optimal solution is conducted in a noisy simulation environment, care should be taken in measuring the response generated for a given input (solution) to the model. This means that to get an estimate of the expected value of the response, multiple observations (replications) of a solution's performance should be averaged. But to test the idea that EAs can deal with noisy response surfaces, the previous search was conducted using only one observation of a solution's performance. Therefore, the potential existed for the algorithm to become confused and prematurely converge to one of the many local optima because of the noisy response surface. Obviously, this was not the case with the EA in this example.
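A minimal sketch of such an experiment (ours; Schwefel's actual ES also self-adapts its mutation step size, which is omitted here) can be written as follows. It minimizes the noisy one-dimensional Ackley function with a (2, 10) evolution strategy started at the far left of the solution space:

    import math
    import numpy as np

    rng = np.random.default_rng(7)

    def ackley(x):
        # one-dimensional Ackley function; global minimum of 0 at x = 0
        return (20 + math.e - 20 * math.exp(-0.2 * abs(x))
                - math.exp(math.cos(2 * math.pi * x)))

    def measured_response(x):
        return ackley(x) + rng.uniform(-1, 1)   # one noisy observation, as in the text

    mu, lam, sigma = 2, 10, 1.0                 # 2 parents, 10 offspring
    population = rng.uniform(-10, -8, lam)      # start far from the optimum
    for generation in range(1, 9):
        fitness = np.array([measured_response(x) for x in population])
        parents = population[np.argsort(fitness)[:mu]]   # keep the 2 best
        print(f"generation {generation}: parents near x = {np.round(parents, 2)}")
        # offspring: Gaussian mutation of randomly chosen parents, kept in bounds
        population = np.clip(rng.choice(parents, lam) + rng.normal(0, sigma, lam),
                             -10, 10)

The printed parent positions tend to drift toward x = 0 across generations, though with single noisy observations the trajectory, and the number of generations needed, varies from run to run.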
The authors are not advocating that analysts can forget about determining the number of replications needed to satisfactorily estimate the expected value of the response. However, to effectively conduct a search for the optimal solution, an algorithm must be able to deal with noisy response surfaces and the resulting uncertainties that exist even when several observations (replications) are used to estimate a solution's true performance.
Increasing the number of decision variables or their range of values increases the size of the search space, which can make it more difficult and time-consuming to identify the optimal solution. As a rule, include only those decision variables known to significantly affect the output of the simulation model, and judiciously define the range of possible values for each decision variable. Also, care should be taken when defining the lower and upper bounds of the decision variables to ensure that a combination of values will not be created that leads to a solution not envisioned when the model was built.
Step 3. After selecting the decision variables, construct the objective function to measure the utility of the solutions tested by the EA. Actually, the foundation for the objective function would have already been established when the goals for the simulation project were set. For example, if the goal of the modeling project is to find ways to minimize a customer's waiting time in a bank, then the objective function should measure an entity's (customer's) waiting time in the bank. The objective function is built using terms taken from the output report generated at the end of the simulation run. Objective function terms can be based on entity statistics, location statistics, resource statistics, variable statistics, and so on. The user specifies whether a term is to be minimized or maximized as well as the overall weighting of that term in the objective function. Some terms may be more or less important to the user than other terms. Remember that as terms are added to the objective function, the complexity of the search space may increase, which makes a more difficult optimization problem. From a statistical point of view, single-term objective functions are also preferable to multiterm objective functions. Therefore, strive to keep the objective function as specific as possible.
The objective function is a random variable, and a set of initial experiments should be conducted to estimate its variability (standard deviation). Note that there is a possibility that the objective function's standard deviation differs from one solution to the next. Therefore, the required number of replications necessary to estimate the expected value of the objective function may change from one solution to the next. Thus the objective function's standard deviation should be measured for several different solutions, and the highest standard deviation recorded should be used to compute the number of replications necessary to estimate the expected value of the objective function. When selecting the set of test solutions, choose solutions that are very different from one another. For example, form solutions by setting the decision variables to their lower bounds, middle values, or upper bounds.
A better approach for controlling the number of replications used to estimate the expected value of the objective function for a given solution would be to incorporate a rule into the model that schedules additional replications until the estimate reaches a desired level of precision (confidence interval half-width). Using this technique can help to avoid running too many replications for some solutions and too few replications for others.
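A sketch of such a rule (ours, with a stand-in stochastic objective; the function names are hypothetical) is shown below: replications are added one at a time until the confidence interval half-width falls to the requested precision.

    import numpy as np
    from scipy import stats

    def estimate_to_precision(run_replication, target_hw, alpha=0.05,
                              initial=10, max_reps=1000):
        # add replications until the CI half-width reaches target_hw
        obs = [run_replication() for _ in range(initial)]
        while True:
            n = len(obs)
            hw = stats.t.ppf(1 - alpha/2, n - 1) * np.std(obs, ddof=1) / np.sqrt(n)
            if hw <= target_hw or n >= max_reps:
                return np.mean(obs), hw, n
            obs.append(run_replication())    # schedule one more replication

    rng = np.random.default_rng(1)
    mean, hw, n = estimate_to_precision(lambda: rng.normal(56.3, 1.4),
                                        target_hw=0.5)
    print(f"estimate = {mean:.2f} +/- {hw:.2f} after {n} replications")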
Step 4. Select the size of the EA's population (number of solutions) and begin the search. The size of the population of solutions used to conduct the search
affects both the likelihood that the algorithm will locate the optimal solution and the time required to conduct the search. In general, as the population size is increased, the algorithm finds better solutions. However, increasing the population size generally increases the time required to conduct the search. Therefore, a balance must be struck between finding the optimum and the amount of available time to conduct the search.
Step 5. After the EA's search has concluded (or halted due to time constraints), the analyst should study the solutions found by the algorithm. In addition to the best solution discovered, the algorithm usually finds many other competitive solutions. A good practice is to rank each solution evaluated based on its utility as measured by the objective function. Next, select the most highly competitive solutions and, if necessary, make additional model replications of those solutions to get better estimates of their true utility. And, if necessary, refer to Chapter 10 for background on statistical techniques that can help you make a final decision between competing solutions. Also, keep in mind that the database of solutions evaluated by the EA represents a rich source of information about the behavior, or response surface, of the simulation model. Sorting and graphing the solutions can help you interpret the meaning of the data and gain a better understanding of how the system behaves.
If the general procedure presented is followed, chances are that a good course of action will be identified. This general procedure is easily carried out using ProModel simulation products. Analysts can use SimRunner to help

• Determine the length of time and warm-up period (if applicable) for running a model.
• Determine the required number of replications for obtaining estimates with a specified level of precision and confidence.
• Search for the optimal values for the important decision variables.
Even though it is easy to use SimRunner and other modern optimizers, do not fall into the trap of letting them become the decision maker. Study the top solutions found by the optimizers as you might study the performance records of different cars for a possible purchase. Kick their tires, look under their hoods, and drive them around the block before buying. Always remember that the optimizer is not the decision maker. It only suggests a possible course of action. It is the user's responsibility to make the final decision.
Production lines face the risk that disruptions (machine failures, line imbalances, quality problems, or the like)
will shut down the production line. Several strategies have been developed for
determining the amount of buffer storage needed between workstations. However,
these strategies are often developed based on simplifying assumptions, made for
mathematical convenience, that rarely hold true for real production systems.
One way to avoid oversimplifying a problem for the sake of mathematical
convenience is to build a simulation model of the production system and use it to
help identify the amount of buffer storage space needed between workstations.
However, the number of possible solutions to the buffer allocation problem grows
rapidly as the size of the production system (number of possible buffer storage
areas and their possible sizes) increases, making it impractical to evaluate all solutions. In such cases, it is helpful to use simulation optimization software like
SimRunner to identify a set of candidate solutions.
This example is loosely based on the example production system presented
in Chapter 10. It gives readers insight into how to formulate simulation optimization problems when using SimRunner. The example is not fully solved. Its completion is left as an exercise in Lab Chapter 11.
When a machine finishes a part, the part moves to the buffer of the next machine, where it waits to be processed. However, if the buffer is full, the
part cannot move forward and remains on the machine until a space becomes
available in the buffer. Furthermore, the machine is blocked and no other parts can
move to the machine for processing. The part exits the system after being
processed by the fourth machine. Note that parts are selected from the buffers to be processed by a machine in first-in, first-out order. The processing time at each
machine is exponentially distributed with a mean of 1.0 minute, 1.3 minutes,
0.7 minute, and 1.0 minute for machines one, two, three, and four, respectively.
The time to move parts from one location to the next is negligible.
For this problem, three decision variables describe how buffer space is allocated (one decision variable for each buffer to signify the number of parts that can be stored in the buffer). The goal is to find the optimal value for each decision variable to maximize the profit made from the sale of the parts. The manufacturer collects $10 per part produced. The limitation is that each unit of space provided for a part in a buffer costs $1,000. So the buffer storage has to be strategically
allocated to maximize the throughput of the system. Throughput will be measured
as the number of parts completed during a 30-day period.
Care must be taken not to set the upper bound too low, however, because it could prevent the optimization algorithm from exploring regions in the
search space that may contain good solutions.
With this information, it is decided to set the upper bound for the capacity of
each buffer to 15 parts. Therefore, the bounds for each decision variable are
1 ≤ Q1 ≤ 15
1 ≤ Q2 ≤ 15
1 ≤ Q3 ≤ 15

Given that each of the three decision variables has 15 different values, there are 15³, or 3,375, unique solutions to the problem.
Step 3. Here the objective function is formulated. The model was built to investigate buffer allocation strategies to maximize the throughput of the system. Given that the manufacturer collects $10 per part produced and that each unit of space provided for a part in a buffer costs $1,000, the objective function for the optimization could be stated as

    Maximize [$10(Throughput) - $1,000(Q1 + Q2 + Q3)]

where Throughput is the total number of parts produced during a 30-day period.
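To make the formulation concrete, here is a sketch of how this objective function could be evaluated in Python; simulate_throughput is a hypothetical stand-in for one 30-day replication of the simulation model, not part of SimRunner:

    def objective(q1, q2, q3, simulate_throughput, replications=5):
        # $10 of revenue per part produced, less $1,000 per unit of
        # buffer capacity, averaged over independent replications.
        total = 0.0
        for _ in range(replications):
            throughput = simulate_throughput(q1, q2, q3)
            total += 10.0 * throughput - 1000.0 * (q1 + q2 + q3)
        return total / replications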
Next, initial experiments are conducted to estimate the variability of the
objective function in order to determine the number of replications the EA-based
optimization algorithm will use to estimate the expected value of the objective
function for each solution it evaluates. While doing this, it was also noticed that
the throughput level increased very little for buffer capacities beyond a value of
nine. Therefore, it was decided to change the upper bound for each decision variable to nine. This resulted in a search space of 9³, or 729, unique solutions, a reduction of 2,646 solutions from the original formulation. This will likely reduce the search time.
Step 4. Select the size of the population that the EA-based optimization algorithm will use to conduct its search. SimRunner allows the user to select an optimization profile that influences the degree of thoroughness used to search for the optimal solution. The three optimization profiles are aggressive, moderate, and cautious, which correspond to EA population sizes of small, medium, and large. The aggressive profile generally results in a quick search for locally optimal solutions and is used when computer time is limited. The cautious profile specifies that a more thorough search for the global optimum be conducted and is used when computer time is plentiful. At this point, the analyst knows the amount of time required to evaluate a solution. Only one more piece of information is needed to determine how long it will take the algorithm to conduct its search: the fraction of the 729 solutions the algorithm will evaluate before converging to a final solution. Unfortunately, there is no way of knowing this in advance. With time running out before a recommendation for the system must be given to management, the analyst elects to use a small population size by selecting SimRunner's aggressive optimization profile.
Step 5. After the search concludes, the analyst selects for further evaluation some
of the top solutions found by the optimization algorithm. Note that this does not
necessarily mean that only those solutions with the best objective function values
are chosen, because the analyst should conduct both a quantitative and a qualitative
analysis. On the quantitative side, statistical procedures presented in Chapter 10
are used to gain a better understanding of the relative differences in performance
between the candidate solutions. On the qualitative side, one solution may be
preferred over another based on factors such as ease of implementation.
Figure 11.5 illustrates the results from a SimRunner optimization of the buffer allocation problem using an aggressive optimization profile. The warm-up time for the simulation was set to 10 days, with each day representing a 24-hour production period. After the warm-up time of 10 days (240 hours), the system is simulated for an additional 30 days (720 hours) to determine throughput. The estimate for the expected value of the objective function was based on five replications of the simulation. The smoother line that is always at the top of the Performance Measures Plot in Figure 11.5 represents the value of the objective function for the best solution identified by SimRunner during the optimization. Notice the rapid improvement in the value of the objective function during the early part of the search as SimRunner identifies better buffer capacities. The other, more irregular line represents the value of the objective function for all the solutions that SimRunner tried.
SimRunner's best solution to the problem specifies a Buffer 1 capacity of nine, a Buffer 2 capacity of seven, and a Buffer 3 capacity of three and was the 33rd solution (experiment) evaluated by SimRunner. The best solution is located at the top of the table in Figure 11.5; SimRunner sorts the solutions in the table from best to worst. SimRunner evaluated 82 out of the possible 729 solutions. Note that there is no guarantee that SimRunner's best solution is in fact the optimum solution to the problem. However, it is likely to be one of the better solutions to the problem and could be the optimum one.

FIGURE 11.5  SimRunner results for the buffer allocation problem.
The last two columns in the SimRunner table shown in Figure 11.5 display the lower and upper bounds of a 95 percent confidence interval for each solution evaluated. Notice that there is significant overlap between the confidence intervals. Although this is not a formal hypothesis-testing procedure, the overlapping confidence intervals suggest the possibility that there is not a significant difference in the performance of the top solutions displayed in the table. Therefore, it would be wise to run additional replications of the favorite solutions from the list and/or use one of the hypothesis-testing procedures in Chapter 10 before selecting a particular solution as the best. The real value here is that SimRunner automatically conducted the search, without the analyst having to hover over it, and reported back several good solutions to the problem for the analyst to consider.
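A sketch of the confidence-interval arithmetic behind those last two columns, assuming five replications per solution as described above (the t critical value of 2.776 for four degrees of freedom comes from a standard t table):

    import statistics

    def confidence_interval_95(observations, t_crit=2.776):
        # Two-sided 95% interval on the mean objective-function value;
        # t_crit = 2.776 is appropriate for n = 5 replications (4 d.f.).
        n = len(observations)
        mean = statistics.mean(observations)
        half_width = t_crit * statistics.stdev(observations) / n ** 0.5
        return mean - half_width, mean + half_width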
FIGURE 11.6  The two-stage pull production system. Customer demand pulls final product from the assembly lines; containers of subassemblies are pulled from Stage One WIP (kanban post, Trigger 1), which is replenished by the Stage One process drawing component parts from Stage Two WIP (kanban post, Trigger 2); the Stage Two line replenishes Stage Two WIP from raw materials, with kanban cards circulating with the containers.
Figure 11.6 illustrates the relationship of the processes in the two-stage pull
production system of interest that produces several different types of parts.
Customer demand for the final product causes containers of subassemblies to be
pulled from the Stage One WIP location to the assembly lines. As each container is
withdrawn from the Stage One WIP location, a production-ordering kanban card
representing the number of subassemblies in a container is sent to the kanban post
for the Stage One processing system. When the number of kanban cards for a given
subassembly meets its trigger value, the necessary component parts are pulled from
the Stage Two WIP to create the subassemblies. Upon completing the Stage One
process, subassemblies are loaded into containers, the corresponding kanban card
is attached to the container, and both are sent to Stage One WIP. The container and
card remain in the Stage One WIP location until pulled to an assembly line.
In Stage Two, workers process raw materials to fill the Stage Two WIP
location as component parts are pulled from it by Stage One. As component parts
are withdrawn from the Stage Two WIP location and placed into the Stage One
process, a production-ordering kanban card representing the quantity of component parts in a container is sent to the kanban post for the Stage Two line. When
the number of kanban cards for a given component part meets a trigger value,
production orders equal to the trigger value are issued to the Stage Two line. As
workers move completed orders of component parts from the Stage Two line to
WIP, the corresponding kanban cards follow the component parts to the Stage
Two WIP location.
While an overall reduction in WIP is sought, production planners desire a solution (kanban cards and trigger values for each stage) that gives preference to
minimizing the containers in the Stage One WIP location. This requirement is due
to space limitations.
The number of kanban cards was estimated with the Toyota equation

    Kanbans = (Monthly demand / Monthly setups) × (1 + Safety coefficient) / Container capacity
where the safety coefficient represents the amount of WIP needed in the system. Production planners assumed a safety coefficient that resulted in one day of WIP for each part type. Additionally, they decided to use one setup per day for each part type. Although this equation provides an estimate of the minimum number of kanban cards, it does not address trigger values. Therefore, trigger values for the Stage Two line were set at the expected number of containers consumed for each part type in one day. The Toyota equation recommended using a total of 243 kanban cards. The details of the calculation are omitted for brevity. When evaluated in the simulation model, this solution yielded a performance score of 35.140 (using the performance function defined in Section 11.7.2) based on four independent simulation runs (replications).
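For illustration, the equation can be evaluated for a single hypothetical part type; every input value below is an assumption, since the book omits the underlying demand data:

    # All inputs are illustrative assumptions, not the book's data.
    monthly_demand = 4400      # parts per month (assumed)
    monthly_setups = 20        # one setup per working day, as the planners chose
    safety_coefficient = 1.0   # one day of WIP, as the planners assumed
    container_capacity = 10    # parts per container (assumed)

    kanbans = ((monthly_demand / monthly_setups)
               * (1 + safety_coefficient) / container_capacity)
    print(kanbans)  # 44.0 kanban cards for this hypothetical part type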
FIGURE 11.7  Measured response versus generation: the optimization module's solution compared with the Toyota solution.

                         Kanban Cards    Performance Score
    Optimization module       110             37.922
    Toyota                    243             35.115
11.8 Summary

In recent years, major advances have been made in the development of user-friendly simulation software packages. However, progress in developing simulation output analysis tools has been especially slow in the area of simulation optimization because conducting simulation optimization with traditional techniques has been as much an art as a science (Greenwood, Rees, and Crouch 1993). There are such an overwhelming number of traditional techniques that only individuals with extensive backgrounds in statistics and optimization theory have realized the benefits of integrating simulation and optimization concepts. Using newer optimization techniques, it is now possible to narrow the gap with user-friendly, yet powerful, tools that allow analysts to combine simulation and optimization for improved decision support. SimRunner is one such tool.
Our purpose is not to argue that evolutionary algorithms are the panacea
for solving simulation optimization problems. Rather, our purpose is to introduce
the reader to evolutionary algorithms by illustrating how they work and how to
use them for simulation optimization and to expose the reader to the wealth of
literature that demonstrates that evolutionary algorithms are a viable choice for
reliable optimization.
There will always be debate about the best techniques to use for simulation optimization. Debate is welcome, as it results in a better understanding of the real issues and leads to the development of better solution procedures. It must be remembered, however, that the practical issue is not whether the optimization technique guarantees locating the optimal solution in the shortest amount of time for all possible problems it may encounter; rather, it is whether the optimization technique consistently finds good solutions that are better than the solutions analysts are finding on their own. Newer techniques such as evolutionary algorithms and scatter search meet this requirement because they have proved robust in their ability to solve a wide variety of problems, and their ease of use makes them a practical choice for simulation optimization today (Boesel et al. 2001; Brady and Bowden 2001).
References

Ackley, D. A Connectionist Machine for Genetic Hill Climbing. Boston, MA: Kluwer, 1987.
Akbay, K. "Using Simulation Optimization to Find the Best Solution." IIE Solutions, May 1996, pp. 24–29.
Anonymous. "Lockheed Martin." IIE Solutions, December 1996, pp. SS48–SS49.
Azadivar, F. "A Tutorial on Simulation Optimization." 1992 Winter Simulation Conference, Arlington, VA, ed. J. Swain, D. Goldsman, R. Crain, and J. Wilson. Piscataway, NJ: Institute of Electrical and Electronics Engineers, 1992, pp. 198–204.
Bäck, T.; T. Beielstein; B. Naujoks; and J. Heistermann. "Evolutionary Algorithms for the Optimization of Simulation Models Using PVM." EuroPVM 1995: Second European PVM Users' Group Meeting, ed. J. Dongarra, M. Gengler, B. Tourancheau, and X. Vigouroux. Paris: Hermes, 1995, pp. 277–82.
Bäck, T., and H.-P. Schwefel. "An Overview of Evolutionary Algorithms for Parameter Optimization." Evolutionary Computation 1, no. 1 (1993), pp. 1–23.
Barton, R., and J. Ivey. "Nelder-Mead Simplex Modifications for Simulation Optimization." Management Science 42, no. 7 (1996), pp. 954–73.
Biethahn, J., and V. Nissen. "Combinations of Simulation and Evolutionary Algorithms in Management Science and Economics." Annals of Operations Research 52 (1994), pp. 183–208.
Boesel, J.; R. O. Bowden; F. Glover; and J. P. Kelly. "Future of Simulation Optimization." Proceedings of the 2001 Winter Simulation Conference, 2001, pp. 1466–69.
Bowden, R. O. "Genetic Algorithm Based Machine Learning Applied to the Dynamic Routing of Discrete Parts." Ph.D. dissertation, Department of Industrial Engineering, Mississippi State University, 1992.
Bowden, R. "The Evolution of Manufacturing Control Knowledge Using Reinforcement Learning." 1995 Annual International Conference on Industry, Engineering, and Management Systems, Cocoa Beach, FL, ed. G. Lee, 1995, pp. 410–15.
Bowden, R., and S. Bullington. "An Evolutionary Algorithm for Discovering Manufacturing Control Strategies." In Evolutionary Algorithms in Management Applications, ed. J. Biethahn and V. Nissen. Berlin: Springer, 1995, pp. 124–38.
Bowden, R., and S. Bullington. "Development of Manufacturing Control Strategies Using Unsupervised Learning." IIE Transactions 28 (1996), pp. 319–31.
Bowden, R.; J. Hall; and J. Usher. "Integration of Evolutionary Programming and Simulation to Optimize a Pull Production System." Computers and Industrial Engineering 31, no. 1/2 (1996), pp. 217–20.
Bowden, R.; R. Neppalli; and A. Calvert. "A Robust Method for Determining Good Combinations of Queue Priority Rules." Fourth International Industrial Engineering Research Conference, Nashville, TN, ed. B. Schmeiser and R. Uzsoy. Norcross, GA: IIE, 1995, pp. 874–80.
SimRunner Online Software Help. Ypsilanti, MI: Decision Science, Inc., 1996b.
SimRunner User's Guide, ProModel Edition. Ypsilanti, MI: Decision Science, Inc., 1996a.
Stuckman, B.; G. Evans; and M. Mollaghasemi. "Comparison of Global Search Methods for Design Optimization Using Simulation." 1991 Winter Simulation Conference, Phoenix, AZ, ed. B. Nelson, W. Kelton, and G. Clark. Piscataway, NJ: Institute of Electrical and Electronics Engineers, 1991, pp. 937–44.
Tompkins, G., and F. Azadivar. "Genetic Algorithms in Optimizing Simulated Systems." 1995 Winter Simulation Conference, Arlington, VA, ed. C. Alexopoulos, K. Kang, W. Lilegdon, and D. Goldsman. Piscataway, NJ: Institute of Electrical and Electronics Engineers, 1995, pp. 757–62.
Usher, J., and R. Bowden. "The Application of Genetic Algorithms to Operation Sequencing for Use in Computer-Aided Process Planning." Computers and Industrial Engineering 30, no. 4 (1996), pp. 999–1013.
12  MODELING MANUFACTURING SYSTEMS
We no longer have the luxury of time to tune and debug new manufacturing
systems on the floor, since the expected economic life of a new system, before
major revision will be required, has become frighteningly short.
Conway and Maxwell
12.1 Introduction
In Chapter 7 we discussed general procedures for modeling the basic operation of
manufacturing and service systems. In this chapter we discuss design and operating issues that are more specific to manufacturing systems. Different applications of simulation in manufacturing are presented together with how specific manufacturing issues are addressed in a simulation model. Most manufacturing
systems have material handling systems that, in some instances, have a major
impact on overall system performance. We touch on a few general issues related
to material handling systems in this chapter; however, a more thorough treatment
of material handling systems is given in Chapter 13.
Manufacturing systems are processing systems in which raw materials are
transformed into finished products through a series of operations performed at
workstations. In the rush to get new manufacturing systems on line, engineers and
planners often become overly preoccupied with the processes and methods without fully considering the overall coordination of system components.
Many layout and improvement decisions in manufacturing are left to chance
or are driven by the latest management fad with little knowledge of how much improvement will result or whether a decision will result in any improvement at all.
For example, work-in-process (WIP) reduction that is espoused by just-in-time
(JIT) often disrupts operations because it merely uncovers the rocks (variability,
long setups, or the like) hidden beneath the inventory water level that necessitated
the WIP in the first place. To accurately predict the effect of lowering WIP levels
requires sonar capability. Ideally you would like to identify and remove production rocks before arbitrarily lowering inventory levels and exposing production to
these hidden problems. Unfortunately, note Hopp and Spearman (2001), JIT, "as described in the American literature, offers neither sonar (models that predict the effects of system changes) nor a sense of the relative economics of level reduction versus rock removal."
Another popular management technique is the theory of constraints. In this approach, a constraint or bottleneck is identified and a best-guess solution is implemented, aimed at either eliminating that particular constraint or at least mitigating its effects. The implemented solution is then evaluated and, if the impact was underestimated, another solution is attempted. As one manufacturing manager expressed, "Constraint-based management can't quantify investment justification or develop a remedial action plan" (Berdine 1993). It is merely a trial-and-error technique in which a best-guess solution is implemented with the hope that enough improvement is realized to justify the cost of the solution. Even Deming's plan-do-check-act (PDCA) cycle of process improvement implicitly prescribes checking performance after implementation. What the PDCA cycle lacks is an evaluation step to test or simulate the plan before it is implemented. While eventually leading to a better solution, this trial-and-error approach ends up being costly, time-consuming, and disruptive to implement.
In this chapter we address how simulation can be used to evaluate these and other manufacturing design and operating decisions.
As companies move toward greater vertical and horizontal integration and look
for ways to improve the entire value chain, simulation will continue to be an essential tool for effectively planning the production and delivery processes.
Simulation in manufacturing covers the range from real-time cell control
to long-range technology assessment, where it is used to assess the feasibility
of new technologies prior to committing capital funds and corporate resources.
Figure 12.1 illustrates this broad range of planning horizons.
Simulations used to make short-term decisions usually require more detailed
models with a closer resemblance to current operations than what would be found in
long-term decision-making models. Sometimes the model is an exact replica of the
current system and even captures the current state of the system. This is true in realtime control and detailed scheduling applications. As simulation is used for more
long-term decisions, the models may have little or no resemblance to current operations. The model resolution becomes coarser, usually because higher-level decisions
are being made and data are too fuzzy and unreliable that far out into the future.
Simulation helps evaluate the performance of alternative designs and the effectiveness of alternative operating policies across a wide range of design and operational decisions in manufacturing.
FIGURE 12.1  Range of planning horizons for simulation in manufacturing, from job sequencing (seconds, hours, weeks, months) to system configuration (1-2 years) and technology assessment (2-5 years).
Typical applications of simulation in manufacturing include
Methods analysis.
Plant layout.
Batch sizing.
Production control.
Inventory control.
Supply chain planning.
Production scheduling.
Real-time control.
Emulation.
FIGURE 12.2  Axes of automation. Operations, material handling, and information each range independently from manual through semiautomated to automated.
Automation itself does not solve performance problems. If automation is unsuited for the application, poorly designed, or improperly implemented and integrated, then the system is not likely to succeed. The best approach to automation is to first simplify, then systematize, and finally automate. Some companies find that after simplifying and systematizing their processes, the perceived need to automate disappears.
FIGURE 12.3  Material flow system. Material flows from receiving through manufacturing to shipping; within manufacturing through Departments A-C, within a department through Cells A-C, within a cell through Stations A-C, and within a station from an input buffer through the machine to an output buffer.
FIGURE 12.4  Comparison between (a) process layout, in which parts A and B take different routes through functionally grouped equipment; (b) product layout, in which a single product flows down a dedicated line; and (c) cell layout, in which part families 1 and 2 are each produced within a dedicated cell.
Where a single product is produced in volume, a product layout is best. Often a combination of topologies is used within the same facility to accommodate mixed requirements. For example, a facility might be predominantly a job shop with a few manufacturing cells. In general, the more the topology resembles a product layout, where the flow can be streamlined, the greater the efficiencies that can be achieved.
Perhaps the greatest impact simulation can have on plant layout comes from
designing the process itself, before the layout is even planned. Because the
process plan and method selection provide the basis for designing the layout,
having a well-designed process means that the layout will likely not require major
changes once the best layout is found for the optimized process.
FIGURE 12.5  Illustration of (a) production batch, (b) move batch, and (c) process batch, shown as parts flow through Stations 1-3 to production.
A move batch is the quantity of parts grouped and moved together from one workstation to another. A production batch is often broken down into smaller move batches. This practice is called lot splitting. The
move batch need not be constant from location to location. In some batch manufacturing systems, for example, a technique called overlapped production is used
to minimize machine idle time and reduce work in process. In overlapped production, a move batch arrives at a workstation where parts are individually
processed. Then instead of accumulating the entire batch before moving on, parts
are sent on individually or in smaller quantities to prevent the next workstation
from being idle while waiting for the entire batch. The process batch is the quantity of parts that are processed simultaneously at a particular operation and usually
consists of a single part. The relationship between these batch types is illustrated
in Figure 12.5.
Deciding which size to use for each particular batch type is usually based on economic trade-offs between in-process inventory costs and the economies of scale associated with larger batch sizes. Larger batch sizes usually result in lower setup, handling, and processing costs. ProModel provides several commands for modeling batching operations, such as GROUP, COMBINE, JOIN, and LOAD.
Push Control
A push system is one in which production is driven by workstation capacity and
material availability. Each workstation seeks to produce as much as it can, pushing finished work forward to the next workstation. In cases where the system is
unconstrained by demand (the demand exceeds system capacity), material can
be pushed without restraint. Usually, however, there is some synchronization of
push with demand. In make-to-stock production, the triggering mechanism is a
drop in nished goods inventory. In make-to-order or assemble-to-order production, a master production schedule drives production by scheduling through a
material requirements planning (MRP) or other backward or forward scheduling
system.
MRP systems determine how much each station should produce for a given
period. Unfortunately, once planned, MRP is not designed to respond to disruptions and breakdowns that occur for that period. Consequently, stations continue
to produce inventory as planned, regardless of whether downstream stations can
absorb the inventory. Push systems such as those resulting from MRP tend to
build up work in process (WIP), creating high inventory-carrying costs and long
flow times.
Pull Control
At the other extreme of a push system is a pull system, in which downstream demand triggers each preceding station to produce a part with no more than one or
two parts at a station at any given time (see Figure 12.6). Pull systems are often
associated with the just-in-time (JIT) or lean manufacturing philosophy, which
advocates the reduction of inventories to a minimum. The basic principle of JIT is
to continuously reduce scrap, lot sizes, inventory, and throughput time as well as
eliminate the waste associated with non-value-added activities such as material
handling, storage, machine setup, and rework.
FIGURE 12.6  Push versus pull system. In a push system, work is pushed forward from Station 1 through Station 3; in a pull system, each station pulls work from the station upstream.
In drum-buffer-rope (DBR) control, the buffer represents the inventory needed in front of the bottleneck operation to ensure that it is never
starved for parts. The rope represents the tying or synchronizing of the production
release rate to the rate of the bottleneck operation.
As noted earlier in this chapter, a weakness in constraint-based management is
the inability to predict the level of improvement that can be achieved by eliminating a bottleneck. Simulation can provide this evaluative capability missing in
constraint-based management. It can also help detect floating bottlenecks for situations where the bottleneck changes depending on product mix and production rates.
Simulating DBR control can help determine where bottlenecks are and how much
buffer is needed prior to bottleneck operations to achieve optimum results.
The modeler can experiment with different replenishment strategies and usage policies until the best plan is found that meets the established criteria.
Using simulation modeling over traditional, analytic modeling for inventory
planning provides several benefits (Browne 1994):
Greater accuracy because actual, observed demand patterns and irregular
inventory replenishments can be modeled.
Greater flexibility because the model can be tailored to fit the situation
rather than forcing the situation to fit the limitations of an existing model.
Easier to model because complex formulas that attempt to capture the
entire problem are replaced with simple arithmetic expressions describing
basic cause-and-effect relationships.
Easier to understand because demand patterns and usage conditions are
more descriptive of how the inventory control system actually works.
Results reflect information similar to what would be observed from
operating the actual inventory control system.
More informative output that shows the dynamics of inventory conditions
over time and provides a summary of supply, demand, and shortages.
More suitable for management because it provides "what if" capability so
alternative scenarios can be evaluated and compared. Charts and graphs
are provided that management can readily understand.
Centralized versus Decentralized Storage. Another inventory-related issue has
to do with the positioning of inventory. Traditionally, inventory was placed in centralized locations for better control. Unfortunately, centralized inventory creates
excessive handling of material and increased response times. More recent trends
are toward decentralized, point-of-use storage, with parts kept where they are
needed. This eliminates needless handling and dramatically reduces response
time. Simulation can effectively assess which method works best and where to
strategically place storage areas to support production requirements.
The decision of which job to process next is usually based on one of the following rules (a code sketch of two of these rules follows the table):

Rule                               Definition
Shortest processing time           Select the job having the least processing time.
Earliest due date                  Select the job that is due the soonest.
First-come, first-served           Select the job that has been waiting the longest for this workstation.
First-in-system, first-served      Select the job that has been in the shop the longest.
Slack per remaining operations     Select the job with the smallest ratio of slack to operations remaining to be performed.
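As an illustration (in Python rather than ProModel), the first two rules reduce to selecting the minimum of the queue under different keys; the Job fields below are hypothetical:

    from dataclasses import dataclass

    @dataclass
    class Job:
        time_in_system: float   # how long the job has been in the shop
        processing_time: float  # time required at this workstation
        due_date: float

    def shortest_processing_time(queue):
        # SPT: select the job having the least processing time
        return min(queue, key=lambda job: job.processing_time)

    def earliest_due_date(queue):
        # EDD: select the job that is due the soonest
        return min(queue, key=lambda job: job.due_date)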
12.5.9 Emulation
A special use of simulation in manufacturing, particularly in automated systems,
has been in the area of hardware emulation. As an emulator, simulation takes
inputs from the actual control system (such as programmable controllers or microcomputers), mimics the behavior that would take place in the actual system,
and then provides feedback signals to the control system. The feedback signals are
synthetically created rather than coming from the actual hardware devices.
In using simulation for hardware emulation, the control system is essentially
plugged into the model instead of the actual system. The hardware devices are
then emulated in the simulation model. In this way simulation is used to test,
debug, and even refine the actual control system before any or all of the hardware has been installed. Emulation can significantly reduce the time to start up new systems and implement changes to automation.
Emulation has the same real-time requirements as when simulation is used
for real-time control. The simulation clock must be synchronized with the real
clock to mimic actual machining and move times.
Load and unload times can often simply be tacked onto the operation time. A precaution here is
in semiautomatic machines where an operator is required to load and/or
unload the machine but is not required to operate the machine. If the
operator is nearly always available, or if the load and unload activities
are automated, this may not be a problem.
Model them as a movement or handling activity. If, as just described, an
operator is required to load and unload the machine but the operator is
not always available, the load and unload activities should be modeled as
a separate move activity to and from the location. To be accurate, the part
should not enter or leave the machine until the operator is available. In
ProModel, this would be defined as a routing from the input buffer to the
machine and then from the machine to the output buffer using the
operator as the movement resource.
FIGURE 12.7  Transfer line system, showing machines, buffers, and stations along the line.
In some transfer machines, parts are placed directly on the station fixture. In-line transfer machines have a load station
at one end and an unload station at the other end. Some in-line transfer machines
are coupled together to form a transfer line (see Figure 12.7) in which parts are
placed on system pallets. This provides buffering (nonsynchronization) and even
recirculation of pallets if the line is a closed loop.
Another issue is finding the optimum number of pallets in a closed, nonsynchronous pallet system. Such a system is characterized by a fixed number of pallets that continually recirculate through the system. Obviously, the system should have enough pallets to at least fill every workstation in the system but not so many that they fill every position in the line (this would result in gridlock). Generally, productivity increases as the number of pallets increases up to a certain point, beyond which productivity levels off and then actually begins to decrease. Studies have shown that the optimal point tends to be close to the sum of all of the workstation positions plus one-half of the buffer pallet positions. By this rule of thumb, for example, a line with 12 workstation positions and 10 buffer pallet positions would run best with about 12 + 10/2 = 17 pallets.
A typical analysis might be to find the necessary buffer sizes to ensure that the system is unaffected by individual failures at least 95 percent of the time. A similar study might find the necessary buffer sizes to protect the operation against the longest tool change time of a downstream operation.
Stations may be modeled individually or collectively, depending on the level
of detail required in the model. Often a series of stations can be modeled as a single location. Operations in a transfer machine can be modeled as a simple operation time if an entire machine or block of synchronous stations is modeled as a
single location. Otherwise, the operation time specification is a bit tricky because it depends on all stations finishing their operation at the same time. One might initially be inclined simply to assign the time of the slowest operation to every station. Unfortunately, this does not account for the synchronization of operations. Usually, synchronization requires a timer to be set up for each station that represents the operation for all stations. In ProModel this is done by defining an activated subroutine that increments a global variable representing the cycle completion after waiting for the cycle time. Entities at each station wait for the variable to be incremented using a WAIT UNTIL statement.
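The following is a rough Python analogue of this synchronization scheme, using the SimPy library's shared events in place of ProModel's global variable and WAIT UNTIL statement; the cycle time and station count are assumptions for illustration:

    import simpy

    CYCLE_TIME = 1.0  # minutes per synchronized cycle (assumed)

    def cycle_timer(env, state):
        # Plays the role of the activated subroutine: after each cycle
        # time elapses, signal every waiting station simultaneously.
        while True:
            yield env.timeout(CYCLE_TIME)
            state["cycle_done"].succeed()
            state["cycle_done"] = env.event()  # re-arm for the next cycle

    def station(env, name, state):
        while True:
            yield state["cycle_done"]  # analogous to WAIT UNTIL
            print(f"{env.now:.1f}: {name} completes its synchronized cycle")

    env = simpy.Environment()
    state = {"cycle_done": env.event()}
    env.process(cycle_timer(env, state))
    for i in range(3):
        env.process(station(env, f"Station {i + 1}", state))
    env.run(until=3.5)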
FIGURE 12.9  Quantity of flow when (a) rate is constant and (b) rate is changing. For a constant flow rate f, the quantity moved between times t1 and t2 is Q = f * (t2 - t1); for a changing rate, Q is the integral of f over the interval from t1 to t2.
12.7 Summary
In this chapter we focused on the issues and techniques for modeling manufacturing systems. Terminology common to manufacturing systems was presented and
design issues were discussed. Different applications of simulation in manufacturing were described with examples of each. Issues related to modeling each type of
system were explained and suggestions offered on how system elements for each
system type might be represented in a model.
References

Askin, Ronald G., and Charles R. Standridge. Modeling and Analysis of Manufacturing Systems. New York: John Wiley & Sons, 1993.
Berdine, Robert A. "FMS: Fumbled Manufacturing Startups?" Manufacturing Engineering, July 1993, p. 104.
Bevans, J. P. "First, Choose an FMS Simulator." American Machinist, May 1982, pp. 144–45.
Blackburn, J., and R. Millen. "Perspectives on Flexibility in Manufacturing: Hardware versus Software." In Modelling and Design of Flexible Manufacturing Systems, ed. Andrew Kusiak, pp. 99–109. Amsterdam: Elsevier, 1986.
Suzaki, Kiyoshi. The New Manufacturing Challenge: Techniques for Continuous Improvement. New York: Free Press, 1987.
Wang, Hunglin, and Hsu Pin Wang. "Determining the Number of Kanbans: A Step Toward Non-Stock Production." International Journal of Production Research 28, no. 11 (1990), pp. 2101–15.
Wick, C. "Advances in Machining Centers." Manufacturing Engineering, October 1987, p. 24.
Zisk, B. "Flexibility Is Key to Automated Material Transport System for Manufacturing Cells." Industrial Engineering, November 1983, p. 60.
13  MODELING MATERIAL HANDLING SYSTEMS
Small changes can produce big results, but the areas of highest leverage are
often the least obvious.
Peter Senge
13.1 Introduction
Material handling systems utilize resources to move entities from one location to
another. While material handling systems are not uncommon in service systems,
they are found mainly in manufacturing systems. Apple (1977) notes that material
handling can account for up to 80 percent of production activity. On average, 50
percent of a company's operation costs are material handling costs (Meyers 1993).
Given the impact of material handling on productivity and operation costs, the
importance of making the right material handling decisions should be clear.
This chapter examines simulation techniques for modeling material handling
systems. Material handling systems are among the most complicated elements of a simulation model yet, in many instances, the most important. Conveyor systems and automatic guided vehicle systems often constitute the backbone of the material flow process. Because a basic knowledge of material handling technologies and decision variables is essential to modeling material handling systems, we will briefly describe the operating characteristics of each
type of material handling system.
The following is a list of 10 principles published by the Material Handling Institute as a guide to designing or modifying material handling systems:
1. Planning principle: The plan is the prescribed course of action and how
to get there. At a minimum it should define what, where, and when so
that the how and who can be determined.
2. Standardization principle: Material handling methods, equipment,
controls, and software should be standardized to minimize variety and
customization.
3. Work principle: Material handling work (volume, weight, or count per unit of time × distance) should be minimized. Shorten distances and use gravity where possible.
4. Ergonomic principle: Human factors (physical and mental) and safety
must be considered in the design of material handling tasks and
equipment.
5. Unit load principle: Unit loads should be appropriately sized to achieve
the material flow and inventory objectives.
6. Space utilization principle: Effective and efficient use should be made
of all available (cubic) space. Look at overhead handling systems.
7. System principle: The movement and storage system should be fully
integrated to form a coordinated, operational system that spans receiving,
inspection, storage, production, assembly, packaging, unitizing, order
selection, shipping, transportation, and the handling of returns.
8. Automation principle: Material handling operations should be
mechanized and/or automated where feasible to improve operational
efficiency, increase responsiveness, improve consistency and
predictability, decrease operating costs, and eliminate repetitive or
potentially unsafe manual labor.
9. Environmental principle: Environmental impact and energy
consumption should be considered as criteria when designing or
selecting alternative equipment and material handling systems.
10. Life cycle cost principle: A thorough economic analysis should account
for the entire life cycle of all material handling equipment and resulting
systems.
Material handling systems are traditionally classified into one of the following categories (Tompkins et al. 1996):

1. Conveyors.
2. Industrial vehicles.
3. Automated storage/retrieval systems.
4. Carousels.
5. Automatic guided vehicle systems.
6. Cranes and hoists.
7. Robots.
Missing from this list is hand carrying, which still is practiced widely if for no
other purpose than to load and unload machines.
13.4 Conveyors
A conveyor is a track, rail, chain, belt, or some other device that provides continuous movement of loads over a fixed path. Conveyors are generally used for high-volume movement over short to medium distances. Some overhead or towline
systems move material over longer distances. Overhead systems most often move
parts individually on carriers, especially if the parts are large. Floor-mounted conveyors usually move multiple items at a time in a box or container. Conveyor speeds
generally range from 20 to 80 feet per minute, with high-speed sortation conveyors
reaching speeds of up to 500 fpm in general merchandising operations.
Conveyors may be either gravity or powered conveyors. Gravity conveyors
are easily modeled as queues because that is their principal function. The more
challenging conveyors to model are powered conveyors, which come in a variety
of types.
Towline systems are suited for applications where precise handling is unimportant. Tow carts are relatively inexpensive compared to powered vehicles; consequently, many of them
can be added to a system to increase throughput and be used for accumulation.
An underfloor towline uses a tow chain in a trough under the floor. The chain moves continuously, and cart movement is controlled by extending a drive pin from the cart down into the chain. At specific points along the guideway, computer-operated stopping mechanisms raise the drive pins to halt cart movement. One advantage of this system is that it can provide some automatic buffering with stationary carts along the track.
Towline systems operate much like power-and-free systems, and, in fact,
some towline systems are simply power-and-free systems that have been inverted.
By using the floor to support the weight, heavier loads can be transported.
Monorail Conveyors. Automatic monorail systems have self-powered carriers
that can move at speeds of up to 300 fpm. In this respect, a monorail system is
more of a discrete unit movement system than a conveyor system. Travel can be
disengaged automatically in accumulation as the leading "beaver tail" contacts
the limit switch on the carrier in front of it.
                     Fixed Spacing               Random Spacing
    Accumulation     Power-and-free, towline     Roller, monorail
    Nonaccumulation  Trolley, sortation          Belt, chain
A powered belt conveyor is a typical example of a nonaccumulation conveyor with random spacing. Parts may be interjected anywhere on the belt, and when the belt stops, all parts stop.
Performance Measures and Decision Variables. Throughput capacity, delivery
time, and queue lengths (for accumulation conveyors) are performance measures in
conveyor system simulation. Several issues that are addressed in conveyor analysis include conveyor speed, accumulation capacity, and the number of carriers.
Questions to Be Answered. Common questions that simulation can help answer in designing and operating a conveyor system are listed below (a rough calculation sketch for the first two follows the list):
What is the minimum conveyor speed that still meets throughput
requirements?
What is the throughput capacity of the conveyor?
What is the load delivery time for different activity levels?
How much queuing is needed on accumulation conveyors?
How many carriers are needed on a trolley or power-and-free conveyor?
What is the optimum number of pallets that maximizes productivity in a
recirculating conveyor?
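A back-of-the-envelope sketch for the first two questions, under simple steady-flow assumptions; every number below is illustrative, not from the text:

    # Nonaccumulating conveyor with evenly spaced loads (assumed values).
    conveyor_length_ft = 120.0
    speed_fpm = 60.0         # within the 20-80 fpm range cited earlier
    load_spacing_ft = 4.0    # minimum spacing between loads (assumed)

    delivery_time_min = conveyor_length_ft / speed_fpm        # 2.0 minutes
    throughput_per_hour = 60.0 * speed_fpm / load_spacing_ft  # 900 loads/hr
    print(delivery_time_min, throughput_per_hour)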
Where higher storage density is required and longer-term storage is needed, multidepth or deep-lane storage systems are used. AS/RSs that use pallet storage racks are usually referred to as unit-load systems, while bin storage rack systems are known as miniload systems.
The throughput capacity of an AS/RS is a function of the rack configuration and speed of the S/R machine. Throughput is measured in terms of how many single or dual cycles can be performed per hour. A single cycle is measured as the average time required to pick up a load at the pickup station, store the load in a rack location, and return to the pickup station. A dual cycle is the time required to pick up a load at the input station, store the load in a rack location, retrieve a load from another rack location, and deliver the load to the output station. Obviously, there is considerable variation in cycle times due to the number of different possible rack locations that can be accessed. If the AS/RS is a stand-alone system with no critical interface with another system, average times are adequate for designing the system. If, however, the system interfaces with other systems such as front-end or remote-picking stations, it is important to take into account the variability in cycle times.
In practice, loads are stored in and retrieved from many different rack locations, making the cycle time extremely difficult to calculate. The problem is further compounded by mixed single (pickup and store, or retrieve and deposit) and dual (pickup, storage, retrieval, and deposit) cycles. Activity zoning, in which items are stored in assigned zones based on frequency of use, also complicates cycle time calculations. The easiest way to accurately determine the cycle time for an AS/RS is by using a computer to enumerate the possible movements of the S/R machine from the pickup and delivery (P&D) stand to every rack or bin location. This produces an empirical distribution for the single cycle time. For dual cycle times, an intermediate cycle time (the time to go from any location to any other location) must be determined. For a rack 10 tiers high and 40 bays long, this can be 400 × 400, or 160,000, calculations! Because of the many calculations, sometimes a large sample size is used to develop the distribution. Most suppliers of automated storage/retrieval systems have computer programs for calculating cycle times that can be generated based on a defined configuration in a matter of minutes.
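A sketch of this enumeration for a hypothetical 10-tier by 40-bay rack; the rack dimensions, speeds, and pickup/deposit time are assumed, and travel time to a location is taken as the longer of the simultaneous horizontal and vertical moves:

    import statistics

    BAY_WIDTH_FT, TIER_HEIGHT_FT = 4.0, 3.0   # assumed rack dimensions
    H_SPEED_FPM, V_SPEED_FPM = 400.0, 100.0   # assumed S/R machine speeds
    PD_TIME_MIN = 0.25                        # assumed pickup + deposit time

    # Enumerate the single-cycle time (out to the location and back, plus
    # pickup and deposit) for every one of the 400 rack locations.
    single_cycles = []
    for bay in range(1, 41):
        for tier in range(1, 11):
            travel = max(bay * BAY_WIDTH_FT / H_SPEED_FPM,
                         tier * TIER_HEIGHT_FT / V_SPEED_FPM)
            single_cycles.append(2 * travel + PD_TIME_MIN)

    print(statistics.mean(single_cycles), max(single_cycles))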
Analytical Cycle Time Calculations

Analytical solutions have been derived for calculating system throughput based on a given aisle configuration. Such solutions often rely on simplified assumptions about the operation of the system. Bozer and White (1984), for example, derive an equation for estimating single and dual cycle times assuming (1) randomized storage, (2) equal rack opening sizes, (3) P&D location at the base level on one end, (4) constant horizontal and vertical speeds, and (5) simultaneous horizontal and vertical rack movement. In actual practice, rack openings are seldom of equal size, and horizontal and vertical accelerations can have a significant influence on throughput capacity.
While analytical solutions to throughput estimation may provide a rough approximation for simple configurations, other configurations become extremely difficult to estimate. In addition, there are control strategies that may improve throughput rate, such as retrieving the load on the order list that is closest to the load just stored or storing a load in an opening that is closest to the next load to be retrieved. Finally, analytical solutions provide not a distribution of cycle times but merely a single expected time, which is inadequate for analyzing AS/RSs that interface with other systems.
AS/RS with Picking Operations
Whereas some AS/RSs (especially unit-load systems) have loads that are not
captive to the system, many systems (particularly miniload systems) deliver bins
or pallets either to the end of the aisle or to a remote area where material is
picked from the bin or pallet, which is then returned for storage. Remote picking
is usually achieved by linking a conveyor system to the AS/RS where loads are
delivered to remote picking stations. In this way, containers stored in any aisle
can be delivered to any workstation. This permits entire orders to be picked at a
single station and eliminates the two-step process of picking followed by order
consolidation.
Where picking takes place is an important issue in achieving the highest productivity from both the AS/RS and the picker. Both are expensive resources, and it is undesirable to have either one waiting on the other.

Typical inputs to capture when modeling an AS/RS include
Number of aisles.
Number of S/R machines.
Rack configuration (bays and tiers).
Bay or column width.
Tier or row height.
Input point(s).
Output point(s).
Zone boundaries and activity profile if activity zoning is utilized.
S/R machine speed and acceleration/deceleration.
Pickup and deposit times.
Downtime and repair time characteristics.
At a simple level, an AS/RS move time may be modeled by taking a time from a
probability distribution that approximates the time to store or retrieve a load. More
precise modeling incorporates the actual crane (horizontal) and lift (vertical)
speeds. Each movement usually has a different speed and distance to travel, which
means that movement along one axis is complete before movement along the other
axis. From a modeling standpoint, it is usually necessary to calculate and model
only the longest move time.
In modeling an AS/RS, the storage capacity is usually not a consideration and
the actual inventory of the system is not modeled. It would require considerable overhead to model the complete inventory in a rack with 60,000 pallet locations.
Because only the activity is of interest in the simulation, actual inventory is
ignored. In fact, it is usually not even necessary to model specific stock keeping
units (SKUs) being stored or retrieved, but only to distinguish between load types
insofar as they affect routing and subsequent operations.
Common performance measures that are of interest in simulating an AS/RS
include
S/R machine utilization.
Response time.
Throughput capability.
13.7 Carousels
One class of storage and retrieval systems is a carousel. Carousels are essentially
moving racks that bring the material to the retriever (operator or robot) rather than
sending a retriever to the rack location.
In addition to response times, carousels may have capacity considerations. The current contents
may even affect response time, especially if the carousel is used to store multiple
bins of the same item such as WIP storage. Unlike large AS/RSs, storage capacity
may be an important issue in modeling carousel systems.
In zone blocking, the guide path is divided into various zones (segments) that
allow no more than one vehicle at a time. Zones can be set up using a variety of
different sensing and communication techniques. When one vehicle enters a zone,
other vehicles are prevented from entering until the current vehicle occupying the
zone leaves. Once the vehicle leaves, any vehicle waiting for access to the freed
zone can resume travel.
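The control logic is easy to prototype. Below is a minimal Python sketch of the zone-blocking rule; the Zone class and vehicle names are illustrative assumptions, not code from any AGV controller.

from collections import deque

class Zone:
    """A guide-path segment that admits at most one vehicle at a time."""
    def __init__(self, name):
        self.name = name
        self.occupant = None          # vehicle currently in the zone
        self.waiting = deque()        # vehicles blocked at the zone boundary

    def request(self, vehicle):
        """Vehicle asks to enter; returns True if it may proceed now."""
        if self.occupant is None:
            self.occupant = vehicle
            return True
        self.waiting.append(vehicle)  # blocked until the zone is freed
        return False

    def release(self, vehicle):
        """Vehicle leaves the zone; the longest-waiting vehicle resumes."""
        assert self.occupant == vehicle
        self.occupant = self.waiting.popleft() if self.waiting else None
        return self.occupant          # vehicle (if any) now free to travel

# Usage: AGV1 claims zone Z3; AGV2 is blocked until AGV1 releases it.
z3 = Zone("Z3")
print(z3.request("AGV1"))   # True  -> AGV1 enters
print(z3.request("AGV2"))   # False -> AGV2 waits at the boundary
print(z3.release("AGV1"))   # 'AGV2' -> AGV2 resumes travel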
The expression

    2 × (Distance_ij / Avg. speed_ij)
implies that the time required to travel to the point of pickup (empty travel time) is
the same as the time required to travel to the point of deposit (full travel time). This
assumption provides only an estimate of the time required to travel to a point of
pickup because it is uncertain where a vehicle will be coming from. In most cases,
this should be a conservative estimate because (1) vehicles usually follow a work
search routine in which the closest loads are picked up first and (2) vehicles
frequently travel faster when empty than when full. A more accurate way of calculating empty load travel for complex systems is to use a compound weighted-averaging technique that considers all possible empty moves together with their
probabilities (Fitzgerald 1985).
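A small Python sketch of that compound weighted-averaging idea follows; the dispatch probabilities and travel times are hypothetical values for illustration.

# Hypothetical data: probability that an empty vehicle is dispatched
# from a given origin to the pickup point, with the matching travel time.
empty_moves = [
    # (probability, empty travel time in seconds)
    (0.40, 12.0),   # dispatched from drop-off A
    (0.35, 20.0),   # dispatched from drop-off B
    (0.15, 35.0),   # dispatched from staging area
    (0.10, 50.0),   # dispatched from remote parking
]

# The probabilities of all possible empty moves must sum to 1.
assert abs(sum(p for p, _ in empty_moves) - 1.0) < 1e-9

# Compound weighted average of empty travel time:
expected_empty = sum(p * t for p, t in empty_moves)
print(f"expected empty travel time: {expected_empty:.1f} s")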
Work Search Rules. When a vehicle becomes available and more than one load awaits pickup, a work search rule determines which load is retrieved first, for example the

Longest-waiting load.
Closest waiting load.
Highest-priority load.
Most loads waiting at a location.
Vehicle Parking Rules. If a transporter delivers a part and no other parts are
waiting for pickup, a decision must be made relative to the deployment of the
transporter. For example, the vehicle can remain where it is, or it can be sent to a
more strategic location where it is likely to be needed next. If several transport
vehicles are idle, it may be desirable to have a prioritized list for a vehicle to follow for alternative parking preferences.
Work Zoning. In some cases it may be desirable to keep a vehicle captive to a
particular area of production and not allow it to leave this area unless it has work
to deliver elsewhere. In this case, the transporter must be given a zone boundary
within which it is capable of operating.
Typical performance measures of interest in simulating an AGVS include

Resource utilization.
Load throughput rate.
Response time.
Vehicle congestion.
In addition to the obvious purpose of simulating an AGVS to find out if the number of vehicles is sufficient or excessive, simulation can also be used to determine
the following:
Number of vehicles.
Work search rules.
Park search rules.
Placement of crossover and bypass paths.
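As a starting point for such experiments, a rough fleet-size estimate is often used to seed the model. The Python sketch below uses a common first-cut calculation (vehicle-time demanded per hour divided by the vehicle-time one AGV supplies); it is not a method from the text, the parameter values are hypothetical, and simulation would refine the answer under congestion and blocking.

import math

def agvs_required(moves_per_hour, loaded_time_s, empty_time_s,
                  pickup_s, deposit_s, availability=0.9):
    # Total vehicle-time one delivery consumes, including empty travel
    # to the pickup point and the pickup/deposit handling times.
    time_per_move = loaded_time_s + empty_time_s + pickup_s + deposit_s
    demand_s_per_hr = moves_per_hour * time_per_move
    # Effective vehicle-seconds one AGV supplies per hour, derated
    # for downtime/charging by an assumed availability factor.
    supply_s_per_hr = 3600 * availability
    return demand_s_per_hr / supply_s_per_hr

n = agvs_required(moves_per_hour=60, loaded_time_s=90, empty_time_s=25,
                  pickup_s=10, deposit_s=10)
print(f"estimated fleet size: {n:.2f} -> round up to {math.ceil(n)} vehicles")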
Performance measures of interest in simulating cranes include

Response time.
Percentage of time blocked by another crane.
Decision variables that become the basis for experimentation include
Work search rules.
Park search rules.
Multiple-crane priority rules.
Typical questions addressed in a simulation model involving cranes include
Which task assignment rules maximize crane utilization?
What idle crane parking strategy minimizes response time?
How much time are cranes blocked in a multicrane system?
13.10 Robots
Robots are programmable, multifunctional manipulators used for handling material or manipulating a tool such as a welder to process material. Robots are often
classified based on the type of coordinate system on which they are based: cylindrical, cartesian, or revolute. The choice of robot coordinate system depends on
the application. Cylindrical or polar coordinate robots are generally more appropriate for machine loading. Cartesian coordinate robots are easier to equip with
tactile sensors for assembly work. Revolute or anthropomorphic coordinate robots have the most degrees of freedom (freedom of movement) and are especially
suited for use as a processing tool, such as for welding or painting. Because cartesian or gantry robots can be modeled easily as cranes, our discussion will focus on
cylindrical and revolute robots. When used for handling, cylindrical or revolute
robots are generally used to handle a medium level of movement activity over
very short distances, usually to perform pick-and-place or load/unload functions.
Robots generally move parts individually rather than in a consolidated load.
One of the applications of simulation is in designing the cell control logic for
a robotic work cell. A robotic work cell may be a machining, assembly, inspection, or a combination cell. Robotic cells are characterized by a robot with 3 to
5 degrees of freedom surrounded by workstations. The workstation is fed parts
by an input conveyor or other accumulation device, and parts exit from the cell
on a similar device. Each workstation usually has one or more buffer positions to
which parts are brought if the workstation is busy. Like all cellular manufacturing, a robotic cell usually handles more than one part type, and each part type
may not have to be routed through the same sequence of workstations. In addition to part handling, the robot may be required to handle tooling or fixtures.
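To make the cell-control idea concrete, here is a minimal Python sketch of one plausible dispatching rule for the robot; the rule (unload finished parts first, then load idle stations), the status values, and the data layout are illustrative assumptions, not the text's method.

# One possible cell-control rule for a robot serving several workstations:
# priority goes to unloading finished parts (to free blocked stations),
# then to loading parts from the input device into idle stations.

def next_robot_task(stations, input_queue):
    """stations: list of dicts with 'status' in {'idle', 'busy', 'done'}.
    Returns an (action, station_index) tuple, or None if nothing to do."""
    for i, s in enumerate(stations):
        if s["status"] == "done":          # finished part blocks the station
            return ("unload", i)
    if input_queue:
        for i, s in enumerate(stations):
            if s["status"] == "idle":      # feed an idle station next
                return ("load", i)
    return None                            # robot parks and waits

stations = [{"status": "busy"}, {"status": "done"}, {"status": "idle"}]
print(next_robot_task(stations, input_queue=["part_17"]))  # ('unload', 1)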
13.11 Summary
Material handling systems can be one of the most difficult elements to model
using simulation simply because of their sheer complexity. At the same time, the
material handling system is often the critical element in the smooth operation of a
manufacturing or warehousing system. The material handling system should be
designed first using estimates of resource requirements and operating parameters
(speed, move capacity, and so forth). Simulation can then help verify design decisions and fine-tune the design.
In modeling material handling systems, it is advisable to simplify wherever
possible so models don't become too complex. A major challenge in modeling
conveyor systems comes when multiple merging and branching occur. A challenging issue in modeling discrete part movement devices, such as AGVs, is how
to manage their deployment in order to get maximum utilization and meet production goals.
References
Apple, J. M. Plant Layout and Material Handling. 3rd ed. New York: Ronald Press, 1977.
Askin, Ronald G., and C. R. Standridge. Modeling and Analysis of Manufacturing Systems. New York: John Wiley & Sons, 1993.
"Automatic Monorail Systems." Material Handling Engineering, May 1988, p. 95.
Bozer, Y. A., and J. A. White. "Travel-Time Models for Automated Storage/Retrieval Systems." IIE Transactions 16, no. 4 (1984), pp. 329-38.
Fitzgerald, K. R. "How to Estimate the Number of AGVs You Need." Modern Materials Handling, October 1985, p. 79.
Henriksen, J., and T. Schriber. "Simplified Approaches to Modeling Accumulating and Non-Accumulating Conveyor Systems." In Proceedings of the 1986 Winter Simulation Conference, ed. J. Wilson, J. Henriksen, and S. Roberts. Piscataway, NJ: Institute of Electrical and Electronics Engineers, 1986.
Meyers, Fred E. Plant Layout and Material Handling. Englewood Cliffs, NJ: Regents/Prentice Hall, 1993.
Pritsker, A. A. B. Introduction to Simulation and SLAM II. West Lafayette, IN: Systems Publishing Corporation, 1986, p. 600.
Tompkins, J. A.; J. A. White; Y. A. Bozer; E. H. Frazelle; J. M. A. Tanchoco; and J. Trevino. Facilities Planning. 2nd ed. New York: John Wiley & Sons, 1996.
Zisk, B. "Flexibility Is Key to Automated Material Transport System for Manufacturing Cells." Industrial Engineering, November 1983, p. 60.
14 MODELING SERVICE
SYSTEMS
No matter which line you move to, the other line always moves faster.
Unknown
14.1 Introduction
A service system is a processing system in which one or more services are provided to customers. Entities (customers, patients, paperwork) are routed through
a series of processing areas (check-in, order, service, payment) where resources
(service agents, doctors, cashiers) provide some service. Service systems exhibit
unique characteristics that are not found in manufacturing systems. Sasser, Olsen,
and Wyckoff (1978) identify four distinct characteristics of services that distinguish them from products that are manufactured:
1. Services are intangible; they are not things.
2. Services are perishable; they cannot be inventoried.
3. Services provide heterogeneous output; output may vary from customer to customer.
4. Services involve simultaneous production and consumption; the service is produced and consumed at the same time.
These characteristics pose great challenges for service system design and management, particularly in the areas of process design and staffing. Having discussed general modeling procedures common to both manufacturing and service
system simulation in Chapter 7, and specific modeling procedures unique to
manufacturing systems in Chapter 12, in this chapter we discuss design and
operating considerations that are more specific to service systems. A description
is given of major classes of service systems. To provide an idea of how simulation might be performed in a service industry, a call center simulation example is
presented.
A well-run service operation can be a powerful strategic and competitive weapon. Here are some typical internal performance measures that can be evaluated using simulation:
Service time.
Waiting time.
Queue lengths.
Resource utilization.
Service level (the percentage of customers who can be promptly serviced,
without any waiting).
Abandonment rate (the percentage of impatient customers who leave the
system).
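As an illustration of how the last two measures might be computed from simulation output records, here is a minimal Python sketch; the record layout and the half-minute threshold for "prompt" service are assumptions made for the example.

# Each record: (waiting time in minutes, whether the customer abandoned).
calls = [
    (0.0, False), (1.2, False), (6.5, True), (0.4, False), (9.8, True),
]

PROMPT_THRESHOLD = 0.5   # assumed cutoff for "promptly serviced", minutes

served = [c for c in calls if not c[1]]
service_level = sum(1 for w, _ in calls if w <= PROMPT_THRESHOLD) / len(calls)
abandonment_rate = sum(1 for _, a in calls if a) / len(calls)
avg_wait_served = sum(w for w, _ in served) / len(served)

print(f"service level:     {service_level:.0%}")
print(f"abandonment rate:  {abandonment_rate:.0%}")
print(f"avg wait (served): {avg_wait_served:.2f} min")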
This entire realm of support processes presents a major area of potential application for simulation. Similar to the problem of dealing with excess inventory in
manufacturing systems, customers, paperwork, and information often sit idle in
service systems while waiting to be processed. In fact, the total waiting time for
entities in service processes often exceeds 90 percent of the total flow time.
The types of questions that simulation helps answer in service systems can be
categorized as being either design related or management related.
Consider, for example, the activities involved in a full-service car wash:

Payment.
Wash.
Rinse.
Dry.
Vacuum.
Interior cleaning.
How many ways can cars be processed through the car wash? Each activity could
be done at separate places, or any and even all of them could be combined at a station. This is possible because of the flexibility of the resources that perform each operation
and the loosely defined space requirements of most of the activities. Many
of the activities could also be performed in any sequence. Payment, vacuuming,
and interior cleaning could be done in almost any order. The only order that
possibly could not change easily is washing, rinsing, and drying. The other consideration with many service processes is that not all entities receive the same
services. A car wash customer, for example, may forgo getting vacuum service or
interior cleaning. Thus it is apparent that the mix of activities in service processes
can vary greatly.
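The combinatorics can be checked directly. The short Python sketch below enumerates the feasible sequences, assuming the wash-rinse-dry precedence is the only ordering constraint:

from itertools import permutations

activities = ["payment", "wash", "rinse", "dry", "vacuum", "interior"]

def feasible(seq):
    # The only fixed ordering: wash, then rinse, then dry.
    return seq.index("wash") < seq.index("rinse") < seq.index("dry")

valid = [seq for seq in permutations(activities) if feasible(seq)]
print(f"{len(valid)} feasible sequences out of 720 total")  # 120 of 720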
Simulation helps in process design by allowing different processing sequences
and combinations to be tried to find the best process flow. Modeling process flow is
relatively simple in most service industries. It is only when shifts, resource pools,
and preemptive tasks get involved that it starts to become more challenging.
The size of the service force, or the servicing policies and procedures, can be modified to run additional experiments.
Major classes of service systems include the following:

Service factory.
Pure service shop.
Retail service store.
Professional service.
Telephonic service.
Delivery service.
Transportation service.
Such systems usually have both front-room and back-room activities, with total service being
provided in a matter of minutes. Customization is done by selecting from a menu
of options previously defined by the provider. Waiting time and service time are
two primary factors in selecting the provider. Convenience of location is another
important consideration. Customer commitment to the provider is low because
there are usually alternative providers just as conveniently located.
Examples include banks (branch operations), restaurants, copy centers, barbers, check-in counters of airlines, hotels, and car rental agencies.
For large items such as furniture or appliances, the customer may have to
order and pay for the merchandise rst. The delivery of the product may take
place later.
Examples include department stores, grocery stores, hardware stores, and
convenience stores.
If the call is a simple request, then the service time is short. If the service is a technical support process,
then the service time may be long or the call may require a callback after some
research.
Examples include technical support services (hotlines) for software or hardware, mail-order services, and airline and hotel reservations.
14.7.1 Background
Society Bank's Information Technology and Operations (ITO) group offers a help
desk service to customers of the ITO function. This service is offered to both internal and external customers, handling over 12,000 calls per month. The client
services help desk provides technical support and information on a variety of technical topics including resetting passwords, ordering PC equipment, requesting
phone installation, ordering extra copies of internal reports, and reporting mainframe and network problems. The help desk acts as the primary source of communication between ITO and its customers. It interacts with authority groups within
ITO by providing work and support when requested by a customer.
The old client services help desk process consisted of (1) a mainframe help
desk, (2) a phone/local area network help desk, and (3) a PC help desk. Each of
the three operated separately with separate phone numbers, operators, and facilities. All calls were received by their respective help desk operators, who manually
logged all information about the problem and the customer, and then proceeded to
pass the problem on to an authority group or expert for resolution.
Because of acquisitions, the increased use of information technologies, and the
passing of time, Society's help desk process had become fragmented and layered
with bureaucracy. This made the help desk a good candidate for a process redesign.
It was determined that the current operation did not have a set of clearly defined
goals, other than to provide a help desk service. The organizational boundaries of the
current process were often obscured by the fact that much of the different help desks' work overlapped and was consistently being handed off. There were no process performance measures in the old process, only measures of call volume. A proposal was
made to consolidate the help desk functions. The proposal also called for the introduction of automation to enhance the speed and accuracy of the services.
Call mix by time period:

Time Period       Password Reset   Device Reset   Inquiries   Percent Level 1   Percent Level 1A   Percent Level 2
7 A.M.-11 A.M.        11.7%            25.7%          8.2%          45.6%              4.6%              47.3%
11 A.M.-2 P.M.         8.8             29.0          10.9           48.7               3.6               44.3
2 P.M.-5 P.M.          7.7             27.8          11.1           46.6               4.4               45.8
5 P.M.-8 P.M.          8.6             36.5          17.8           62.9               3.7               32.2
Average                9.9%            27.5%          9.9%          47.3%              4.3%              48.4%
Calls are categorized by call level: Level 1 calls are resolved immediately by the help desk, Level 1A calls are resolved later by the help desk, and Level 2 calls are handed off to an authority group for resolution.
Historically, calls averaged 2.5 minutes, lasting anywhere from 30 seconds
to 25 minutes. Periodically, follow-up work that ranges from 1 to 10 minutes is required after a call. Overall, the help desk service abandonment rate was 4 to 12 percent (as measured by the percentage of calls abandoned), depending on staffing
levels.
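For modeling purposes, behavior like this might be sampled as in the Python sketch below; the lognormal shape and the 30 percent follow-up probability are assumptions chosen to roughly match the reported mean and range, not values from the study.

import random

def call_time_min(rng):
    # Rejection-sample a lognormal (assumed shape) so that every draw
    # stays within the reported 30-second to 25-minute range.
    while True:
        t = rng.lognormvariate(mu=0.7, sigma=0.6)   # median near 2 minutes
        if 0.5 <= t <= 25.0:
            return t

def followup_min(rng, p_followup=0.30):
    # "Periodic" follow-up work, assumed to occur on 30% of calls,
    # uniformly distributed over the reported 1-to-10-minute range.
    return rng.uniform(1.0, 10.0) if rng.random() < p_followup else 0.0

rng = random.Random(42)
samples = [call_time_min(rng) for _ in range(100_000)]
print(f"sampled mean call time: {sum(samples) / len(samples):.2f} min")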
The help desk process was broken down into its individual work steps, and
owners of each work step were identified. Then a flowchart that described the
process was developed (Figure 14.1). From the flowchart, a computer simulation
model was developed of the old operation, which was validated by comparing
actual performance of the help desk with that of the simulation's output. During
the 10-day test period, the simulation model produced results consistent with
those of the actual performance. The user of the model was able to define such
model parameters as daily call volume and staffing levels through the use of the
model's Interact Box, which provided sensitivity analysis.
Joint requirements planning (JRP) sessions allowed the project team to collect information about likes, dislikes, needs, and improvement suggestions from
users, customers, and executives. This information clarified the target goals of the
process along with its operational scope. Suggestions for improving the help desk process were collected and prioritized from the JRP sessions. Internal benchmarking was also performed using Society's customer service help desk as a reference for performance and operational ideas.
FIGURE 14.1
Flow diagram of client services (elements include customer problem, query, or change; automated problem recognition; automated warning recognition; customer self-service; help desk; automated resolution; and client notification and escalation).
A target process was defined as providing a single-source help desk (one-stop shopping approach) for ITO customers, with performance targets of

90 percent of calls to be answered by five rings.
Less than 2 percent abandonment rate.

Other goals of the target process were to enhance the users' perception of the help desk and to significantly reduce the time required to resolve a customer's request.
A combination of radical redesign ideas (reengineering) and incremental change
ideas (TQM) formed the nucleus of a target help desk process. The redesigned
process implemented the following changes:
Consolidate three help desks into one central help desk.
Create a consistent means of problem/request logging and resolution.
Introduce automation for receiving, queuing, and categorizing calls for
resetting terminals.
Capture information pertaining to the call once at the source, and, if the
call is handed off, have the information passed along also.
Use existing technologies to create a hierarchy of problem resolution
where approximately 60 percent of problems can be resolved immediately
without using the operators and approximately 15 percent of the calls can
be resolved immediately by the operators.
Create an automated warning and problem recognition system that detects
and corrects mainframe problems before they occur.
The original simulation model was revisited to better understand the current
customer service level and what potential impact software changes, automation,
and consolidation would have on the stafng and equipment needs and operational capacity. Simulation results could also be used to manage the expectations
for potential outcomes of the target process implementation.
Immediate benefit was gained from the use of this application of simulation to
better understand the old operational interrelationships between stafng, call volume, and customer service. Figure 14.2 shows how much the abandonment rate
will change when the average daily call volume or the number of operators varies.

FIGURE 14.2
Calls abandoned (0 to 0.20) as a function of the number of operators staffed.
The importance of this graph is realized when one notices that it becomes increasingly harder to lower the abandonment rate once the number of operators increases
above seven. Above this point, the help desk can easily handle substantial
increases in average daily call volume while maintaining approximately the same
abandonment rate.
After modeling and analyzing the current process, the project team evaluated
the following operational alternatives using the simulation model:
The option to select from a number of different shift schedules so that
staffing can easily be varied from current levels.
The introduction of the automated voice response unit and its ability to
both receive and place calls automatically.
The ability of the automated voice response unit to handle device resets,
password resets, and system inquiries.
The incorporation of the PC and LAN help desks so that clients with
PC-related problems can have their calls routed directly to an available
expert via the automated voice response unit.
The ability to change the response time of the ASIM problem-logging
system.
Additionally, two alternative staffing schedules were proposed. The alternative schedules attempt to better match the time at which operators are available for
answering calls to the time the calls are arriving. The two alternative schedules
reduce effort hours by up to 8 percent while maintaining current service levels.
Additional results related to the Alternative Operations simulation model were
The automated voice response unit will permit approximately 75 percent
of PC-related calls to be answered immediately by a PC expert directly.
Using Figure 14.2, the automated voice response unit's ability to aid in
reducing the abandonment rate can be ascertained simply by estimating
the reduction in the number of calls routed to help desk operators and
finding the appropriate point on the chart for a given number of operators.
Improving the response time of ASIM will noticeably affect the operation
when staffing levels are low and call volumes are high. For example, with
five operators on staff and an average call volume of 650 calls per day, a
25 percent improvement in the response time of ASIM resulted in a
reduction in the abandonment rate of approximately 2 percent.
14.7.3 Results
The nonlinear relationship between the abandonment rate and the number of operators on duty (see Figure 14.2) indicates the difficulty in greatly improving performance once the abandonment rate drops below 5 percent. Results generated
from the validated simulation model compare the impact of the proposed staffing
changes with that of the current staffing levels. In addition, the analysis of the
effect of the automated voice response unit can be predicted before implementation so that the best alternative can be identified.
The introduction of simulation to help desk operations has shown that it can
be a powerful and effective management tool that should be utilized to better
achieve operational goals and to understand the impact of changes. As the automation project continues to be implemented, the simulation model can greatly
aid management and the project team members by allowing them to intelligently
predict how each new phase will affect the help desk.
14.8 Summary
Service systems provide a unique challenge in simulation modeling, largely due
to the human element involved. Service systems have a high human content in the
process. The customer is often involved in the process and, in many cases, is the
actual entity being processed. In this chapter we discussed the aspects that should
be considered when modeling service systems and suggested ways in which
different situations might be modeled. We also discussed the different types of
service systems and addressed the modeling issues associated with each. The example case study showed how fluid service systems can be.
References
Aran, M. M., and K. Kang. "Design of a Fast Food Restaurant Simulation Model." Simulation. Norcross, GA: Industrial Engineering and Management Press, 1987.
Collier, D. A. The Service/Quality Solution. Milwaukee: ASQC Quality Press, 1994.
INTRODUCTION TO
PROMODEL 6.0
Imagination is the beginning of creation. You imagine what you desire, you will
what you imagine and at last you create what you will.
George Bernard Shaw
ProModel is available in several packages:

Runtime/evaluation package.
Standard package.
Student package.
Network package.
FIGURE L1.1
ProModel opening screen (student package).
ProModel's opening screen (student package) is shown in Figure L1.1. There are
six items (buttons) in the opening menu:
1. Open a model: Allows models created earlier to be opened.
2. Install model package: Copies to the specified destination directory all of
the files contained in a model package file.
3. Run demo model: Allows one of several example models packed with
the software to be run.
4. www.promodel.com: Allows the user to connect to the PROMODEL
Corporation home page on the World Wide Web.
5. SimRunner: This new addition to the ProModel product line evaluates
your existing simulation models and performs tests to find better ways
to achieve desired results. A design of experiments methodology is used
in SimRunner. For a detailed description of SimRunner, please refer to
Lab 11.
6. Stat::Fit: This module allows continuous and/or discrete distributions to
be fitted to a set of input data automatically. For a detailed discussion on
the modeling of input data distribution, please refer to Lab 6.
FIGURE L1.3
Customer waiting time statistics with one customer service agent.
FIGURE L1.4
California Cellular with two customer service agents.
FIGURE L1.5
Customer waiting time statistics with two customer service agents.
FIGURE L1.6
Number of calls waiting with one customer service agent.
FIGURE L1.7
Number of calls waiting with two customer service agents.
FIGURE L1.8
Graph of number of customers waiting versus simulation run time.
L1.3 Exercises
1. How do you open an existing simulation model?
2. What is SimRunner? How can you use it in your simulation analysis?
3. What does the Stat::Fit package do? Do you need it when building a
simulation model?
4. At the most, how many locations, entities, and types of resources can be
modeled using the student version of ProModel?
5. Open the Manufacturing Cost model from the Demos subdirectory and
run the model three different times to nd out whether one, two, or three
operators are optimal for minimizing the cost per part (the cost per part is
displayed on the scoreboard during the simulation). You can change the
number of operators by selecting Model Parameters from the Simulation
menu, double-clicking on the first parameter (number of operators),
and entering 1, 2, or 3. Then select Run from the Model Parameters
dialog. Each simulation will run for 15 hours.
6. Without knowing how the model was constructed, can you give a rational
explanation for the number of operators that resulted in the least cost?
7. Go to the ProModel website on the Internet (www.promodel.com). What
are some of the successful real-world applications of the ProModel
software? Is ProModel applied only to manufacturing problems?
PROMODEL WORLD
VIEW, MENU,
AND TUTORIAL
I only wish that ordinary people had an unlimited capacity for doing harm; then
they might have an unlimited power for doing good.
Socrates (469399 B.C.)
In this lab, Section L2.1 introduces you to various commands in the ProModel
menu. In Section L2.2 we discuss the basic modeling elements in a ProModel
model file. Section L2.3 discusses some of the innovative features of ProModel.
Section L2.4 refers to a short tutorial on ProModel in a PowerPoint presentation
format. Some of the material describing the use and features of ProModel
has been taken from the ProModel User Guide as well as ProModel's online help
system.
FIGURE L2.1
The title and the menu
bars.
FIGURE L2.2
The File menu.
The menu bar, just below the title bar (Figure L2.1), is used to call up menus,
or lists of tasks. The menu bar of the ProModel screen displays the commands you
use to work with ProModel. Some of the items in the menu, like File, Edit, View,
Tools, Window, and Help, are common to most Windows applications. Others
such as Build, Simulation, and Output provide commands specific to programming in ProModel. In the following sections we describe all the menu commands
and the tasks within each menu.
FIGURE L2.3
FIGURE L2.4
The items available in the Edit menu vary according to the currently selected window. The Edit menu is active only
when a model file is open.
Processing
Arrivals

Optional Modules

Path Networks
Resources
Shifts
Cost
Attributes
Variables
Arrays
Streams
Macros
Subroutines
Arrival Cycles
Table Functions
User Distributions
External Files
FIGURE L2.5
General Information dialog box.
In addition, two more modules are available in the Build menu: General Information and Background Graphics.
General Information. This dialog box (Figure L2.5) allows the user to specify
the name of the model, the default time unit, the distance unit, and the graphic
library to be used. The model's initialization and termination logic can also be
specified using this dialog box. A Notes window allows the user to save information such as the analyst's name, the revision date, any assumptions made about the
model, and so forth. These notes can also be displayed at the beginning of a simulation run.
Background Graphics. The Background Graphics module (Figure L2.6) allows
the user to create a unique background for the model using the tools in the graphics editor. An existing background can also be imported from another application
such as AutoCAD.
Generally, most graphics objects are laid out in front of the grid. Large objects as well as imported backgrounds are placed behind the grid.
FIGURE L2.6
Background Graphics dialog box.
FIGURE L2.7
FIGURE L2.8
FIGURE L2.9
FIGURE L2.11
FIGURE L2.12
FIGURE L2.13
FIGURE L2.14
FIGURE L2.15
The ProModel online help system provides information on the following topics:

Common procedures.
Overview of ProModel.
Main menus.
Shortcut menu.
Model elements (entities, resources, locations, path networks, and so on).
Logic elements (variables, attributes, arrays, expressions, and so forth).
Statements (GET, JOIN, and the like).
Functions.
Customer support: telephone, pager, fax, e-mail, online file transfer, and so on.
To quickly learn what is new in ProModel Version 6.0, go to the Help → Index menu and type "new features" for a description of the latest features of the product.
The four basic modeling elements in a ProModel model file are

Locations
Entities
Arrivals
Processing

Click Build from the menu bar to access these modeling elements (Figure L2.4).
The ProModel Student Version 6.0 limits the user to no more than 20 locations, five entity types, five resource types, five attributes, and 10 RTI parameters
in a simulation model. If more capability is required for special projects, ask your
instructor to contact the PROMODEL Corporation about faculty or network versions of the software.
L2.2.1 Locations
Locations represent fixed places in the system where entities are routed for processing, delay, storage, decision making, or some other activity. We need some
type of receiving locations to hold incoming entities. We also need processing
locations where entities have value added to them. To build locations:
a. Left-click on the desired location icon in the Graphics toolbox. Left-click
in the layout window where you want the location to appear.
b. A record is automatically created for the location in the Locations edit
table (Figure L2.16).
c. Clicking in the appropriate box and typing in the desired changes can
now change the name, units, capacity, and so on. Note that in Lab 3 we
will actually fill in this information for an example model.
L2.2.2 Entities
Anything that a model can process is called an entity. Some examples are parts or
widgets in a factory, patients in a hospital, customers in a bank or a grocery store,
and travelers calling in for airline reservations.
FIGURE L2.16
The Locations edit screen.
To build entities:
a. Left-click on the desired entity graphic in the Entity Graphics toolbox.
b. A record will automatically be created in the Entities edit table
(Figure L2.17).
c. Moving the slide bar in the toolbox can then change the name. Note that
in Lab 3 we will actually fill in this information for an example model.
L2.2.3 Arrivals
The mechanism for defining how entities enter the system is called arrivals. Entities can arrive singly or in batches. The number of entities arriving at a time is
called the batch size (Qty each). The time between the arrivals of successive entities is called interarrival time (Frequency). The total number of batches of arrivals
is termed Occurrences. The batch size, time between successive arrivals, and total
number of batches can be either constants or random (statistical distributions).
Also, the first time that the arrival pattern is to begin is termed First Time.
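A minimal Python sketch of this arrival mechanism follows (the generator structure and the parameter values are illustrative, not ProModel internals):

import random

def arrivals(first_time, frequency_fn, qty_each, occurrences):
    """Yields 'occurrences' batches of 'qty_each' entities, the first at
    'first_time', with interarrival times drawn from 'frequency_fn'."""
    t = first_time
    for _ in range(occurrences):
        yield t, qty_each          # (arrival time, batch size)
        t += frequency_fn()

# Usage: batches of 2 customers, exponential interarrival, mean 5 minutes.
rng = random.Random(1)
for time, qty in arrivals(first_time=0.0,
                          frequency_fn=lambda: rng.expovariate(1 / 5),
                          qty_each=2, occurrences=4):
    print(f"t={time:6.2f} min: {qty} customers arrive")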
To create arrivals:
a. Left-click on the entity name in the toolbox and left-click on the location
where you would like the entities to arrive (Figure L2.18).
b. Enter the various required data about the arrival process. Note that in
Lab 3 we will actually fill in this information for an example model.
FIGURE L2.17
The Entities edit table.
FIGURE L2.18
The Arrivals edit table.
L2.2.4 Processing
Processing describes the operations that take place at a location, such as the
amount of time an entity spends there, the resources it needs to complete processing, and anything else that happens at the location, including selecting an entity's
next destination.
FIGURE L2.19
The Process edit table.
FIGURE L2.20
The Logic Builder tool
menu.
When the Logic Builder is open from a logic window, it remains on the screen
until you click the Close button or close the logic window or table from which it
was invoked. This allows you to enter multiple statements in a logic window and
even move around to other logic windows without having to constantly close and
reopen the Logic Builder. However, the Logic Builder closes automatically after
pasting to a field in a dialog box or edit table or to an expression field because you
must right-click anyway to use the Logic Builder in another field.
You can move to another logic window or field while the Logic Builder is still
up by right-clicking in that field or logic window. The Logic Builder is then reset
with only valid statements and elements for that field or window, and it will paste
the logic you build into that field or window. Some of the commonly used logic
statements available in ProModel are as follows:

WAIT: Delays the entity for a specified amount of time; this is how processing and service times are modeled.
INC: Increments the value of a variable, attribute, or array element by a specified amount.
FIGURE L2.22
Dynamic Plot edit
table.
FIGURE L2.23
Dynamic Plot of the
current value of WIP.
Right-Click Menus
The right-click menu for the graphic display is available for panels 1, 2, and 3, and
the main panel. When you right-click in any of these panels, the right-click menu
appears.
Panels 1, 2, and 3
Move Up: Places the graph in the main panel.
Clear Data: Removes the factor and its graph from panel 1, 2, or 3 and
the main panel. If you created a multi-line graph, Clear Data removes
the selected line from the graph and does not disturb the remaining graph
lines.
Line Color: Allows you to assign a specific line color to the graph.
Background Color: Allows you to define a specific background color for
panels 1, 2, and 3.
Main Panel
Clear All Data: Removes all factors and graphs from panels 1, 2, 3, and
the main panel.
Remove Line 1, 2, 3: Deletes a specific line from the main panel.
Line Color: Allows you to assign a specific line color to the graph.
Background Color: Allows you to define a specific background color for
panels 1, 2, and 3.
Grid Color: Allows you to assign a specific line color to the grid.
L2.3.3 Customize
You can add direct links to applications and files right on your ProModel toolbar.
Create a link to open a spreadsheet, a text document, or your favorite calculator
(Figure L2.24).
To create or modify your Custom Tools menu, select Tools → Customize
from your ProModel menu bar. This will pull up the Custom Tools dialog window.
The Custom Tools dialog window allows you to add, delete, edit, or rearrange the
menu items that appear on the Tools drop-down menu in ProModel.
FIGURE L2.24
Adding Calculator to the Customized Tools menu.
FIGURE L2.25
The QuickBar task bar.
FIGURE L2.26
Tutorial on ProModel.
6. Resources
7. Processing
8. Arrivals
9. Run simulation
10. View output
L2.5 Exercises
1. Identify the ProModel menu where you will find the following items:
a. Save As
b. Delete
c. View Trace
d. Shifts
e. Index
f. General Information
g. Options
h. Printer Setup
i. Processing
j. Scenarios
k. Tile
l. Zoom
2. Which of the following is not a valid ProModel menu or submenu item?
a. AutoBuild
b. What's This?
c. Merge
d. Merge Documents
e. Snap to Grid
f. Normal
g. Paste
h. Print Preview
i. View Text
3. Some of the following are not valid ProModel element names. Which ones?
a. Activities
b. Locations
c. Conveyors
d. Queues
e. Shifts
f. Station
g. Server
h. Schedules
i. Arrivals
j. Expressions
k. Variables
l. Create
4. What are some of the valid logic statements used in ProModel?
5. What are some of the differences between the following logic
statements:
a. Wait versus Wait Until.
b. Move versus Move For.
c. Pause versus Stop.
d. View versus Graphic.
e. Split versus Ungroup.
6. Describe the functions of the following items in the ProModel Edit menu:
a. Delete
b. Insert
c. Append
d. Move
e. Move to
f. Copy Record
g. Paste Record
7. Describe the differences between the following items in the ProModel
View menu:
a. Zoom vs. Zoom to Fit Layout
b. Show Grid vs. Snap to Grid
RUNNING A PROMODEL
SIMULATION
As far as the laws of mathematics refer to reality, they are not certain; and as far
as they are certain, they do not refer to reality.
Albert Einstein
The objective is to simulate the system to determine the expected waiting time for
customers in the queue (the average time customers wait in line for the ATM) and
the expected time in the system (the average time customers wait in the queue plus
the average time it takes them to complete their transaction at the ATM).
This is the same ATM system simulated by spreadsheet in Chapter 3 but with
a different objective. The Chapter 3 objective was to simulate only the first 25 customers arriving to the system. Now no such restriction has been applied. This new
objective provides an opportunity for comparing the simulated results with those
computed using queuing theory, which was presented in Section 2.9.3.
Queuing theory allows us to compute the exact values for the expected time
that customers wait in the queue and in the system. Given that queuing theory can
be used to get exact answers, why are we using simulation to estimate the two expected values? There are two parts to the answer. First, it gives us an opportunity
to measure the accuracy of simulation by comparing the simulation output with
the exact results produced using queuing theory. Second, most systems of interest
are too complex to be modeled with the mathematical equations of queuing theory. In those cases, good estimates from simulation are valuable commodities
when faced with expensive decisions.
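For reference, here is a short Python sketch of that queuing-theory calculation, assuming (consistent with the Chapter 3 ATM example) exponential interarrival times with a 3.0-minute mean and exponential service times with a 2.4-minute mean, which makes the system an M/M/1 queue:

# M/M/1 expected waiting times (rates are assumptions stated above).
lam = 1 / 3.0      # arrival rate, customers per minute
mu = 1 / 2.4       # service rate, customers per minute
rho = lam / mu     # utilization

Wq = lam / (mu * (mu - lam))   # expected waiting time in the queue
W = 1 / (mu - lam)             # expected time in system (queue + service)

print(f"utilization: {rho:.1%}")               # 80.0%
print(f"Wq = {Wq:.2f} min, W = {W:.2f} min")   # Wq = 9.60, W = 12.00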
FIGURE L3.1
ATM simulation in
progress. System
events are animated
and key performance
measures are
dynamically updated.
FIGURE L3.2
The simulation clock
shows 1 hour and
20 minutes (80
minutes) to process
the first 25 customers
arriving to the system.
The simulation takes
only a second of your
time to process 25
customers.
FIGURE L3.3
ATM simulation on its
15,000th customer and
approaching steady
state.
FIGURE L3.4
ATM simulation after
reaching steady state.
FIGURE L3.5
ATM simulation at its
end.
ProModel collects a multitude of statistics over the course of the simulation. You will learn
this feature in Lab 4.
The simulation required only a minute of your time to process 19,496 customers. In Lab 4, you will begin to see how easy it is to build models using ProModel as compared to building them with spreadsheets.
L3.2 Exercises
1. The values obtained for average time in queue and average time in
system from the ATM ProModel simulation of the first 25 customers
processed represent a third set of observations that can be combined
with the observations for the same performance measures presented in
Table 3.3 of Section 3.5.3 in Chapter 3 that were derived from two
replications of the spreadsheet simulation.
BUILDING YOUR
FIRST MODEL
Knowing is not enough; we must apply. Willing is not enough; we must do.
Johann von Goethe
In this lab we build our first simulation model using ProModel. In Section L4.1
we describe some of the basic concepts of building your first ProModel simulation
model. Section L4.2 introduces the concept of a queue in ProModel. Section L4.3
lets us build a model with multiple locations and multiple entities. In Section L4.4
we show how to modify an existing model and add more locations to it. Finally,
in Section L4.5 we show how variability in arrival time and customer service time
affect the performance of the system.
FIGURE L4.1
General Information
for the Fantastic Dan
simulation model.
FIGURE L4.2
Defining locations Waiting_for_Barber and BarberDan.
FIGURE L4.3
The Graphics panel.
FIGURE L4.4
Define the entity Customer.
FIGURE L4.5
Process and Routing tables for Fantastic Dan model.
FIGURE L4.6
Process and Routing tables for Fantastic Dan model in text format.
To define the haircut time, click Operation in the Process table. Click the button with the hammer symbol. A new window named Logic Builder opens up.
Select the command Wait. The ProModel expression Wait causes the customer (entity) to be delayed for a specified amount of time. This is how processing times are
modeled.
Click Build Expression. In the Logic window, select Distribution Function
(Figure L4.7). In the Distribution Function window, select Uniform distribution.
Click Mean and select 9. Click Half-Range and select 1. Click Return. Click
Paste. Close the Logic Builder window. Close the Operation window.
Finally the customers leave the barbershop. They are routed to a default location called EXIT in ProModel. When entities (or customers) are routed to the
EXIT location, they are in effect disposed from the system. All the information associated with the disposed entity is deleted from the computer's memory to conserve space.
FIGURE L4.7
The Logic Builder
menu.
The distribution functions are built into ProModel and generate random values based on the specified distribution. Some of the commonly used distribution
functions are shown in Table 4.1.

TABLE 4.1 Commonly Used Distribution Functions

Distribution    ProModel Expression
Uniform         U(mean, half-range)
Triangular      T(minimum, mode, maximum)
Exponential     E(mean)
Normal          N(mean, std. dev.)
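For readers who want to experiment with these distributions outside ProModel, they map onto Python's random module as in the sketch below; the wrapper names U, T, E, and N simply mirror Table 4.1 and are not ProModel code.

import random

rng = random.Random(7)

# Python equivalents of the Table 4.1 expressions. Note that ProModel's
# U() takes a mean and half-range, while Python's uniform() takes the
# low and high endpoints, so we convert.
def U(mean, half_range):             # Uniform: U(mean, half-range)
    return rng.uniform(mean - half_range, mean + half_range)

def T(minimum, mode, maximum):       # Triangular: T(min, mode, max)
    return rng.triangular(minimum, maximum, mode)

def E(mean):                         # Exponential: E(mean)
    return rng.expovariate(1 / mean)

def N(mean, std_dev):                # Normal: N(mean, std. dev.)
    return rng.normalvariate(mean, std_dev)

print(f"haircut time U(9, 1): {U(9, 1):.2f} min")   # the Fantastic Dan model
print(f"ATM service E(2.4):   {E(2.4):.2f} min")    # the Bank of USA model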
Now we will define the entity arrival process, as in Figure L4.8.
Next we will define some of the simulation options: run time, number of replications, warm-up time, unit of time, and clock precision (Figure L4.9).
The run time is the number of hours the simulation model will be run. The number of replications refers to the number of times the simulation model will be run
(each time the model will run for an amount of time specified by run hours). The
FIGURE L4.8
Customer arrival table.
FIGURE L4.9
Definition of
simulation run
options.
warm-up time refers to the amount of time to let the simulation model run to
achieve steady-state behavior. Statistics are usually collected after the warm-up
period is over. The run time begins at the end of the warm-up period. The unit of
time used in the model can be seconds, minutes, or hours. The clock precision
refers to the precision in the time unit used to measure all simulation event
timings.
Let us select the Run option from the Simulation Options menu (or click
F10). Figure L4.10 shows a screen shot during run time. The button in the middle
of the scroll bar at the top controls the speed of the simulation run. Pull it right to
increase the speed and left to decrease the speed.
After the simulation runs to its completion, the user is prompted, "Do you
want to see the results?" (Figure L4.11). Click Yes. Figures L4.12 and L4.13 are
part of the results that are automatically generated by ProModel in the 3DR
(three-dimensional report) Output Viewer.
FIGURE L4.10
Screen shot at run time.
FIGURE L4.11
Simulation complete
prompt.
FIGURE L4.12
The 3DR Output
Viewer for the
Fantastic Dan model.
Note that the average time a customer spends waiting for Barber Dan
is 22.95 minutes. The average time spent by a customer in the barbershop is
32.28 minutes. The utilization of the Barber is 89.15 percent. The number of
customers served in 480 minutes (or 8 hours) is 47. On average 5.875 customers
are served per hour. The maximum number of customers waiting for a haircut is 8,
although the average number of customers waiting is only 2.3.
FIGURE L4.13
Results of the Fantastic Dan simulation model.
From the menu bar select File → New. In the General Information panel
(Figure L4.14) fill in the title of the simulation model as Bank of USA ATM. Fill
in some of the other general information about the model, like the time and
distance units. Click OK to proceed to define the locations.
From the menu bar select Build → Locations. Define two locations: ATM
and ATM_Queue (Figure L4.15). The icon selected for the first location is actually
called brake. We changed its name to ATM (Name column in the Location table).
The icon for the second location (a queue) is selected from the graphics panel.
The icon (third from top) originally looks like a ladder on its side. To place it
in our model layout, first left-click the mouse at the start location of the queue. Then
drag the mouse pointer to the end of the queue and right-click. Change the name of
this queue location from Loc1 to ATM_Queue. Now double-click the ATM_Queue
icon on the layout. This opens another window (Figure L4.16). Make
sure to click on the Queue option in the Conveyor/Queue options window. Change
the length of the queue to be exactly 31.413 feet.
FIGURE L4.14
General Information
for the Bank of USA
ATM simulation
model.
FIGURE L4.15
Defining locations ATM_Queue and ATM.
FIGURE L4.16
Click on the Queue
option in the
Conveyor/Queue
options window.
Check the New button on the graphics panel (Figure L4.15) and click the
button marked Aa (fourth icon from top). Click on the location icon in the layout
panel. The name of the location (ATM) appears on the location icon. Do the same
for the ATM_Queue location.
Define the entity (Figure L4.17) and change its name to ATM_Customer.
Define the processes and the routings (Figures L4.18 and L4.19) the customers
go through at the ATM system. All customers arrive and wait at the location
ATM_Queue. Then they are routed to the location ATM. At this location the
customers deposit or withdraw money or check their balances, which takes an
average of 2.4 minutes exponentially distributed. Use the step-by-step procedure
detailed in Section L2.2.4 to create the process and routing tables graphically.
To define the service time at the ATM, click Operation in the Process table.
Click the button with the hammer symbol. A new window named Logic Builder
opens up. Select the command Wait. The ProModel expression Wait causes the
ATM customer (entity) to be delayed for a specified amount of time. This is how
processing times are modeled.
FIGURE L4.17
Define the entity ATM_Customer.
FIGURE L4.18
Process and Routing tables for Bank of USA ATM model.
FIGURE L4.19
Process and Routing tables for Bank of USA ATM model in text format.
FIGURE L4.20
The Logic Builder
menu.
FIGURE L4.21
Customer arrival table.
FIGURE L4.22
Definition of
simulation run options.
we are going to model 980 hours of operation of the ATM system. The number of
replications refers to the number of times the simulation model will be run (each
time the model will run for an amount of time specified by run hours). The warm-up
time refers to the amount of time to let the simulation model run to achieve steady-state behavior. Statistics are usually collected after the warm-up period is over. The
run time begins at the end of the warm-up period. For a more detailed discussion on
warm-up time, please refer to Chapter 9, Section 9.6.1 and Lab 9. The unit of time
used in the model can be seconds, minutes, or hours. The clock precision refers to the
precision in the time unit used to measure all simulation event timings.
Let us select the Run option from the Simulation Options menu (or click
F10). Figure L4.23 shows a screen shot during run time. The button in the middle
of the scroll bar at the top controls the speed of the simulation run. Pull it right to
increase the speed and left to decrease the simulation execution speed.
After the simulation runs to its completion, the user is prompted, "Do you want
to see the results?" (Figure L4.24). Click Yes. Figures L4.25 and L4.26 are part of
the results that are automatically generated by ProModel in the Output Viewer.
Note that the average time a customer spends waiting in the ATM Queue
is 9.62 minutes. The average time spent by a customer in the ATM system is
12.02 minutes. The utilization of the ATM is 79.52 percent. Also, 20,000 customers
are served in 60,265.64 minutes or 19.91 customers per hour. The maximum number of customers waiting in the ATM Queue is 31, although the average number of
FIGURE L4.23
Screen shot at run time.
FIGURE L4.24
Simulation complete
prompt.
FIGURE L4.25
The output viewer for
the Bank of USA ATM
model.
FIGURE L4.26
Results of the Bank of USA ATM simulation model.
customers waiting is only 3.19. This model is an enhancement of the ATM model in Lab 3, Section L3.1.2. The results will not match exactly, as some realism has been added to the model that cannot be addressed by queuing theory.
FIGURE L4.28
Locations in the Poly Furniture Factory.
FIGURE L4.29
Entities in the Poly Furniture Factory.
FIGURE L4.30
Entity arrivals in the Poly Furniture Factory.
FIGURE L4.31
Processes and routings in the Poly Furniture Factory.
FIGURE L4.32
Simulation options in
the Poly Furniture
Factory.
The time to move material between processes is modeled in the Move Logic field of the Routing table. Four constructs are available in the Move Logic field: MOVE, MOVE FOR, MOVE ON, and MOVE WITH.
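For instance, a fixed transfer time can be entered in the Move Logic field with a MOVE FOR statement; the two-minute value below is illustrative only, not taken from this model:

    MOVE FOR 2 min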
FIGURE L4.33
Sample of the results of the simulation run for the Poly Furniture Factory.
FIGURE L4.35
Simulation model layout of the Poly Furniture Factory.
FIGURE L4.36
Processes and routings at the Poly Furniture Factory.
The contents of a location can be displayed in one of the following two alternative ways:
a. To show the contents of a location as a counter, first deselect the New option from the Graphics toolbar. Left-click on the command button 00 in the Graphics toolbar (left column, top). Finally, left-click on the location selected (Oven). The location counter will appear in the Layout window next to the location Oven (Figure L4.35).
b. To show the contents of a location (Paint Booth) as a gauge, first deselect the New option from the Graphics toolbar. Left-click on the second command button from the top in the left column in the Graphics toolbar.
The gauge icon will appear in the Layout window next to the location Paint Booth (Figure L4.35). The fill color and fill direction of the gauge can now be changed if needed.
FIGURE L4.37
Customer arrival for haircut.
FIGURE L4.38
Processing of customers at the barbershop.
                          With Variability    Without Variability
Average time in system    32.27 min.          22.95 min.
Average waiting time      9 min.              0 min.
L4.6 Blocking
With respect to the way statistics are gathered, here are the rules that are used in ProModel (see the ProModel User's Manual, p. 636):
1. Average <time> in system: The average total time the entity spends in the system, from the time it arrives till it exits the system.
2. Average <time> in operation: The average time the entity spends in processing at a location (due to a WAIT statement) or traveling on a conveyor or queue.
3. Average <time> in transit: The average time the entity spends traveling to the next location, either in or out of a queue or with a resource. The move time in a queue is determined by the length of the queue (defined in the queue dialog, Figure L4.16) and the speed of the entity (defined in the entity dialog, Figure L4.4 or L4.17). A worked example follows this list.
4. Average <time> wait for resource, etc.: The average time the entity spends waiting for a resource or another entity to join, combine, or the like.
5. Average <time> blocked: The average time the entity spends waiting for a destination location to become available. Any entities held up behind another blocked entity are actually waiting on the blocked entity, so they are reported as time waiting for resource, etc.
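As a worked example of rule 3: the ATM_Queue defined earlier is 31.413 feet long (Figure L4.16). Assuming the entity speed was left at ProModel's default of 150 feet per minute (an assumption, since the entity dialog can override it), the move time through an empty queue is 31.413 / 150, or about 0.21 minute.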
Example
At the SoCal Machine Shop (Figure L4.39), gear blanks arriving at the shop wait in a queue (Incoming_Q) for processing on a turning center and a mill, in that order.
FIGURE L4.39
The layout of the SoCal Machine Shop.
A total of 100 gear blanks arrive at the rate of one every eight minutes. The processing times on the turning center and mill are eight minutes and nine minutes, respectively. Develop a simulation model and run it.
To figure out the time the gear blanks are blocked in the machine shop, waiting for a processing location, we have entered Move for 0 in the operation logic (Figure L4.40) of the Incoming_Q. Also, the decision rule for the queue has been changed to No Queuing in place of FIFO (Figure L4.41). This way all the entities waiting in the queue for the turning center to be freed up are reported as blocked. When you specify FIFO as the queuing rule for a location, only the lead entity is ever blocked (other entities in the location are waiting for the lead entity and are reported as wait for resource, etc.).
FIGURE L4.40
Process and Routing tables for SoCal Machine Shop.
FIGURE L4.41
Decision rules for Incoming_Q.
FIGURE L4.42
Entity activity statistics at the SoCal Machine Shop.
FIGURE L4.43
Entity activity statistics at the SoCal Machine Shop with two mills.
From the entity activity statistics (Figure L4.42) in the output report we can see that the entities spend on average 66.5 minutes in the system, of which 49.5 minutes are blocked (waiting for another processing location) and 17 minutes are spent in operation (eight minutes at the turning center and nine minutes at the mill). Blocking as a percentage of the average time in system is 49.5 / 66.5, or 74.44 percent. The utilizations of the turning center and the mill are 98.14 percent and 98.25 percent, respectively.
In general, the blocking time as a percentage of the time in system increases as the utilization of the processing locations increases. To reduce blocking in the machine shop, let us install a second mill. In the location table, change the number of units of mill to 2. The entity activity statistics from the resulting output report are shown in Figure L4.43. As expected, the blocking time has been reduced to zero.
L4.7 Exercises
1. Run the Tube Distribution Supply Chain example model (logistics.mod) from the demos subdirectory for 40 hours. What are the various entities modeled in this example? What are the various operations and processes modeled in this example? Look at the results and find
a. The percentage utilization of the locations Mill and the Process Grades Threads.
b. The capacities of Inventory and Inventory 2-6; the maximum contents of Inventory and Inventory 2-6.
PROMODEL'S OUTPUT MODULE
FIGURE L5.1
File menu in the 3DR
Output Viewer.
FIGURE L5.2
Output menu options
in ProModel.
FIGURE L5.3
View menu in the 3DR
Output Viewer.
The Output menu in ProModel (Figure L5.2) has the following options:
View Statistics: Allows the user to view the statistics generated from running a simulation model. Selecting this option loads the Output Viewer 3DR.
View Trace: Allows the user to view the trace file generated from running a simulation model. A trace is generated by sending a trace listing to a text file during runtime. Please refer to Lab 8, Section L8.2, for a more complete discussion of tracing a simulation model.
The View menu (Figure L5.3) allows the user to select the way in which output data, charts, and graphs can be displayed. The View menu has the following
options:
a. Report
b. Category Chart
c. State Chart
d. Histogram
e. Time Plot
f. Sheet Properties
FIGURE L5.4
3DR Report view of the results of the ATM system in Lab 3.
FIGURE L5.5
Categories of charts
available in the
Category Chart
Selection menu.
FIGURE L5.6
An example of a category chart presenting entity average time in system.
FIGURE L5.7
Categories of charts
available in the State
Chart Selection menu.
FIGURE L5.8
State chart representation of location utilization.
FIGURE L5.9
A state chart
representation of all
the locations states.
Multiple Capacity Location Downtime: Multiple Capacity Location Downtime graphs show the percentage of time that each multicapacity location in
the system was down. A pie chart can be created for any one of the locations.
Resource Blocked in Travel: Resource Blocked in Travel graphs show the
percentage of time a resource was blocked. A resource is blocked if it is
unable to move to a destination because the next path node along the
route of travel was blocked (occupied).
FIGURE L5.10
A pie chart
representing the states
of the location Inspect.
FIGURE L5.11
State chart for the Cell Operator resource states.
FIGURE L5.12
State chart representation of entity states.
Entity State: The following information is given for each entity type (Figure L5.12):
Pct. in Move Logic
Pct. in Operation: The percentage of time the entity spent in processing at a location or traveling on a conveyor.
Pct. Blocked
FIGURE L5.13
Dialog box for plotting a histogram.
FIGURE L5.14
A time-weighted histogram of the contents of the Bearing Queue.
FIGURE L5.15
A time series plot of the Bearing Queue contents over time.
FIGURE L5.16
A time series plot of WIP.
FIGURE L5.17
Sheet Properties menu
in the Output Viewer.
FIGURE L5.18
The results of Poly
Furniture Factory
(with Oven) in Classic
view.
FIGURE L5.19
View menu in the
Classic output viewer.
FIGURE L5.20
Time series plot of customers waiting for Barber Dan.
FIGURE L5.21
Time series histogram
of customers waiting
for Barber Dan.
The location states are Operation, Setup, Idle, Waiting, Blocked, and Down. The location states for the Splitter Saw are shown in Figure L5.23 as a pie graph. The utilization of multiple capacity locations at Poly Furniture Factory is shown in Figure L5.24. All the states the entity (Painted_Logs) goes through are shown in Figure L5.25 as a state graph and in Figure L5.26 as a pie chart. The different states are move, wait for resource, and operation.
FIGURE L5.22
State graphs for the
utilization of single
capacity locations.
FIGURE L5.23
Pie chart for the
utilization of the
Splitter Saw.
FIGURE L5.24
State graphs for the
utilization of multiple
capacity locations.
FIGURE L5.25
Graph of the states
of the entity
Painted_Logs.
FIGURE L5.26
Pie graph of the
states of the entity
Painted_Logs.
L5.3 Exercises
1. Customers arrive at the Lake Gardens post office for buying stamps, mailing letters and packages, and so forth. The interarrival time is exponentially distributed with a mean of 2 minutes. The time to process each customer is normally distributed with a mean of 10 minutes and a standard deviation of 2 minutes.
a. Make a time series plot of the number of customers waiting in line at the post office in a typical eight-hour day.
b. How many postal clerks are needed at the counter so that there are no more than 15 customers waiting in line at the post office at any time? There is only one line serving all the postal clerks. Change the number of postal clerks until you find the optimum number.
2. The Lake Gardens postmaster in Exercise 1 wants to serve her customers well. She would like to see that the average time spent by a postal customer at the post office is no more than 15 minutes. How many postal clerks should she hire?
3. For the Poly Furniture Factory example in Lab 4, Section L4.3,
a. Make a state graph and a pie graph for the splitter and the lathe.
b. Find the percentage of time the splitter and the lathe are idle.
4. For the Poly Furniture Factory example in Lab 4, Section L4.4,
a. Make histograms of the contents of the oven and the paint booth. Make sure the bar width is set equal to one. What information can you gather from these histograms?
b. Plot a pie chart for the various states of the entity Painted_Logs. What percentage of time are the Painted_Logs in operation?
c. Make a time series plot of the oven and the paint booth contents. How would you explain these plots?
FITTING STATISTICAL DISTRIBUTIONS TO INPUT DATA
There are three kinds of lies: lies, damned lies, and statistics.
Benjamin Disraeli
Input data drive our simulation models. Input data can be for interarrival times, material handling times, setup and process times, demand rates, loading and unloading times, and so forth. The determination of what data to use and where to get the appropriate data is a complicated and time-consuming task. The quality of data is also very important. We have all heard the cliché "garbage in, garbage out." In Chapter 6 we discussed various issues about input data collection and analysis. We have also described various empirical discrete and continuous distributions and their characteristics. In this lab we describe how ProModel helps in fitting empirical statistical distributions to user input data.
FIGURE L6.1
Stat::Fit opening
screen.
FIGURE L6.2
Stat::Fit opening menu.
When the number of data points is too small, the goodness-of-fit tests are of little use in selecting one distribution over another, because it is inappropriate to prefer one distribution over another in such a situation. Also, when conventional techniques have failed to fit a distribution, the empirical distribution is used directly as a user distribution (Chapter 6, Section 6.9).
The opening menu of Stat::Fit is shown in Figure L6.2. Various options are
available in the opening menu:
1. File: File opens a new Stat::Fit project or an existing project or data le.
The File menu is also used to save a project.
2. Edit:
3. Input:
4. Statistics:
5. Fit: The Fit menu provides a Fit Setup dialog and a Distribution Graph dialog. Other options are also available when a Stat::Fit project is opened. The Fit Setup dialog lists all the distributions supported by Stat::Fit and the relevant choices for goodness-of-fit tests. At least one distribution must be chosen before the estimate, test, and graphing commands become available. The Distribution Graph command uses the distribution and parameters provided in the Distribution Graph dialog to create a graph of any analytical distribution supported by Stat::Fit. This graph is not connected to any input data or document.
FIGURE L6.4
Stat::Fit data input
options.
The Input Options command can be accessed from the Input menu as well as from the Input Options button on the Speed Bar.
The 30 observed times between arrivals of cars at the San Dimas Gas Station are as follows (see also Figure L6.5):

12.36   5.71   16.79   18.01   5.12
7.69    19.41  8.58    13.42   15.56
10.00   18.00  16.75   14.13   17.46
10.72   11.53  18.03   13.45   10.54
12.53   8.91   6.78    8.54    11.23
10.10   9.34   6.53    14.35   18.45
FIGURE L6.5
Times between arrival
of cars at San Dimas
Gas Station.
FIGURE L6.6
Histogram of the times
between arrival data.
FIGURE L6.7
Descriptive statistics
for the input data.
FIGURE L6.8
The Auto::Fit
submenu.
FIGURE L6.9
Various distributions fitted to the input data.
FIGURE L6.10
Goodness-of-fit tests performed on the input data.
FIGURE L6.11
Comparison of actual data and fitted uniform distribution.
Because the Auto::Fit function requires a specific setup, the Auto::Fit view can be printed only as the active window or part of the active document, not as part of a report. The Auto::Fit function will not fit discrete distributions. The manual method, previously described, should be used instead.
L6.4 Exercises
1. Consider the operation of a fast-food restaurant where customers arrive for ordering lunch. The following is a log of the time (minutes) between arrivals of 40 successive customers. Use Stat::Fit to analyze the data and fit an appropriate continuous distribution. What are the parameters of this distribution?
11  9   12  8   11  11  13  12
8   10  12  14  7   17  16  8
9   13  15  10  15  14  12  10
11  14  9   16  7   12  15  13
7   16  14  13  7   10  11  15
2.
11  13  12  8   10  12  14
11  17  13  8   10  13  12
10  15  14  12  10  11  14
9   16  7   12  15  13  11
13  14  13  12  10  11  15
3. The following are the numbers of incoming calls (each hour for 80 successive hours) to a call center set up for serving customers of a certain Internet service provider. Use Stat::Fit to analyze the data and fit an appropriate discrete distribution. What are the parameters of this distribution?
12  9   12  10  11  9   12  10  11  10
12  13  12  8   11  13  12  8   10  8
11  14  11  17  12  14  11  17  13  17
13  10  13  12  8   10  13  12  10  12
12  14  12  10  15  14  12  10  11  10
16  9   16  7   14  9   16  7   12  7
11  13  11  13  15  13  11  13  14  13
10  12  10  11  13  12  10  11  15  11
4.
21.47  22.55  28.04  28.97  29.05
35.26  37.65  38.21  38.32  39.17
39.49  39.99  41.42  42.53  47.08
51.53  55.11  55.75  55.85  56.96
58.78  60.61  63.38  65.99  66.00
73.55  73.81  74.14  79.79  81.66
82.10  83.52  85.90  88.04  88.40
88.47  92.63  93.11  93.74  98.82
BASIC MODELING CONCEPTS
In this chapter we continue to describe other basic features and modeling concepts
of ProModel. In Section L7.1 we show an application with multiple locations and
multiple entity types. Section L7.2 describes modeling of multiple parallel locations. Section L7.3 shows various routing rules. In Section L7.4 we introduce the
concept of variables. Section L7.5 introduces the inspection process, tracking of defects, and rework. In Section L7.6 we show how to assemble nonidentical entities
and produce assemblies. In Section L7.7 we show the process of making temporary
entities through the process of loading and subsequent unloading. Section L7.8
describes how entities can be accumulated before processing. Section L7.9 shows
the splitting of one entity into multiple entities. In Section L7.10 we introduce
various decision statements with appropriate examples. Finally, in Section L7.11
we show you how to model a system that shuts down periodically.
FIGURE L7.1
The three processing locations and the receiving dock.
FIGURE L7.2
Layout of Pomona
Electronics.
Each type of circuit board follows its own routing through the three assembly areas; the areas visited (in order) and the mean assembly times at each are:

PCB1: Area 1 (10), Area 2 (12), Area 3 (15)
PCB2: Area 2 (5), Area 1 (6), Area 3 (8)
PCB3: Area 3 (12), Area 2 (14), Area 1 (15)
Define the three processing locations and the receiving dock where the circuit boards arrive (Figure L7.1). Assume each of the assembly areas has infinite capacity. The layout of Pomona Electronics is shown in Figure L7.2. Note that we used Background Graphics, Behind Grid, from the Build menu, to add the Pomona Electronics logo on the simulation model layout. Add the robot graphics (or something appropriate). Define three entities as PCB1, PCB2, and PCB3 (Figure L7.3).
FIGURE L7.3
The three types of circuit boards.
FIGURE L7.4
The arrival process for all circuit boards.
FIGURE L7.5
Processes and routings for Pomona Electronics.
Define the arrival process (Figure L7.4). Assume all 1500 boards are in stock when the assembly operations begin. The process and routing tables are developed as in Figure L7.5.
Run the simulation model. Note that the whole batch of 1500 printed circuit boards (500 of each) takes a total of 2 hours and 27 minutes to be processed.
FIGURE L7.6
Single unit of multicapacity location.
FIGURE L7.7
Multiple units of single-capacity locations.
FIGURE L7.8
Multiple single-capacity locations.
Problem Statement
At San Dimas Electronics, jobs arrive at three identical inspection machines according to an exponential distribution with a mean interarrival time of 12 minutes. The first available machine is selected. Processing on any of the parallel machines is normally distributed with a mean of 10 minutes and a standard deviation of 3 minutes. Upon completion, all jobs are sent to a fourth machine, where they queue up for date stamping and packing for shipment; this takes five minutes, normally distributed, with a standard deviation of two minutes. Completed jobs then leave the system. Run the simulation for one month (20 days, eight hours each). Calculate the average utilization of the four machines. Also, how many jobs are processed by each of the four machines?
Define a location called Inspect. Change its units to 3. Three identical parallel locations, that is, Inspect.1, Inspect.2, and Inspect.3, are thus created. Also, define a location for all the raw material to arrive (Material_Receiving). Change the capacity of this location to infinite. Define a location for Packing (Figure L7.9). Select Background Graphics from the Build menu. Make up a label "San Dimas Electronics". Add a rectangular border. Change the font and color appropriately.
FIGURE L7.9
The locations and the layout of San Dimas Electronics.
FIGURE L7.10
Arrivals of PCB at San Dimas Electronics.
Define an entity called PCB. Define the frequency of arrival of the entity PCB as exponential with a mean interarrival time of 12 minutes (Figure L7.10). Define the process and routing at San Dimas Electronics as shown in Figure L7.11.
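The Operation entries implied by the problem statement would be Wait statements along these lines; this is a sketch only (N(mean, sd) denotes a normal distribution), and the actual entries appear in Figure L7.11:

    Inspect   Wait N(10, 3) min
    Packing   Wait N(5, 2) min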
In the Simulation menu select Options. Enter 160 in the Run Hours box. Run
the simulation model. The average utilization and the number of jobs processed at
the four locations are given in Table L7.2.
FIGURE L7.11
Process and routing tables at San Dimas Electronics.
TABLE L7.2 Average utilization and number of jobs processed

Location     Average Utilization   Jobs Processed
Inspector1   49.5                  437
Inspector2   29.7                  246
Inspector3   15.9                  117
Packing      41.5                  798
FIGURE L7.13
Locations at the Bank of India.
FIGURE L7.14
Queue menu.
The service times follow a uniform distribution (mean of 10 minutes and half-width of 6 minutes). However, the customers prefer Amar to Akbar, and Akbar over Anthony. If the teller of choice is busy, the customers choose the first available teller. Simulate the system for 200 customer service completions. Estimate the tellers' utilization (percentage of time busy).
The locations are defined as Akbar, Anthony, Amar, Teller_Q, and Enter as shown in Figure L7.13. The Teller_Q is exactly 100 feet long. Note that we have checked the queue option in the Conveyor/Queue menu (Figure L7.14).
FIGURE L7.15
Customer arrival at the Bank of India.
FIGURE L7.16
Process and routing tables at the Bank of India.
TABLE L7.3 Teller utilization (percentage of time busy)

Teller    Selection in Order of Preference   Selection by Turn
Amar      79                                 63.9
Akbar     64.7                               65.1
Anthony   46.9                               61.5
The customer arrival process is shown in Figure L7.15. The processes and routings are shown in Figure L7.16. Note that the customers go to the tellers Amar, Akbar, and Anthony in the order they are specified in the routing table.
The results of the simulation model are shown in Table L7.3. Note that Amar, being the favorite teller, is much busier than Akbar and Anthony.
If the customers were routed to the three tellers in turn (selected in rotation), the process and routing tables would be as in Figure L7.17. Note that By Turn was selected from the Rule menu in the routing table. These results of the simulation model are also shown in Table L7.3. Note that Amar, Akbar, and Anthony are now utilized almost equally.
FIGURE L7.17
Process and routing tables for tellers selected by turn.
L7.4 Variables
Variables are placeholders for either real or integer numbers that may change during the simulation. Variables are typically used for making decisions or for gathering data. They can be defined to track statistics and monitor other activities during a simulation run. This is useful when the built-in statistics don't capture a particular performance metric of interest.
In ProModel two types of variables are used: local variables and global variables.
Global variables are accessible from anywhere in the model and at any time. Global variables are defined through the Variables (global) editor in the Build menu. The value of a global variable may be displayed dynamically during the simulation. It can also be changed interactively. Global variables can be referenced anywhere a numeric expression is valid.
Local variables are temporary variables that are used for quick convenience when a variable is needed only within a particular operation (in the Process table), move logic (in the Routing table), logic (in the Arrivals, Resources, or Subroutine tables), the initialization or termination logic (in the General Information dialog box), and so forth. Local variables are available only within the logic in which they are declared
and are not defined in the Variables edit table. They are created for each entity, downtime occurrence, or the like executing a particular section of logic. A new local variable is created for each entity that encounters an INT or REAL statement. It exists only while the entity processes the logic that declared the local variable. Local variables may be passed to subroutines as parameters and are available to macros.
A local variable must be declared before it is used. To declare a local variable, use the following syntax:
INT or REAL <name1>{= expression}, <name2>{= expression}
Examples:
INT HourOfDay, WIP
REAL const1 = 2.5, const2 = 5.0
INT Init_Inventory = 170
In Section L7.11 we show you how to use a local variable in your simulation
model logic.
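As a quick illustrative sketch (not taken from any of the book's models), a local variable can drive a short loop inside an operation field:

    /* local countdown; exists only for the entity executing this logic */
    INT reps = 3
    WHILE reps > 0 DO
    BEGIN
      Wait 2 min
      reps = reps - 1
    END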
Problem Statement: Tracking Work in Process and Production
In the Poly Casting Inc. machine shop, raw castings arrive in batches of four every hour. From the raw material store they are sent to the mill, where they undergo a milling operation that takes an average of three minutes with a standard deviation of one minute (normally distributed). The milled castings go to the grinder, where they are ground for a duration that is uniformly distributed between four and six minutes, that is, U(5,1). After grinding, the ground pieces go to the finished parts store. Run the simulation for 100 hours. Track the work-in-process inventory and the production quantity.
The complete simulation model layout is shown in Figure L7.18. The locations are defined as Receiving_Dock, Mill, Grinder, and Finish_Parts_Store (Figure L7.19). Castings (entity) are defined to arrive in batches of four (Qty each) every 60 minutes (Frequency) as shown in Figure L7.20. The processes and routings are shown in Figure L7.21.
Define a variable in your model to track the work-in-process inventory (WIP) of parts in the machine shop. Also define another variable to track the production (PROD_QTY) of finished parts (Figure L7.22). Note that both of these are integer type variables.
In the process table, add the following operation statement in the Receiving location (Figure L7.21):
WIP = WIP + 1
Add the following operation statements in the outgoing Finish_Parts_Store location (Figure L7.21):
WIP = WIP - 1
PROD_QTY = PROD_QTY + 1
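Putting the pieces together, here is a sketch of the operation logic by location (the Wait parameters come from the problem statement; the exact entries are in Figure L7.21):

    Receiving_Dock      WIP = WIP + 1
    Mill                Wait N(3, 1) min
    Grinder             Wait U(5, 1) min
    Finish_Parts_Store  WIP = WIP - 1
                        PROD_QTY = PROD_QTY + 1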
FIGURE L7.18
Layout of Poly Casting Inc.
FIGURE L7.19
Locations at Poly Casting Inc.
FIGURE L7.20
Arrival of castings at Poly Casting Inc.
FIGURE L7.21
Processes and routings for the Poly Casting Inc. model.
FIGURE L7.22
Variables for the Poly Casting Inc. model.
FIGURE L7.23
Simulation model layout for Poly Castings Inc. with inspection.
FIGURE L7.24
Variables for the Poly Castings Inc. with inspection model.
The last four locations are defined with infinite capacity. The arrivals of castings are defined in batches of four every hour. Next we define five variables (Figure L7.24) to track work in process, production quantity, mill rework, grind rework, and scrap quantity. The processes and routings are defined as in Figure L7.25.
FIGURE L7.25
Processes and routings for the Poly Castings Inc. with inspection model.
FIGURE L7.26
Layout of El Segundo Composites.
FIGURE L7.27
Process and routing tables for El Segundo Composites.
FIGURE L7.28
Work in process value history.
FIGURE L7.29
Locations at the Calcutta Tea Company.
FIGURE L7.30
Process and routing tables at the Calcutta Tea Company.
FIGURE L7.31
Layout of the Calcutta
Tea Company.
FIGURE L7.32
Arrival of monitors and empty boxes at Shipping Boxes Unlimited.
FIGURE L7.33
Processes and routings for Shipping Boxes Unlimited.
FIGURE L7.34
A snapshot of the simulation model for Shipping Boxes Unlimited.
A snapshot of the running simulation model is shown in Figure L7.34. The plot of the work-in-process inventory for the 100 hours of simulation run is shown in Figure L7.35. Note that the work-in-process inventory rises to as much as 12 in the beginning. However, after achieving steady state, the WIP inventory stays mostly within the range of 0 to 3.
FIGURE L7.35
Time-weighted plot of the WIP inventory at Shipping Boxes Unlimited.
FIGURE L7.36
The locations at Shipping Boxes Unlimited.
FIGURE L7.37
Boxes loaded on
pallets at Shipping
Boxes Unlimited.
The arrivals of monitors, empty boxes, and empty pallets are shown in Figure L7.39. The processes and routings are shown in Figure L7.40. Note that comments can be inserted in a line of code as follows (Figure L7.40):
/* inspection time */
The plot of the work-in-process inventory for the 100 hours of simulation run is presented in Figure L7.41. Note that after the initial transient period (about 1200 minutes), the work-in-process inventory drops and stays mostly in a range of 0 to 2.
FIGURE L7.38
Entities at Shipping Boxes Unlimited.
FIGURE L7.39
Arrival of monitors, empty boxes, and empty pallets at Shipping Boxes Unlimited.
FIGURE L7.40
Process and routing tables at Shipping Boxes Unlimited.
FIGURE L7.41
Time-weighted plot of the WIP inventory at Shipping Boxes Unlimited.
FIGURE L7.42
Process and routing tables for California Adventure Park.
FIGURE L7.43
Layout of California Adventure Park.
The milk cartons (10 per case) are stored in the refrigerator for distribution to students during lunchtime. The distribution of milk cartons takes triangular(.1,.15,.2) minute per student. The time to split open the cases takes a minimum of 5 minutes and a maximum of 7 minutes (uniform distribution) per case. Moving the cases from receiving to the refrigerator area takes five minutes per case, and moving the cartons from the refrigerator to the distribution area takes 0.2 minute per carton. Students wait in the lunch line to pick up one milk carton each. There are only 100 students at this high school. Students show up for lunch with a mean interarrival time of 1 minute (exponential). On average, how long does a carton stay in the cafeteria before being distributed and consumed? What are the maximum and the minimum times of stay? Simulate for 10 days.
The layout of the San Dimas High School cafeteria is shown in Figure L7.44. Three entities (Milk_Case, Milk_Carton, and Student) are defined. Ten milk cases arrive with a frequency of 480 minutes. One hundred students show up for lunch each day. The arrival of students and milk cases is shown in Figure L7.45. The processing and routing logic is shown in Figure L7.46.
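Splitting each case into individual cartons can be written with ProModel's SPLIT statement. A sketch using the entity names above (the model's actual operation logic appears in Figure L7.46):

    /* split each milk case into 10 cartons */
    SPLIT 10 AS Milk_Carton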
FIGURE L7.44
Layout of the San Dimas High School cafeteria.
FIGURE L7.45
Arrival of milk and students at the San Dimas High School cafeteria.
FIGURE L7.46
Process and routing logic at the San Dimas High School cafeteria.
The IF-THEN-ELSE statement causes the program to take action1 if condition is true and action2 if condition is false. Each action consists of one or more ProModel statements. After an action is taken, execution continues with the line after the IF block.
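As a sketch of the construct, anticipating the drive-in example below (the routing-block numbers and the exact condition are illustrative assumptions; the model's actual logic appears in Figure L7.48):

    IF CONTENTS(Q_1) < 6 THEN
      ROUTE 1
    ELSE
    BEGIN
      Customer_Lost = Customer_Lost + 1
      ROUTE 2
    END

Here CONTENTS() returns the current number of entities at a location, so an arriving customer either joins the line or is counted as lost and routed away.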
Problem Statement
The Bombay Restaurant offers only a drive-in facility. Customers arrive at the rate of six each hour (exponential interarrival time). They place their orders at the first window, drive up to the next window for payment, pick up food from the last window, and then leave. The activity times are given in Table L7.4. The drive-in facility can accommodate 10 cars at most. However, customers typically leave
FIGURE L7.47
Control statements
available in ProModel.
TABLE L7.4 Activity times at the Bombay Restaurant

Activity       Time (minutes)
Order food     Normal(5,1)
Make payment   Normal(7,2)
Pick up food   Normal(10,2)
and go to the Madras Café across the street if six cars are waiting in line when they arrive. Simulate for 100 days (8 hours each day). Estimate the number of customers served each day. Estimate on average how many customers are lost each day to the competition.
An additional location (Arrive) is added to the model. After the customers arrive, they check if there are fewer than six cars at the restaurant. If yes, they join the line and wait; if not, they leave and go across the street to the Madras Café. An IF-THEN-ELSE statement is added to the logic window in the processing table (Figure L7.48). A variable (Customer_Lost) is added to the model to keep track of the number of customers lost to the competition.
FIGURE L7.48
Process and routing logic at the Bombay Restaurant.
FIGURE L7.49
Layout of the Bombay
Restaurant.
The layout of the Bombay Restaurant is shown in Figure L7.49. Note that Q_1 and Q_2 are each 100 feet long and Q_3 is 200 feet long.
The total number of customers lost is 501 in 100 days. The number of customers served in 100 days is 4791. The average cycle time per customer is 36.6 minutes.
FIGURE L7.50
An example of the WHILE-DO logic for Shipping Boxes Unlimited.
Problem Statement
The inspector in Section L7.7.2 is also the supervisor of the shop. As such, she inspects only when at least five full boxes are waiting for inspection in the Inspect_Q. A WHILE-DO loop is used to check if the queue has five or more boxes waiting for inspection (Figure L7.50). The loop is executed every hour. Figure L7.51 shows a time-weighted plot of the contents of the inspection queue. Note how the queue builds up to 5 (or more) before the inspector starts inspecting the full boxes.
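A sketch of such a loop (the model's actual logic appears in Figure L7.50; the hourly recheck interval comes from the description above):

    /* recheck once an hour until at least five full boxes are waiting */
    WHILE CONTENTS(Inspect_Q) < 5 DO
      Wait 60 min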
FIGURE L7.51
A plot of the contents of the inspection queue.
FIGURE L7.52
An example of a DO-WHILE loop.
FIGURE L7.53
A plot of the value of WIP at Poly Castings Inc.
FIGURE L7.54
The layout of the
Indian Bank.
FIGURE L7.55
An example of a GOTO statement.
FIGURE L7.56
Process and routing logic for the Bank of India.
FIGURE L7.57
Time-series plot of Teller_Q at the Bank of India.
FIGURE L7.58
Histogram of Teller_Q contents at the Bank of India.
The Teller_Q contents build up during the first five hours of the day; then, after the front door is locked at the fifth hour (300 minutes) into the simulated day, customers remaining in the queue are processed and the queue length decreases (down to zero in this particular simulation run). The queue length picks back up when the bank reopens the front door at simulation time 6.5 hours (390 minutes).
The histogram of the same queue (Figure L7.58) shows that approximately 49 percent of the time the queue was empty. About 70 percent of the time there are 3 or fewer customers waiting in line. What is the average time a customer spends in the bank? Would you recommend that the bank not close the door after 5 hours of operation (customers never liked this practice anyway)? Will the average customer stay longer in the bank?
L7.12 Exercises
1. Visitors arrive at Kids World entertainment park according to an
exponential interarrival time distribution with mean 2.5 minutes. The
travel time from the entrance to the ticket window is normally
distributed with a mean of three minutes and a standard deviation of
0.5 minute. At the ticket window, visitors wait in a single line until one
of four cashiers is available to serve them. The time for the purchase of
tickets is normally distributed with a mean of five minutes and a
standard deviation of one minute. After purchasing tickets, the visitors
go to their respective gates to enter the park. Create a simulation model,
with animation, of this system. Run the simulation model for 200 hours
to determine
a. The average and maximum length of the ticketing queue.
b. The average number of customers completing ticketing per hour.
c. The average utilization of the cashiers.
d. Whether management should add more cashiers.
2. A consultant for Kids World recommended that four individual queues
be formed at the ticket window (one for each cashier) instead of one
common queue. Create a simulation model, with animation, of this
system. Run the simulation model for 200 hours to determine
a. The average and maximum length of the ticketing queues.
b. The average number of customers completing ticketing per hour.
c. The average utilization of the cashiers.
d. Whether you agree with the consultant's decision. Would you
recommend a raise for the consultant?
3. At the Kids World entertainment park in Exercise 1, the operating
hours are 8 A.M. till 10 P.M. each day (all week). Simulate for a whole
year (365 days) and answer questions a through d as given in Exercise 1.
4. At Southern California Airlines traveler check-in facility, three types
of customers arrive: passengers with e-tickets (Type E), passengers with
paper tickets (Type T), and passengers that need to purchase tickets
(Type P). The interarrival distribution and the service times for these
passengers are given in Table L7.5. Create a simulation model, with
animation, of this system. Run the simulation model for 2000 hours.
If separate gate agents serve each type of passenger, determine the
following:
a. The average and maximum length of the three queues.
b. The average number of customers of each type completing check-in
procedures per hour.
c. The average utilization of the gate agents.
TABLE L7.5 Interarrival distributions and service times for Type E, Type T, and Type P passengers.
From                To                  Probability
Initial exam        X ray               .35
                    Operating room      .20
                    Recovery room       .15
                    Checkout room       .30
X ray               Operating room      .15
                    Cast-fitting room   .25
                    Recovery room       .40
                    Checkout room       .20
Operating room      Cast-fitting room   .30
                    Recovery room       .65
                    Checkout room       .05
Cast-fitting room   Recovery room       .55
                    X ray               .10
                    Checkout room       .35
Recovery room       Operating room      .10
                    X ray               .20
                    Checkout room       .70
The departments are Initial exam, X ray, Operating room, Cast-fitting room, and Recovery room.
However, every seven hours (420 minutes) the front door is locked for
an hour (60 minutes). No new patients are allowed in the nursing home
during this time. Patients already in the system continue to get served.
Simulate for one year (365 days, 24 hours per day).
a. Figure out the utilization of each department.
b. What are the average and maximum numbers of patients in each
department?
c. Which is the bottleneck department?
d. What is the average time spent by a patient in the nursing home?
7. United Electronics manufactures small custom electronic assemblies.
Parts must be processed through four stations: assembly, soldering,
painting, and inspection. Orders arrive with an exponential interarrival
distribution (mean 20 minutes). The process time distributions are
shown in Table L7.8.
The soldering operation can be performed on three jobs at a time.
Painting can be done on four jobs at a time. Assembly and inspection
are performed on one job at a time. Create a simulation model, with
animation, of this system. Simulate this manufacturing system for
100 days, eight hours each day. Collect and print statistics on the
utilization of each station, associated queues, and the total number
of jobs manufactured during each eight-hour shift (average).
8. In United Electronics in Exercise 7, 10 percent of all finished assemblies are sent back to soldering for rework after inspection, five percent are sent back to assembly for rework after inspection, and one
            Inspect Time        Correct Time
Center      Mean   Std. Dev.    Mean   Std. Dev.    P(error)    Mean   Std. Dev.
1           .7     .2           .2     .05          .1          .2     .05
2           .75    .25          .2     .05          .05         .15    .04
3           .8     .15          .15    .03          .03         .1     .02
FIGURE L7.59
Schematic of dump truck operation for DumpOnMe: trucks pass from the loader queue through Loader 1 or Loader 2 to the weighing queue and weighing scale, then travel and unload before returning.
Job    Number of   Number of Jobs   Assembly        Soldering       Painting         Inspection        Time between
Type   Batches     per Batch        Time            Time            Time             Time              Batch Arrivals
1      15          5                Tria(5,7,10)    Normal(36,10)   Uniform(55,15)   Exponential(8)    Exp(14)
2      25          3                Tria(7,10,15)   Normal(36,10)   Uniform(35,5)    Exponential(5)    Exp(10)
to the scale to be weighed as soon as possible. Both the loaders and the scale have a first-come, first-served waiting line (or queue) for trucks. Travel time from a loader to the scale is considered negligible. After being weighed, a truck begins travel time (during which time the truck unloads), and then afterward returns to the loader queue. The distributions of loading time, weighing time, and travel time are shown in Table L7.11.
a. Create a simulation model, with animation, of this system. Simulate for 200 days, eight hours each day.
b. Collect statistics to estimate the loader and scale utilization (percentage of time busy).
c. About how many trucks are loaded each day on average?
12. At the Pilot Pen Company, a molding machine produces pen barrels of two different colors, red and blue, in the ratio of 3:2. The molding time is triangular (3,4,6) minutes per barrel. The barrels go to a filling machine, where ink of the appropriate color is filled at the rate of 20 pens per hour (exponentially distributed). Another molding machine makes caps of the same two colors in the ratio of 3:2. The molding time is triangular (3,4,6) minutes per cap. At the next station, caps and filled barrels of matching colors are joined together. The joining time is exponentially distributed with a mean of 1 minute. Simulate for 2000 hours. Find the average number of pens produced per hour. Collect statistics on the utilization of the molding machines and the joining equipment.
13. Customers arrive at the NoWaitBurger hamburger stand with an interarrival time that is exponentially distributed with a mean of one minute. Out of 10 customers, 5 buy a hamburger and a drink, 3 buy a hamburger, and 2 buy just a drink. One server handles the hamburger while another handles the drink. A person buying both items needs to wait in line for both servers. The time it takes to serve a customer is N(70,10) seconds for each item. Simulate for 100 hours. Collect statistics on the number of customers served per hour, size of the queues, and utilization of the servers. What changes would you suggest to make the system more efficient?
14. Workers who work at the Detroit ToolNDie plant must check out tools from a tool crib. Workers arrive according to an exponential distribution with a mean time between arrivals of five minutes. At present, three tool crib clerks staff the tool crib. The time to serve a worker is normally distributed with a mean of 10 minutes and a standard deviation of 2 minutes. Compare the following servicing methods. Simulate for 2000 hours and collect data.
a. Workers form a single queue, choosing the next available tool crib
clerk.
b. Workers enter the shortest queue (each clerk has his or her own
queue).
c. Workers choose one of three queues at random.
15. At the ShopNSave, a small family-owned grocery store, there are only four aisles: aisle 1 (fruits/vegetables), aisle 2 (packaged goods, cereals and the like), aisle 3 (dairy products), and aisle 4 (meat/fish). The time between two successive customer arrivals is exponentially distributed with a mean of 5 minutes. After arriving to the store, each customer grabs a shopping cart. Twenty percent of all customers go to aisle 1, 30 percent go to aisle 2, 50 percent go to aisle 3, and 70 percent go to aisle 4. The number of items selected for purchase in each aisle is uniformly distributed between 2 and 8. The time spent to browse and pick up each item is normally distributed: N(5,2) minutes. There are three identical checkout counters; each counter has its own checkout
line. The customer chooses the shortest line. Once a customer joins a line, he or she is not allowed to leave or switch lines. The checkout time is given by the following regression equation:
Checkout time = N(3,0.3) + (# of items) * N(0.5,0.15) minutes
The first term of the checkout time is for receiving cash or a check or credit card from the customer, opening and closing the cash register, and handing over the receipt and cash to the customer. After checking out, a customer leaves the cart at the front of the store and exits. Build a simulation model for the grocery store. Use the model to simulate a 14-hour day.
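For instance, a customer with six items has an expected checkout time of 3 + 6 × 0.5 = 6 minutes.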
a. The percentages of customers visiting each aisle do not add up
to 100 percent. Why?
b. What is the average amount of time a customer spends at the
grocery store?
c. How many customers check out per cashier per hour?
d. What is the average amount of time a customer spends waiting in
the checkout line?
e. What is the average utilization of the cashiers?
f. Assuming there is no limit to the number of shopping carts,
determine the average and maximum number of carts in use at any
time.
g. On average, how many customers are waiting in line to check out?
h. If the owner adopts a customer service policy that there will never
be any more than three customers in any checkout line, how many
cashiers are needed?
MODEL VERIFICATION AND VALIDATION
Dew knot trussed yore spell chequer two fined awl yore mistakes.
Brendan Hills
In this lab we describe the verification and validation phases in the development and analysis of simulation models. In Section L8.1 we describe an inspection and rework model. In Section L8.2 we show how to verify the model by tracing the events in it. Section L8.3 shows how to debug a model. The ProModel logic, basic, and advanced debugger options are also discussed.
FIGURE L8.1
Layout of the Bombay
Clothing Mill.
FIGURE L8.2
Locations at the
Bombay Clothing Mill
warehouse.
FIGURE L8.3
Process and routing tables at the Bombay Clothing Mill.
FIGURE L8.6
Tracing the simulation
model of the Bombay
Clothing Mill
warehouse.
FIGURE L8.7
Plots of garment and relabel queue contents.
FIGURE L8.9
The ProModel
Debugger menu.
The user can launch the debugger using a DEBUG statement within the model
code or from the Options menu during run time. The system state can be monitored
to see exactly when and why things occur. Combined with the Trace window,
which shows the events that are being scheduled and executed, the debugger enables a modeler to track down logic errors or model bugs.
FIGURE L8.10 An example of a DEBUG statement in the processing logic.
FIGURE L8.11 The Debugger window.
Run: Continues the simulation, but still checks the debugger options
selected in the Debugger Options dialog box.
Next Statement: Jumps to the next statement in the current logic. Note
that if the last statement executed suspends the thread (for example, if
the entity is waiting to capture a resource), another thread that also meets
the debugger conditions may be displayed as the next statement.
Next Thread: Brings up the debugger at the next thread that is initiated or
resumed.
Into Subroutine: Steps to the first statement in the next subroutine executed by this thread. Again, if the last statement executed suspends the thread, another thread that also meets debugger conditions may be displayed first. If no subroutine is found in the current thread, a message is displayed in the Error Display box.
Options: Brings up the Debugger Options dialog box. You may also bring
up this dialog box from the Simulation menu.
Advanced: Changes the debugger to Advanced mode.
FIGURE L8.12 The ProModel Advanced Debugger options.
L8.4 Exercises
1. For the example in Section L8.1, insert a DEBUG statement when a garment
is sent back for rework. Verify that the simulation model is actually
sending back garments for rework to the location named Label_Q.
2. For the example in Section L7.1 (Pomona Electronics), trace the model
to verify that the circuit boards of type B are following the routing given
in Table L7.1.
3. For the example in Section L7.5 (Poly Casting Inc.), run the simulation
model and launch the debugger from the Options menu. Turn on the
Local Information in the Basic Debugger. Verify the values of the
variables WIP and PROD_QTY.
4. For the example in Section L7.3 (Bank of India), trace the model to
verify that successive customers are in fact being served by the three
tellers in turn.
5. For the example in Section L7.7.2 (Shipping Boxes Unlimited), trace the
model to verify that the full boxes are in fact being loaded on empty pallets
at the Inspector location and are being unloaded at the Shipping location.
SIMULATION OUTPUT
ANALYSIS
Nothing has such power to broaden the mind as the ability to investigate systematically and truly all that comes under thy observation in life.
Marcus Aurelius
[Figure: Spuds-n-More service steps with exponential times: Order food E(2), Make payment E(1.5), Pick up food E(5).]
FIGURE L9.1 Layout of the Spuds-n-More simulation model.
Order_Q have been processed and the simulation is terminated by the Stop statement. Notice that the combined capacity of the Entry and Order_Q locations is five in order to satisfy the requirement that the waiting area in front of the restaurant accommodates up to five customers.
We have been viewing the output from our simulations with ProModel's new Output Viewer 3DR. Let's conduct this lab using ProModel's traditional Output Viewer, which serves up the same information as the 3DR viewer but does so a little faster. To switch viewers, select Tools from the ProModel main menu bar and then select Options. Select the Output Viewer as shown in Figure L9.3.
L9.2.2 Replications
Run the simulation for five replications to record the number of customers served each day for five successive days. To run the five replications, select Options from under the Simulation main menu. Figure L9.4 illustrates the simulation options set to run five replications of the simulation. Notice that no run hours are specified.
FIGURE L9.2 The simulation model of Spuds-n-More.
After ProModel runs the five replications, it displays a message asking if you wish to see the results. Answer yes.
Next ProModel displays the General Report Type window (Figure L9.5). Here you specify your desire to see the output results for <All> replications and then click on the Options . . . button to specify that you wish the output report to also include the sample mean (average), sample standard deviation, and 90 percent confidence interval of the five observations collected via the five replications. The ProModel output report is shown in Figure L9.6. The number of customers processed each day can be found under the Current Value column of the output report in the VARIABLES section. The simulation results indicate that the number of customers processed each day fluctuates randomly. The fluctuation may be
FIGURE L9.3 ProModel's Default Output Viewer set to Output Viewer.
FIGURE L9.4 ProModel's Simulation Options window set to run five replications of the simulation.
FIGURE L9.5 ProModel General Report Options set to display the results from all replications as well as the average, standard deviation, and 90 percent confidence interval.
FIGURE L9.6 ProModel output report for five replications of the Spuds-n-More simulation.
the sample mean and sample standard deviation. With approximately 90 percent confidence, the true but unknown mean number of customers processed per day falls between 59.08 and 65.32 customers. These results convince Mr. Taylor that the model is a valid representation of the actual restaurant, but he wants to get a better estimate of the number of customers served per day.
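The interval arithmetic ProModel performs here is easy to reproduce outside the tool. The sketch below computes a t-based 90 percent confidence interval from replication outputs; note that the five observations are hypothetical stand-ins, since the lab reports only the resulting interval.

    import math
    import statistics
    from scipy import stats

    def confidence_interval(observations, confidence=0.90):
        """t-based confidence interval for the mean of independent replications."""
        n = len(observations)
        mean = statistics.mean(observations)
        sd = statistics.stdev(observations)     # sample standard deviation
        t_crit = stats.t.ppf(1 - (1 - confidence) / 2, n - 1)
        hw = t_crit * sd / math.sqrt(n)         # half-width
        return mean - hw, mean + hw

    # Hypothetical daily counts from five replications (not the lab's data)
    customers_per_day = [60, 65, 62, 58, 66]
    low, high = confidence_interval(customers_per_day)
    print(f"90% CI for mean customers/day: ({low:.2f}, {high:.2f})")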
FIGURE L9.7 Saving a ProModel output report into a Microsoft Excel file format.
FIGURE L9.8 The ProModel output report displayed within a Microsoft Excel spreadsheet.
saved the ProModel output to, click on the VARIABLES sheet tab at the bottom of the spreadsheet (Figure L9.8), highlight the 100 Current Value observations of the Processed variable (making sure not to include the average, standard deviation, 90% C.I. Low, and 90% C.I. High values, the last four values in the column), and paste the 100 observations into the Stat::Fit data table (Figure L9.9).
After the 100 observations are pasted into the Stat::Fit Data Table, display a histogram of the data. Based on the histogram, the observations appear somewhat normally distributed (Figure L9.9). Furthermore, Stat::Fit estimates that the normal distribution with mean = 59.30 and standard deviation = 4.08 provides an acceptable fit to the data. Therefore, we probably could have dropped the word "approximate" when we presented our confidence interval to Mr. Taylor. Be sure to verify all of this for yourself using the software.
Note that if you were to repeat this procedure in practice because the problem requires you to be as precise and as sure as possible in your conclusions, then you would report your confidence interval based on the larger number of observations. Never discard precious data.
Before leaving this section, look back at Figure L9.6. The Total Failed column in the Failed Arrivals section of the output report indicates the number of
customers that arrived to eat at Spuds-n-More but left because the waiting line
was full. Mr. Taylor thinks that his proposed expansion plan will allow him to capture some of the customers he is currently losing. In Lab Chapter 10, we will add embellishments to the as-is simulation model to reflect Mr. Taylor's expansion plan to see if he is right.
FIGURE L9.9 Stat::Fit analysis of the 100 observations of the number of customers processed per day by Spuds-n-More.
the simulation's output will continue to be random, the statistical distribution of the simulation's output does not change after the simulation reaches steady state. The period during which the simulation is in the transient phase is known as the warm-up period. This is the amount of time to let the simulation run before gathering statistics. Data collection for statistics begins at the end of the warm-up and continues until the simulation has run long enough to allow all simulation events (even rare ones) to occur many times (hundreds to thousands of times if practical).

Problem Statement
A simulation model of the Green Machine Manufacturing Company (GMMC) owned and operated by Mr. Robert Vaughn is shown in Figure L9.10. The interarrival time of jobs to the GMMC is constant at 1.175 minutes. Jobs require processing by each of the four machines. The processing time for a job at each green machine is given in Table L9.2.
FIGURE L9.10 The layout of the Green Machine Manufacturing Company simulation model.
TABLE L9.2 Processing time (ProModel format) for a job at Machine 1 through Machine 4.
FIGURE L9.11 The simulation model of the Green Machine Manufacturing Company.
FIGURE L9.12 Work-in-process (WIP) inventory value history for one replication of the GMMC simulation. (a) WIP value history without the warm-up phase removed; statistics will be biased low by the transient WIP values. (b) WIP value history with the 100-hour warm-up phase removed; statistics will not be biased low.
FIGURE L9.13 ProModel's Simulation Options window set for a single replication with a 100-hour warm-up period followed by a 150-hour run length.
details on this subject, and Exercise 4 in Lab Chapter 11 illustrates how to use the Welch method implemented in SimRunner.
Figure L9.14 was produced by SimRunner by recording the time-average WIP levels of the GMMC simulation over successive one-hour time periods. The results from each period were averaged across five replications to produce the raw data plot, which is the more erratic line that appears red on the computer screen. A 54-period moving average (the smooth line that appears green on the
FIGURE L9.14 SimRunner screen for estimating the end of the warm-up time.
computer) indicates that the end of the warm-up phase occurs between the 33rd and 100th periods. Given that we need to avoid underestimating the warm-up, let's declare 100 hours as the end of the warm-up time. We feel much more comfortable basing our estimate of the warm-up time on five replications. Notice that SimRunner indicates that at least 10 replications are needed to estimate the average WIP to within a 7 percent error and a confidence level of 90 percent using a warm-up of 100 periods (hours). You will see how this was done with SimRunner in Exercise 4 of Section L11.4 in Lab Chapter 11.
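The smoothing SimRunner applies here follows the Welch method, which is simple to sketch: average the period-by-period output across replications, then pass a moving-average window over the result and look for where the curve flattens. The code below is a minimal illustration under our own assumptions (synthetic data, NumPy convolution); it is not SimRunner's implementation.

    import numpy as np

    def welch_plot_data(period_averages: np.ndarray, window: int) -> np.ndarray:
        """Average across replications, then smooth with a moving window.

        period_averages has shape (replications, periods), e.g. the
        time-average WIP recorded over successive one-hour periods.
        """
        raw = period_averages.mean(axis=0)             # erratic raw-data line
        kernel = np.ones(window) / window
        return np.convolve(raw, kernel, mode="valid")  # smoothed line

    # Synthetic stand-in: 5 replications x 250 one-hour periods of WIP
    rng = np.random.default_rng(42)
    wip = rng.normal(loc=20.0, scale=4.0, size=(5, 250))
    smooth = welch_plot_data(wip, window=54)           # 54-period moving average
    # Pick the warm-up cutoff where `smooth` stops trending and levels off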
Why did we choose an initial run length of 250 hours to produce the time-series plot in Figure L9.14? Well, you have to start with something, and we picked 250 hours. You can rerun the experiment with a longer run length if the time-series plot, produced by averaging the output from several replications of the simulation, does not indicate a steady-state condition. Long runs will help prevent
you from wondering what the time-series plot looks like beyond the point at which you stopped it. Do you wonder what our plot does beyond 250 hours?
Let's now direct our attention to answering the question of how long to run the model past its warm-up time to estimate our steady-state statistic, mean WIP inventory. We will somewhat arbitrarily pick 100 hours, not because that is equal to the warm-up time but because it will allow ample time for the simulation events to happen thousands of times. In fact, the 100-hour duration will allow approximately 5,100 jobs to be processed per replication, which should give us decently accurate results. How did we derive the estimate of 5,100 jobs processed per replication? With a 1.175-minute interarrival time of jobs to the system, 51 jobs arrive per hour (60 minutes/1.175 minutes) to the system. Running the simulation for 100 hours should result in about 5,100 jobs (100 hours x 51 jobs per hour) exiting the system. You will want to check the number of Total Exits in the Entity Activity section of the ProModel output report that you just produced to verify this.
FIGURE L9.15 Specification of warm-up hours and run hours in the Simulation Options menu for running replications.
FIGURE L9.16 Ten replications of the Green Machine Manufacturing Company simulation using a 100-hour warm-up and a 100-hour run length.

------------------------------------------------------------------------
General Report
Output from C:\Bowden Files\Word\McGraw 2nd Edition\GreenMachine.MOD
Date: Jul/16/2002    Time: 01:04:42 AM
------------------------------------------------------------------------
Scenario        : Normal Run
Replication     : All
Period          : Final Report (100 hr to 200 hr Elapsed: 100 hr)
Warmup Time     : 100
Simulation Time : 200 hr
------------------------------------------------------------------------
VARIABLES
Variable             Total     Average Minutes   Minimum   Maximum   Current   Average
Name                 Changes   Per Change        Value     Value     Value     Value
---------------------------------------------------------------------------------------
WIP (Rep 1)          10202     0.588             13        31        27        24.369
WIP (Rep 2)          10198     0.588              9        32        29        18.928
WIP (Rep 3)          10207     0.587             14        33        22        24.765
WIP (Rep 4)          10214     0.587             10        26        22        18.551
WIP (Rep 5)          10208     0.587              8        27        25        16.929
WIP (Rep 6)          10216     0.587             13        27        17        19.470
WIP (Rep 7)          10205     0.587              8        30        24        19.072
WIP (Rep 8)          10215     0.587              8        25        14        15.636
WIP (Rep 9)          10209     0.587              9        26        20        17.412
WIP (Rep 10)         10209     0.587             11        27        24        18.504
WIP (Average)        10208.3   0.587             10.3      28.4      22.4      19.364
WIP (Std. Dev.)      5.735     0.0               2.311     2.836     4.501     2.973
WIP (90% C.I. Low)   10205     0.587             8.959     26.756    19.790    17.640
WIP (90% C.I. High)  10211.6   0.587             11.64     30.044    25.009    21.087
If we decide to make one single long run and divide it into batch intervals to estimate the expected WIP inventory, we would select the Batch Mean option in the Output Reporting section of the Simulation Options menu (Figure L9.17). A guideline in Chapter 9 for making an initial assignment to the length of time for each batch interval was to set the batch interval length to the simulation run time you would use for replications, in this case 100 hours. Based on the guideline, the Simulation Options menu would be configured as shown in Figure L9.17 and would produce the output in Figure L9.18. To get the output report to display the results of all 10 batch intervals, you specify <All> Periods in the General Report Type settings window (Figure L9.19). We are approximately 90 percent confident that the true but unknown mean WIP inventory is between 18.88 and 23.19 jobs. If we desire a smaller confidence interval, we can increase the sample size by extending the run length of the simulation in increments of the batch interval length. For example, we would specify run hours of 1500 to collect 15 observations of average WIP inventory. Try this and see if the confidence interval becomes smaller. Note that increasing the batch interval length is also helpful.
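The batch-means bookkeeping itself is compact enough to sketch. The code below splits a long post-warm-up output series into equal batches, takes each batch's mean, and forms a t-based confidence interval from those means; the data series is synthetic, and the function is our own illustration rather than ProModel's Batch Mean option.

    import math
    import statistics
    import numpy as np
    from scipy import stats

    def batch_means_ci(series, num_batches=10, confidence=0.90):
        """Confidence interval for the mean via the batch-means method."""
        batches = np.array_split(np.asarray(series, dtype=float), num_batches)
        means = [float(b.mean()) for b in batches]
        grand_mean = statistics.mean(means)
        sd = statistics.stdev(means)
        t_crit = stats.t.ppf(1 - (1 - confidence) / 2, len(means) - 1)
        hw = t_crit * sd / math.sqrt(len(means))
        return grand_mean - hw, grand_mean + hw

    # Synthetic 1000-hour post-warm-up WIP series, 10 batches of 100 hours
    rng = np.random.default_rng(7)
    wip_series = rng.normal(loc=21.0, scale=3.5, size=1000)
    print(batch_means_ci(wip_series))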
FIGURE L9.17 Warm-up hours, run hours, and batch interval length in the Simulation Options menu for running batch intervals. Note that the time units are specified on the batch interval length.
FIGURE L9.18 Ten batch intervals of the Green Machine Manufacturing Company simulation using a 100-hour warm-up and a 100-hour batch interval length.
------------------------------------------------------------------------
General Report
Output from C:\Bowden Files\Word\McGraw 2nd Edition\GreenMachine.MOD
Date: Jul/16/2002    Time: 01:24:24 AM
------------------------------------------------------------------------
Scenario        : Normal Run
Replication     : 1 of 1
Period          : All
Warmup Time     : 100 hr
Simulation Time : 1100 hr
------------------------------------------------------------------------
VARIABLES
Variable              Total     Average Minutes   Minimum   Maximum   Current   Average
Name                  Changes   Per Change        Value     Value     Value     Value
----------------------------------------------------------------------------------------
WIP (Batch 1)         10202     0.588             13        31        27        24.369
WIP (Batch 2)         10215     0.587             16        29        26        23.082
WIP (Batch 3)         10216     0.587              9        30        22        17.595
WIP (Batch 4)         10218     0.587             13        30        16        22.553
WIP (Batch 5)         10202     0.588             13        33        28        23.881
WIP (Batch 6)         10215     0.587             17        37        25        27.335
WIP (Batch 7)         10221     0.587              8        29        18        20.138
WIP (Batch 8)         10208     0.587             10        26        22        17.786
WIP (Batch 9)         10214     0.587              9        29        20        16.824
WIP (Batch 10)        10222     0.586              9        23        12        16.810
WIP (Average)         10213.3   0.587             11.7      29.7      21.6      21.037
WIP (Std. Dev.)       7.103     0.0               3.164     3.743     5.168     3.714
WIP (90% C.I. Low)    10209.2   0.587             9.865     27.530    18.604    18.884
WIP (90% C.I. High)   10217.4   0.587             13.534    31.869    24.595    23.190
FIGURE L9.19 ProModel General Report Options set to display the results from all batch intervals.
FIGURE L9.20 Stat::Fit autocorrelation plot of the observations collected over 100 batch intervals. The lag-1 autocorrelation is within the -0.20 to +0.20 range.

observations under the Average Value column from the VARIABLES sheet tab of the Excel spreadsheet into Stat::Fit.
Figure L9.20 illustrates the Stat::Fit results that you will want to verify. The lag-1 autocorrelation value is the first value plotted in Stat::Fit's Autocorrelation of Input Data plot. Note that the plot begins at lag-1 and continues to lag-20. For this range of lag values, the highest autocorrelation is 0.192 and the lowest value is -0.132 (see correlation(0.192, -0.132) at the bottom of the plot). Therefore, we know that the lag-1 autocorrelation is within the -0.20 to +0.20 range recommended in Section 9.6.2 of Chapter 9, which is required before proceeding to the final step of rebatching the data into between 10 and 30 larger batches. We have enough data to form 10 batches with a length of 1000 hours (10 batches at 1000 hours each equals 10,000 hours of simulation time, which we just did). The results of rebatching the data in the Microsoft Excel spreadsheet are shown in Table L9.3. Note that the first batch mean of 21.037 in Table L9.3 is the average of the first 10 batch means that we originally collected. The second batch mean of 19.401 is the average of the next 10 batch means and so on. The confidence
TABLE L9.3 Rebatched batch means (Batch 1 through Batch 10) with their average, standard deviation, and 90 percent C.I. low and high values.
interval is very narrow due to the long simulation time (1000 hours) of each batch. The owner of GMMC, Mr. Robert Vaughn, should be very pleased with the precision of the time-average WIP inventory estimate.
The batch interval method requires a lot of work to get it right. Therefore, if the time to simulate through the warm-up period is relatively short, the replications method should be used. Reserve the batch interval method for simulations that require a very long time to reach steady state.
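The two statistical checks used above, the lag-1 autocorrelation screen and the rebatching step, can be scripted in a few lines. The estimator below is the standard sample autocorrelation; the 100 batch means are placeholders rather than the lab's actual values.

    import numpy as np

    def lag1_autocorrelation(x):
        """Sample lag-1 autocorrelation of a sequence of batch means."""
        x = np.asarray(x, dtype=float)
        dev = x - x.mean()
        return float(np.sum(dev[:-1] * dev[1:]) / np.sum(dev ** 2))

    def rebatch(batch_means, group_size=10):
        """Average consecutive groups of batch means into larger batches."""
        x = np.asarray(batch_means, dtype=float)
        return x.reshape(-1, group_size).mean(axis=1)

    # Placeholder stand-in for the 100 batch means collected from ProModel
    rng = np.random.default_rng(1)
    means_100 = rng.normal(loc=21.0, scale=3.5, size=100)

    r1 = lag1_autocorrelation(means_100)
    if -0.20 <= r1 <= 0.20:                  # screen recommended in Chapter 9
        means_10 = rebatch(means_100, 10)    # 10 larger batches (1000 hours each)
        print(r1, means_10.mean())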
L9.4 Exercises
1. An average of 100 customers per hour arrive at the Picayune Mutual Bank. It takes a teller an average of two minutes to serve a customer. Interarrival and service times are exponentially distributed. The bank currently has four tellers working. Bank manager Rich Gold wants to compare the following two systems with regard to the average time customers spend in the bank.

System #1
A single queue is provided for customers to wait for the first available teller.
10 COMPARING
ALTERNATIVE SYSTEMS
In this lab we see how ProModel is used with some of the statistical methods
presented in Chapter 10 to compare alternative designs of a system with the goal
of identifying the superior system relative to some performance measure. We
will also learn how to program ProModel to use common random numbers
(CRN) to drive the simulation models that represent alternative designs for the
systems being compared. The use of CRN allows us to run the opposing simulations under identical experimental conditions to facilitate an objective evaluation
of the systems.
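The effect of common random numbers is easy to demonstrate in miniature: give each stochastic element its own stream and reuse the same seeds across the competing designs, so both designs are exercised by identical randomness. The toy model below is our own Python illustration and says nothing about ProModel's internal stream mechanics.

    import numpy as np

    def mean_service_time(num_customers, mean_service, seed):
        """Toy model: average service time over a stream of customers."""
        rng = np.random.default_rng(seed)    # dedicated, reproducible stream
        return rng.exponential(mean_service, num_customers).mean()

    # CRN: the same seed drives both designs, so the observed difference
    # reflects the design change rather than random-number noise.
    design_a = mean_service_time(1000, 2.0, seed=42)
    design_b = mean_service_time(1000, 1.5, seed=42)
    print(f"paired difference under CRN: {design_a - design_b:.3f}")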
already in the waiting area by closing time are served before the kitchen shuts
down for the day.
Mr. Taylor is thinking about expanding into the small newspaper stand next to Spuds-n-More, which has been abandoned. This would allow him to add a second window to serve his customers. The first window would be used for taking orders and collecting payments, and the second window would be used for filling orders. Before investing in the new space, however, Mr. Taylor needs to convince the major stockholder of Spuds-n-More, Pritchard Enterprises, that the investment would allow the restaurant to serve more customers per day.
A baseline (as-is) model of the current restaurant configuration was developed and validated in Section L9.2 of Lab Chapter 9. See Figures L9.1 and L9.2 for the layout of the baseline model and the printout of the ProModel model. Our task is to build a model of the proposed restaurant with a second window to determine if the proposed design would serve more customers per day than does the current restaurant. Let's call the baseline model Spuds-n-More1 and the proposed model Spuds-n-More2. The model is included on the CD accompanying the book under file name Lab 10_2 Spuds-n-More2.MOD.
The ProModel simulation layout of Spuds-n-More2 is shown in Figure L10.1, and the ProModel printout is given in Figure L10.2. After customers wait in the order queue (location Order_Q) for their turn to order, they move to the first window to place their orders and pay the order clerk (location Order_Clerk). Next customers proceed to the pickup queue (location Pickup_Q) to wait for their turn to be served by the pickup clerk (Pickup_Clerk) at the second window. There
FIGURE L10.1 Layout of the Spuds-n-More2 simulation model.
FIGURE L10.2 The Spuds-n-More2 simulation model.
is enough space for one customer in the pickup queue. The pickup clerk processes
one order at a time.
As the Spuds-n-More2 model was being built, Mr. Taylor determined that for
additional expense he could have a carpenter space the two customer service windows far enough apart to accommodate up to three customers in the order pickup
queue. Therefore, he requested that a third alternative design with an order pickup
queue capacity of three be simulated. To do this, we only need to change the
capacity of the pickup queue from one to three in our Spuds-n-More2 model. We shall call this model Spuds-n-More3. Note that for our Spuds-n-More2 and Spuds-n-More3 models, we assigned a length of 25 feet to the Order_Q, a length of 12 feet to the Pickup_Q, and a Customer entity travel speed of 150 feet per minute. These values affect the entity's travel time in the queues (the time for a customer to walk to the end of the queues) and are required to match the results presented in this lab.
FIGURE L10.3 Unique random number streams assigned to the Spuds-n-More model.
FIGURE L10.4 ProModel's Simulation Options set to run 25 replications using Common Random Numbers.
TABLE L10.1 Comparison of the Three Restaurant Designs Based on Paired Differences

(A)    (B)             (C)             (D)             (E)          (F)          (G)
Rep.   Spuds-n-More1   Spuds-n-More2   Spuds-n-More3   Difference   Difference   Difference
(j)    Customers       Customers       Customers       (B - C)      (B - D)      (C - D)
       Processed x1j   Processed x2j   Processed x3j   x(12)j       x(13)j       x(23)j
 1     60              72              75              -12          -15          -3
 2     57              79              81              -22          -24          -2
 3     58              84              88              -26          -30          -4
 4     53              69              72              -16          -19          -3
 5     54              67              69              -13          -15          -2
 6     56              72              74              -16          -18          -2
 7     57              70              71              -13          -14          -1
 8     55              65              65              -10          -10           0
 9     61              84              84              -23          -23           0
10     60              76              76              -16          -16           0
11     56              66              70              -10          -14          -4
12     66              87              90              -21          -24          -3
13     64              77              79              -13          -15          -2
14     58              66              66               -8           -8           0
15     65              85              89              -20          -24          -4
16     56              78              83              -22          -27          -5
17     60              72              72              -12          -12           0
18     55              75              77              -20          -22          -2
19     57              78              78              -21          -21           0
20     50              69              70              -19          -20          -1
21     59              74              77              -15          -18          -3
22     58              73              76              -15          -18          -3
23     58              71              76              -13          -18          -5
24     56              70              72              -14          -16          -2
25     53              68              69              -15          -16          -1

Sample mean x̄(ii')                                    -16.20       -18.28       -2.08
Sample standard dev s(ii')                               4.66         5.23         1.61

Sample means x̄(ii') and sample standard deviations s(ii') are computed for all i and i' between 1 and 3, with i < i'.
The Bonferroni approach with paired-t confidence intervals is used to evaluate our hypotheses for the three alternative designs for the restaurant. The evaluation of the three restaurant designs requires that three pairwise comparisons be made:

Spuds-n-More1 vs. Spuds-n-More2
Spuds-n-More1 vs. Spuds-n-More3
Spuds-n-More2 vs. Spuds-n-More3

Following the procedure described in Section 10.4.1 of Chapter 10 for the Bonferroni approach, we begin constructing the required three paired-t confidence intervals by letting α1 = α2 = α3 = α/3 = 0.06/3 = 0.02.
The computation of the three paired-t confidence intervals follows:

Spuds-n-More1 vs. Spuds-n-More2 (μ(12)):
α1 = 0.02
t(n-1, α1/2) = t(24, 0.01) = 2.485 from Appendix B
hw = (t(24, 0.01) s(12)) / √n = (2.485)(4.66) / √25 = 2.32 customers

The approximate 98 percent confidence interval is

x̄(12) - hw ≤ μ(12) ≤ x̄(12) + hw
-16.20 - 2.32 ≤ μ(12) ≤ -16.20 + 2.32
-18.52 ≤ μ(12) ≤ -13.88

Spuds-n-More1 vs. Spuds-n-More3 (μ(13)):
α2 = 0.02
t(n-1, α2/2) = t(24, 0.01) = 2.485 from Appendix B
hw = (t(24, 0.01) s(13)) / √n = (2.485)(5.23) / √25 = 2.60 customers
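The same half-width arithmetic can be scripted for any column of paired differences in Table L10.1. The sketch below uses the lab's Bonferroni-adjusted significance level of 0.02 per comparison; it reads the t quantile from SciPy instead of Appendix B, so the half-width may differ from the text in the third decimal place.

    import math
    import statistics
    from scipy import stats

    def paired_t_interval(differences, alpha=0.02):
        """Paired-t confidence interval for the mean of paired differences."""
        n = len(differences)
        mean = statistics.mean(differences)
        sd = statistics.stdev(differences)
        t_crit = stats.t.ppf(1 - alpha / 2, n - 1)   # about 2.49 for n = 25
        hw = t_crit * sd / math.sqrt(n)
        return mean - hw, mean + hw

    # Column E of Table L10.1: x(12)j, Spuds-n-More1 minus Spuds-n-More2
    x12 = [-12, -22, -26, -16, -13, -16, -13, -10, -23, -16, -10, -21, -13,
           -8, -20, -22, -12, -20, -21, -19, -15, -15, -13, -14, -15]
    low, high = paired_t_interval(x12)
    print(f"approximate 98% CI for mu(12): ({low:.2f}, {high:.2f})")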
L10.5 Exercises
1. Without using the Common Random Numbers (CRN) technique,
compare the Spuds-n-More1, Spuds-n-More2, and Spuds-n-More3
models with respect to the average number of customers served per day.
To avoid the use of CRN, assign ProModel Stream 1 to each stochastic
element in the model and do not select the CRN option from the
11 SIMULATION
OPTIMIZATION
WITH SIMRUNNER
The purpose of this lab is to demonstrate how to solve simulation-based optimization problems using SimRunner. The lab introduces the five major steps for formulating and solving optimization problems with SimRunner. After stepping through an example application of SimRunner, we provide additional application scenarios to help you gain experience using the software.
FIGURE L11.1 Relationship between SimRunner's optimization algorithms and the ProModel simulation model.
Step 2. Create a new SimRunner project and select the input factors you wish to test. For each input factor, define its numeric data type (integer or real) and its lower bound (lowest possible value) and upper bound (highest possible value). SimRunner will generate solutions by varying the values of the input factors according to their data type, lower bounds, and upper bounds. Care should be taken when defining the lower and upper bounds of the input factors to ensure that a combination of values will not be created that leads to a solution that was not envisioned when the model was built.

Step 3. After selecting the input factors, define an objective function to measure the utility of the solutions tested by SimRunner. The objective function is built using terms taken from the output report generated at the end of the simulation run. For example, the objective function could be based on entity statistics, location statistics, resource statistics, variable statistics, and so forth. In designing the objective function, the user specifies whether a term is to be minimized or maximized as well as the overall weighting of that term in the objective function. Some terms may be more important than other terms to the decision maker. SimRunner also allows you to seek a target value for an objective function term.

Step 4. Select the optimization profile and begin the search by starting the optimization algorithms. The optimization profile sets the size of the evolutionary algorithm's population. The population size defines the number of solutions evaluated by the algorithm during each generation of its search. SimRunner provides
FIGURE L11.2 Generally, the larger the size of the population, the better the result. The plot shows the maximum objective function value f(x) over time for the cautious, moderate, and aggressive profiles.
three population sizes: small, medium, and large. The small population size corresponds to the aggressive optimization profile, the medium population size corresponds to the moderate optimization profile, and the large population size corresponds to the cautious profile. In general, as the population size is increased, the likelihood that SimRunner will find the optimal solution increases, as does the time required to conduct the search (Figure L11.2).
Step 5. Study the top solutions found by SimRunner and pick the best. SimRunner
will show the user the data from all experiments conducted and will rank each
solution based on its utility, as measured by the objective function. Remember that
the value of an objective function is a random variable because it is produced from
the output of a stochastic simulation model. Therefore, be sure that each experiment
is replicated an appropriate number of times during the optimization.
Another point to keep in mind is that the list of solutions presented by
SimRunner represents a rich source of information about the behavior, or response
surface, of the simulation model. SimRunner can sort and graph the solutions
many different ways to help you interpret the meaning of the data.
FIGURE L11.4 ProModel model of Prosperity Company.
FIGURE L11.5 Response surface: relationship between the mean processing time and the mean number of entities waiting in the queue, given a mean time between arrivals of three.
FIGURE L11.6 ProModel macro editor.

mean processing time of the milling machine to zero minutes, which of course is a theoretical value. For complex systems, you would not normally know the answer in advance, but it will be fun to see how SimRunner moves through this known response surface as it seeks the optimal solution.
The first step in the five-step process for setting up a SimRunner project is to define the macros and their Run-Time Interface (RTI) in the simulation model. In addition to defining ProcessTime as a macro (Figure L11.6), we shall also define the time between arrivals (TBA) of plates to the system as a macro to be used later in the second scenario that management has asked us to look into. The identification for this macro is entered as TBA. Be sure to set each macro's Text . . . value as shown in Figure L11.6. The Text value is the default value of the macro. In this case, the default value for ProcessTime is 2 and the default value of TBA is 3. If you have difficulty creating the macros or their RTI, please see Lab Chapter 14.
Next we activate SimRunner from ProModel's Simulation menu. SimRunner opens in the Setup Project mode (Figure L11.7). The first step in the Setup Project module is to select a model to optimize or to select an existing optimization project (the results of a prior SimRunner session). For this scenario, we are optimizing a
FIGURE L11.7 The opening SimRunner screen.
model for the first time. Launching SimRunner from the ProModel Simulation menu will automatically load the model you are working on into SimRunner. See the model file name loaded in the box under Create new project - Select model in Figure L11.7. Note that the authors named their model ProsperityCo.Mod.
With the model loaded, the input factors and objective function are defined to complete the Setup Project module. Before doing so, however, let's take a moment to review SimRunner's features and user interface. After completing the Setup Project module, you would next run either the Analyze Model module or the Optimize Model module. The Analyze Model module helps you determine the number of replications to run to estimate the expected value of performance measures and/or to determine the end of a model's warm-up period using the techniques described in Chapter 9. The Optimize Model module automatically seeks the values for the input factors that optimize the objective function using the techniques described in Chapter 11. You can navigate through SimRunner by selecting items from the menus across the top of the window and along the left
FIGURE L11.8 Single-term objective function setup for scenario one.
side of the window or by clicking the <Previous or Next> buttons near the bottom right corner of the window.
Clicking the Next> button takes you to the section for defining the objective function. The objective function, illustrated in Figure L11.8, indicates the desire to minimize the average contents (in this case, plates) that wait in the location called InputPalletQue. The InputPalletQue is a location category. Therefore, to enter this objective, we select Location from the Response Category list under Performance Measures by clicking on Location. This will cause SimRunner to display the list of location statistics in the Response Statistic area. Click on the response statistic InputPalletQueAverageContents and then press the button below with the down arrows. This adds the statistic to the list of response statistics selected for the objective function. The default objective for each response statistic is maximize. In this example, however, we wish to minimize the average contents of the input pallet queue. Therefore, click on Location:Max:1*InputPalletQueAverageContents, which appears under the area labeled Response
Statistics Selected for the Objective Function; change the objective for the response statistic to Min; and click the Update button. Note that we accepted the default value of one for the weight of the factor. Please refer to the SimRunner User's Guide if you have difficulty performing this step.
Clicking the Next> button takes you to the section for defining the input factors. The list of possible input factors (macros) to optimize is displayed at the top of this section under Macros Available for Input (Figure L11.9). The input factor to be optimized in this scenario is the mean processing time of the milling machine, ProcessTime. Select this macro by clicking on it and then clicking the button below with the down arrows. This moves the ProcessTime macro to the list of Macros Selected as Input Factors (Figure L11.9). Next, indicate that you want to consider integer values between one and five for the ProcessTime macro. Ignore the default value of 2.00. If you wish to change the data type or lower and upper bounds, click on the input factor, make the desired changes, and click the Update button. Please note that an input factor is designated as an integer when

FIGURE L11.9 Single input factor setup for scenario one.
the lower and upper bounds appear without a decimal point in the Macro's properties section. When complete, SimRunner should look like Figure L11.9.
From here, you click the Next> button until you enter the Optimize Model module, or click on the Optimize Model module button near the top right corner of the window to go directly to it. The first step here is to specify the optimization options (Figure L11.10). Select the Aggressive Optimization Profile. Accept the default value of 0.01 for Convergence Percentage, the default of one for Min Generations, and the default of 99999 for Max Generations.
The convergence percentage, minimum number of generations, and maximum number of generations control how long SimRunner's optimization algorithms will run experiments before stopping. With each experiment, SimRunner records the objective function's value for a solution in the population. The evaluation of all solutions in the population marks the completion of a generation. The maximum number of generations specifies the most generations SimRunner will

FIGURE L11.10 Optimization and simulation options.
use to conduct its search for the optimal solution. The minimum number of generations specifies the fewest generations SimRunner will use to conduct its search for the optimal solution. At the end of a generation, SimRunner computes the population's average objective function value and compares it with the population's best (highest) objective function value. When the best and the average are at or near the same value at the end of a generation, all the solutions in the population are beginning to look alike (their input factors are converging to the same setting). It is difficult for the algorithms to locate a better solution to the problem once the population of solutions has converged. Therefore, the optimization algorithm's search is usually terminated at this point.
The convergence percentage controls how close the best and the average must be to each other before the optimization stops. A convergence percentage near zero means that the average and the best must be nearly equal before the optimization stops. A high percentage value will stop the search early, while a very small percentage value will run the optimization longer. High values for the maximum number of generations allow SimRunner to run until it satisfies the convergence percentage. If you want to force SimRunner to continue searching after the convergence percentage is satisfied, specify very high values for both the minimum number of generations and maximum number of generations. Generally, the best approach is to accept the default values shown in Figure L11.10 for the convergence percentage, maximum generations, and minimum generations.
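A rough sketch of this stopping rule, written from the description above (SimRunner's actual implementation is not shown in the lab):

    def should_stop(best, average, generation, convergence_pct=0.01,
                    min_generations=1, max_generations=99999):
        """Stop when the population's best and average objective values converge."""
        if generation >= max_generations:
            return True
        if generation < min_generations:
            return False
        # Relative gap between the best and average objective function values
        gap = abs(best - average) / max(abs(best), 1e-12)
        return gap <= convergence_pct

    # Example: a 1.9 percent gap exceeds the 1 percent threshold, so keep going
    print(should_stop(best=-0.162, average=-0.165, generation=3))   # False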
After you specify the optimization options, set the simulation options. Typically you will want to disable the animation as shown in Figure L11.10 to make the simulation run faster. Usually you will want to run more than one replication to estimate the expected value of the objective function for a solution in the population. When more than one replication is specified, SimRunner will display the objective function's confidence interval for each solution it evaluates. Note that the confidence level for the confidence interval is specified here. Confidence intervals can help you to make better decisions at the end of an optimization, as discussed in Section 11.6.2 of Chapter 11. In this case, however, use one replication to speed things along so that you can continue learning other features of the SimRunner software. As an exercise, you should revisit the problem and determine an acceptable number of replications to run per experiment. As indicated in Figure L11.10, set the simulation warm-up time to 50 hours and the simulation run time to 250 hours. You are now ready to have SimRunner seek the optimal solution to the problem.
With these operations completed, click the Next> button (Figure L11.10) and then click the Run button on the Optimize Model module (Figure L11.11) to start the optimization. For this scenario, SimRunner runs all possible experiments, locating the optimum processing time of one minute on its third experiment. The Experimental Results table shown in Figure L11.11 records the history of SimRunner's search. The first solution SimRunner evaluated called for a mean processing time at the milling machine of three minutes. The second solution evaluated assigned a processing time of two minutes. These sequence numbers are recorded in the
FIGURE L11.11 Experimental results table for scenario one.
Experiment column, and the values for the processing time (ProcessTime) input factor are recorded in the ProcessTime column. The value of the term used to define the objective function (minimize the mean number of plates waiting in the input pallet queue) is recorded in the InputPalletQue:AverageContents column. This value is taken from the output report generated at the end of a simulation. Therefore, for the third experiment, we can see that setting the ProcessTime macro equal to one results in an average of 0.162 plates waiting in the input pallet queue. If you were to conduct this experiment manually with ProModel, you would set the ProcessTime macro to one, run the simulation, display output results at the end of the run, and read the average contents for the InputPalletQue location from the report. You may want to verify this as an exercise.
Because the objective function was to minimize the mean number of plates waiting in the input pallet queue, the same values from the InputPalletQue:AverageContents column also appear in the Objective Function column. However, notice that the values in the Objective Function column are preceded by a negative sign
FIGURE L11.12 SimRunner's process for converting minimization problems to maximization problems: the number-in-queue response is replotted as -1 x Contents against process time.
(Figure L11.11). This has to do with the way SimRunner treats a minimization objective. SimRunner's optimization algorithms view all problems as maximization problems. Therefore, if we want to minimize a term called Contents in an objective function, SimRunner multiplies the term by a negative one {(-1)Contents}. Thus SimRunner seeks the minimal value by seeking the maximum negative value. Figure L11.12 illustrates this for the ideal production system's response surface.
Figure L11.13 illustrates SimRunner's Performance Measures Plot for this optimization project. The darker colored line (which appears red on the computer screen) at the top of the Performance Measures Plot represents the best value of the objective function found by SimRunner as it seeks the optimum. The lighter colored line (which appears green on the computer screen) represents the value of the objective function for all of the solutions that SimRunner tried.
The last menu item of the Optimize Model module is the Response Plot (Figure L11.11), which is a plot of the model's output response surface based on the solutions evaluated during the search. We will skip this feature for now and cover it at the end of the lab chapter.
FIGURE L11.13 SimRunner's Performance Plot indicates the progress of the optimization for scenario one.
FIGURE L11.14 Multiterm objective function for scenario two.

category. Management has indicated that a fairly high priority should be assigned to minimizing the space required for the input pallet queue area. Therefore, a weight of 100 is assigned to the second term in the objective function and a weight of one is assigned to the first term. Thus the objective function consists of the following two terms:

Maximize [(1)(Gear:TotalExits)]
and
Minimize [(100)(InputPalletQue:MaximumContents)]
Given that SimRunner's optimization algorithms view all problems as maximization problems, the objective function F becomes

F = Maximize {[(1)(Gear:TotalExits)] + [(-100)(InputPalletQue:MaximumContents)]}

The highest reward is given to solutions that produce the largest number of gears without allowing many plates to accumulate at the input pallet queue. In fact, a solution is penalized by 100 points for each unit increase in the maximum number of plates waiting in the input pallet queue. This is one way to handle competing objectives with SimRunner.
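To make the weighting concrete, here is a tiny scoring sketch that follows the objective function above: one point per gear shipped, minus 100 points per unit of maximum input-queue contents. The two statistic names come from the lab; the rest is our own scaffolding, not SimRunner's evaluation code.

    def objective_f(gear_total_exits, input_pallet_que_max_contents):
        """F = (1)(Gear:TotalExits) + (-100)(InputPalletQue:MaximumContents)."""
        return 1.0 * gear_total_exits - 100.0 * input_pallet_que_max_contents

    # Shipping 4000 gears while 12 plates pile up scores lower than
    # shipping 3900 gears with at most 3 plates ever in the queue.
    print(objective_f(4000, 12))   # 2800.0
    print(objective_f(3900, 3))    # 3600.0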
Develop a SimRunner project using this objective function to seek the optimal values for the input factors (macros) TBA and ProcessTime, which are integers between one and five (Figure L11.15). Use the aggressive optimization profile with the convergence percentage set to 0.01, max generations equal to 99999, and min generations equal to one. To save time, specify one replication

FIGURE L11.15 Multiple input factors setup for scenario two.
FIGURE L11.16 Experimental results table for scenario two.

per experiment. (Remember, you will want to run multiple replications on real applications.) Also, set the simulation run hours to 250 and the warm-up hours to 50 for now.
At the conclusion of the optimization, SimRunner will have run four generations as it conducted 23 experiments (Figure L11.16). What values do you recommend to management for TBA and ProcessTime?
Explore how sensitive the solutions listed in the Experimental Results table for this project are to changes in the weight assigned to the maximum contents statistic. Change the weight of this second term in the objective function from 100 to 50. To do this, go back to the Define Objectives section of the Setup Project module and update the weight assigned to the InputPalletQueMaximumContents response statistic from 100 to 50. Upon doing this, SimRunner warns you that the action will clear the optimization data that you just created. You can save the optimization project with the File Save option if you wish to keep the results from the original optimization. For now, do not worry about saving the data and click the Yes button below the warning message. Now rerun the
FIGURE L11.17 Experimental results table for scenario two with modified objective function.

optimization and study the result (Figure L11.17). Notice that a different solution is reported as optimum for the new objective function. Running a set of preliminary experiments with SimRunner is a good way to help fine-tune the weights assigned to terms in an objective function. Additionally, you may decide to delete terms or add additional ones to better express your desires. Once the objective function takes its final form, rerun the optimization with the proper number of replications.
For this application scenario, the managers of the ideal production system have specified that the mean time to process gears through the system should range between four and seven minutes. This time includes the time a plate waits in the input pallet queue plus the machining time at the mill. Recall that we built the model with a single entity type, named Gear, to represent both plates and gears. Therefore, the statistic of interest is the average time that the Gear entity is in the system. Our task is to determine values for the input factors ProcessTime and TBA that satisfy management's objective.
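A target-range objective can be sketched as a score that is nonzero only when the response lands inside the desired band, which matches the behavior observed below for Figure L11.19. This simplified rendering is our own, not SimRunner's internal scoring.

    def target_range_score(time_in_system, low=4.0, high=7.0):
        """Nonzero reward only when mean time in system falls in [low, high]."""
        return 1.0 if low <= time_in_system <= high else 0.0

    print(target_range_score(5.2))   # 1.0 -> inside the four-to-seven band
    print(target_range_score(8.9))   # 0.0 -> outside the band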
The target range objective function is represented in SimRunner as shown in Figure L11.18. Develop a SimRunner project using this objective function to seek the optimal values for the input factors (macros) TBA and ProcessTime. Specify that the input factors are integers between one and five, and use the aggressive optimization profile with the convergence percentage set to 0.01, maximum generations equal to 99999, and minimum generations equal to one. To save time, set

FIGURE L11.18 Target range objective function setup for scenario three.
FIGURE L11.19 Experimental results table with Performance Measures Plot for scenario three.
the number of replications per experiment to one. (Remember, you will want to
run multiple replications on real applications.) Also, set the simulation run hours
to 250 and the warm-up hours to 50 and run the optimization. Notice that only the
solutions producing a mean time in the system of between four and seven minutes
for the gear received a nonzero value for the objective function (Figure L11.19).
What values do you recommend to management for TBA and ProcessTime?
Now plot the solutions SimRunner presented in the Experimental Results
table by selecting the Response Plot button on the Optimize Model module
(Figure L11.19). Select the independent variables as shown in Figure L11.20 and
click the Update Chart button. The graph should appear similar to the one in
Figure L11.20. The plot gives you an idea of the response surface for this objective function based on the solutions that were evaluated by SimRunner. Click the
Edit Chart button to access the 3D graph controls to format the plot and to reposition it for different views of the response surface.
FIGURE L11.20 Surface response plot for scenario three.
L11.3 Conclusions
Sometimes it is useful to conduct a preliminary optimization project using only one replication to help you set up the project. However, you should rarely, if ever, make decisions based on an optimization project that used only one replication per experiment. Therefore, you will generally conduct your final project using multiple replications. In fact, SimRunner displays a confidence interval about the objective function when experiments are replicated more than once. Confidence intervals indicate how accurate the estimate of the expected value of the objective function is and can help you make better decisions, as noted in Section 11.6.2 of Chapter 11.
Even though it is easy to use SimRunner, do not fall into the trap of letting SimRunner, or any other optimizer, become the decision maker. Study the top solutions found by SimRunner as you might study the performance records of different cars for a possible purchase. Kick their tires, look under their hoods, and drive them around the block before buying. Always remember that the optimizer is not the decision maker. SimRunner can only suggest a possible course of action. It is your responsibility to make the final decision.
L11.4 Exercises
Simulation Optimization Exercises
1. Rerun the optimization project presented in Section L11.2.1, setting the number of replications to five. How do the results differ from the original results?
2. Conduct an optimization project on the buffer allocation problem presented in Section 11.6 of Chapter 11. The model's file name is Lab 11_4 BufferOpt Ch11.Mod and is included on the CD accompanying the textbook. To get your results to appear as shown in Figure 11.5 of Chapter 11, enter Buffer3Cap as the first input factor, Buffer2Cap as the second input factor, and Buffer1Cap as the third input factor. For each input factor, the lower bound is one and the upper bound is nine. The objective is to maximize profit. Profit is computed in the model's termination logic by

Profit = (10*Throughput) - (1000*(Buffer1Cap + Buffer2Cap + Buffer3Cap))

Figure L11.21 is a printout of the model. See Section 11.6.2 of Chapter 11 for additional details. Use the Aggressive optimization profile and set the number of replications per experiment to 10. Specify a warm-up time of 240 hours, a run time of 720 hours, and a confidence level of 95 percent. Note that the student version of SimRunner will halt at 25 experiments, which will be before the search is completed. However, it will provide the data necessary for answering these questions:
a. How do the results differ from those presented in Chapter 11 when only five replications were run per experiment?
b. Are the half-widths of the confidence intervals narrower?
c. Do you think that the better estimates obtained by using 10 replications will make it more likely that SimRunner will find the true optimal solution?
3. In Exercise 4 of Lab Section L10.5, you increased the amount of coal delivered to the railroad by the DumpOnMe facility by adding more dump trucks to the system. Your solution received high praise from everyone but the lead engineer at the facility. He is concerned about the maintenance needs for the scale because it is now consistently operated in excess of 90 percent. A breakdown of the scale will incur substantial repair costs and loss of profit due to reduced coal deliveries. He wants to know the number of trucks needed at the facility to achieve a target scale utilization of between 70 percent and 75 percent. This will allow time for proper preventive maintenance on the scale. Add a macro to the simulation model to control the number of dump trucks circulating in the system. Use the macro in the Arrivals Table to specify the number of dump trucks that are placed into the system at the start of each simulation. In SimRunner, select the macro as an input factor and assign
FIGURE L11.21
Buffer allocation model from Chapter 11 (Section 11.6.2).
FIGURE L11.22
SimRunner parameters
for the GMMC warmup example.
LAB 12 INTERMEDIATE MODELING CONCEPTS
All truths are easy to understand once they are discovered; the point is to
discover them.
Galileo Galilei
L12.1 Attributes
Attributes can be defined for entities or for locations. Attributes are placeholders similar to variables, but they are attached to specific entities or locations and usually contain information about that entity or location. Attributes are changed and assigned when an entity executes the line of logic that contains an operator, much like the way variables work. Some examples of attributes are part type, customer number, and time of arrival of an entity, as well as length, weight, volume, or some other characteristic of an entity.
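For example, an attribute can time-stamp each entity on arrival so that its time in system can be computed at the exit. The following sketch assumes an attribute Time_In and a variable Cycle_Time have been defined in the Attributes and Variables editors; the names are illustrative rather than taken from a particular model in this lab.

// Operation logic at the entry location: record the arrival time
Time_In = CLOCK(min)

// Operation logic at the exit location: compute the time in system
Cycle_Time = CLOCK(min) - Time_In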
            Mean    Half-Width
Children      8         2
Women        12         3
Men          10         2
FIGURE L12.2
Process and routing tables for Fantastic Dan.
FIGURE L12.3
Arrival of customers at Fantastic Dan.
FIGURE L12.4
Simulation model for Fantastic Dan.
FIGURE L12.6
The minimum,
maximum, and
average cycle times
for various customers.
How many widgets of each type are shipped each week (40-hour week)?
What is the cycle time for each type of widget?
What are the maximum and minimum cycle times?
What is the number of widgets reworked each week?
What is the average number of widgets waiting in the inspection queue?
Five locations (Mill, Input_Queue, Lathe, Inspect, and Inspect_Q) are defined for this model. Three variables are defined, as in Figure L12.7. Figure L12.8 shows how we keep track of the machined quantity as well as the probabilistic routings
FIGURE L12.7
Variables for Widgets-R-Us.
FIGURE L12.8
Keeping track of machined_qty and probabilistic routings at the Inspect location.
FIGURE L12.9
Simulation model for Widgets-R-Us.
after inspection. Figure L12.9 shows the complete simulation model with counters added for keeping track of the number of widgets reworked and the number
of widgets shipped.
Castings N Composites. Merge the model for Section L7.4 with the model for Section L7.6.1. All the finished products, castings as well as composites, are now sent to the shipping queue and shipping clerk. The model in Section L7.4 is shown in Figure L12.10. The complete simulation model, after the model for Section L7.6.1 is merged, is shown in Figure L12.11. After merging, make the necessary modifications in the process and routing tables.
We will make suitable modifications in the Processing module to reflect these changes (Figure L12.12). Also, the two original variables in Section L7.4 (WIP and PROD_QTY) are deleted and four variables are added: WIPCasting, WIPComposite, PROD_QTY_Casting, and PROD_QTY_Composite.
FIGURE L12.10
The layout of the
simulation model for
Section L7.4.
FIGURE L12.11
Merging the models from Section L7.4 and Section L7.6.1.
FIGURE L12.12
Changes made to the process table after merging.
7. To disable the downtime feature, click Yes in the disable field. Otherwise, leave this field as No.
         Time between Failures    Time to Repair
Lathe    120 minutes              N(10,2) minutes
Mill     200 minutes              T(10,15,20) minutes
FIGURE L12.14
Processes and routings at Widgets-R-Us Manufacturing Inc.
FIGURE L12.15
Complete simulation model for Widgets-R-Us Manufacturing Inc.
routings are shown in Figure L12.14. Figure L12.15 shows the complete simulation model.
Problem Statement
The turning center in this machine shop (Figure L12.16) has a time to failure (TTF) distribution that is exponential with a mean of 10 minutes. The repair time (TTR) is also distributed exponentially with a mean of 10 minutes.
This model shows how to get ProModel to implement downtimes that use time to failure (TTF) rather than time between failures (TBF). In practice, you most likely will want to use TTF because that is how data will likely be available to you, assuming you have unexpected failures. If you have regularly scheduled downtimes, it may make more sense to use TBF. In this example, the theoretical percentage of uptime is MTTF/(MTTF + MTTR), where M indicates a mean value. The first time to failure and time to repair are set in the variable initialization section (Figure L12.17). Others are set in the downtime logic (Figure L12.18).
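A minimal sketch of such downtime logic follows. It assumes two global variables, TTF and TTR, whose first values are drawn in the initialization logic as described above; the exact statements used in this model appear in Figure L12.18.

// Clock downtime logic for the turning center:
// hold the location down for the current repair time,
// then sample the next time to failure and time to repair
WAIT TTR min
TTF = E(10)
TTR = E(10)

The Frequency field of the clock downtime can then reference TTF so that each failure occurs after the newly sampled time to failure.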
The processing and routing tables are shown in Figure L12.19. Run this model for about 1000 hours, then view the batch mean statistics for downtime by picking "averaged for the period" (Figure L12.20) when the Output Analyzer (classical) comes up. The batch mean statistics for downtime for the turning center are shown in Figure L12.21. (This problem was contributed by Dr. Stephen Chick, University of Michigan, Ann Arbor.)
FIGURE L12.16
Layout of the machine shop, modeling breakdown with TTF and TTR.
FIGURE L12.17
Variable initializations.
FIGURE L12.18
Clock downtime logic.
FIGURE L12.19
Process and routing tables.
FIGURE L12.20
Average of all the
batches in the
Classical ProModel
Output Viewer.
FIGURE L12.21
Batch mean statistics for downtime.
                From       To
Coffee break    10 A.M.    10:15 A.M.
Lunch break     12 noon    12:45 P.M.
Coffee break    3 P.M.     3:15 P.M.
FIGURE L12.22
The operator Joe's weekly work and break times at Widgets-R-Us.
FIGURE L12.23
Assigning the Shift File to the operator Joe.
FIGURE L12.24
The layout and path network at Widgets-R-Us.
FIGURE L12.25
Processes and routings at Widgets-R-Us.
FIGURE L12.26
A snapshot during the
simulation model run
for Widgets-R-Us.
FIGURE L12.27
Locations at Joe's Jobshop.
Jobs    Machines    Process Times (minutes)    Job Mix
A       2, 3, 1     45, 60, 75                 25%
B       1, 2, 3     70, 70, 50                 35%
C       3, 1, 2     50, 60, 60                 40%
FIGURE L12.28
Entities at Joe's Jobshop.
FIGURE L12.29
Variables to track jobs processed at Joe's Jobshop.
FIGURE L12.30
Processes and routings at Joe's Jobshop.
FIGURE L12.31
Layout of Joe's Jobshop.
FIGURE L12.32
Choosing among upstream processes.
                    Process Time
Machining center    Triangular(10,12,18) minutes
Lathe               Triangular(12,15,20) minutes
From                To                  Distance (feet)
Incoming            Machining center    200
Machining center    Lathe               400
Lathe               Outgoing            200
Outgoing            Incoming            400
Exponential(60) minutes. All jobs are processed through a machining center and a lathe. The processing times (Table L12.5) for all jobs are triangularly distributed. A forklift truck that travels at the rate of 50 feet/minute handles all the material. Export jobs are given priority over domestic jobs for shop release, that is, in moving from the input queue to the machining center. The distance between the stations is given in Table L12.6. Simulate for 16 hours.
FIGURE L12.33
Forklift resource specified for Wang's Export Machine Shop.
FIGURE L12.34
Definition of path network for forklift for Wang's Export Machine Shop.
FIGURE L12.35
Processes and routings defined for Wang's Export Machine Shop.
Problem Statement
At Wang's Export Machine Shop, two types of jobs are processed: domestic and export. Mr. Wang is both the owner and the operator. The rate of arrival of both types of jobs is Exponential(60) minutes. Export jobs are processed on machining center E, and the domestic jobs are processed on machining center D. The processing times for all jobs are triangularly distributed (10, 12, 18) minutes. Mr. Wang gives priority to export jobs over domestic jobs. The distance between the stations is given in Table L12.7.
Five locations (Machining_Center_D, Machining_Center_E, In_Q_Domestic, In_Q_Export, and Outgoing_Q) are defined. Two types of jobs (domestic and export) arrive with an exponential interarrival frequency distribution of 60 minutes. Mr. Wang is defined as a resource in Figure L12.36. The path network and the processes are shown in Figures L12.37 and L12.38 respectively. Mr. Wang is getting old and can walk only 20 feet/minute with a load and 30 feet/minute without a load. Simulate for 100 hours.
Priorities of resource requests can be assigned through a GET, JOINTLY GET, or USE statement in operation logic, downtime logic, or move logic, or in the subroutines called from these logics. Priorities for resource downtimes are assigned in the Priority field of the Clock and Usage downtime edit tables.
Note that the priority of the resource (Mr_Wang) is assigned through the GET statement in the operation logic (Figure L12.38). The domestic orders have a resource request priority of 1, while that of the export orders is 10.
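In sketch form, the two requests differ only in the priority value that follows the resource name (the actual operation logic appears in Figure L12.38):

GET Mr_Wang, 1    // domestic job: low-priority request
GET Mr_Wang, 10   // export job: high-priority request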
FIGURE L12.36
Resource defined for Wang's Export Machine Shop.
From                  To                    Distance (feet)
Machining_Center_E    Machining_Center_D    200
Machining_Center_D    Outgoing              200
Outgoing              Machining_Center_E    200
FIGURE L12.37
Path network defined at Wang's Export Machine Shop.
FIGURE L12.38
Processes and routings defined at Wang's Export Machine Shop.
The results (partial) are shown in Figure L12.39. Note that the average time
waiting for the resource (Mr_Wang) is about 50 percent more for the domestic
jobs (with lower priority) than for the export jobs. The average time in the system
for domestic jobs is also considerably more than for the export jobs.
FIGURE L12.39
Part of the results showing the entity activities.
FIGURE L12.40
Arrival of orders at the Milwaukee Machine Shop.
FIGURE L12.41
Variables defined for the Milwaukee Machine Shop.
Order    Number of    Time between      Processing Time    Processing Time
Type     Orders       Order Arrivals    on Lathe           on Mill
1        120          E(8) hrs          E(3) hrs           U(4,1) hrs
2        100          E(10) hrs         U(3,1) hrs
shown in Figure L12.42. Note that as soon as a customer order is received at the Milwaukee Machine Shop, a signal is sent to Brookfield Forgings to ship a gear forging (of the appropriate type). Thus the arrival of customer orders pulls the raw material from the vendor. When the gears are fully machined, they are united (JOINed) with the appropriate customer order at the order arrival location. Figure L12.43 shows a layout of the Milwaukee Machine Shop and a snapshot of the simulation model.
FIGURE L12.42
Processes and routings defined for the Milwaukee Machine Shop.
FIGURE L12.43
Simulation model for the Milwaukee Machine Shop.
Kanban literally means "visual record." The word kanban refers to the signboard of a store or shop, but at Toyota it simply means any small sign displayed in front of a worker. The kanban contains information that serves as a work order. It gives information concerning what to produce, when to produce it, in what quantity, by what means, and how to transport it.
Problem Statement
A consultant recommends implementing a production kanban system for the Milwaukee Machine Shop of Section L12.9.1. Simulation is used to find out how many kanbans should be used. Model the shop with a total of five kanbans. The kanban procedure operates in the following manner:
1. As soon as an order is received by the Milwaukee Machine Shop, they communicate it to Brookfield Forgings.
2. Brookfield Forgings holds the raw material in their own facility in the forging queue in the sequence in which the orders were received.
3. The production of jobs at the Milwaukee Machine Shop begins only when a production kanban is available and attached to the production order.
4. As soon as the production of any job type is finished, the kanban is detached and sent to the kanban square, from where it is pulled by Brookfield Forgings and attached to a forging waiting in the forging queue to be released for production.
The locations at the Milwaukee Machine Shop are defined as shown in Figure L12.44. Kanbans are defined as virtual entities in the entity table (Figure L12.45).
The arrival of two types of gears at Brookfield Forgings is shown in the arrivals table (Figure L12.46). This table also shows the arrival of two types of customer orders at the Milwaukee Machine Shop. A total of five kanbans are generated at the beginning of the simulation run. These are recirculated through the system.
FIGURE L12.44
Locations at the Milwaukee Machine Shop.
FIGURE L12.45
Entities defined for Milwaukee Machine Shop.
FIGURE L12.46
Arrival of orders at Milwaukee Machine Shop.
FIGURE L12.47
Simulation model of a kanban system for the Milwaukee Machine Shop.
FIGURE L12.48
Process and routing tables for the Milwaukee Machine Shop.
Figure L12.47 shows the layout of the Milwaukee Machine Shop. The processes and the routings are shown in Figure L12.48. The arrival of a customer order (type 1 or 2) at the order arrival location sends a signal to Brookfield Forgings in the form of a production kanban. The kanban is temporarily attached (LOADed) to a gear forging of the right type at the Order_Q. The gear forgings are sent to the Milwaukee Machine Shop for processing. After they are fully processed, the kanban is separated (UNLOADed). The kanban goes back to the kanban square. The finished gear is united (JOINed) with the appropriate customer order at the order arrival location.
FIGURE L12.49
The Cost option in the Build menu.
FIGURE L12.50
The Cost dialog box: Locations option.
Locations
The Locations Cost dialog box (Figure L12.50) has two fields: Operation Rate and Per. Operation Rate specifies the cost per unit of time to process at the selected location. Costs accrue when an entity waits at the location or uses the location. Per is a pull-down menu to set the time unit for the operation rate as second, minute, hour, or day.
Resources
The Resources Cost dialog box (Figure L12.51) has three fields: Regular Rate, Per, and Cost Per Use. Regular Rate specifies the cost per unit of time for a resource used in the model. This rate can also be set or changed during run time
FIGURE L12.51
The Cost dialog box: Resources option.
FIGURE L12.52
The Cost dialog box: Entities option.
using the SETRATE operation statement. Per is a pull-down menu, as defined before. Cost Per Use is a field that allows you to define the actual dollar cost accrued every time the resource is obtained and used.
Entities
The Entities Cost dialog box (Figure L12.52) has only one field: Initial Cost. Initial Cost is the cost of the entity when it arrives to the system through a scheduled arrival.
Increment Cost
The costs of a location, resource, or entity can be incremented by a positive or negative amount using the following operation statements: IncLocCost, which increments the cost of the current location; IncResCost, which increments the cost of a resource; and IncEntCost, which increments the cost of the current entity.
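For example, operation logic at an inspection station might add a material cost to the entity and a handling charge to the location. This is only a sketch; the dollar amounts are illustrative:

IncEntCost 8    // add $8 of material cost to the current entity
IncLocCost 5    // add $5 to the cost accrued at this location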
From        To          Distance (feet)
Arriving    Lathe       40
Lathe       Mill 1      80
Mill 1      Mill 2      60
Mill 2      Exit        50
Exit        Arriving    80
          Operation Costs
Lathe     $10/minute
Mill 1    $18/minute
Mill 2    $22/minute
FIGURE L12.53
Processes and routings at Raja's manufacturing cell.
FIGURE L12.54
Simulation model of Raja's manufacturing cell.
AutoCAD drawings can also be copied to the clipboard and pasted into the background. The procedure is as follows:
1. With the graphic on the screen, press <Ctrl> and <C> together. Alternatively, choose Copy from the Edit menu. This will copy the graphic into the Windows clipboard.
2. Open an existing or new model file in ProModel.
3. Press <Ctrl> and <V> together. Alternatively, choose Paste from the Edit menu.
This action will paste the graphic as a background on the layout of the model.
Backgrounds can also be imported using the Edit menu in ProModel.
FIGURE L12.55
FIGURE L12.56
FIGURE L12.57
The Add View dialog
box.
1. Select General Information (Figure L12.59) from the Build menu. Select Initialization Logic and use the VIEW statement:
View "Full View"
2. Select Processing from the Build menu. Use the following statement in the Operation field when the Pallet Empty arrives at the Pallet Queue location (Figure L12.60):
View "Pallet Queue"
Also, use the following statement in the Operation field when the box arrives at the shipping dock:
View "Full View"
FIGURE L12.58
Full View of the Shipping Boxes Inc. model.
FIGURE L12.59
General Information
for the Shipping Boxes
Inc. model.
FIGURE L12.60
Processes and routings at Shipping Boxes Inc. incorporating the change of views.
Example
For the Widgets-R-Us example in Section L12.6, make a model package that includes the model file and the shift file for operator Joe. Save the model package on a floppy disk. Figure L12.61 shows the Create Model Package dialog.
Unpack
To unpack and install the model package, double-click on the package file. In the Unpack Model Package dialog select the appropriate drive and directory path to install the model file and its associated files (Figure L12.62). Then click Install. After the package file has been installed, ProModel prompts you (Figure L12.63) to load the model. Click Yes.
FIGURE L12.61
Create Model Package
dialog.
FIGURE L12.62
Unpack Model
Package dialog.
FIGURE L12.63
Load Model dialog.
L12.14 Exercises
1. Five different types of equipment are available for processing a special type of part for one day (six hours) of each week. Equipment 1 is available on Monday, equipment 2 on Tuesday, and so forth. The processing time data follow:

Equipment    Processing Time
1            5 ± 2
2            4 ± 2
3            3 ± 1.5
4            6 ± 1
5            5 ± 1
            Mean    Half-Width
Children      8         2
Women        12         3
Men          10         2
The initial greetings and signing in take Normal (2, 2) minutes, and the
transaction of money at the end of the haircut takes Normal (3, 3) minutes.
Run the simulation model for 100 working days (480 minutes each).
a. About how many customers of each type does Dan process per day?
b. What is the average number of customers of each type waiting to get
a haircut? What is the maximum?
c. What is the average time spent by a customer of each type in the
salon? What is the maximum?
3. Poly Castings Inc. receives castings from its suppliers in batches of one every eleven minutes (exponentially distributed). All castings arrive at the raw material store. Of these castings, 70 percent are used to make widget A, and the rest are used to make widget B. Widget A goes from the raw material store to the mill, and then on to the grinder. Widget B goes directly to the grinder. After grinding, all widgets go to degrease for cleaning. Finally, all widgets are sent to the finished parts store. Simulate for 1000 hours.
Widget      Process 1    Process 2    Process 3
Widget A    Mill         Grind        Degrease [7 min.]
Widget B    Grind        Degrease [7 min.]
c. What percentage of the time does each mill spend in cleaning and
tool change operations?
d. What is the average time a casting spends in the system?
e. What is the average work-in-process of castings in the system?
7. Consider the NoWaitBurger stand in Exercise 13, in Section L7.12, and answer the following questions.
a. What is the average amount of time spent by a customer at the hamburger stand?
b. Run 10 replications and compute a 90 percent confidence interval for the average amount of time spent by a customer at the stand.
c. Develop a 90 percent confidence interval for the average number of customers waiting at the burger stand.
8. Sharukh, Amir, and Salman wash cars at the Bollywood Car Wash. Cars arrive every 10 ± 6 minutes. They service customers at the rate of one every 20 ± 10 minutes. However, the customers prefer Sharukh to Amir, and Amir over Salman. If the attendant of choice is busy, the customers choose the first available attendant. Simulate the car wash system for 1000 service completions (car washes). Answer the following questions:
a. Estimate Sharukh's, Amir's, and Salman's utilization.
b. On average, how long does a customer spend at the car wash?
c. What is the longest time any customer spent at the car wash?
d. What is the average number of customers at the car wash?
Embellishment: The customers are forced to choose the first available attendant; no individual preferences are allowed. Will this make a significant enough difference in the performance of the system to justify this change? Answer questions a through d to support your argument.
9. Cindy is a pharmacist and works at the Save-Here Drugstore. Walk-in customers arrive at a rate of one every 10 ± 3 minutes. Drive-in customers arrive at a rate of one every 20 ± 10 minutes. Drive-in customers are given higher priority than walk-in customers. The number of items in a prescription varies from 1 to 5 (3 ± 2). Cindy can fill one item in 6 ± 1 minutes. She works from 8 A.M. until 5 P.M. Her lunch break is from 12 noon until 1 P.M. She also takes two 15-minute breaks: at 10 A.M. and at 3 P.M. Define a shift file for Cindy named Cindy.sft. Model the pharmacy for a year (250 days) and answer the following questions:
a. Estimate the average time a customer (of each type) spends at the drugstore.
b. What is the average number of customers (of each type) waiting for service at the drugstore?
c. What is the utilization of Cindy (percentage of time busy)?
d. Do you suggest that we add another pharmacist to assist Cindy? How many pharmacists should we add?
Probability
0.3
0.5
0.2
Operation    Time Required (minutes)
Assemble     30 ± 5
Fire         8 ± 2
Cost Information
Item                                   Cost ($)
Assembler's salary                     $35/hour
Oven cost                              $180 per 8-hour workday (independent of utilization)
Raw material                           $8 per sub-assembly
Sale price of finished sub-assembly    $40 per sub-assembly
[Figure: flow of office work through an audit, with 15% (rework) and 85% branches.]
LAB 13 MATERIAL HANDLING CONCEPTS
With malice toward none, with charity for all, with firmness in the right, as God gives us to see the right, let us strive on to finish the work we are in . . .
Abraham Lincoln (1809-1865)
L13.1 Conveyors
Conveyors are continuous material handling devices for transferring or moving objects along a fixed path having fixed, predefined loading and unloading points. Some examples of conveyors are belt, chain, overhead, power-and-free, roller, and bucket conveyors.
In ProModel, conveyors are locations represented by a conveyor graphic. A conveyor is defined graphically by a conveyor path on the layout. Once the path has been laid out, the length, speed, and visual representation of the conveyor can be edited by double-clicking on the path and opening the Conveyor/Queue dialog box (Figure L13.1). The various conveyor options are specified in the Conveyor Options dialog box, which is accessed by clicking on the Conveyor Options button in the Conveyor/Queue dialog box (Figure L13.2).
In an accumulating conveyor, if the lead entity comes to a stop, the trailing entities queue behind it. In a nonaccumulating conveyor, if the lead entity is unable to exit the conveyor, then the conveyor and all other entities stop.
FIGURE L13.1
The Conveyor/Queue
dialog box.
FIGURE L13.2
The Conveyor Options
dialog box.
FIGURE L13.3
Processes and routings for the Ship'n Boxes Inc. model.
FIGURE L13.4
Simulation model layout for Ship'n Boxes Inc.
            Length    Speed
Conv1       100       30 feet/minute
Conv2       100       50 feet/minute
Shipping    200       20 feet/minute
A path network defines the way a resource travels between locations. The specifications of path networks allow you to define the nodes at which the resource parks, the motion of the resource, and the path on which the resource travels. Path networks consist of nodes that are connected by path segments. A beginning node and an ending node define a path segment. Path segments may be unidirectional or bidirectional. Multiple path segments, which may be straight or joined, are connected at path nodes. To create path networks:
1. Select the Path button and then left-click in the layout where you want the path to start.
2. Subsequently, left-click to put joints in the path and right-click to end the path.
Interfaces are where the resource interacts with the location when it is on the path network. To create an interface between a node and a location:
1. Left-click and release on the node (a dashed line appears).
2. Then left-click and release on the location.
Multiple interfaces from a single node to locations can be created, but only one interface may be created from the same path network to a particular location.
FIGURE L13.5
Process and routing logic at Ghosh's Gear Shop.
FIGURE L13.6
Layout of Ghosh's Gear Shop.
We ran this model with one, two, or three operators working together. The results are summarized in Tables L13.2 and L13.3. The production quantities and the average time in system (minutes) are obtained from the output of the simulation analysis. The profit per hour, the expected delay cost per piece, and the
Number of    Production Quantity    Production Rate    Gross Profit    Average Time in
Operators    per 100 Hours          per Hour           per Hour        System (minutes)
1            199                    1.99               39.80           2423
2            404                    4.04               80.80           1782
3            598                    5.98               119.60          1354
Number of    Expected Delay    Expected Delay    Expected Service    Total Net
Operators    Cost ($/piece)    Cost ($/hour)     Cost ($/hour)       Profit ($/hour)
1            $4.04             $8.04             $20                 $11.76
2            $2.97             $12.00            $40                 $28.80
3            $2.25             $13.45            $60                 $46.15
additional throughput can be sold for $20 per item profit. Should we replace one operator in the previous manufacturing system with a conveyor system? Build a simulation model and run it for 100 hours to help in making this decision.
The layout, locations, and processes and routings are defined as shown in Figures L13.7, L13.8, and L13.9 respectively. The length of each conveyor is 40 feet (Figure L13.10). The speeds of all three conveyors are assumed to be 50 feet/minute (Figure L13.11).
The results of this model are summarized in Tables L13.4 and L13.5. The production quantities and the average time in system (minutes) are obtained from the output of the simulation analysis. The profit per hour, the expected delay cost
FIGURE L13.7
Layout of Ghosh's Gear Shop with conveyors.
FIGURE L13.8
Locations at Ghosh's Gear Shop with conveyors.
FIGURE L13.9
Process and routing at Ghosh's Gear Shop with conveyors.
FIGURE L13.10
Conveyors at Ghosh's Gear Shop.
FIGURE L13.11
Conveyor options at Ghosh's Gear Shop.
Mode of           Production Quantity    Production Rate    Gross Profit    Average Time in
Transportation    per 100 Hours          per Hour           per Hour        System (minutes)
Conveyor          747                    7.47               $149.40         891

Mode of           Expected Delay    Expected Delay    Expected Service    Total Net
Transportation    Cost ($/piece)    Cost ($/hour)     Cost ($/hour)       Profit ($/hour)
Conveyor          $1.485            $11.093           $16                 $122.307
per piece, and the expected delay cost per hour are calculated as follows:

Profit per hour = Production rate per hour × Profit per piece
Expected delay cost ($/piece) = (Average time in system (minutes)/60) × $0.1 per piece per hour
Expected delay cost ($/hour) = Expected delay cost ($/piece) × Production rate per hour
To calculate the service cost of the conveyor, we assume that it is used about 2000 hours per year (8 hrs/day × 250 days/year). Also, for depreciation purposes, we assume straight-line depreciation over three years.

Total cost of installation = $50/ft × 40 ft per conveyor segment × 3 conveyor segments = $6000
Depreciation per year = $6000/3 = $2000/year
Maintenance cost = $30,000/year
Total service cost per year = Depreciation cost + Maintenance cost = $32,000/year
Total service cost per hour = $32,000/2000 = $16/hour
The total net profit per hour after deducting the expected delay and the expected service costs is

Total net profit per hour = Gross profit per hour - Expected delay cost per hour - Expected service cost per hour
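As a check, plugging the conveyor results from Tables L13.4 and L13.5 into these formulas reproduces the tabulated values:

Gross profit per hour = 7.47 × $20 = $149.40
Expected delay cost ($/piece) = (891/60) × $0.1 = $1.485
Expected delay cost ($/hour) = $1.485 × 7.47 = $11.093
Total net profit per hour = $149.40 - $11.093 - $16 = $122.307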
Comparing the net profit per hour between manual material handling and conveyorized material handling, it is evident that conveyors should be installed to maximize profit.
FIGURE L13.13
Path network for the material handler.
FIGURE L13.14
Processes and routings for the example in Section L13.2.3.
FIGURE L13.15
Arrivals for the example in Section L13.2.3.
FIGURE L13.16
The layout for
Shipping Boxes Inc.
and exit station. The processing time on each machine is normally distributed with
a mean of 60 seconds and a standard deviation of 5 seconds. The arrival rate of
jobs is exponentially distributed with a mean of 120 seconds.
Raja also transports the jobs between the machines and the arrival and exit
stations. Job pickup and release times are uniformly distributed between six and
eight seconds. The distances between the stations are given in Table L13.6. Raja can
travel at the rate of 150 feet/minute when carrying no load. However, he can walk at
the rate of only 80 feet/minute when carrying a load. Simulate for 80 hours.
The layout, locations, path networks, resource specification, and processes and routings are shown in Figures L13.17, L13.18, L13.19, L13.20, and L13.21, respectively.
FIGURE L13.17
Layout of Raja's manufacturing cell.
From               To                 Distance (feet)
Arrival station    Lathe              40
Lathe              Mill 1             80
Mill 1             Mill 2             60
Mill 2             Exit               50
Exit               Arrival station    80
FIGURE L13.18
Locations at Raja's manufacturing cell.
FIGURE L13.19
Path networks at Raja's manufacturing cell.
FIGURE L13.20
Resource specification at Raja's manufacturing cell.
FIGURE L13.21
Process and routing tables at Raja's manufacturing cell.
Problem Statement
Pritha takes over the manufacturing operation (Section L13.2.4) from Raja and renames it Pritha's manufacturing cell. After taking over, she installs an overhead crane system to handle all the material movements between the stations. Job pickup and release times by the crane are uniformly distributed between six and eight seconds. The coordinates of all the locations in the cell are given in Table L13.7. The crane can travel at the rate of 150 feet/minute with or without a load. Simulate for 80 hours.
Five locations (Lathe, Mill 1, Mill 2, Exit_Station, and Arrive_Station) are defined for Pritha's manufacturing cell. The path networks, crane system resource, and processes and routings are shown in Figures L13.22, L13.23, and L13.24.
FIGURE L13.22
Path networks for the crane system at Pritha's manufacturing cell.
Location    X (Rail)    Y (Bridge)
Arriving    10          100
Lathe       50          80
Mill 1      90          80
Mill 2      80          20
Exit        0           20
FIGURE L13.23
The crane system resource defined.
FIGURE L13.24
Process and routing tables defined for Pritha's manufacturing cell.
Select Path Network from the Build menu. From the Type menu, select Crane. The following four nodes are automatically created when we select the crane type of path network: Origin, Rail1 End, Rail2 End, and Bridge End. Define the five nodes N1 through N5 to represent the three machines, the arrival station, and the exit station. Click on the Interface button in the Path Network menu and define all the interfaces for these five nodes.
Select Resource from the Build menu. Name the crane resource. Enter 1 in the Units column (one crane unit). Click on the Specs. button. The Specifications menu opens up. Select Net1 for the path network. Enter the empty and full speed of the crane as 150 ft/min. Also, enter Uniform(3,1) seconds as the pickup and deposit time.
L13.4 Exercises
1. Consider the DumpOnMe facility in Exercise 11 of Section L7.12 with
the following enhancements. Consider the dump trucks as material
handling resources. Assume that 10 loads of coal arrive to the loaders
every hour (randomly; the interarrival time is exponentially
distributed). Create a simulation model, with animation, of this system.
Simulate for 100 days, eight hours each day. Collect statistics to
estimate the loader and scale utilization (percentage of time busy).
About how many trucks are loaded each day on average?
2. For the Widgets-R-Us Manufacturing Inc. example in Section L12.5.1,
consider that a maintenance mechanic (a resource) will be hired to do the
repair work on the lathe and the mill. Modify the simulation model and run it for 2000 hours. What is the utilization of the maintenance mechanic?
Hint: Define maintenance_mech as a resource. In the Logic field of the Clock downtimes for Lathe enter the following code:
GET maintenance_mech
DISPLAY "The Lathe is Down for Maintenance"
Wait N(10,2) min
FREE maintenance_mech
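The mill's clock downtime logic can follow the same pattern; a sketch, assuming the T(10,15,20)-minute repair time given for the mill in the downtime table of Lab 12:

GET maintenance_mech
DISPLAY "The Mill is Down for Maintenance"
Wait T(10,15,20) min
FREE maintenance_mech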
[Figure for Exercise 3: Forge Inc., with a forging operation and machines Mach 1, Mach 2, and Mach 3.]
4. U.S. Construction Company has one bulldozer, four trucks, and two loaders. The bulldozer stockpiles material for the loaders. Two piles of material must be stocked prior to the initiation of any load operation. The time for the bulldozer to stockpile material is Erlang distributed and consists of the sum of two exponential variables, each with a mean of 4 (this corresponds to an Erlang variable with a mean of 8 and a variance of 32). In addition to this material, a loader and an unloaded truck must be available before the loading operation can begin. Loading time is exponentially distributed with a mean time of 14 minutes for server 1 and 12 minutes for server 2.
After a truck is loaded, it is hauled and then dumped; it must be returned before it is available for further loading. Hauling time is normally distributed. When loaded, the average hauling time is 22 minutes. When unloaded, the average time is 18 minutes. In both cases, the standard deviation is three minutes. Dumping time is uniformly distributed between two and eight minutes. Following a loading operation, the loader must rest for five minutes before it is available to begin loading again. Simulate this system at the U.S. Construction Co. for a period of one year (2000 working hours) and analyze it.
5. At Walnut Automotive, machined castings arrive randomly (exponential, mean of six minutes) from the supplier to be assembled at one of five identical engine assembly stations. A forklift truck delivers the castings from the shipping dock to the engine assembly department. A loop conveyor connects the assembly stations.
The forklift truck moves at a velocity of five feet per second. The distance from the shipping dock to the assembly department is 1000 feet. The conveyor is 5000 feet long and moves at a velocity of five feet per second.
At each assembly station, no more than three castings can be waiting for assembly. If a casting arrives at an assembly station and there is no room for the casting (there are already three castings waiting), it goes around for another try. The assembly time is normally distributed with a mean of five minutes and standard deviation of two minutes. The assembly stations are equally distributed around the belt. The load/unload station is located halfway between stations 5 and 1. The forklift truck delivers the castings to the load/unload station. It also picks up the completed assemblies from the load/unload station and delivers them back to the shipping dock.
Create a simulation model, with animation, of this system. Run the simulation model until 1000 engines have been assembled.
a. What is the average throughput time for the engines in the manufacturing system?
b. What are the utilization figures for the forklift truck and the conveyor?
c. What is the maximum number of engines in the manufacturing system?
d. What are the maximum and average numbers of castings on the conveyor?
[Figure for Exercise 6: three lanes (Lane 1, Lane 2, Lane 3) feeding a group of sorters and a shipping dock; times of 1 min. and 3 min.; conveyor speed 10 ft./min; length 250 ft.]
Simulate for one year (250 working days, eight hours each day). Answer the following questions:
a. How many sorters are required? The objective is to have the minimum number of sorters but also avoid overflowing the conveyors.
b. How many forklifts do we need?
c. Report on the sorter utilization, total number of cases shipped, and the number of cases palletized by lane.
7. Repeat Exercise 6 with a dedicated sorter in each sorting lane. Address
all the same issues.
8. Printed circuit boards arrive randomly from the preparation department. The boards are moved in sets of five by a hand truck to the component assembly department, where the board components are manually assembled. Five identical assembly stations are connected by a loop conveyor.
When boards are placed onto the conveyor, they are directed to the assembly station with the fewest boards waiting to be processed. After the components are assembled onto the board, they are set aside and removed for inspection at the end of the shift. The time between boards arriving from preparation is exponentially distributed with a mean of five seconds. The hand truck moves at a velocity of five feet per second and the conveyor moves at a velocity of two feet per second. The conveyor is 100 feet long. No more than 20 boards can be placed on the belt at any one time.
At each assembly station, no more than two boards can be waiting for assembly. If a board arrives at an assembly station and there is no room for the board (there are already two boards waiting), the board goes around the conveyor another time and again tries to enter the station. The assembly time is normally distributed with a mean of 35 seconds and standard deviation of 8 seconds. The assembly stations are uniformly distributed around the belt, and boards are placed onto the belt four feet before the first station. After all five boards are placed onto the belt, the hand truck waits until five boards have arrived from the preparation area before returning for another set of boards.
Simulate until 100 boards have been assembled. Report on the utilization of the hand truck, conveyor, and the five operators. How many assemblies are produced at each of the five stations? (Adapted from Hoover and Perry, 1989.)
9. In this example we will model the assembly of circuit boards at four identical assembly stations located along a closed loop conveyor (100 feet long) that moves at 15 feet per minute. The boards are assembled from kits, which are sent in totes from a load/unload station to the assembly stations via the top loop of the conveyor. Each kit can be assembled at any one of the four assembly stations. Completed boards are returned in their totes from the assembly stations back to the loading/unloading station. The loading/unloading station (located at the left end of the conveyor) and the four assembly stations (identified by the letters A through D) are equally spaced along the conveyor, 20 feet
FIGURE L13.27
[Diagram: steel coils and copper coils arrive at input stations A and B; Crane 1 and Crane 2 transport them to output stations C, D, and E.]
From       To          Distance (feet)
Input A    Output C    100
Input A    Output D    150
Input A    Input B     300
Input B    Output E    100
Reference
S. V. Hoover and R. F. Perry, Simulation: A Problem Solving Approach, Addison-Wesley, 1989, pp. B93-B95.
LAB 14 ADDITIONAL MODELING CONCEPTS
In this lab we discuss some of the advanced but very useful concepts in ProModel. In Section L14.1 we model a situation where customers balk (leave) when there is congestion in the system. In Section L14.2 we introduce the concepts of macros and runtime interfaces. In Section L14.3 we show how to generate multiple scenarios for the same model. In Section L14.4 we discuss how to run multiple replications. In Section L14.5 we show how to set up and import data from external files. In Section L14.6 we discuss arrays of data. Table functions are introduced in Section L14.7. Subroutines are explained with examples in Section L14.8. Section L14.9 introduces the concept of arrival cycles. Section L14.10 shows the use of user distributions. Section L14.11 introduces the concepts and use of random number streams.
FIGURE L14.1
Locations and layout of All American Car Wash.
FIGURE L14.2
Customer arrivals at All American Car Wash.
able to move until the car ahead of it moves. Cars arrive every 2.5 ± 2 minutes for a wash. If a car cannot get into the car wash facility, it drives across the street to Better Car Wash. Simulate for 100 hours.
a. How many customers does All American Car Wash lose to its competition per hour (balking rate per hour)?
b. How many cars are served per hour?
c. What is the average time spent by a customer at the car wash facility?
The customer arrivals and processes/routings are defined as shown in Figures L14.2 and L14.3. The simulation run hours are set at 100. The total number of cars that balked and went away to the competition across the street is 104 in 100 hours (Figure L14.4). That is about 10.4 cars per hour. The total number of customers served is 2377 in 100 hours (Figure L14.5). That is about 23.77 cars per hour. We can also see that, on average, the customers spent 14 minutes in the car wash facility.
FIGURE L14.3
Process and routing tables at All American Car Wash.
FIGURE L14.4
Cars that balked.
FIGURE L14.5
Customers served.
The runtime interface (RTI) is a useful feature through which the user can interact with and supply parameters to the model without having to rewrite it. Every time the simulation is run, the RTI allows the user to change model parameters defined in the RTI. The RTI provides a user-friendly menu to change only the macros that the modeler wants the user to change. An RTI is a custom interface defined by a modeler that allows others to modify the model or conduct multiple-scenario experiments without altering the actual model data. All changes are saved along with the model so they are preserved from run to run. RTI parameters are based on macros, so they may be used to change any model parameter that can be defined using a macro (that is, any field that allows an expression or any logic definition).
An RTI is created and used in the following manner:
1. Select Macros from the Build menu and type in a macro ID.
2. Click the RTI button and choose Define from the submenu. This opens the RTI Definition dialog box.
3. Enter the Parameter Name that you want the macro to represent.
4. Enter an optional Prompt to appear for editing this model parameter.
5. Select the parameter type, either Unrestricted Text or Numeric Range.
6. For numeric parameters:
a. Enter the lower value in the From box.
b. Enter the upper value in the To box.
7. Click OK.
8. Enter the default text or numeric value in the Macro Text field.
9. Use the macro ID in the model to refer to the runtime parameter (such as operation time or resource usage time) in the model.
10. Before running the model, use the Model Parameters dialog box or the Scenarios dialog box to edit the RTI parameter.
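For example, once a macro with ID Mill_Time has been defined as an RTI parameter, the macro can stand in for the milling time anywhere an expression is allowed, such as the Operation field of the process table. This mirrors the use of the Mill_Time and Grind_Time parameters shown later in Figure L14.9, though the exact expression in that model may differ:

WAIT Mill_Time min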
Problem Statement
Widgets-R-Us Inc. receives various kinds of widget orders. Raw castings of widgets arrive in batches of one every five minutes. Some widgets go from the raw material store to the mill, and then on to the grinder. Other widgets go directly to the grinder. After grinding, all widgets go to degrease for cleaning. Finally, all widgets are sent to the finished parts store. The milling and grinding times vary depending on the widget design. However, the degrease time is seven minutes per widget. The layout of Widgets-R-Us is shown in Figure L14.6.
Define a runtime interface to allow the user to change the milling and grinding times every time the simulation is run. Also, allow the user to change the total quantity of widgets being processed. Track the work-in-process inventory (WIP) of widgets. In addition, define another variable to track the production (PROD_QTY) of finished widgets.
The macros are defined as shown in Figure L14.7. The runtime interface for the milling time is shown in Figure L14.8. Figure L14.9 shows the use of the parameters Mill_Time and Grind_Time in the process and routing tables. To
FIGURE L14.6
Layout of
Widgets-R-Us.
FIGURE L14.7
Macros created for Widgets-R-Us simulation model.
FIGURE L14.8
The runtime interface defined for the milling operation.
FIGURE L14.9
The process and routing tables showing the Mill_Time and Grind_Time parameters.
FIGURE L14.10
FIGURE L14.11
FIGURE L14.12
The
Grind_Time_Halfwidth
parameter dialog box.
view or change any of the model parameters, select Model Parameters from the Simulation menu (Figure L14.10). The Model Parameters dialog box is shown in Figure L14.11. To change the Grind_Time_Halfwidth, first select it from the model parameters list, and then press the Change button. The Grind_Time_Halfwidth dialog box is shown in Figure L14.12.
FIGURE L14.14
FIGURE L14.15
Editing the scenario
parameter.
FIGURE L14.16
Arrivals table for Shipping Boxes Unlimited in Section L7.7.2.
FIGURE L14.17
Reports created for both the scenarios.
If the same file is to be read more than once in the model, it may be necessary to reset the file between each reading. This can be achieved by adding an arbitrary end-of-file marker, such as 99999, and the following two lines of code:
Read MydataFile1, Value1
If Value1 = 99999 Then Reset MydataFile1
The data stored in a general read file must be in ASCII format. Most spreadsheet programs (Lotus 1-2-3, Excel, and others) can convert spreadsheets to ASCII files (MydataFile1.TXT).
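Putting these pieces together, operation logic that draws a processing time from a general read file and recycles the file at the end-of-file marker might look like the following sketch; MydataFile1 and Value1 are the file ID and variable used above, and the file is assumed to store times in minutes:

Read MydataFile1, Value1
If Value1 = 99999 Then
Begin
Reset MydataFile1
Read MydataFile1, Value1
End
Wait Value1 min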
General Write. These files are used to write text strings and numeric values using the WRITE and WRITELINE statements. Text strings are enclosed in quotes when written to the file, with commas automatically appended to strings. This enables the files to be read into spreadsheet programs like Excel or Lotus 1-2-3 for viewing. Write files can also be written using the XWRITE statement, which gives the modeler full control over the output and formatting. If you write to the same file more than once, either during multiple replications or within a single replication, the new data are appended to the previous data.
The WRITE statement writes to a general write file. The next item is written to the file immediately after the previous item. Any file that is written to with the WRITE statement becomes an ASCII text file and ProModel attaches an end-of-file marker automatically when it closes the file.
WRITE <file ID>, <string or numeric expression>
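For example, the following statements append a label and then a value to a write file; MyReport and Throughput are illustrative names, not from a model in this lab:

WRITE MyReport, "Throughput: "
WRITE MyReport, Throughput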
The XWRITE statement allows the user to write in any format he or she chooses.
XWRITE <file ID>, <string or numeric expression>
XWRITE MyReport2, "Customer Service Completed At: " $ FORMAT(Var1,5,2)
TABLE L14.1
Column    Data
A         Entity name
B         Location name
C         Quantity per arrival
D         Time of first arrival
E         Number of arrivals
F         Frequency of arrivals
G         Attribute assignments
FIGURE L14.18
An external entity-location file in .WK1 format.
FIGURE L14.19
File ID and file name created for the external file.
FIGURE L14.20
The process table referring to the file ID of the external file.
of the castings are processed as casting type A, while the rest are processed as casting type B.
For Pomona Castings, Inc., create an entity-location file named P14_5.WK1 to store the process routing and process time information (Figure L14.18). In the simulation model, read from this external file to obtain all the process information. Build a simulation model and run it for 100 hours. Keep track of the work-in-process inventory and the production quantity.
Choose Build/More Elements/External Files. Define the ID as SvcTms. The Type of file is Entity Location. The file name (and the correct path) is also provided (Figure L14.19). In the Process definition, use the file ID (Figure L14.20) instead of the actual process time, for example, WAIT SvcTms(). Change the file path to point to the appropriate directory and drive where the external file is located. A snapshot of the simulation model is shown in Figure L14.21.
FIGURE L14.21
A snapshot of the
simulation model for
Pomona Castings, Inc.
L14.5 Arrays
An array is a collection of values that are related in some way, such as a list of test scores, a collection of measurements from some experiment, or a sales tax table. An array is a structured way of representing such data.
An array can have one or more dimensions. A two-dimensional array is useful when the data can be arranged in rows and columns. Similarly, a three-dimensional array is appropriate when the data can be arranged in rows, columns, and ranks. When several characteristics are associated with the data, still higher dimensions may be appropriate, with each dimension corresponding to one of these characteristics.
Each cell in an array works much like a variable. A reference to a cell in an array can be used anywhere a variable can be used. Cells in arrays are usually initialized to zero, although initializing cells to some other value can be done in the initialization logic. A WHILE-DO loop can be used for initializing array cell values.
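For instance, the two-dimensional array Temp[4,3] described below could be set to a common starting value in the initialization logic with nested WHILE-DO loops. This is only a sketch; Row and Col are local identifiers introduced here for illustration:

// Initialize every cell of the 4-row by 3-column array to zero
INT Row
INT Col
Row = 1
WHILE Row <= 4 DO
BEGIN
Col = 1
WHILE Col <= 3 DO
BEGIN
Temp[Row, Col] = 0
INC Col
END
INC Row
END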
Suppose that electroplating bath temperatures are recorded four times a day
at each of three locations in the tank. These temperature readings can be arranged
in an array having four rows and three columns (Table L14.2). These 12 data
items can be conveniently stored in a two-dimensional array named Temp[4,3]
with four rows and three columns.
An external Excel file (BathTemperature.xls) contains these bath temperature
data (Figure L14.22). The information from this file can be imported into an array
in ProModel (Figure L14.23) using the Array Editor. When you import data from
an external Excel spreadsheet into an array, ProModel loads the data from left to
right, top to bottom. Although there is no limit to the quantity of values you may
use, ProModel supports only two-dimensional arrays. Figure L14.24 shows the
Import File dialog in the Array Editor.
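Because the load order is left to right, top to bottom, the first spreadsheet row of Figure L14.22 fills Temp[1,1] through Temp[1,3]; with the data of Table L14.2, Temp[1,1] holds 75.5 and Temp[4,3] holds 75.8. A cell can then be used wherever a variable can, as in this illustrative fragment (vAvg is an assumed variable):

vAvg = (Temp[1,1] + Temp[2,1] + Temp[3,1] + Temp[4,1]) / 4   // average reading at Location 1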
FIGURE L14.22 An external file containing bath temperatures.
FIGURE L14.23
The Array Editor in ProModel.
FIGURE L14.24
The Import File dialog
in the Array Editor.
Table L14.2 Bath temperature readings

Reading   Location 1   Location 2   Location 3
1         75.5         78.7         72.0
2         78.8         78.9         74.5
3         80.4         79.4         76.3
4         78.5         79.1         75.8
Problem Statement
Table L14.3 shows the status of orders at the beginning of the month at Joe's Jobshop. In his shop, Joe has three machines through which three types of jobs are
routed. All jobs go to all machines, but with different routings. The data for job
routings and processing times (exponential) are given in Table L14.4. The
processing times are given in minutes. Use a one-dimensional array to hold the information on the order status. Simulate and find out how long it will take for Joe
to finish all his pending orders.
The layout, locations, and arrival of jobs at Joe's Jobshop are shown in
Figures L14.25, L14.26, and L14.27. The pending order array is shown in
FIGURE L14.25 Layout of Joe's Jobshop.

Table L14.3 Status of orders at Joe's Jobshop

Jobs   Pending Orders
A      25
B      27
C      17

Table L14.4 Job routings and process times (minutes)

Jobs   Machines   Process Times
A      2, 3, 1    45, 60, 75
B      1, 2, 3    70, 70, 50
C      3, 1, 2    50, 60, 60
Figure L14.28. The initialization logic is shown in Figure L14.29. The processes
and routings are shown in Figure L14.30.
It took about 73 hours to complete all the pending work orders. At eight
hours per day, it took Joe a little over nine days to complete the backlog of
orders.
FIGURE L14.26 Locations at Joe's Jobshop.
FIGURE L14.27 Arrival of jobs at Joe's Jobshop.
FIGURE L14.28 Pending order array for Joe's Jobshop.
FIGURE L14.29 Initialization logic in the General Information menu.
FIGURE L14.30 Process and routing tables for Joe's Jobshop.
x     f(x)
0     50
2     40
4     25
6     20
8     20
10    20
12    25
14    30
16    30
FIGURE L14.31
Locations in Save Here Grocery.
FIGURE L14.32
Arrivals at Save Here Grocery.
FIGURE L14.33
Process and routing tables for Save Here Grocery.
FIGURE L14.34
The layout of Save Here Grocery.
FIGURE L14.35
The Table Functions
dialog box.
L14.7 Subroutines
A subroutine is a separate section of code intended to accomplish a specific task.
It is a user-defined command that can be called upon to perform a block of logic
and optionally return a value. Subroutines may have parameters or local variables
(local to the subroutine) that take on the values of arguments passed to the subroutine. There are three variations for the use of subroutines in ProModel:
1. A subroutine is called by its name from the main block of code.
2. A subroutine is processed independently of the calling logic so that
the calling logic continues without waiting for the subroutine to finish.
An ACTIVATE statement followed by the name of the subroutine is
needed.
3. Subroutines written in an external programming language can be called
using the XSUB() function.
Subroutines are defined in the Subroutines Editor in the More Elements section of
the Build menu. For more information on ProModel subroutines, please refer to
the ProModel User's Guide.
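As a minimal sketch of the first two variations (the subroutine name LogReading and its argument vTemp are illustrative; the subroutine itself would be defined in the Subroutines Editor):

LogReading(vTemp)            // variation 1: the calling logic waits for the subroutine to return
ACTIVATE LogReading(vTemp)   // variation 2: the calling logic continues immediately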
Problem Statement
At California Gears, gear blanks are routed from a fabrication center to one of
three manual inspection stations. The operation logic for the gears is identical at
each station except for the processing times, which are a function of the individual inspectors. Each gear is assigned two attributes during fabrication. The first attribute, OuterDia, is the dimension of the outer diameter of the gear. The second
attribute, InnerDia, is the dimension of the inner diameter of the gear. During the
fabrication process, the outer and inner diameters are machined with an average
of 4.015 ± 0.01 and 2.015 ± 0.01 (uniformly distributed). These dimensions are
tested at the inspection stations and the values entered into a text file (Quality.doc)
for quality tracking. After inspection, gears are routed to a shipping location if
they pass inspection, or to a scrap location if they fail inspection.
Gear blanks arrive at the rate of 12 per hour (interarrival time exponentially
distributed with a mean of five minutes). The fabrication and inspection times
(normal) are shown in Table L14.6. The specification limits for the outer and inner
Table L14.6 Fabrication and inspection times

                   Mean   Standard Deviation
Fabrication time   3.2    0.1
Inspector 1        4      0.3
Inspector 2        5      0.2
Inspector 3        4      0.1
diameters are given in Table L14.7. The layout, locations, entities, and arrival of
raw material are shown in Figures L14.36, L14.37, L14.38, and L14.39, respectively. The subroutine defining routing logic is shown in Figure L14.40. Figure L14.41 shows the processes and routing logic. The external file in which
quality data will be written is defined in Figure L14.42. Figure L14.43 shows a
portion of the actual gear rejection report.
FIGURE L14.36 Layout of California Gears.

Table L14.7 Specification limits for gear diameters

                 Lower Specification Limit   Upper Specification Limit
Outer diameter   4.01"                       4.02"
Inner diameter   2.01"                       2.02"
FIGURE L14.37 Locations at California Gears.
FIGURE L14.38 Entities at California Gears.
FIGURE L14.39 Arrival of raw material at California Gears.
FIGURE L14.40 Subroutine defining routing logic.
FIGURE L14.41 Process and routing tables for California Gears.
FIGURE L14.42 External file for California Gears.
FIGURE L14.43 Gear rejection report (partial) for California Gears.
From         To           Percent
6:00 A.M.    6:30 A.M.    5
6:30 A.M.    8:00 A.M.    20
8:00 A.M.    11:00 A.M.   5
11:00 A.M.   1:00 P.M.    35
1:00 P.M.    5:00 P.M.    10
5:00 P.M.    7:00 P.M.    20
7:00 P.M.    9:00 P.M.    5
before and after these peak periods. The same cycle of arrivals repeats every day. A
total of 100 customers visit the store on an average day (normal distribution with a
standard deviation of five). Upon arrival, the customers take Uniform(5 ± 2) minutes to order and receive food and Normal(15, 3) minutes to eat, finish business
discussions, gossip, read a newspaper, and so on. The restaurant currently has only
one employee (who takes the order, prepares the food, serves, and takes the
money). Simulate for 100 days.
The locations, processes and routings, arrivals, and arrival quantities are
shown in Figures L14.45, L14.46, L14.47, and L14.48, respectively. The arrival
cycles and the probability density function of arrivals at Newport Beach Burger
are shown in Figure L14.49. A snapshot of the simulation model is shown in
Figure L14.50.
FIGURE L14.45
Locations at Newport Beach Burger.
FIGURE L14.46
Process and routing tables at Newport Beach Burger.
FIGURE L14.47
Arrivals at Newport Beach Burger.
FIGURE L14.48 Arrival quantities at Newport Beach Burger.
FIGURE L14.49 Arrival cycles at Newport Beach Burger.
FIGURE L14.50
A snapshot of Newport Beach Burger.
FIGURE L14.51
User Distributions
menu.
Group Size   Probability
1            .4
2            .3
3            .1
4            .2
shown in Table L14.10. The probability density function of eating times is
shown in Table L14.11. Simulate for 100 hours.
The user distributions, group size distribution, order time distribution, and
eating time distribution are defined as shown in Figures L14.52, L14.53, L14.54,
and L14.55, respectively.
FIGURE L14.52 User distributions defined for Newport Beach Burger.
FIGURE L14.53 Group size distribution for Newport Beach Burger.
FIGURE L14.54 Ordering time distribution for Newport Beach Burger.
FIGURE L14.55 Eating time distribution for Newport Beach Burger.
Table L14.10 Ordering time probabilities: .0, .35, .35, .3, .0
Table L14.11 Eating time probabilities: .0, .3, .35, .35, .0
FIGURE L14.56
Layout of Salt Lake
Machine Shop.
minutes. The downtimes and the definition of random number streams are shown
in Figures L14.60 and L14.61, respectively. Note that the same seed value (Figure L14.61) is used in both the random number streams to ensure that both machines are shut down at exactly the same time.
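A minimal sketch of the idea (the stream numbers, seed, and mean time between failures here are illustrative): define two streams in the Streams editor, give both the same seed, and reference one stream in each machine's downtime frequency field.

// Streams editor: Stream 1, seed 17; Stream 2, seed 17
// Clock downtime frequency for machine A:  E(120, 1)   // exponential, stream 1
// Clock downtime frequency for machine B:  E(120, 2)   // exponential, stream 2

Since identically seeded streams produce the same sequence of random numbers, both machines draw the same downtime intervals and fail together.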
FIGURE L14.57
Resources at Salt Lake Machine Shop.
FIGURE L14.58
Path network of the mechanic at Salt Lake Machine Shop.
FIGURE L14.59
Process and routing tables at Salt Lake Machine Shop.
FIGURE L14.60
Clock downtimes for machines A and B at Salt Lake Machine Shop.
FIGURE L14.61 Definition of random number streams.
L14.11 Exercises
1. Differentiate between the following:
a. Table functions versus arrays.
b. Subroutines versus macros.
c. Arrivals versus arrival cycles.
d. Scenarios versus replications.
e. Scenarios versus views.
f. Built-in distribution versus user distribution.
2. What are some of the advantages of using an external file in ProModel?
3. HiTek Molding, a small mold shop, produces three types of parts: Jobs
A, B, and C. The ratio of each part and the processing times (minimum,
mean, and maximum of a triangular distribution) are as follows:
Job Type   Ratio   Minimum   Mean       Maximum
Job A      0.4     30 sec.   60 sec.    90 sec.
Job B      0.5     20 sec.   30 sec.    40 sec.
Job C      0.1     90 sec.   120 sec.   200 sec.
7. For the Detroit Tool-N-Die plant (Exercise 14, Section L7.12), generate
the following three scenarios:
a. Scenario I: One tool crib clerk.
b. Scenario II: Two tool crib clerks.
c. Scenario III: Three tool crib clerks.
Run 10 replications of each scenario. Analyze and compare the results.
How many clerks would you recommend hiring?
8. In United Electronics (Exercise 7 in Section L7.12), use an array to
store the process time information. Read this information from an
external Excel spreadsheet into the simulation model.
9. For Salt Lake City Electronics (Exercise 10 in Section L7.12), use
external files (arrivals and entity_location files) to store all the data.
Read this file directly into the simulation model.
10. For Ghosh's Gear Shop example in Section L13.2.1, create macros
and suitable runtime interfaces for the following processing time
parameters:
11. West Coast Federal, a drive-in bank, has one teller and space for five
waiting cars. If a customer arrives when the line is full, he or she drives
around the block and tries again. Time between arrivals is exponential
with mean of 10 minutes. Time to drive around the block is normally
distributed with mean 3 min and standard deviation 0.6 min. Service
time is uniform at 9 ± 3 minutes. Build a simulation model and run it
for 2000 hours (approximately one year of operation).
for 2000 hours (approximately one year of operation).
a. Collect statistics on time in queue, time in system, teller utilization,
number of customers served per hour, and number of customers
balked per hour.
b. Modify the model to allow two cars to wait after they are served to
get onto the street. Waiting time for traffic is exponential with a
mean of four minutes. Collect all the statistics from part a.
c. Modify the model to reflect balking customers leaving the system
and not driving around the block. Collect all the same statistics. How
many customers are lost per hour?
d. The bank's operating hours are 9 A.M. till 3 P.M. The drive-in facility
is closed at 2:30 P.M. Customers remaining in line are served until
the last customer has left the bank. Modify the model to reflect these
changes. Run for 200 days of operation.
12. San Dimas Mutual Bank has two drive-in ATM kiosks in tandem but
only one access lane (Figure L14.62). In addition, there is one indoor
FIGURE L14.62 Layout of San Dimas Mutual Bank (parking lot, walk-in/indoor ATM, and drive-in ATM 1 and ATM 2 in tandem).
ATM for customers who decide to park (30 percent of all customers)
and walk in. Customers arrive at intervals that are spaced on average
five minutes apart (exponentially distributed). ATM customers are of
three types: save money (deposit cash or check), spend money
(withdraw cash), or count money (check balance). If both ATMs are
free when a customer arrives, the customer will use the downstream
ATM 2. A car at ATM 1 cannot pass a car at the ATM in front of it even
if it has finished.
Type of Customer   Proportion   Time at ATM
Save Money         1/3          Normal(7,2) minutes
Spend Money        1/2          Normal(5,2) minutes
Count Money        1/6          Normal(3,1) minutes
time at this ATM is the same as at the drive-in ATMs. Run the model until
2000 cars have been served. Analyze the following:
a. Average and maximum number of customers deciding to park and
walk in. How big should the parking lot be?
b. The average and maximum drive-in queue size.
c. The average and maximum time spent by a customer waiting in the
drive-in queue.
d. The average and maximum walk-in queue size.
e. The average and maximum time spent by a customer waiting in the
walk-in queue.
f. The average and maximum time in the system.
g. Utilization of the three ATMs.
h. Number of customers served each hour.
APPENDIX A
COMMON CONTINUOUS AND DISCRETE DISTRIBUTIONS*
A.1 Continuous Distributions
Beta Distribution (min, max, p, q)

$$f(x) = \frac{1}{B(p,q)}\,\frac{(x-\min)^{p-1}\,(\max-x)^{q-1}}{(\max-\min)^{p+q-1}}, \qquad \min \le x \le \max$$

min = minimum value of x
max = maximum value of x
p = lower shape parameter > 0
q = upper shape parameter > 0
B(p, q) = beta function

$$\text{mean} = (\max-\min)\,\frac{p}{p+q} + \min$$

$$\text{variance} = (\max-\min)^2\,\frac{pq}{(p+q)^2\,(p+q+1)}$$

$$\text{mode} = \begin{cases} (\max-\min)\,\dfrac{p-1}{p+q-2} + \min & \text{if } p > 1,\ q > 1 \\[4pt] \max & \text{if } (p \ge 1,\ q < 1) \text{ or } (p > 1,\ q = 1) \end{cases}$$

*Adapted by permission from Stat::Fit User's Guide (South Kent, Connecticut: Geer Mountain Software Corporation, 1997).
Erlang Distribution (min, m, β)

$$f(x) = \frac{(x-\min)^{m-1}}{\beta^{m}\,\Gamma(m)} \exp\!\left[-\frac{x-\min}{\beta}\right]$$

min = minimum x
m = shape factor = positive integer
β = scale factor > 0

mean = min + mβ
variance = mβ²
mode = min + (m − 1)β
Exponential Distribution (min, β)

$$f(x) = \frac{1}{\beta} \exp\!\left[-\frac{x-\min}{\beta}\right]$$

The exponential distribution is a continuous distribution bounded on the lower side. Its
shape is always the same, starting at a finite value at the minimum and continuously
decreasing at larger x. As shown in the example, the exponential distribution decreases
rapidly for increasing x.
The exponential distribution is frequently used to represent the time between random
occurrences, such as the time between arrivals at a specific location in a queuing model or
the time between failures in reliability models. It has also been used to represent the service
times of a specific operation. Further, it serves as an explicit manner in which the time
dependence on noise may be treated. As such, these models are making explicit use of
the lack of history dependence of the exponential distribution; it has the same set of
probabilities when shifted in time. Even when exponential models are known to be
inadequate to describe the situation, their mathematical tractability provides a good starting
point. A more complex distribution such as Erlang or Weibull may be investigated (see
Johnson et al. 1994, p. 499; Law and Kelton 1991, p. 330).
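The lack of history dependence can be verified in one line from the density given above (taking min = 0):

$$P(X > s + t \mid X > s) = \frac{e^{-(s+t)/\beta}}{e^{-s/\beta}} = e^{-t/\beta} = P(X > t),$$

so the time remaining until the next occurrence is unaffected by how long one has already waited.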
Gamma Distribution (min, α, β)

$$f(x) = \frac{(x-\min)^{\alpha-1}}{\beta^{\alpha}\,\Gamma(\alpha)} \exp\!\left[-\frac{x-\min}{\beta}\right]$$

min = minimum x
α = shape parameter > 0
β = scale parameter > 0

mean = min + αβ
variance = αβ²
mode = min + (α − 1)β  if α ≥ 1;  min  if α < 1
The gamma distribution is a continuous distribution bounded at the lower side. It has
three distinct regions. For α = 1, the gamma distribution reduces to the exponential distribution, starting at a finite value at minimum x and decreasing monotonically thereafter. For
α < 1, the gamma distribution tends to infinity at minimum x and decreases monotonically
for increasing x. For α > 1, the gamma distribution is 0 at minimum x, peaks at a value that
depends on both alpha and beta, and decreases monotonically thereafter. If α is restricted
to positive integers, the gamma distribution is reduced to the Erlang distribution.
Note that the gamma distribution also reduces to the chi-square distribution for
min = 0, β = 2, and α = n/2. It can then be viewed as the distribution of the sum of
squares of independent unit normal variables, with n degrees of freedom, and is used in
many statistical tests.
The gamma distribution can also be used to approximate the normal distribution, for
large α, while maintaining its strictly positive values of x [actually (x − min)].
The gamma distribution has been used to represent lifetimes, lead times, personal income data, a population about a stable equilibrium, interarrival times, and service times. In
particular, it can represent lifetime with redundancy (see Johnson et al. 1994, p. 343;
Shooman 1990).
Examples of each of the regions of the gamma distribution are shown here. Note the
peak of the distribution moving away from the minimum value for increasing α, but with a
much broader distribution.
Lognormal Distribution (min, μ, σ)

$$f(x) = \frac{1}{(x-\min)\,\sigma\,\sqrt{2\pi}} \exp\!\left[-\frac{[\ln(x-\min)-\mu]^2}{2\sigma^2}\right]$$

min = minimum x
μ = mean of the included normal distribution
σ = standard deviation of the included normal distribution

mean = min + exp(μ + σ²/2)
variance = exp(2μ + σ²)(exp(σ²) − 1)
mode = min + exp(μ − σ²)
The lognormal distribution is used in many different areas including the distribution
of particle size in naturally occurring aggregates, dust concentration in industrial atmospheres, the distribution of minerals present in low concentrations, the duration of sickness
absence, physicians' consulting time, lifetime distributions in reliability, distribution of
income, employee retention, and many applications modeling weight, height, and so forth
(see Johnson et al. 1994, p. 207).
The lognormal distribution can provide very peaked distributions for increasing σ, indeed far more peaked than can be easily represented in graphical form.
Normal Distribution (μ, σ)

$$f(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left[-\frac{(x-\mu)^2}{2\sigma^2}\right]$$

μ = shift parameter
σ = scale parameter = standard deviation

mean = μ
variance = σ²
mode = μ
Pearson 5 Distribution (min, α, β)

$$f(x) = \frac{\beta^{\alpha}}{\Gamma(\alpha)\,(x-\min)^{\alpha+1}} \exp\!\left[-\frac{\beta}{x-\min}\right]$$

min = minimum x
α = shape parameter > 0
β = scale parameter > 0

mean = min + β/(α − 1)  for α > 1
variance = β²/[(α − 1)²(α − 2)]  for α > 2
mode = min + β/(α + 1)
Pearson 6 Distribution (min, β, p, q)

$$f(x) = \frac{\left(\dfrac{x-\min}{\beta}\right)^{p-1}}{\beta\,B(p,q)\left[1+\dfrac{x-\min}{\beta}\right]^{p+q}}, \qquad x > \min$$

min ∈ (−∞, ∞)
β > 0
p > 0
q > 0
B(p, q) = beta function

mean = min + βp/(q − 1)  for q > 1;  does not exist for 0 < q ≤ 1
variance = β²p(p + q − 1)/[(q − 1)²(q − 2)]  for q > 2;  does not exist for 0 < q ≤ 2
mode = min + β(p − 1)/(q + 1)  for p ≥ 1;  min otherwise
The Pearson 6 distribution is a continuous distribution bounded on the low side. The
Pearson 6 distribution is sometimes called the beta distribution of the second kind due to
the relationship of a Pearson 6 random variable to a beta random variable.
Like the gamma distribution, the Pearson 6 distribution has three distinct regions. For
p = 1, the Pearson 6 distribution resembles the exponential distribution, starting at a finite
value at minimum x and decreasing monotonically thereafter. For p < 1, the Pearson 6
distribution tends to infinity at minimum x and decreases monotonically for increasing x.
For p > 1, the Pearson 6 distribution is 0 at minimum x, peaks at a value that depends on
both p and q, and decreases monotonically thereafter.
The Pearson 6 distribution appears to have found little direct use, except in its reduced
form as the F distribution, where it serves as the distribution of the ratio of independent
estimators of variance and provides the final test for the analysis of variance.
The three regions of the Pearson 6 distribution are shown here. Also note that the
distribution becomes sharply peaked just off the minimum for increasing q.
Triangular Distribution (min, max, mode)

$$f(x) = \begin{cases} \dfrac{2\,(x-\min)}{(\max-\min)(\text{mode}-\min)} & \text{if } \min \le x \le \text{mode} \\[6pt] \dfrac{2\,(\max-x)}{(\max-\min)(\max-\text{mode})} & \text{if } \text{mode} < x \le \max \end{cases}$$

min = minimum x
max = maximum x
mode = most likely x

$$\text{mean} = \frac{\min + \max + \text{mode}}{3}$$

$$\text{variance} = \frac{\min^2 + \max^2 + \text{mode}^2 - (\min)(\max) - (\min)(\text{mode}) - (\max)(\text{mode})}{18}$$
The triangular distribution is a continuous distribution bounded on both sides. The triangular distribution is often used when no or few data are available; it is rarely an accurate
representation of a data set (see Law and Kelton 1991, p. 341). However, it is employed as
the functional form of regions for fuzzy logic due to its ease of use.
The triangular distribution can take on very skewed forms, as shown here, including
negative skewness. For the exceptional cases where the mode is either the min or max, the
triangular distribution becomes a right triangle.
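As a quick arithmetic check of the formulas above, with illustrative values min = 2, mode = 5, and max = 11:

$$\text{mean} = \frac{2 + 11 + 5}{3} = 6, \qquad \text{variance} = \frac{4 + 121 + 25 - 22 - 10 - 55}{18} = \frac{63}{18} = 3.5 .$$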
Uniform Distribution (min, max)

$$f(x) = \frac{1}{\max - \min}, \qquad \min \le x \le \max$$
The uniform distribution is a continuous distribution bounded on both sides. Its density
does not depend on the value of x. It is a special case of the beta distribution. It is frequently
called the rectangular distribution (see Johnson et al. 1995, p. 276). Most random number
generators provide samples from the uniform distribution on (0,1) and then convert these
samples to random variates from other distributions.
The uniform distribution is used to represent a random variable with constant likelihood
of being in any small interval between min and max. Note that the probability of either the
min or max value is 0; the end points do not occur. If the end points are necessary, try the
sum of two opposing right triangular distributions.
Weibull Distribution (min, α, β)

$$f(x) = \frac{\alpha}{\beta}\left(\frac{x-\min}{\beta}\right)^{\alpha-1} \exp\!\left[-\left(\frac{x-\min}{\beta}\right)^{\alpha}\right]$$

min = minimum x
α = shape parameter > 0
β = scale parameter > 0

mean = min + β Γ(1/α + 1)
variance = β²[Γ(2/α + 1) − Γ²(1/α + 1)]
mode = min + β(1 − 1/α)^{1/α}  if α ≥ 1;  min  if α < 1
The Weibull distribution is a continuous distribution bounded on the lower side. Because it provides one of the limiting distributions for extreme values, it is also referred to
as the Frechet distribution and the Weibull–Gnedenko distribution. Unfortunately, the
Weibull distribution has been given various functional forms in the many engineering references; the form here is the standard form given in Johnson et al. 1994, p. 628.
Like the gamma distribution, the Weibull distribution has three distinct regions. For
α = 1, the Weibull distribution is reduced to the exponential distribution, starting at a finite
value at minimum x and decreasing monotonically thereafter. For α < 1, the Weibull distribution tends to infinity at minimum x and decreases monotonically for increasing x. For
α > 1, the Weibull distribution is 0 at minimum x, peaks at a value that depends on both α
and β, and decreases monotonically thereafter. Uniquely, the Weibull distribution has negative skewness for α > 3.6.
The Weibull distribution can also be used to approximate the normal distribution for
α = 3.6, while maintaining its strictly positive values of x [actually (x − min)], although the
kurtosis is slightly smaller than 3, the normal value.
The Weibull distribution derived its popularity from its use to model the strength of
materials, and has since been used to model just about everything. In particular, the
Weibull distribution is used to represent wear-out lifetimes in reliability, wind speed, rainfall intensity, health-related issues, germination, duration of industrial stoppages,
migratory systems, and thunderstorm data (see Johnson et al. 1994, p. 628; Shooman
1990, p. 190).
A.2 Discrete Distributions

Binomial Distribution (n, p)
As shown in the examples, low values of p give high probabilities for low values of x
and vice versa, so that the peak in the distribution may approach either bound. Note that the
probabilities are actually weights at each integer but are represented by broader bars for
visibility.
The binomial distribution can be used to describe
• The number of defective items in a batch.
• The number of people in a group of a particular type.
• Out of a group of employees, the number of employees who call in sick on a given day.
It is also useful in other event sampling tests where the probability of the event is known to
be constant or nearly so. See Johnson et al. (1992, p. 134).
Discrete Uniform Distribution (min, max)

$$p(x) = \frac{1}{\max - \min + 1}, \qquad x = \min, \min+1, \ldots, \max$$

The discrete uniform distribution is a discrete distribution bounded on [min, max] with
constant probability at every value on or between the bounds. Sometimes called the discrete rectangular distribution, it arises when an event can have a finite and equally probable number of outcomes (see Johnson et al. 1992, p. 272). Note that the probabilities are
actually weights at each integer but are represented by broader bars for visibility.
Poisson Distribution (λ)

$$p(x) = \frac{e^{-\lambda}\,\lambda^x}{x!}, \qquad x = 0, 1, 2, \ldots$$

λ = rate of occurrence

mean = λ
variance = λ
mode = λ − 1 and λ  if λ is an integer;  ⌊λ⌋  otherwise
The Poisson distribution is a discrete distribution bounded at 0 on the low side and unbounded on the high side. The Poisson distribution is a limiting form of the hypergeometric
distribution.
HarrellGhoshBowden:
Simulation Using
ProModel, Second Edition
Appendixes
Appendix A
The McGrawHill
Companies, 2004
723
The Poisson distribution finds frequent use because it represents the infrequent occurrence of events whose rate is constant. This includes many types of events in time and
space such as arrivals of telephone calls, defects in semiconductor manufacturing, defects
in all aspects of quality control, molecular distributions, stellar distributions, geographical
distributions of plants, shot noise, and so on. It is an important starting point in queuing
theory and reliability theory (see Johnson et al. 1992, p. 151). Note that the time between
arrivals (defects) is exponentially distributed, which makes this distribution a particularly
convenient starting point even when the process is more complex.
The Poisson distribution peaks near λ and falls off rapidly on either side. Note that the
probabilities are actually weights at each integer but are represented by broader bars for
visibility.
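The link to the exponential distribution follows in one line: if occurrences arrive at rate λ per unit time, the count in an interval of length t is Poisson with mean λt, so the waiting time T to the next occurrence satisfies

$$P(T > t) = p(0) = e^{-\lambda t},$$

which is exactly the exponential survival function.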
References
Banks, Jerry, and John S. Carson II. Discrete-Event System Simulation. Englewood Cliffs,
NJ: Prentice Hall, 1984.
Johnson, Norman L.; Samuel Kotz; and N. Balakrishnan. Continuous Univariate Distributions. Vol. 1. New York: John Wiley & Sons, 1994.
Johnson, Norman L.; Samuel Kotz; and N. Balakrishnan. Continuous Univariate Distributions. Vol. 2. New York: John Wiley & Sons, 1995.
Johnson, Norman L.; Samuel Kotz; and Adrienne W. Kemp. Univariate Discrete Distributions. New York: John Wiley & Sons, 1992.
Law, Averill M., and W. David Kelton. Simulation Modeling and Analysis. New York:
McGraw-Hill, 1991.
Shooman, Martin L. Probabilistic Reliability: An Engineering Approach. Melbourne,
Florida: Robert E. Krieger, 1990.
APPENDIX B
CRITICAL VALUES FOR STUDENT'S t DISTRIBUTION AND STANDARD NORMAL DISTRIBUTION

(Critical values for the standard normal distribution ($z_\alpha$) appear in the last row, with $df = \infty$; $z_\alpha = t_{\infty,\alpha}$.)

$t_{df,\alpha}$
df      α = 0.40   0.30    0.20    0.10    0.05    0.025    0.01     0.005
1       0.325      0.727   1.376   3.078   6.314   12.706   31.821   63.657
2       0.289      0.617   1.061   1.886   2.920   4.303    6.965    9.925
3       0.277      0.584   0.978   1.638   2.353   3.182    4.541    5.841
4       0.271      0.569   0.941   1.533   2.132   2.776    3.747    4.604
5       0.267      0.559   0.920   1.476   2.015   2.571    3.365    4.032
6       0.265      0.553   0.906   1.440   1.943   2.447    3.143    3.707
7       0.263      0.549   0.896   1.415   1.895   2.365    2.998    3.499
8       0.262      0.546   0.889   1.397   1.860   2.306    2.896    3.355
9       0.261      0.543   0.883   1.383   1.833   2.262    2.821    3.250
10      0.260      0.542   0.879   1.372   1.812   2.228    2.764    3.169
11      0.260      0.540   0.876   1.363   1.796   2.201    2.718    3.106
12      0.259      0.539   0.873   1.356   1.782   2.179    2.681    3.055
13      0.259      0.538   0.870   1.350   1.771   2.160    2.650    3.012
14      0.258      0.537   0.868   1.345   1.761   2.145    2.624    2.977
15      0.258      0.536   0.866   1.341   1.753   2.131    2.602    2.947
16      0.258      0.535   0.865   1.337   1.746   2.120    2.583    2.921
17      0.257      0.534   0.863   1.333   1.740   2.110    2.567    2.898
18      0.257      0.534   0.862   1.330   1.734   2.101    2.552    2.878
19      0.257      0.533   0.861   1.328   1.729   2.093    2.539    2.861
20      0.257      0.533   0.860   1.325   1.725   2.086    2.528    2.845
21      0.257      0.532   0.859   1.323   1.721   2.080    2.518    2.831
22      0.256      0.532   0.858   1.321   1.717   2.074    2.508    2.819
23      0.256      0.532   0.858   1.319   1.714   2.069    2.500    2.807
24      0.256      0.531   0.857   1.318   1.711   2.064    2.492    2.797
25      0.256      0.531   0.856   1.316   1.708   2.060    2.485    2.787
26      0.256      0.531   0.856   1.315   1.706   2.056    2.479    2.779
27      0.256      0.531   0.855   1.314   1.703   2.052    2.473    2.771
28      0.256      0.530   0.855   1.313   1.701   2.048    2.467    2.763
29      0.256      0.530   0.854   1.311   1.699   2.045    2.462    2.756
30      0.256      0.530   0.854   1.310   1.697   2.042    2.457    2.750
40      0.255      0.529   0.851   1.303   1.684   2.021    2.423    2.704
60      0.254      0.527   0.848   1.296   1.671   2.000    2.390    2.660
120     0.254      0.526   0.845   1.289   1.658   1.980    2.358    2.617
∞ (z)   0.253      0.524   0.842   1.282   1.645   1.960    2.326    2.576
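As a usage example (values read from the table above), a 95 percent confidence interval on a mean estimated from n = 11 replications uses $t_{10,\,0.025} = 2.228$:

$$\bar{x} \pm t_{10,\,0.025}\,\frac{s}{\sqrt{11}} = \bar{x} \pm 2.228\,\frac{s}{\sqrt{11}} .$$

For large samples the last row applies; for example, $z_{0.05} = 1.645$.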
APPENDIX C
CRITICAL VALUES FOR THE F DISTRIBUTION

$F_{df_1,\,df_2,\,\alpha}$ for $\alpha = 0.05$, with numerator degrees of freedom $df_1$ = 1, 2, 3, . . . , 10, 12, 14, 16, 20, 24, 30, 40, 50, 100, 200 (columns) and denominator degrees of freedom $df_2$ = 3, 4, 5, . . . , 20, 22, 24, 26, 28, 30, 35, 40, 45, 50, 60, 70, 80, 100, 200 (rows). For example, $F_{1,3,0.05} = 10.13$ and $F_{1,4,0.05} = 7.71$.
APPENDIX D
CRITICAL VALUES FOR CHI-SQUARE DISTRIBUTION

$\chi^2_{df,\alpha}$
df    α = 0.4   α = 0.3   α = 0.2   α = 0.1   α = 0.05   α = 0.025   α = 0.01   α = 0.005
1     0.708     1.074     1.642     2.706     3.841      5.024       6.635      7.879
2     1.833     2.408     3.219     4.605     5.991      7.378       9.210      10.597
3     2.946     3.665     4.642     6.251     7.815      9.348       11.345     12.838
4     4.045     4.878     5.989     7.779     9.488      11.143      13.277     14.860
5     5.132     6.064     7.289     9.236     11.070     12.832      15.086     16.750
6     6.211     7.231     8.558     10.645    12.592     14.449      16.812     18.548
7     7.283     8.383     9.803     12.017    14.067     16.013      18.475     20.278
8     8.351     9.524     11.030    13.362    15.507     17.535      20.090     21.955
9     9.414     10.656    12.242    14.684    16.919     19.023      21.666     23.589
10    10.473    11.781    13.442    15.987    18.307     20.483      23.209     25.188
11    11.530    12.899    14.631    17.275    19.675     21.920      24.725     26.757
12    12.584    14.011    15.812    18.549    21.026     23.337      26.217     28.300
13    13.636    15.119    16.985    19.812    22.362     24.736      27.688     29.819
14    14.685    16.222    18.151    21.064    23.685     26.119      29.141     31.319
15    15.733    17.322    19.311    22.307    24.996     27.488      30.578     32.801
16    16.780    18.418    20.465    23.542    26.296     28.845      32.000     34.267
17    17.824    19.511    21.615    24.769    27.587     30.191      33.409     35.718
18    18.868    20.601    22.760    25.989    28.869     31.526      34.805     37.156
19    19.910    21.689    23.900    27.204    30.144     32.852      36.191     38.582
20    20.951    22.775    25.038    28.412    31.410     34.170      37.566     39.997
21    21.992    23.858    26.171    29.615    32.671     35.479      38.932     41.401
22    23.031    24.939    27.301    30.813    33.924     36.781      40.289     42.796
23    24.069    26.018    28.429    32.007    35.172     38.076      41.638     44.181
24    25.106    27.096    29.553    33.196    36.415     39.364      42.980     45.558
25    26.143    28.172    30.675    34.382    37.652     40.646      44.314     46.928
26    27.179    29.246    31.795    35.563    38.885     41.923      45.642     48.290
27    28.214    30.319    32.912    36.741    40.113     43.195      46.963     49.645
28    29.249    31.391    34.027    37.916    41.337     44.461      48.278     50.994
29    30.283    32.461    35.139    39.087    42.557     45.722      49.588     52.335
30    31.316    33.530    36.250    40.256    43.773     46.979      50.892     53.672
40    41.622    44.165    47.269    51.805    55.758     59.342      63.691     66.766
60    62.135    65.226    68.972    74.397    79.082     83.298      88.379     91.952
120   123.289   127.616   132.806   140.233   146.567    152.211     158.950    163.648
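Reading the table: a chi-square goodness-of-fit test with df = 5 at the α = 0.05 level, for example, rejects the hypothesized distribution when the computed statistic exceeds $\chi^2_{5,\,0.05} = 11.070$.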
PART III
CASE STUDY ASSIGNMENTS
Case 1
Case 2
Case 3
Case 4
Case 5
Case 6
Case 7
Case 8
These case studies have been used in senior- or graduate-level simulation classes. Each of
these case studies can be analyzed over a three- to five-week period. A single student or a
group of two to three students can work together on these case studies. If you are using the
student version of the software, you may need to make some simplifying assumptions to
limit the size of the model. You will also need to fill in (research or assume) some of the information and data missing from the case descriptions.
CASE STUDY 1
TOY AIRPLANE MANUFACTURING
A toy company produces three types (A, B, and C) of toy aluminum airplanes in the following daily volumes: A, 1000; B, 1500; and C, 1800. The company expects demand
to increase for its products by 30 percent over the next six months and needs to know the
total machines and operators that will be required. All planes go through five operations
(10 through 50) except for plane A, which skips operation 40. Following is a list of operation times, move times, and resources used:
Opn   Description
10    Die casting
20    Cutting
30    Grinding
40    Coating (resource: Coater)
50    Inspection and packaging (resource: Packager); parts exit with 88% yield

Moves between operations take .2 to .3 minutes (or none) and use the Mover movement resource.
After die casting, planes are moved to each operation in batch sizes of 24. Input buffers
exist at each operation. The factory operates eight hours a day, five days per week. The factory starts out empty at the beginning of each day and ships all parts produced at the end of
the day. The die caster experiences downtimes every 30 minutes exponentially distributed
and takes 8 minutes normally distributed with a standard deviation of 2 minutes to repair.
One maintenance person is always on duty to make repairs.
Find the total number of machines and personnel needed to meet daily production requirements. Document the assumptions and experimental procedure you went through to
conduct the study.
CASE STUDY 2
MI CAZUELA: MEXICAN RESTAURANT
Maria opened her authentic Mexican restaurant Mi Cazuela (a cazuela is a clay cooking
bowl with a small handle on each side) in Pasadena, California, in the 1980s. It quickly became popular for the tasty food and use of fresh organic produce and all-natural meats. As
her oldest child, you have been asked to run the restaurant. If you are able to gain her confidence, she will eventually hand over the restaurant to you.
You have definite ideas about increasing the profitability at Mi Cazuela. Lately, you
have observed a troubling trend in the restaurant. An increasing number of customers are
expressing dissatisfaction with the long wait, and you have also observed that some people
leave without being served.
Your initial analysis of the situation at Mi Cazuela indicates that one way to improve customer service is to reduce the waiting time in the restaurant. You also realize that by optimizing the process for the peak time in the restaurant, you will be able to increase the profit.
Customers arrive in groups that vary in size from one to four (uniformly distributed). Currently, there are four tables for four and three tables for two patrons in the dining area. One table
for four can be replaced with two tables for two, or vice versa. Groups of one or two customers
wait in one queue while groups of three or four customers wait in another queue. Each of these
waiting lines can accommodate up to two groups only. One- or two-customer groups are
directed to tables for two. Three- or four-customer groups are directed to tables for four.
There are two cooks in the kitchen and two waiters. The cooks are paid $100/day, and the
waiters get $60/day. The cost of raw material (vegetables, meat, spices, and other food material) is $1 per customer. The overhead cost of the restaurant (rent, insurance, utilities, and so
on) is $300/day. The bill for each customer varies uniformly from $10 to $16 or U(13,3).
The restaurant remains open seven days a week from 5 P.M. till 11 P.M. The customer arrival pattern is as follows. The total number of customer groups visiting the restaurant each
day varies uniformly between 30 and 50 or U(40,10):
From        To          Percent
5 P.M.      6 P.M.      10
6 P.M.      7 P.M.      20
7 P.M.      9 P.M.      55
9 P.M.      10 P.M.     10
10 P.M.     11 P.M.     5
The service process at the restaurant consists of eight numbered activities (activities #2 and #3, referenced in Part C below, are order entry and transmission to the kitchen).
Part A
Analyze and answer the following questions:
1. What is the range of profit (develop a 3σ confidence interval) per day at Mi Cazuela?
2. On average, how many customers leave the restaurant (per day) without eating?
3. What is the range of time (develop a 3σ confidence interval) a customer group
spends at the restaurant?
4. How much time (develop a 3σ confidence interval) does a customer group wait in
line?
Part B
You would like to change the mix of four-seat tables and two-seat tables in the dining area
to increase profit and reduce the number of balking customers. You would also like to investigate if hiring additional waiters and/or cooks will improve the bottom line (profit).
Part C
You are thinking of using an automated handheld device for the waiters to take the customer orders and transmit the information (wireless) to the kitchen. The order entry and
transmission (activities #2 and 3) is estimated to take N(1.5, 0.2) minutes. The rent for each
of these devices is $2/hour. Will using these devices improve profit? Reduce customer time
in the system? Should you invest in these handheld devices?
Part D
The area surrounding the mall is going through a construction boom. It is expected that Mi
Cazuela (and the mall) will soon see an increase in the number of patrons per day. Soon the
number of customer groups visiting the restaurant is expected to grow to 50 to 70 per day, or
U(60,10). You have been debating whether to take over the adjoining coffee shop and expand the Mi Cazuela restaurant. The additional area will allow you to add four more tables
of four and three tables of two customers each. The overhead cost of the additional area
will be $200 per day. Should you expand your restaurant? Will it increase profit?
How is your performance in managing Mi Cazuela? Do you think Mama Maria will be
proud and hand over the reins of the business to you?
CASE STUDY 3
JAI HIND CYCLES INC. PLANS NEW PRODUCTION FACILITY
Mr. Singh is the industrial engineering manager at Jai Hind Cycles, a producer of bicycles. As
part of the growth plan for the company, the management is planning to introduce a new
model of mountain bike strictly for the export market. Presently, JHC assembles regular bikes
for the domestic market. The company runs one shift every day. The present facility has a
process layout. Mr. Singh is considering replacing the existing layout with a group technology
cell layout. As JHC's IE manager, Mr. Singh has been asked to report on the impact that will
be made by the addition of the mountain bike to JHC's current production capabilities.
Mr. Singh has collected the following data from the existing plant:
1. The present production rate is 200 regular bikes per day in one 480-minute
shift.
2. The following is the list of all the existing equipment in JHC's production
facility:
Equipment Type   Process Time                                   Quantity
Forging          60 sec/large sprocket; 30 sec/small sprocket   2
Molding          2 parts/90 sec                                 2
Welding          1 weld/60 sec                                  8
Tube bender      1 bend/30 sec                                  2
Die casting      1 part/minute                                  1
Drill press      20 sec/part                                    1
Punch press      30 sec/part                                    1
Electric saw     1 cut/15 sec                                   2
Assembly         30 to 60 minutes
Table 1 shows a detailed bill of materials of all the parts manufactured by JHC and the
machining requirements for both models of bikes. Only parts of the regular and the mountain bikes that appear in this table are manufactured within the plant. The rest of the parts
either are purchased from the market or are subcontracted to the vendors.
A job-shop floor plan of the existing facility is shown in Figure 1. The whole facility is
500,000 square feet in covered area.
The figures for the last five years of the combined total market demand are as follows:

Year   Demand
1998   75,000
1999   82,000
2000   80,000
2001   77,000
2002   79,000
At present, the shortages are met by importing the balance of the demand. However, this is
a costly option, and management thinks indigenously manufactured bikes of good quality
would be in great demand.
Tasks
1. Design a cellular layout for the manufacturing facility, incorporating group
technology principles.
2. Determine the amount of resources needed to satisfy the increased demand.
3. Suggest a possible material handling system for the new facility: conveyor(s),
forklift truck(s), AGV(s).
Table 1 Bill of materials and machining requirements

1 Regular bike
1.1 Bike frame
1.1.1 Top tube
1.1.2 Seat tube
1.1.3 Down tube
1.1.4 Head tube
1.1.5 Fork blade
1.1.6 Chainstay
1.1.7 Seatstay
1.1.8 Rear fork tip
1.1.9 Front fork tip
1.1.10 Top tube lug
1.1.11 Down tube lug
1.1.12 Seat lug
1.1.13 Bottom bracket
1.2 Handlebar and stem assembly
1.2.1 Handlebars
1.2.2 Handlebar plugs
1.2.3 Handlebar stem
1.3.1 Saddle
1.3.2 Seat post
1.4.1 Crank spider
1.4.2 Large sprocket
1.4.3 Small sprocket
2 Mountain bike
2.1.1 Hub
2.1.2 Frame legs
2.1.3 Handlebar tube
2.1.4 Saddle post tube
2.1.5 Handlebar
2.1.6 Balance bar
2.2.1 Handlebar post
2.2.2 Saddle post
2.2.3 Mount brackets
2.2.4 Axle mount
2.2.5 Chain guard

Each part is produced through one or more of the operations in the table: assembly, cutting, molding, casting, welding, bending, forging, drill press, and punch press.
FIGURE 1 Floor plan for Jai Hind Cycles (departments: cutting, molding, bending, casting, welding, final assembly, offices, warehouse and shipping).
CASE STUDY 4
THE FSB COIN SYSTEM
George A. Johnson
Idaho State University
Todd Cooper
First Security Bank
Todd had a problem. First Security Bank had developed a consumer lending software
package to increase the capacity and speed with which auto loan applications could be
processed. The system consisted of faxed applications combined with online processing.
The goal had been to provide a 30-minute turnaround of an application from the time the
bank received the faxed application from the dealer to the time the loan was either
approved or disapproved. The system had recently been installed and the results had not
been satisfactory. The question now was what to do next.
First Security Bank of Idaho is the second largest bank in the state of Idaho with
branches throughout the state. The bank is a full-service bank providing a broad range of
banking services. Consumer loans and, in particular, auto loans make up an important part
of these services. The bank is part of a larger system covering most of the intermountain
states, and its headquarters are in Salt Lake City.
The auto loan business is a highly competitive field with a number of players including full-line banks, credit unions, and consumer finance companies. Because of the highly
competitive nature, interest rates tend to be similar and competition is based on other
factors. An important factor for the dealer is the time it takes to obtain loan approval. The
quicker the loan approval, the quicker a sale can be closed and merchandise moved. A
30-minute turnaround of loan applications would be an important factor to a dealer, who
has a significant impact on the consumer's decision on where to seek a loan.
The loan application process begins at the automobile dealership. It is there that an
application is completed for the purpose of borrowing money to purchase a car. The
application is then sent to the bank via a fax machine. Most fax transmissions are less
than two minutes in length, and there is a bank of eight receiving fax machines. All
machines are tied to the same 800 number. The plan is that eight machines should provide sufficient capacity that there should never be the problem of a busy signal received
by the sending machine.
Once the fax transmission is complete, the application is taken from the machine by a
runner and distributed to one of eight data entry clerks. The goal is that data entry should
take no longer than six minutes. The goal was also set that there should be no greater than
5 percent errors.
Once the data input is complete, the input clerk assigns the application to one of six
regions around the state. Each region has a group of specific dealers determined by geographic distribution. The application, now electronic in form, is distributed to the regions
via the wide area network. The loan officer in the respective region will then process the
loan, make a decision, and fax that decision back to the dealer. The goal is that the loan
officer should complete this function within 20 minutes. This allows about another two
minutes to fax the application back to the dealer.
The system has been operating approximately six months and has failed to meet the
goal of 30 minutes. In addition, the error rate is running approximately 10 percent.
Summary data are provided here:
Region   Applications   Average Time   Number of Loan Officers
1        6150           58.76          6
2        1485           37.22          2
3        2655           37.00          4
4        1680           51.07          2
5        1440           37.00          2
6        1590           37.01          3
Information on data input indicates that this part of the process is taking almost twice as
long as originally planned. The time from when the runner delivers the document to when
it is entered is currently averaging 9.5 minutes. Also, it has been found that the time to
process an error averages six minutes. Errors are corrected at the region and add to the region's processing time.
Todd needed to come up with some recommendations on how to solve the problem.
Staffing seemed to be an issue in some regions, and the performance of the data input clerks
was below expectations. The higher processing times and error rates needed to be corrected. He thought that if he solved these two problems and increased the staff, he could get
the averages in all regions down to 30 minutes.
CASE STUDY 5
AUTOMATED WAREHOUSING AT ATHLETIC SHOE COMPANY
The centralized storage and distribution operation at Athletic Shoe Company (ASC) is considering replacement of its conventional manual storage racking systems with an elaborate
automated storage and retrieval system (AS/RS). The objective of this case study is to
come up with the preliminary design of the storage and material handling systems for ASC
that will meet the needs of the company in timely distribution of its products.
On average, between 100,000 and 150,000 pairs of shoes are shipped per day to between 8000 and 10,000 shipping destinations. In order to support this level of operations,
it is estimated that rack storage space of up to 3,000,000 pairs of shoes, consisting of
30,000 stock-keeping units (SKUs), is required.
The area available for storage, as shown in Figure 1, is 500,000 square feet. The height
of the ceiling is 40 feet. A first-in, first-out (FIFO) inventory policy is adopted in the warehouse.
FIGURE 1 Layout of the Athletic Shoe Company warehouse (receiving; unpack and scan; store; sort, wrap, and pack; shipping).
Tasks
1. Construct a simulation model of the warehouse and perform experiments using
the model to judge the effectiveness and efficiency of the design with respect to
parameters such as flows, capacity, operation, interfacing, and so on.
2. Write a detailed specification of the storage plan: the amount of rack storage space
included in the design (capacity), rack types, dimensions, rack configurations, and
aisles within the layout.
3. Design and specify the material handling equipment for all of the functions listed,
including the interfaces required to change handling methods between functions.
4. Design and specify the AS/R system. Compare a dedicated versus a shared picker
system.
CASE STUDY 6
CONCENTRATE LINE AT FLORIDA CITRUS COMPANY
Wai Seto, Suhandi Samsudin, Shi Lau, and Samson Chen
California State Polytechnic University, Pomona
Depalletizer: Tri-Can
Filler: Pfaudler
Seamer: Angelus
Palletizer: Currie
Packer: Diablo
                Rated Speed     Operating Speed
Filler          1750 cases/hr   600 cans/min
Seamer          1500 cases/hr   600 cans/min
Packer          1800 cases/hr   28 cases/min
Palletizer      1800 cases/hr   28 cases/min
Depalletizer    1800 cases/hr   600 cans/min
Bundler         1500 cases/hr   550 cans/min
The concentrate line stations and the flow of production are shown in Figure 1. The
concentrate line starts from the receiving area. Full pallet loads of 3600 empty cans in
10 layers arrive at the receiving area. The arrival conveyor transports these pallets to the
depalletizer (1). The cans are loaded onto the depalletizer, which is operated by Don.
FIGURE 1 Concentrate line stations for Florida Citrus Company: (1) depalletizer, (2) Pfaudler bowl, (3) Packmaster, (4) palletizer, (5) the pallet, (6) the concentrate cans, (7) the box organizer, (8) the box; operators 1, 2, and 3 are stationed along the line.
The depalletizer pushes out one layer of 360 cans at a time from the pallet and then
raises up one layer of empty cans onto the depalletizer conveyor belt (1a). Conveyor 1a
transports the layer of cans to the depalletizer dispenser. The dispenser separates each can
from the layer of cans. Individual empty cans travel on the empty can conveyor to the
Pfaudler bowl.
The Pfaudler bowl is a big circular container that stores the concentrate. Its 36 filling
devices are used to fill the cans with concentrate. Pamela operates the Pfaudler bowl.
Empty cans travel on the filler bowl conveyor (2b) and are filled with the appropriate juice
concentrate. Filled cans are sent to the lid stamping mechanism (2a) on the filler bowl conveyor. The lid stamping closes the filled cans. As the closed cans come through the lid
stamping mechanism, they are transported by the prewash conveyor to the washing machine to be flushed with water to wash away any leftover concentrate on the can. Four
closed cans are combined as a group. The group of cans is then transported by the accumulate conveyor to the accumulator.
The accumulator combines six such groups (24 cans in all). The accumulated group
of 24 cans is then transported by the prepack conveyor to the Packmaster (3), operated
by Pat.

FIGURE 2
Process flow for Florida Citrus Company: arrival conveyor → depalletizer (the dispenser turns each 360-can layer into individual empty cans, 3600 per pallet) → empty-can conveyor → filler bowl conveyor (filling and lid stamping) → prewash conveyor (concentrate cans; four combined as a group) → accumulate conveyor → prepack conveyor (24-can accumulations) → Packmaster → packmaster conveyor (full boxes) → palletizer conveyor → palletizer → exit conveyor (full box pallets) → loading zone → travel by forklift 2 → exit.

Pat loads cardboard boxes onto the cardboard feeding machine (3b) next to the
Packmaster. Then the 24 cans are wrapped and packed into each cardboard box.
The glue mechanism inside the Packmaster glues all six sides of the box. The boxes are
then raised up to the palletizer conveyor (3c), which transports the boxes to the
palletizer (4).
The box organizer (7) mechanism loads three boxes at a time onto the pallet. A total of
90 boxes are loaded onto each pallet (10 levels, 9 boxes per level). The palletizer then lowers the pallet onto the exit conveyor (4a) to be transported to the loading zone. From the
loading zone a forklift truck carries the pallets to the shipping dock. Figure 2 describes the
process flow.
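The grouping logic just described (4 cans to a group, six groups to a 24-can box, 90 boxes to a pallet) is the heart of any model of this line. Below is a minimal SimPy sketch of that logic (illustrative, not the authors' ProModel model), using the observed operating speeds and treating all other delays as zero.

import simpy

def source(env, out, rate):
    # Release one item every 1/rate minutes.
    while True:
        yield env.timeout(1.0 / rate)
        yield out.put(object())

def batcher(env, inp, out, size, cycle=0.0):
    # Collect `size` items, spend `cycle` minutes, emit one batch.
    while True:
        for _ in range(size):
            yield inp.get()
        if cycle:
            yield env.timeout(cycle)
        yield out.put(object())

env = simpy.Environment()
cans, groups, boxes, pallets = (simpy.Store(env) for _ in range(4))
env.process(source(env, cans, 600))                # filler: 600 cans/min
env.process(batcher(env, cans, groups, 4))         # 4 cans per group
env.process(batcher(env, groups, boxes, 6, 1/28))  # 24-can box, Packmaster cycle
env.process(batcher(env, boxes, pallets, 90))      # 90 boxes per pallet
env.run(until=60)                                  # one simulated hour (minutes)
print(len(pallets.items), "pallets in one hour")   # about 16 at these rates

At these rates the can supply (600/min, or 25 boxes per minute) binds before the Packmaster's 28 boxes per minute, which is why downtime, not speed, dominates the line's real output.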
A study conducted by a group of Cal Poly students revealed that most downtime occurs at the Packmaster. The Packmaster is supposed to pack a group of cans into a cardboard box; if the cardboard is warped, however, the mechanism stops the operation. Another problem with the Packmaster is its glue operation: the glue heads sometimes become clogged.
All these machines operate in an automatic manner. However, there are frequent machine stoppages caused by the following factors: change of flavor, poor maintenance, lack of communication between workers, lack of attention by the workers, inefficient layout of the concentrate line, and bad machine design.
All the stations are arranged in the sequence of the manufacturing process. As such, the production line cannot operate in a flexible or parallel manner. Also, the machines depend on product being fed from upstream processes. An upstream machine stoppage will eventually cause downstream machine stoppages.
Work Measurement
A detailed production study was conducted that brought out the following facts:
Packmaster

Juice Flavor                      Running (%)   Down (%)
Albertson's Pink Lemonade         68.75         31.25
Albertson's Pink Lemonade         77.84         22.16
Best Yet Orange Juice             71.73         28.27
Crisp Lemonade                    65.75         34.25
Flav-R-Pac Lemonade               76.35         23.65
Fry's Lemonade                    78.76         21.24
Hy-Top Pink Lemonade              68.83         31.17
IGA Grape Juice                   83.04         16.96
Ladylee Grape Juice               93.32          6.68
Rosauer's Orange Juice            51.40         48.60
Rosauer's Pink Lemonade           61.59         38.41
Smith's Kiwi Raspberry            75.16         24.84
Smith's Kiwi Strawberry           85.05         14.95
Stater Bros. Lemonade             21.62         78.38
Stater Bros. Pink Lemonade        86.21         13.79
Western Family Pink Lemonade      64.07         35.93

(The two percentages for each flavor sum to 100; they are read here as percent of observed time running and percent of time down.)
The production study also showed the label change time on the Packmaster. Seven changeovers were recorded, taking 824, 189, 177, 41, 641, 66, and 160 seconds. (The flavor-from/flavor-to pairs for these changeovers are not recoverable from the source.)
The Packmaster was observed for a total of 45,983 sec. Out of this time, the Packmaster was working for a total of 24,027 sec, down for 13,108 sec, and being set up for change of flavor for 8848 sec. The average flavor change time for the Packmaster is 19.24 percent of the total observed time. The number of cases produced during this observed time was 11,590. The production rate is calculated to be (11,590/45,983) x 3600, or about 907 cases per hour.
It was also observed that the Packmaster was down because of flipped cans (8.6 percent), sensor failure (43.9 percent), and miscellaneous other reasons (47.5 percent).
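These production-study figures can be checked directly; a short Python snippet reproducing the arithmetic from the values quoted above (and in the Pfaudler bowl paragraph below):

# Observed working, down, and setup seconds from the study.
pm_work, pm_down, pm_setup = 24_027, 13_108, 8_848   # Packmaster
pb_work, pb_down, pb_setup = 27_258, 10_278, 8_848   # Pfaudler bowl
cases = 11_590

for name, (w, d, s) in [("Packmaster", (pm_work, pm_down, pm_setup)),
                        ("Pfaudler bowl", (pb_work, pb_down, pb_setup))]:
    total = w + d + s
    print(f"{name}: observed {total} sec, "
          f"setup {s / total:.2%}, "
          f"rate {cases / total * 3600:.0f} cases/hr")
# Packmaster: 45,983 sec observed, setup 19.24%, about 907 cases/hr
# Pfaudler bowl: 46,384 sec observed, setup 19.08%, about 900 cases/hr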
The following information on the conveyors was obtained:

Name of Conveyor          Length (ft)   Speed (ft/min)
Arrival conveyor          28.75         12.6
Depalletizer conveyor     120           130
Empty-cans conveyor       10            126
Filler bowl conveyor      23.6          255
Prewash conveyor          38            48
Accumulate conveyor       12            35
Prepack conveyor          54.4          76
Palletizer conveyor       (lost)        (lost)
Exit conveyor             (lost)        (lost)

(Only seven length and seven speed values survived extraction; they are paired with the first seven conveyors in the order listed. The palletizer and exit conveyor values were lost.)
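For modeling purposes, each conveyor's nominal travel time is its length divided by its speed. A quick calculation over the seven rows whose values survived, with the pairing assumed as in the table above:

conveyors = {            # name: (length ft, speed ft/min)
    "Arrival":      (28.75, 12.6),
    "Depalletizer": (120, 130),
    "Empty-cans":   (10, 126),
    "Filler bowl":  (23.6, 255),
    "Prewash":      (38, 48),
    "Accumulate":   (12, 35),
    "Prepack":      (54.4, 76),
}
for name, (length, speed) in conveyors.items():
    print(f"{name}: {length / speed * 60:.1f} sec")   # travel time per can/group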
The Pfaudler bowl was observed for a total of 46,384 sec. Out of this time, the bowl was working for 27,258 sec, down for 10,278 sec, and being set up for change of flavor for 8848 sec. The average flavor change time for the Pfaudler bowl is 19.08 percent of the total observed time. The number of cases produced in this observed time was 11,590. The production rate is calculated to be (11,590/46,384) x 3600, or about 900 cases per hour.
Pfaudler Bowl

Fruit Juice Flavor                 Running (%)   Down (%)
Flav-R-Pac Lemonade                79.62         20.38
Flavorite Lemonade                 69.07         30.93
Fry's Lemonade                     80.54         19.46
Hy-Top Pink Lemonade               81.85         18.15
IGA Grape Juice                    89.93         10.07
IGA Pink Lemonade                  45.54         54.46
Ladylee Grape Juice                94.36          5.64
Ladylee Lemonade                   91.86          8.14
Rosauer's Orange Juice             64.20         35.80
Rosauer's Pink Lemonade           100.00          0.00
Smith's Kiwi Raspberry             92.71          7.29
Smith's Kiwi Strawberry            96.49          3.51
Special Value Wild Berry Punch     80.09         19.91
Stater Bros. Lemonade              26.36         73.64
Stater Bros. Pink Lemonade         90.18          9.82
Western Family Pink Lemonade       66.30         33.70

(Columns are read as in the Packmaster table. Five additional rows at the top of this table, with running percentages 74.81, 88.20, 68.91, 86.08, and 53.21, lost their flavor names and down percentages at a page break.)
The flavor change time was observed as given in the following table:

Flavor From                        Change Time (sec)
Albertson's Lemonade               537
Albertson's Lemonade               702
Albertson's Limeade                992
Albertson's Pink Lemonade          400
Albertson's Pink Lemonade          69
Albertson's Wild Berry Punch       1292
Best Yet Grape Juice               627
Flav-R-Pac Lemonade                303
Flav-R-Pac Orange Juice            42
Flavorite Lemonade                 41
Fry's Lemonade                     183
Furr's Orange Juice                684
Hy-Top Grape Juice                 155
Hy-Top Pink Lemonade               49
IGA Grape Juice                    67
IGA Pink Lemonade                  0
Ladylee Grape Juice                100
Ladylee Lemonade                   49
Ladylee Pink Lemonade              0
Rosauer's Orange Juice             0
Rosauer's Pink Lemonade            98
Smith's Apple Melon                382
Smith's Kiwi Raspberry             580
Smith's Kiwi Strawberry            53
Special Value Wild Berry Punch     62
Stater Bros. Lemonade              50
Western Family Pink Lemonade       1153

(The "flavor to" column did not survive extraction; the change times are paired with the "flavor from" column in the order listed.)
Tasks
1. Build simulation models and figure out the production capacity of the concentrate line at FCC (without considering any downtime).
2. What would be the capacity after considering the historical downtimes in the line?
3. What are the bottleneck operations in the whole process? (A first-cut analytic screening is sketched after this list.)
4. How can we reduce the level of inventory in the concentrate line? What would be the magnitude of reduction in the levels of inventory?
5. If we address the bottleneck operations as found in task 3, what would be the increase in capacity levels?
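Before simulating, task 3 can be screened analytically: with no downtime, line capacity is bounded by the slowest station. A short Python calculation using the operating speeds as reconstructed in the speed table above (24 cans per case, per the process description):

CANS_PER_CASE = 24
operating = {                 # station: (rate, unit), from the speed table
    "Filler":       (600, "cans/min"),
    "Seamer":       (600, "cans/min"),
    "Packer":       (28,  "cases/min"),
    "Palletizer":   (28,  "cases/min"),
    "Depalletizer": (600, "cans/min"),
    "Bundler":      (550, "cans/min"),
}

def cases_per_hr(rate, unit):
    return rate * 60 / (CANS_PER_CASE if unit == "cans/min" else 1)

rates = {s: cases_per_hr(*ru) for s, ru in operating.items()}
bottleneck = min(rates, key=rates.get)
print(rates)
print("bottleneck:", bottleneck, f"at {rates[bottleneck]:.0f} cases/hr")
# -> Bundler at 1375 cases/hr. The observed output (~900 cases/hr) is far
#    below this bound, which is what the downtime and flavor-change data explain.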
CASE STUDY 7
BALANCING THE PRODUCTION LINE AT SOUTHERN CALIFORNIA
DOOR COMPANY
Suryadi Santoso
California State Polytechnic University, Pomona
Southern California Door Company produces solid wooden doors of various designs for new and existing homes. A layout of the production facility is shown in Figure 1. The current production facility is not balanced well. This leads to frequent congestion and stockouts on the production floor. The overall inventory (both raw material and work in process) is also fairly high. Mr. Santoso, the industrial engineering manager for the company, has been asked by management to smooth out the flow of production as well as reduce the levels of inventory. The company is also expecting a growth in the volume of sales. The production manager is asking Mr. Santoso to find the staffing level and equipment resources needed for the current level of sales as well as 10, 25, 50, and 100 percent growth in sales volume.

A preliminary process flow study by Mr. Santoso reveals the production flow shown in Figure 2.
Process Flow
Raw wood material is taken from the raw material storage to carriage 1. The raw material is inspected for correct sizes and defects. Material that does not meet the specifications is moved to carriage 1B. Raw wood from carriage 1 is fed into the rip saw machine.

In the rip saw machine, the raw wood is cut into rectangular cross sections. Cut wood material coming out of the rip saw machine is placed on carriage 3. Waste material from the cutting operation (rip saw) is placed in carriage 2.

Cut wood from carriage 3 is brought to the moulding shaper and grooved on one side. Out of the moulding shaper, grooved wood material is placed on carriage 4. From carriage 4, the grooved wood is stored in carriage 5 (if carriage 5 is full, carriage 6 or 7 is used). Grooved wood is transported from carriages 5, 6, and 7 to the chop saw working table.

One by one, the grooved wood material from the chop saw working table is fed into the
chop saw machine.

FIGURE 1
Layout of production facility at Southern California Door Company (original layout), showing carriages 1B and 2 through 8, the moulding sanders (42), the double end tenoner (45), and the preassembly, auto door clamp, glue trimming, and sand finishing stations, with storage racks throughout.

The grooved wood material to be fed is inspected by the operator to see if
there are any defects in the wood. Usable chopped parts from the chop saw machine are stored in the chop saw storage shelves. Wood material that has defects is chopped into small blocks on the chop saw to cut out the defective surfaces, and the scrap is thrown into carriage 8.
The chopped parts in the chop saw storage shelves are stacked into batches of a certain
number and then packed with tape. From the chop saw storage shelves, some of the batches
FIGURE 2
Process sequences and present input/output flow for Southern California Door Company. The flow diagram shows per-shift output at each work center (rip saw 1416, moulding shaper 712, DET 1840 and 13,744*, triple sander 560, preassembly first and second operations) together with the quantities flowing between them.

*The present total output capacity of DET represents the number of units of a single product manufactured in an eight-hour shift. The DET machine supplies parts for two other work centers, preassembly (1st op.) and sand finishing. In reality, the DET machine has to balance the output between those two work centers; in other words, the DET machine is shared by two different parts for two different work centers during an eight-hour shift.
are transported to the double end tenoner (DET) storage, while the rest of the batches are
kept in the chop saw storage shelves.
The transported batches are unpacked in the DET storage and then fed into the DET machine to be grooved on both sides. The parts coming out of the DET machine are placed on
a roller next to the machine.
The parts are rebatched. From the DET machine, the batches are transported to storage
racks and stored there until further processing. The batches stored in the chop saw storage
shelves are picked up and placed on the preassembly table, as are the batches stored in the
storage racks. The operator inspects to see if there is any defect in the wood. Defective
parts are then taken back from the preassembly table to the storage racks.
The rest of the parts are given to the second operator in the same workstation. The second operator tries to match the color pattern of all the parts needed to assemble the door
(four frames and a center panel). The operator puts glue on both ends of all four frame parts
and preassembles the frame parts and center panel together.
The frame-panel preassembly is moved from the preassembly table to the auto door clamp conveyor and pressed into the auto door clamp machine. The pressed assembly is taken out of the auto door clamp machine and carried out by the auto door clamp conveyor.
Next, the preassembly is picked up and placed on the glue trimming table. Under a
black light, the inspector looks for any excess glue coming out of the assembly parting
lines. Excess glue is trimmed using a specially designed cutter.
From the glue trimming table, the assembly is brought to a roller next to the triple sanding machine (the auto cross grain sander and the auto drum sander). The operator feeds the assembly into the triple sander. The assembly undergoes three sanding processes: one through the auto cross grain sander and two through the auto drum sander. After coming out of the triple sander machine, the sanded assembly is picked up and placed on a roller between the DET and the triple sander machine. The sanded assembly waits there for further processing. The operator feeds the sanded assembly into the DET machine, where it is grooved on two of the sides.

Out of the DET machine, the assembly is taken by the second operator and placed temporarily on a roller next to the DET machine. After finishing with all the assembly, the first operator gets the grooved assembly and feeds it to the DET machine, where the assembly is grooved again on the other two sides. Going out of the machine, the grooved assembly is then placed on a roller between the DET machine and the triple sander machine.

The assembly is stored for further processing. From the roller conveyor, the grooved assembly is picked up by the operators from the sand finishing station and placed on the table. The operators finish the sanding process on the table using a handheld power sander. After finishing the sanding, the assembly is placed on the table for temporary storage. Finally, the sanded assembly is moved to a roller next to the storage racks to wait for further processes.
Work Measurement
A detailed work measurement effort was undertaken by Santoso to collect data on various
manufacturing processes involved. Table 1 summarizes the results of all the time studies.
The current number of machines and/or workstations and their output capacities are as
follows:
Output Capacities

Machine            Number of Machines   Units/Hour   Units/Shift
Rip saw            1                    177          1416
Moulding shaper    1                    89           712
Chop saw           2                    615          4920
DET                1                    3426         13,744
Preassembly 1      1                    696          5568
Preassembly 2      1                    90           720
Auto door clamp    2                    14           224
Glue trimming      2                    32           512
Triple sander      1                    70           560
DET                1                    460          1840
Sand finishing     6                    12           576
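A quick screening of this table: if every door (or each of its five parts) must visit each work center, the station with the fewest units per shift bounds output. The caveat, noted in Figure 2's footnote, is that part-level and door-level stations are counted in different units, so this is only a first cut before simulation:

units_per_shift = {          # from the Output Capacities table above
    "Rip saw": 1416, "Moulding shaper": 712, "Chop saw": 4920,
    "DET (frames)": 13744, "Preassembly 1": 5568, "Preassembly 2": 720,
    "Auto door clamp": 224, "Glue trimming": 512, "Triple sander": 560,
    "DET (assembly)": 1840, "Sand finishing": 576,
}
bottleneck = min(units_per_shift, key=units_per_shift.get)
print(bottleneck, units_per_shift[bottleneck])   # Auto door clamp 224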
TABLE 1 Time studies of the manufacturing tasks (first part). The multi-column layout of this table did not survive extraction; the machines, operations, task descriptions, and observation sets are listed as printed, and the row alignment between them is not recoverable.

Machines and operations on this page: rip saw (machine 41), operation "cut raw material into correct cross-sectional dimensions," tasks 1 through 4; moulding shaper (machine 42); chop saw (machines 40, 43), task "throw away defective parts to carriage 8 and stack chopped parts"; preassembly 1; double end tenoner (DET, machine 45), operation "grooving frames," task "get 2 to 4 frames from stack and feed into DET."

Task observations (seconds), in printed order:
5.7, 6.6, 5.05, 6.99, 5.93, 7.52, 5.37, 7.21, 8.96, 6.68
6.79, 6.3, 7.52, 6.15, 6.53, 6.03, 6.09, 7.31, 7, 5.78
12.4, 11.53, 11.26, 12.88, 11.56, 10.38, 11.31, 11.85, 12.78, 11.88
10.56, 9.94, 9.78, 11.9, 11.44, 8.87, 7.35, 10.93, 12.47, 10.34
11.52, 12.83, 14.64, 8.25, 12.58, 13.81, 13.68, 12.21, 6.17, 15.06, 11.93
16.61, 16.58, 14.43, 21.16, 18.17, 25.14, 26.15, 30.06, 35.16, 25.06, 24.37
28.13, 29.41, 29.07, 31.41, 30.75, 38.95, 39.83, 42.27, 39.32, 40.12, 36.3
2.68, 2.08, 2.24, 1.61, 2.3, 2.99, 3.02, 3.11, 3.21, 3.02, 3.06, 2.79, 2.51, 2.96, 3.23, 2.37, 2.64
16.81, 14.56, 18.81, 20.13, 23.25, 18.53, 16.53, 25.56, 25.3, 24.78, 15.42, 13.92, 15.48, 20.51, 17.79, 23.54, 17.01
9.19, 9.03, 13.16, 10.78, 5.69, 4.1, 6.9, 9.16, 3.6, 3.22, 8.83, 12.63, 14.94, 12.86, 10.25, 0.76, 10.63
9.33, 14.7, 10.18, 14.47, 13.12, 12.49, 13.12, 12.76, 32.15, 33.94, 13.23, 11.97, 9.21, 24.86, 18.29, 29.74, 16.53, 14.24, 12.78, 15.38
60.21, 58.77, 57.23, 59.81, 61.64, 60.29, 59.85, 61.43, 63.59, 62.71, 61.2, 59.19, 58.47, 60.27, 59.73, 60.21, 61.82, 62.85, 58.94, 57.23
10.4, 11.57, 19.15, 16.94, 12.68, 31.47, 36.97, 13, 14.5, 14.62, 15.76, 26.82, 32.14, 30.67, 22.43, 29.61, 34.92, 18.27, 20.31, 24.88
49.82, 50.08, 19.35, 32.54, 35.31, 33.43, 37.84, 42.17, 49.04, 55.09
(continued)
TABLE 1 (concluded)

Machines and operations on this page: preassembly 2, operation "preassembling and gluing frames," with tasks "inspect for defects and match frame parts by color" (task 2), "match center panel and four frame parts by color," and "glue and preassemble frame parts and center panel" (task 3); auto door clamp (machines 52, 54), operation "clamping preassembly," with tasks "place assembly in auto door clamp conveyor," "conveyor feeds preassembled parts (preassy) into machine" (task 2), and "press the preassy" (task 3); glue trimming, task "trimming excess glue out of the assembly"; triple sander (machines 46, 47, 48); sand finishing; double end tenoner (DET, machine 45), operation "grooving sanded assembly," with tasks "groove assy" and "stack parts."

Task observations (seconds), in printed order:
36.52, 35.99, 29.09, 57.43, 53.6, 42.45, 57.77, 61.21, 63.96, 56.41
8.13, 8.32, 7.43, 10.63, 6.28, 6.48, 7.29, 7.34, 4.82, 5.24
19.5, 24.1, 23.84, 22.94, 21.75, 22.47, 23.66, 25.63, 29.59, 30.09
4.38, 2.71, 4.35, 3.69, 3.04, 2.62, 3, 3.78, 3, 3.23
6.55, 5.05, 6.86, 4.77, 7.68, 5.33, 5.24, 7.3, 5.71, 6.55
221.28, 222, 220.35, 224.91, 194.4, 231.82, 213.34, 206.75, 223.62, 227.44
4.22, 5.69, 7.15, 5.78, 5.1, 4.75, 5.53, 5.1, 4.24, 4.84
35.74, 17.96, 30.59, 17.39, 21.48, 10.15, 16.89, 10.87, 10.59, 10.26, 14.23, 11.92, 24.87, 10.91, 11.77, 15.48, 29.71, 10.86, 19.64
58.53, 90.87, 67.93, 70.78, 70.53, 77.9, 85.88, 86.84, 78.9, 95.6, 78.5, 72.65, 72.44, 91.01, 86.12, 84.9, 72.56, 79.09, 77.75
2.45, 3.56, 3.18, 3.16, 3.32, 3.58, 4.22, 2.27, 4.76, 3.9
30.72, 32.75, 34.13, 35.66, 37, 36.31, 36.84, 37.03, 37.44, 38.54
3.31, 6.54, 5.03, 5.51, 5.22, 5.84, 5.38, 6.69, 4.22, 6.44
5.99, 6.14, 6.49, 6.46, 6.42, 6.64, 3.21, 4.11, 3.71, 4.2
31.97, 32.93, 35.11, 33.67, 34.06, 33.21, 33.43, 35.23, 33.87, 33.72
3.84, 3, 3.06, 2.93, 3.06, 2.85, 2.88, 3.22, 1.87, 2.41
3.49, 3.42, 3.47, 3.29, 3.36, 3.2, 5.73, 3.02, 3.39, 3.54, 3.71, 3.48
215.8, 207.57, 244.17, 254.28, 238.36, 218.76, 341.77, 247.59, 252.63, 308.06, 221.27, 233.66
2.26, 2.95, 2, 1.41, 3.79, 2.74, 4.7, 3.35, 3.09, 2.75, 2.59, 2.71
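Each observation set in Table 1 must be reduced to a distribution before it can drive a model, for example with a fitting tool such as Stat::Fit. A minimal Python summary of the first set above (the cut-to-dimension observations, assuming that alignment):

from statistics import mean, stdev

obs = [5.7, 6.6, 5.05, 6.99, 5.93, 7.52, 5.37, 7.21, 8.96, 6.68]
m, s = mean(obs), stdev(obs)
print(f"n={len(obs)}  mean={m:.2f} sec  stdev={s:.2f} sec")
# n=10  mean=6.60 sec  stdev=1.16 sec
# A Normal(6.60, 1.16) or a fitted lognormal could then represent this
# operation time in the model.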
FIGURE 3
Groups of operators for Southern California Door Company. The figure tabulates, for each work center or machine, the minimum quantity required, the number of operators working, and the shift utilization, and then groups the work centers by shared operators. Shift utilizations shown: rip saw 0.27, moulding shaper 0.54, chop saw 0.47, DET 0.39, triple sander 0.51, glue trimming 0.75, sand finishing 1.00.
Tasks
Build simulation models to analyze the following:
1. Find the manufacturing capacity of the overall facility. What are the current bottlenecks of production?
2. How would you balance the flow of production? What improvements in capacity will that make?
3. What would you suggest to reduce inventory?
4. How could you reduce the manufacturing flow time?
5. The production manager is asking Mr. Santoso to find out the staffing and equipment resources needed for the current level of sales as well as 10, 25, 50, and 100 percent growth in sales volume. (A screening calculation for this task is sketched after this list.)
6. Develop layouts for the facility for various levels of production.
7. What kind of material handling equipment would you recommend? Develop the specifications, amount, and cost.
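Task 5 can be bounded before any simulation with a ceiling calculation over the door-level stations. In the sketch below, base demand is assumed equal to today's auto door clamp capacity, and per-machine capacities are derived from the output capacity table (e.g., 224 units/shift across 2 clamps is 112 per machine); interactions between stations are ignored, which is exactly what the simulation model would add.

from math import ceil

base_demand = 224            # doors/shift, assumed = current clamp output
per_machine = {              # units/shift per machine, from the table
    "Auto door clamp": 112,
    "Glue trimming": 256,
    "Triple sander": 560,
    "Sand finishing": 96,
}
for growth in (0.0, 0.10, 0.25, 0.50, 1.00):
    demand = base_demand * (1 + growth)
    need = {s: ceil(demand / cap) for s, cap in per_machine.items()}
    print(f"+{growth:.0%}: {need}")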
CASE STUDY 8
MATERIAL HANDLING AT CALIFORNIA STEEL INDUSTRIES, INC.
Hary Herho, David Hong, Genghis Kuo, and Ka Hsing Loi
California State Polytechnic University, Pomona
FIGURE 1
[Current coil flow among the 5-stand, upender, cleaning line, tin mill, CSM box annealing, galvanize mill, and the #1 and #2 galvanizing lines; galvanized coils are shipped out.]
Notes: Full hard coils = 5%; galvanized (#1 & #2) coils = 60%; cold rolled coils = 35%. Mill clean = 30%; CSM clean = 70%. TM box annealing = 2/3; CSM box annealing = 1/3.
A hauler transports coils that are to be moved around within a building. There are two 60-ton haulers and two 40-ton haulers. Assume that one hauler will be down for maintenance at all times.
The following are the process times at each of the production units:

Production Unit        Process Time
5-stand                Normal(8, 2) min
#1 galvanizing line    Normal(30, 8) min
#2 galvanizing line    Normal(25, 4) min
Cleaning               Normal(15, 3) min
Annealing              5 hr/ton
Annealing is a batched process in which groups of coils are treated at one time. The annealing bases at the cold sheet mill allow for coils to be batched three at a time. Coils can
be batched 12 at a time at the tin mill annealing bases.
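This batching rule is easy to prototype. Below is a minimal SimPy sketch with assumed Poisson coil arrivals, an assumed 20-ton coil, three CSM bases, and one plausible reading of the 5 hr/ton annealing figure (5 hours per ton of a coil, with the batch processed together); none of these assumptions come from the case text except the batch sizes.

import random
import simpy

COIL_TONS = 20                        # assumed average coil weight

def arrivals(env, queue, mean_hr=8.0):
    # Coils arrive one at a time (assumed Poisson arrivals).
    while True:
        yield env.timeout(random.expovariate(1.0 / mean_hr))
        yield queue.put(COIL_TONS)

def annealing_base(env, queue, batch_size, annealed):
    # Wait for a full batch, then anneal the whole batch together.
    while True:
        tons = 0.0
        for _ in range(batch_size):
            tons += (yield queue.get())
        yield env.timeout(5 * tons / batch_size)  # one reading of "5 hr/ton"
        annealed.append(batch_size)

random.seed(7)
env = simpy.Environment()
queue, annealed = simpy.Store(env), []
env.process(arrivals(env, queue))
for _ in range(3):                    # three CSM bases, batch size 3
    env.process(annealing_base(env, queue, 3, annealed))
env.run(until=24 * 30)                # one simulated month, in hours
print(sum(annealed), "coils annealed")

The same process with batch_size=12 represents a tin mill base; the long batch-forming wait at the larger size is one of the trade-offs the model should expose.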
Assume that each storage bay after a coil has been processed has infinite capacity. Coils that are slated to be galvanized will go to either of the two galvanizing lines. The #1 continuous galvanizing line handles heavy-gauge coils, while the #2 galvanizing line processes the light-gauge coils.

The proposed layout (see Figure 2) will be very much like the original layout. The proposed material handling system that we are evaluating will utilize the railroads that connect the three main buildings. The two rails will allow coils to be moved from the tin mill to the cold sheet mill and the #1 galvanizing line. The top rail is the in-process rail, which will
move coils that need to be processed at the cold sheet mill or the #1 galvanizing line. The bottom rail will ship out full hard coils and coils from the #2 galvanizing line. The train coil cars will be able to carry 100 tons of coils.

FIGURE 2
Proposed coil handling layout for California Steel Industries, showing cranes TM 4, TM 5, TM 7, TM 10, TM 11, TM 14, and TM 15; the 5-stand and its bay; the in-process and finished-goods bays; coil skids and the coil transfer car; rail positions 1L/1R through 3L/3R; the rail to the #1 galvanizing line, CSM, and shipping; and the #2 galvanizing line entry and exit bays.
In addition, a coil transfer car system will be installed near the #2 galvanizing line. The car will consist of a smaller baby car that will be held inside the belly of a larger mother car. The mother car will travel north-south and position itself at a coil skid. The baby car, traveling east-west, will detach from the mother car, move underneath the skid, lift the coil, and travel back to the belly of the mother car.

Crane TM 7 will move coils from the 5-stand to the 5-stand bay, as in the current layout. The proposed system, however, will move coils to processing in the #2 galvanizing line with the assistance of four main cranes, namely TM 5, TM 11, TM 14, and TM 15. Crane TM 5 will carry coils to the coil skid at the north end of the rail. From there, the car will carry coils to the south end of the rail and place them on the right coil skid to wait to be picked up by TM 15 and stored in the #2 galvanizing line entry bay. This crane will also assist the line operator in moving coils into position to be processed. After a coil is galvanized, crane TM 14 will move the coil to the #2 galvanizing line delivery bay. Galvanized coils that are to be shipped will be put on the southernmost coil skid to be transported by the coil car to the middle skids, where crane TM 11 will place them in either the rail or truck shipping areas.

One facility change that will take place is the movement of all the box annealing furnaces to the cold sheet mill. This change will prevent the back-and-forth movement of coils between the tin mill and the cold sheet mill.
Tasks
1. Build simulation models of the current and proposed systems.
2. Compare the two material handling systems in terms of throughput time of coils and work-in-process inventory.
3. Experiment with the modernized model to determine the optimal number of train coil cars on the in-process and finished-goods rails. (A sketch of this experiment loop follows.)
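One workable experiment design for task 3 is a grid over candidate fleet sizes with several replications each. The Python sketch below is only the harness; build_and_run is a hypothetical stand-in for the full CSI model, and its congestion formula is invented purely so the loop produces output.

import random

def build_and_run(n_inprocess_cars, n_finished_cars, seed):
    # Hypothetical stand-in: returns mean coil throughput time (hours).
    # In practice this would execute one replication of the full model.
    random.seed(seed)
    congestion = 1.0 / n_inprocess_cars + 1.0 / n_finished_cars
    return 20 + 15 * congestion + random.gauss(0, 0.5)

for cars in [(1, 1), (2, 1), (2, 2), (3, 2)]:
    reps = [build_and_run(*cars, seed=r) for r in range(10)]
    print(cars, f"mean throughput time: {sum(reps) / len(reps):.1f} hr")

The "optimal" fleet is then the smallest one beyond which the mean throughput time (and work-in-process) stops improving by a statistically meaningful amount across replications.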