Lecture 2: Simulation Modelling

The document outlines fundamental simulation concepts applicable to operations management, focusing on a manufacturing facility model involving queues and processing systems. It discusses key performance indicators, analysis options, and the components of a simulation model, including entities, attributes, and events. Additionally, it emphasizes the importance of randomness in simulations and the use of software tools for effective modeling and analysis.

Advanced Modelling in Operations Management (EBAMO5A)

Advanced Modelling and Simulation (EBAMS5A)

Fundamental Simulation Concepts

Presented by www.vut.ac.za
Asser Letsatsi Tau (Pr Eng Tech Cand, MSAIChE, SACAA, PhD (Operations), Master RD (Operations), MEng (Chemical), PDBA, SCT 41 (Civil Eng))
CHAPTER TWO: FUNDAMENTAL SIMULATION CONCEPTS

• The fundamental concepts of simulation are the same across any kind of simulation software, and some
familiarity with them is essential to understanding how Arena simulates a model you’ve built.
THE SYSTEM
Many simulation models involve waiting lines or queues as building blocks, so consider a simple case of such a model, representing a portion of a manufacturing facility, as shown in Figure 1.

Figure 1. A Simple Processing System

• “Blank” parts arrive to a drilling center, are processed by a single drill press, and then leave;
• If a part arrives and finds the drill press idle, its processing at the drill press starts right away; otherwise, it waits
in a First-In First-Out (FIFO), that is, first-come first-served queue. This is the logical structure of the model.

• The numerical aspects of the model also have to be specified (e.g. from time studies or observation where feasible), including how the simulation starts and stops. First, a decision on the underlying “base” time units (e.g. sec, min, hr or day) with which time will be measured has to be made, for example as expressed in Table 2-1.

• The system starts at time 0 minutes with no parts present and the drill press idle. This empty-and-idle assumption would be realistic if the system starts afresh each morning, but might not be such a good model of the initial situation when simulating an ongoing operation.

GOAL OF THE STUDY
• Given a logical/numerical model like this, decisions have to be made about which output performance measures (Key Performance Indicators) to collect. The following are measures that could be computed:
a) Total production (number of parts that complete their service at the drill press and leave) during the 20 minutes
of operation.
b) Average waiting time in queue of parts that enter service at the drill press during the simulation. This time in
queue records only the time a part is waiting in the queue and not the time it spends being served at the drill
press.
c) Maximum waiting time in queue of parts that enter service at the drill press during the simulation. This is a
worst-case measure, which might be of interest in giving service-level guarantees to customers.
d) Time-average number of parts waiting in the queue (again, not counting any part in service at the drill press).
“Time average,” means a weighted average of the possible queue lengths (0, 1, 2, . . .) weighted by the
proportion of time during the run that the queue was at that length.
e) Maximum number of parts that were ever waiting in the queue. This might be a better indication of how much floor space is needed than the time average, if there is a need to be reasonably sure of accommodating the queue at all times.
f) Average and maximum total time in system of parts that finish being processed on the drill press and leave.
Also called cycle time, this is the time that elapses between a part’s arrival and its departure, so it’s the sum of
the part’s waiting time in queue and its service time at the drill press. This is a kind of turnaround time, so
smaller is better.
g) Utilization of the drill press, defined as the proportion of time it is busy during the simulation. Think of this as
another time-persistent statistic, but of the “busy” function.
Resource utilizations are of obvious interest in many simulations, but it’s hard to say whether it should be high (close
to 1) or low (close to 0). High is good since it indicates little excess capacity, but can also be bad since it might mean
a lot of congestion in the form of long queues and slow throughput.
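Stated a little more formally (the symbols Q(t), B(t) and T are introduced here just for illustration; they do not appear elsewhere in these notes), measures (d) and (g) are time averages of the queue-length and busy functions over the run:

```latex
% Q(t): number of parts waiting in queue at simulated time t
% B(t): 1 if the drill press is busy at time t, 0 if it is idle
% T:    run length (20 minutes in this example)
\[
  \bar{Q} \;=\; \frac{1}{T}\int_{0}^{T} Q(t)\,dt
  \qquad \text{(time-average number in queue)}
\]
\[
  \rho \;=\; \frac{1}{T}\int_{0}^{T} B(t)\,dt
  \qquad \text{(utilization of the drill press)}
\]
```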

There are usually a lot of possible output performance measures, and it’s probably a good idea to observe a lot of
things in a simulation since you can always ignore things you have but can never look at things you don’t have, plus
sometimes you might find a surprise. The only downside is that collecting extraneous data can slow down execution
of the simulation.

ANALYSIS OPTIONS
With the model, its inputs, and its outputs defined, the next step is to figure out how to obtain the outputs by transforming the inputs according to the model’s logic. The following is a brief exploration of a few options:
o Educated guessing: A crude “back-of-the-envelope” calculation can sometimes lend at least qualitative insight
(and sometimes not).

o Queueing theory: Since this is a queueing system, why not use queueing theory? In some situations, it can yield simple formulas from which you can get a lot of insight. Probably the simplest and most popular object of queueing theory is the M/M/1 queue.

• The first “M” states that the arrival process is Markovian; that is, the interarrival times are independent and
identically distributed “draws” from an exponential probability distribution (see Appendices B and C for a brief
refresher on probability and distributions). The second “M” stands for the service-time distribution, and here it’s
also exponential. The “1” indicates that there’s just a single server. So at least on the surface this looks pretty
good for our model.
• Many people feel that queueing theory can prove valuable as a first-cut approximation to get an idea of where
things stand and to provide guidance about what kinds of simulations might be appropriate at the next step in the
project.
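As a rough illustration of the kind of first-cut numbers queueing theory can provide, the short sketch below evaluates the standard steady-state M/M/1 formulas in Python; the arrival and service rates passed in at the bottom are placeholders, not figures taken from the example.

```python
# Minimal sketch: standard steady-state M/M/1 formulas (the rates used in the
# call at the bottom are illustrative placeholders, not data from the example).
def mm1_metrics(lam: float, mu: float) -> dict:
    """Return steady-state M/M/1 measures for arrival rate lam and service
    rate mu (same time units for both); requires lam < mu."""
    if lam >= mu:
        raise ValueError("Steady state requires arrival rate < service rate.")
    rho = lam / mu                    # utilization of the single server
    Lq = rho ** 2 / (1 - rho)         # expected number of customers waiting
    Wq = Lq / lam                     # expected waiting time in queue (Little's law)
    W = Wq + 1 / mu                   # expected total time in system
    return {"utilization": rho, "Lq": Lq, "Wq": Wq, "W": W}

# e.g. one arrival every 5 minutes on average, 4-minute average service:
print(mm1_metrics(lam=1 / 5, mu=1 / 4))
```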
MECHANISTIC SIMULATION
“Mechanistic” means that the individual operations (arrivals, service by the drill press, etc.) will occur as they would
in reality. The movements and changes of things in the simulation model occur at the right “time,” in the right order,
and have the right effects on each other and the statistical-accumulator variables.

PIECES OF A SIMULATION MODEL
o Entities - Most simulations involve “players” called entities that move around, change status, affect and are
affected by other entities and the state of the system, and affect the output performance measures. Entities are the
dynamic objects in the simulation—they usually are created, move around for a while, and then are disposed of as
they leave.
o Attributes - To individualize entities, an attribute is attached to them. An attribute is a common characteristic of
all entities, but with a specific value that can differ from one entity to another. For instance, our part entities could
have attributes called Due Date , Priority , and Color to indicate these characteristics for each individual entity.
o (Global) Variables - A variable (or a global variable) is a piece of information that reflects some characteristic of
your system, regardless of how many or what kinds of entities might be around. There are two types of variables:
Arena built-in variables (number in queue, number of busy servers, current simulation clock time, and so on) and
user-defined variables (mean service time, travel time, current shift, and so on). In contrast to attributes, variables
are not tied to any specific entity, but rather pertain to the system at large. They’re accessible by all entities, and
many can be changed by any entity. If you think of attributes as tags attached to the entities currently floating
around in the room, then think of variables as (rewriteable) writing on the wall.
o Resources - A resource can represent a group of several individual servers, each of which is called a unit of that resource.
Entities often compete with each other for service from resources that represent things like personnel, equipment, or space
in a storage area of limited size. An entity seizes (units of) a resource when available and releases it (or them) when
finished. It’s better to think of the resource as being given to the entity rather than the entity being assigned to the resource
since an entity (like a part) could need simultaneous service from multiple resources (such as a machine and a person).
o Queues - When an entity can’t move on, perhaps because it needs to seize a unit of a resource that’s tied up by
another entity, it needs a place to wait, which is the purpose of a queue.
o Statistical Accumulators - To get output performance measures, various intermediate statistical-accumulator
variables need to be kept as the simulation progresses (a short update sketch follows this list). For example:
a) The number of parts produced so far
b) The total of the waiting times in queue so far
c) The longest time spent in queue we’ve seen so far
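In addition to simple counters and sums like (a) to (c), the time-persistent measures (time-average queue length, utilization) need “area under the curve” accumulators that are updated whenever the clock moves to a new event time. A minimal sketch of that update, with illustrative variable names:

```python
# Sketch of updating the time-persistent ("area under the curve") accumulators
# each time the simulation clock advances (all names are illustrative).
area_under_queue = 0.0    # integral of the queue length over time so far
area_under_busy = 0.0     # integral of the busy (0/1) function so far
queue_length = 0          # current number of parts waiting in queue
server_busy = 0           # 1 if the drill press is busy, 0 if idle
last_event_time = 0.0     # clock value at the previous event

def update_areas(now: float) -> None:
    """Accumulate the areas from the previous event time up to `now`."""
    global area_under_queue, area_under_busy, last_event_time
    dt = now - last_event_time
    area_under_queue += queue_length * dt
    area_under_busy += server_busy * dt
    last_event_time = now
```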

o Events – Everything is centered around events. An event is something that happens at an instant of (simulated)
time that might change attributes, variables, or statistical accumulators. For example:
a) Arrival: A new part enters the system.
b) Departure: A part finishes its service at the drill press and leaves the system.
c) The End: The simulation is stopped at time 20 minutes. (It might seem rather artificial to anoint this as an event, but it certainly
changes things, and this is one way to stop a simulation.)
To execute, a simulation has to keep track of the events that are supposed to happen in the (simulated) future. In Arena,
this information is stored in an event calendar.
For example, when the logic of the simulation calls for it, a record of information for a future event is placed on the event calendar. This
event record contains identification of the entity involved, the event time, and the kind of event it will be.
o Simulation Clock - The current value of time in the simulation is simply held in a variable called the simulation
clock. Unlike real time, the simulation clock does not take on all values and flow continuously; rather, it lurches
from the time of one event to the time of the next event scheduled to happen.
The simulation clock interacts closely with the event calendar. At initialization of the simulation, and then after
executing each event, the event calendar’s top record (always the one for the next event) is taken off the calendar. The
simulation clock lurches forward to the time of that event (one of the data fields in the event record), and the
information in the removed event record (entity identification, event time, and event type) is used to execute the event
at that instant of simulated time.
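As a generic illustration of how the event calendar and the clock-advance mechanism fit together (a sketch in Python, not Arena's internal implementation; all names are made up for illustration), the calendar can be held as a heap of event records ordered by event time:

```python
# Sketch of an event calendar and the clock-advance loop (illustrative only;
# not Arena's internal implementation).
import heapq

event_calendar = []   # heap of (event time, event type, entity id) records
clock = 0.0           # the simulation clock

def schedule(event_time: float, kind: str, entity_id):
    """Place a record of a future event on the event calendar."""
    heapq.heappush(event_calendar, (event_time, kind, entity_id))

def run(run_length: float) -> None:
    """Repeatedly remove the top (earliest) record, lurch the clock forward
    to its event time, and execute the corresponding event logic."""
    global clock
    schedule(run_length, "the end", None)
    while event_calendar:
        clock, kind, entity_id = heapq.heappop(event_calendar)
        if kind == "the end":
            break
        # ... execute arrival or departure logic for entity_id here ...
```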
o Starting and Stopping - Important, but sometimes-overlooked, issues in a simulation are how it will start and stop.
It should be easy to figure out how to translate them into values for attributes, variables, accumulators, the event
calendar, and the clock.

EVENT-DRIVEN HAND SIMULATION
This involves outlining the action and defining how to keep track of things, as follows:
Outline of the Action
Here’s roughly how things go for each event:
o Arrival: A new part shows up.
o Departure: The part being served by the drill press is done and ready to leave.
o The End: The simulation is over. Update the time-persistent statistics to the end of the simulation. Compute and report the final
summary output performance measures. In the case of an event calendar, after each event (except the end event), the event calendar’s
top record is removed, indicating what event will happen next and at what time. The simulation clock is advanced to that time, and
the appropriate logic is carried out.
Keeping Track of Things
A table can be used to record all the calculations for the hand simulation. Its column groups are:
Event: Describes what just happened, for example an arrival or a departure in the system being modelled.
Variables: The values of the number of parts in queue and of the server-busy (0/1) function.
Attributes: Each arriving entity’s arrival time is assigned when it arrives and is carried along with it throughout.

Statistical Accumulators: Initialize and update these as we go along to watch what happens.
Event Calendar: These are the event records as described earlier. Note that, at each event time, the top record on the
event calendar becomes the first three entries under “Just-Finished Event” at the left edge of the entire table in the next
row, at the next event time.
Finishing Up - The only clean-up required is to compute the final values of the output performance measures (a complete worked sketch follows this list), viz:
o The average waiting time in queue
o The average total time in system
o The time-average length of the queue
o The utilization of the drill press
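To tie the pieces together, here is a small self-contained Python sketch of the event-driven logic for the drill-press example. It is only a sketch: the interarrival and service times supplied at the bottom are illustrative placeholders, not the values used in the hand simulation in the text.

```python
# Event-driven sketch of the drill-press model: single server, FIFO queue,
# 20-minute run. The input data at the bottom are illustrative placeholders.
import heapq

def simulate(interarrival_times, service_times, run_length=20.0):
    calendar = []                        # event calendar, ordered by event time
    clock = last_event = 0.0
    queue = []                           # arrival times of parts waiting (FIFO)
    busy = 0                             # 1 if the drill press is busy
    area_queue = area_busy = 0.0         # time-persistent accumulators
    total_wait = total_system = 0.0      # sums for the averages
    produced = entered_service = 0
    arrivals, services = iter(interarrival_times), iter(service_times)
    in_service_arrival = None            # arrival time of the part in service

    heapq.heappush(calendar, (next(arrivals), "arrival"))
    heapq.heappush(calendar, (run_length, "end"))

    while calendar:
        t, kind = heapq.heappop(calendar)
        area_queue += len(queue) * (t - last_event)   # update areas first
        area_busy += busy * (t - last_event)
        clock = last_event = t
        if kind == "end":
            break
        if kind == "arrival":
            if busy:
                queue.append(clock)                   # join the FIFO queue
            else:
                busy, in_service_arrival = 1, clock   # start service at once
                entered_service += 1
                heapq.heappush(calendar, (clock + next(services), "departure"))
            try:                                      # schedule the next arrival
                heapq.heappush(calendar, (clock + next(arrivals), "arrival"))
            except StopIteration:
                pass                                  # input table exhausted
        else:                                         # a departure
            produced += 1
            total_system += clock - in_service_arrival
            if queue:
                arrival_time = queue.pop(0)           # first-come, first-served
                total_wait += clock - arrival_time
                in_service_arrival = arrival_time
                entered_service += 1
                heapq.heappush(calendar, (clock + next(services), "departure"))
            else:
                busy = 0

    return {
        "total production": produced,
        "average wait in queue": total_wait / entered_service if entered_service else 0.0,
        "average time in system": total_system / produced if produced else 0.0,
        "time-average queue length": area_queue / run_length,
        "utilization": area_busy / run_length,
    }

# Illustrative placeholder data in minutes (not the textbook's values):
print(simulate(interarrival_times=[2, 3, 1, 4, 2, 5, 3, 2, 4, 3],
               service_times=[3, 2, 4, 2, 3, 2, 3, 4, 2, 3]))
```

Running this prints, for the made-up input, the same kinds of KPIs listed under the goal of the study.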

Event and Process-Oriented Simulation


The hand simulation uses the event orientation, since the modelling and computational work is centred around the events, when they occur, and what happens when they do. This approach allows you to:
a) control everything;
b) have complete flexibility with regard to attributes, variables, and logic flow;
c) and know the state of everything at any time.

Randomness in Simulation
• This entails a discussion of how (and why) to model randomness in a simulation model’s input, and the effect this can have on the output.
a) Random Input, Random Output
A simulation uses input data to drive the model’s logic, producing the numerical output performance measures.
o Since the arrival and service times of entities (i.e. parts in the drill-press system) would probably differ on different occasions, the numerical output performance measures will probably differ as well.
o Therefore, a single run of the example just won’t do, since we really have no idea how “typical” our results are or how much variability might be associated with them. In statistical terms, the results from a single run of a simulation are a sample of size one, which just isn’t worth much for drawing conclusions from a simulation study.
So random input looks like a curse, but it must often be allowed in order to make the model a valid representation of reality, where there may also be considerable uncertainty. The way randomness is usually modelled is, instead of using a table of numerical input values, to specify probability distributions from which observations are generated (or drawn, or sampled) and to drive the simulation with them.
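For instance (a hedged sketch with made-up distribution parameters, not values fitted from data), the fixed table of input values in the earlier drill-press sketch could be replaced by draws generated from specified exponential distributions:

```python
# Sketch: generate input observations from specified distributions instead of
# reading them from a fixed table (the means below are invented, not fitted).
import random

random.seed(42)                      # fix the seed so the draws are reproducible
n_parts = 50
interarrivals = [random.expovariate(1 / 5.0) for _ in range(n_parts)]  # mean 5 min
services = [random.expovariate(1 / 3.5) for _ in range(n_parts)]       # mean 3.5 min

# These lists could then drive the simulate() sketch given earlier, e.g.:
# results = simulate(interarrivals, services)
```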
The Arena Input Analyzer can be used to determine these input probability distributions. Arena internally handles generation of
observations from distributions you specify. Not only does this make your model more realistic, but it also sets you
free to do more simulation than you might have observed data for and to explore situations that you didn’t actually
observe. As for the tedium of generating the input observations and doing the simulation logic, that’s exactly what
Arena (and computers) like to do.
Replicating an Example (Drill press system)

• Any in-queue or in-process entities from the end of one run should not be carried over to the beginning of the next, as they would introduce a link, or correlation, between one run and the next; any such leftover entities simply go away.
• Such independent and statistically identical runs are called replications of the simulation, and computer simulation makes it very easy to carry them out, by just entering the number of replications you want into a dialog box on your screen. An example of the final output performance measures (i.e. KPIs) from five replications (of the hand-simulation drill-press example) is depicted below:

NB: A comparative sheet like this is a great layout for monitoring KPIs, with all the information in one place.
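In the sketch setting used earlier, independent replications could be carried out simply by giving each run its own random-number stream (again illustrative; in practice Arena's replication settings handle this):

```python
# Sketch: independent replications of the drill-press model, each driven by
# its own random-number stream (distribution parameters are illustrative).
import random

def replicate(n_reps=5, n_parts=50):
    results = []
    for rep in range(n_reps):
        rng = random.Random(1000 + rep)        # a separate stream per replication
        inter = [rng.expovariate(1 / 5.0) for _ in range(n_parts)]
        serv = [rng.expovariate(1 / 3.5) for _ in range(n_parts)]
        results.append(simulate(inter, serv))  # simulate() from the earlier sketch
    return results

# for rep, kpis in enumerate(replicate(), start=1):
#     print("Replication", rep, kpis)
```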

Comparing Alternatives; Extensions and Limitations

• Most simulation studies involve more than just a single setup or configuration of the system. People often want to
see how changes in design, parameters (controllable in reality or not), or operation might affect performance (i.e.
Scenario analysis).

(e.g. changes in an entity’s arrival distribution, addition of a resource, or re-layout of a process; for instance, changing from an exponential to another arrival distribution, or adding a teller or operator)

• For each performance measure, the results of the replications of each model should be reported, with the result from the first replication in each case shown alongside for purposes of comparing the alternatives.
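Comparing alternatives in that same sketch setting then amounts to replicating each configuration and comparing the replication results; the “faster drill” service mean below is invented purely for illustration:

```python
# Sketch: compare a base configuration with a hypothetical "faster drill"
# alternative by averaging a KPI over replications (parameters are invented).
import random
from statistics import mean

def average_wait(service_mean, n_reps=5, n_parts=50):
    waits = []
    for rep in range(n_reps):
        rng = random.Random(2000 + rep)
        inter = [rng.expovariate(1 / 5.0) for _ in range(n_parts)]
        serv = [rng.expovariate(1 / service_mean) for _ in range(n_parts)]
        waits.append(simulate(inter, serv)["average wait in queue"])
    return mean(waits)                       # simulate() from the earlier sketch

# print("base:", average_wait(3.5), "faster drill:", average_wait(3.0))
```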
Simulating with Spreadsheets
• Simulation by hand doesn’t have much of a future. Depending on the type and complexity of the model, and on what add-in software is available, spreadsheets can sometimes be used to carry out a simulation.
Extensions and Limitations
Spreadsheet simulation is popular for static models, many involving financial or risk analysis. Commercial add-in
packages to Excel, like @RISK (Palisade Corporation, 2013) and Crystal Ball® (Oracle Corporation, 2013), facilitate
common operations, provide better random-number generators, make it easy to generate variates from many
distributions, and include tools for analysis of the results.
However, spreadsheets are not well suited for simulation of dynamic models.
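As a taste of the static, spreadsheet-style simulation referred to here (a minimal Python sketch with invented figures, not an @RISK or Crystal Ball model), a Monte Carlo risk analysis might look like this:

```python
# Minimal sketch of a static Monte Carlo risk model of the kind often built in
# spreadsheets (all figures are invented for illustration).
import random
from statistics import mean

random.seed(7)
profits = []
for _ in range(10_000):
    demand = random.normalvariate(1000, 150)      # uncertain demand (units)
    unit_cost = random.uniform(4.0, 6.0)          # uncertain unit cost
    profits.append(demand * (9.0 - unit_cost) - 1500.0)   # price 9.0, fixed cost 1500

print("mean profit:", round(mean(profits), 2))
print("P(loss):", sum(p < 0 for p in profits) / len(profits))
```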
Overview of a Simulation Study
No simulation study will follow a cut-and-dried “formula,” but there are several aspects that do tend to come up
frequently:
✓ Understand the system: Whether or not the system exists, you must have an intuitive, down-to-earth feel for
what’s going on. This will entail site visits and involvement of people who work in the system on a day-to-day
basis.
✓ Be clear about your goals: Realism is the watchword here; don’t promise the sun, moon, and stars. Understand
what can be learned from the study, and expect no more. Specificity about what is to be observed, manipulated,
changed, and delivered is essential. And return to these goals throughout the simulation study to keep your
attention focused on what’s important, namely, making decisions about how best (or at least better) to operate the
system.
✓ Formulate the model representation: What level of detail is appropriate? What needs to be modelled carefully and what can be dealt with in a fairly crude, high-level manner? Get buy-ins to the modelling assumptions from
management and those in decision making positions.
✓ Translate into modelling software: Once the modelling assumptions are agreed upon, represent them faithfully in
the simulation software. If there are difficulties, be sure to iron them out in an open and honest way rather than
burying them. Involve those who really know what’s going on (animation can be a big help here).

✓ Verify that your computer representation represents the conceptual model faithfully: Probe the extreme regions of the
input parameters, verify that the right things happen with “obvious” input, and walk through the logic with those familiar
with the system.
✓ Validate the model: Do the input distributions match what you’ve observed in the field? Do the output performance
measures from the model match up with those from reality? While statistical tests can be carried out here, a good dose of
common sense is also valuable.
✓ Design the experiments: Plan out what it is you want to know and how your simulation experiments will get you to the
answers in a precise and efficient way. Often, principles of classical statistical experimental design can be of great help here.
✓ Run the experiments: This is where you go to lunch while the computer is grinding merrily away, or maybe go home for
the night or the weekend, or go on vacation. The need for careful experimental design here is clear. But don’t panic—your
computer probably spends most of its time doing nothing, so carrying out your erroneous instructions doesn’t constitute the
end of the world (remember, you’re going to make your mistakes on the computer where they don’t count rather than for
real where they do).
✓ Analyze your results: Carry out the right kinds of statistical analyses to be able to make accurate and precise statements.
This is clearly tied up intimately with the design of the simulation experiments.
✓ Get insight: This is far more easily said than done. What do the results mean at the gut level? Does it all make sense? What
are the implications? What further questions (and maybe simulations) are suggested by the results? Are you looking at the
right set of performance measures?
✓ Document what you’ve done: You’re not going to be around forever, so make it easier on the next person to understand what you’ve
done and to carry things further. Documentation is also critical for getting management buy-in and implementation of the
recommendations you’ve worked so hard to be able to make with precision and confidence.
…END…

[email protected]
