MSC Programme: School of Engineering and Design
MSc Programme
Course notes
Alireza Mousavi
This course book is written in two parts, Part A and Part B. Part A mainly covers the theoretical part of the module, and Part B covers the practical part. We may therefore not follow the chapters and subjects in the numerical sequence 1, 2, 3… in which they appear in this book.
On Theory:
On Practice:
PART A
CHAPTER 1
CHAPTER 2
Chapter 3
PART B
Chapter 5
5.4.2 Attributes
5.4.5 Queues
Simulation and Modelling Using Arena, the Basic Process Panel
CHAPTER 1
1. Definition of Systems
A system may represent ongoing processes, and at any time the state of one or more objects within the system may change (state change). A system consists of a number of different elements that normally follow a specific logic and discipline1. The properties and behaviour of these elements contribute to the properties and behaviour of the system as a whole in an organised manner. So a system can be defined as a collection of components (e.g. people and/or machines) which are interrelated in an organised way and work together towards the accomplishment of a certain logical and purposeful end. This definition implies that a system must have the following features:
1 Although in modern systems and mathematics “Chaos” has become a fascinating subject.
3. A logical objective or purpose: For every output or effect, there exists a definite set of inputs or causes that influence and produce the expected output.
• Identify the components of the system that they are designing and/or studying;
• Understand the role and the relationship between the components of the system;
• Infer from the sets of inputs, outputs and the interrelationships between the system components, the state of the system (if it is to be designed) or the objectives of the system (if it already exists).
Systems Engineering therefore, not only requires theoretical knowledge but also
the ability to visualise things in their totality. So you could consider it to be a form of
Art!
Using the same analogy, one could consider that having the capability to design, maintain and interpret the state of something using scientific means makes one a Systems Engineer. A Mechanical Systems Engineer is an engineer that studies the
Other features of Mechanical systems are that they have minimal adaptability to
the changes that happen in their environments. They are designed as closed feedback
loops and any sudden changes to the environment may significantly impact their
performance and survival.
For a long time2, the Mechanists enjoyed and propagated their view of systems very successfully. But with a better understanding of nature and theories of adaptation, fuelled by advances in technology, the first challenges to the Mechanists came from biologists and later from human relations theorists. An understanding of the principles of natural selection and the evolution of biological systems3 makes very useful reading for appreciating the foundations of adaptive systems. But in order to keep this chapter short and to the point, we will only concentrate on industrial adaptive systems in which human relationships and smart computing generate synergetic properties.
2 Probably from the dawn of civilisation, and later development of science and technology.
3 Charles Darwin.
The Organists challenged the Mechanists by arguing that respect for people's social and psychological needs will improve the effectiveness and efficiency of operations.
The adaptive systems theorists managed to explain and design industrial systems
that were capable of capturing and interacting with the dynamism of their
environment. But their models are now falling short of interpreting and capturing the more complex systems that have emerged in the interrelated world of the 21st century.
You may consider the Viable Systems theorists as Holists. Holists describe
systems as interacting networks that in addition to their constituent elements govern
the complex interactions between functional, socio-economical, cultural and political
elements. These systems not only adapt to their environments but have the cerebral
capability to influence and change their environment to their advantage. Rather than
follow the trend, they have the ability to accurately predict the future and lead the
changes.
The emphasis of viable systems is on: (a) aggressive prediction, (b) active learning and (c) persistent monitoring and control of the environment. Viable systems' aggressive evolution and success is heavily dependent on enhancing their capacity and ability to obtain data (data acquisition), to utilise the information of the past and combine it with present data to understand the current state, and moreover to use the present and past information to accurately predict the future. These systems have substantial resources to process information whilst actively monitoring their environment in real time. They are capable of not only adapting to changes but also influencing and changing their environment.
The highly creative and innovative nature of viable systems allows them to
expand and contract at the right times during the global socio-economical
fluctuations.
• Fashion industry…
My aim here is to make you start thinking about all the industrial systems around you: make a distinction between them and find ways to categorise them into the type of system they are; ask whether these systems are suitable candidates for evolving into viable systems; and suggest the necessary technologies and capabilities that those particular systems need to acquire in order to evolve into a viable system. You as a
[Figure: data acquisition and communication architecture, linking shopfloor data acquisition equipment, partners and suppliers (supply chain and logistics) and customer details and demand (CRM) through a communication platform and communication construct to a predictive data processing centre.]
In this section we briefly discuss the importance of data collection and the science
of translating input data into meaningful information.
The process of preparing and translating input data into meaningful information
for systems performance analysis is called data modelling. There are various
techniques that can be used for this purpose. These techniques can be as simple as logical AND, OR and IF statements for binary systems, or as complex as data mining techniques such as Statistical Process Analysis, Genetic Programming, Fuzzy Inference Analysis, Bayesian Belief Networks, etc. These analytical and physical models allow system analysts to interpret a series of input data into a system state.
Normally the input data are captured in a given time span.
We have two types of information: historical and real-time. Historical data are collected over a period of time, validated and verified through statistical means, and presented for modelling purposes. For example, the average time that an operator takes to process a piece of work assigned to her/him, or the average time it takes a computer processor to execute an algorithm. These data are normally collected at different times and over a period of time. By validating and verifying input data, modellers can utilise the information to produce predictive data derived from the historical data. For example, the average number in a queue or the average waiting time can be estimated using the average time between arrivals of work at a work station and the average processing time of that work station. Do not fret! This module is all about this, or mainly about this!
With the advent of modern real-time data acquisition technologies and their
ubiquity, systems analysts are exploring the vast opportunities that access to real-time
data provides. We are now utilising real-time data inputs into quick response decision
management systems and also using Real-Time data to improve the quality of
previously gathered historical data. At this stage it may suffice to intrigue you and conclude this chapter with Figure 1.7. This figure illustrates the relationship between data acquisition systems, real-time data modellers and Discrete Event Simulation packages. Combined, these technologies produce one of the most sophisticated systems performance analysis capabilities available to us.
[Figure 1.7: layered architecture linking manual, automatic and semi-automatic data acquisition systems, a pre-simulation EventTracker layer, and the DES package layer.]
Simulation is used to describe and analyse the behaviour of a system, ask what-if
questions about the real system, and aid in the design or improvement of real systems.
Both existing and conceptual systems can be modelled using simulation.
In short, simulation reflects the behaviour of the real world in a small and simple
way.
For most companies the benefits of using simulation go beyond simply providing
a look into the future. These benefits are mentioned by many authors (Banks et al., 1996; Law and Kelton, 1991; Pegden et al., 1995; Schriber, 1991) and include the following:
3. Understand why: Managers often want to know why certain phenomena occur in a real system. With simulation you can answer the "why" questions by reconstructing the scene and examining the system to determine why a phenomenon occurs.
8. Visualise the plan: Depending on the software used, you may be able to view
your operations from various angles and levels of magnification and even in
three dimensions.
10. Prepare for change: We all know that the future will bring change. Answering
all of the what-if questions is useful for both designing new systems and
redesigning existing systems.
12. Train the team: Simulation models can provide excellent training when designed for that purpose. They allow the team and individual members to provide decision inputs to the simulation model as it progresses.
Symbolic simulation models are those in which the properties and characteristics of the real system are captured in mathematical and/or symbolic form.
The components that flow in a discrete system, such as people, equipment, orders
and raw materials, are called entities. There are many types of entities and each has a
set of characteristics or attributes. In simulation modelling, groupings of entities are
called files, sets, lists or chains. The goal of a discrete simulation model is to portray
the activities in which the entities engage and thereby learn something about the
system’s dynamic behaviour. The purpose of this book is to discuss this form of descriptive simulation, i.e. Discrete Event Simulation.
Principle 2: The secret to being a good modeller is the ability to remodel. Model
building should be interactive and graphical because a model is not only defined and
developed but is continually refined, updated, modified and extended. An up-to-date
model provides the basis for future models.
The twelve steps crucial for successful design, implementation and completion of
a discrete event simulation project are:
1. Problem definition: clearly defining the goals of the study. (Why are we
studying this problem? and what questions do we hope to answer?).
2. Project planning: being sure that we have the sufficient resources to do the job.
6. Input data preparation: identifying and collecting the data required by the
model.
8. Verification and validation: confirming that the model operates the way the
analyst intended (debugging) and that the output of the model is believable
and represents the output of the real system.
9. Final experiment design: designing an experiment that will yield the desired information and determining how each of the test runs is to be executed.
10. Experimentation: executing the simulation to generate the desired data and
perform a sensitivity analysis.
12. Implementation and documentation: putting the results to use, recording the
findings, and documenting the model and its use.
I suggest you carefully observe these steps in all your present and future
simulation projects. This also includes your assignments in this module.
Savolainen et al. (1995) indicate that simulation models are really formal
descriptions of real systems that are built for two main reasons. Firstly, to understand
conditions as they exist in the system; and secondly, to achieve a better system design
through performing ‘what-if’ analysis.
Law and Kelton (1991) and Banks et al. (1997) give many benefits for simulation.
Perhaps the most important benefit is that it is the most cost effective way to explore
new initiatives and changes.
a. Manufacturing systems
The process of validation is an iterative one. The modeller adds new details to the model, runs the model, and presents the results to the project team. If the results are not sufficiently accurate, the project team identifies other details that should be added to the model.
Although there are many types of manufacturing systems producing a wide variety of products today, there are common elements that describe most manufacturing operations. These common elements should be the basis for the input data used by a simulation model. Table 2.1 shows these common elements in manufacturing systems.
To build an accurate simulation model, the data in this table should be validated
and verified.
Product: Part, lots or products are entities being manufactured. Products may
move in manufacturing groups called lots that are made up of a number of pieces.
d. Downtime
1. Ignore it
2. Do not model it explicitly but adjust processing time appropriately
3. Use constant values for time-to-failure and time-to-repair
4. Use statistical distributions for time-to-failure and time-to-repair
Of the four options, using statistical distributions for time-to-failure and time-to-repair is preferred. What this means for the manufacturer is that a sufficient amount of downtime data has to be collected to fit statistical distributions with the desired accuracy.
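As a hedged illustration (not part of the module software), the sketch below shows how recorded time-to-failure and time-to-repair observations might be fitted to exponential distributions in Python; the variable names and sample data are invented for the example.

```python
import numpy as np
from scipy import stats

# Hypothetical downtime records collected on the shop floor (hours).
time_to_failure = np.array([45.2, 60.1, 38.7, 72.4, 55.0, 41.3, 66.8, 49.5])
time_to_repair = np.array([1.2, 0.8, 2.5, 1.7, 0.9, 1.4, 2.1, 1.0])

# Fit exponential distributions (location fixed at 0 so the scale equals the mean).
ttf_scale = stats.expon.fit(time_to_failure, floc=0)[1]
ttr_scale = stats.expon.fit(time_to_repair, floc=0)[1]

print(f"Mean time to failure: {ttf_scale:.1f} h")
print(f"Mean time to repair:  {ttr_scale:.2f} h")

# A goodness-of-fit check helps judge whether the exponential assumption is reasonable.
ks_stat, p_value = stats.kstest(time_to_failure, "expon", args=(0, ttf_scale))
print(f"K-S test for time to failure: statistic={ks_stat:.3f}, p={p_value:.3f}")
```

In practice the fitted distributions would then be entered as the downtime parameters of the simulation model.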
Events such as acts of nature, labour strikes and power failures can literally shut
down a manufacturing operation. Because they are not part of normal operation and
are very difficult to predict, catastrophic events can be ignored for most simulation
activities.
Re-entrant process flow occurs when a particular station or work cell must be
visited more than one time by the same part. Rework occurs when a part must be run
through a work cell because the prior processing step was not completed successfully.
Figure 2.2 shows the difference between rework and re-entrancy.
One of the challenges for modelling most manufacturing systems is the presence
of random events. Random events in manufacturing systems can be associated with
variances in:
Processing time
Setup time
Downtime (time to fail and time to repair)
Yield percentage
Transportation time
Shipment
The methods used to measure model performance should be the same as those used in the real system. Otherwise, it may be difficult to validate the model. With any of the performance measures, it is important to collect the average as well as the
variability. Variability is usually indicated by the standard deviation, but maximum
and minimum are also helpful in measuring performance. The following statistics are
typically collected from manufacturing systems and should thus be provided by
models of such systems:
Production Throughput
Production Cycle time
Queuing behind work stations
Transportation of material on the shopfloor
Work in process
Utilisation of resources (Equipment and labour)
System specific performance measure (scrap rate, waiting time at a process)
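As a minimal sketch of summarising one such performance measure, the snippet below reports the average of a set of cycle times together with its variability (standard deviation, minimum and maximum); the data here are invented for illustration.

```python
import numpy as np

# Hypothetical cycle times (minutes) recorded for parts leaving the model.
cycle_times = np.array([12.4, 15.1, 11.8, 19.6, 14.2, 13.7, 16.9, 12.9, 18.3, 14.8])

print(f"Average cycle time: {cycle_times.mean():.2f} min")
print(f"Standard deviation: {cycle_times.std(ddof=1):.2f} min")
print(f"Minimum / maximum:  {cycle_times.min():.1f} / {cycle_times.max():.1f} min")
```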
h. Analysis
Using the performance measures described in the last section, model users
(analysts and engineers) experiment with a model to understand the behaviour of the
system under changing conditions. The issues often encountered in system analysis
include:
Identifying the right area to change and improve is paramount to the overall
success of an organization. The dangers of implementing business process
improvement changes without a clear understanding of how the changes will impact
the entire process can be substantial. Therefore tools are needed to help managers
truly understand their business processes and appreciate the impact of modifications
to those processes on the overall performance of the company.
Another factor that has contributed to the increasing usage of the business
modelling method is the increasing pace of change in business. There is not enough
time to try out new products in reality, and correcting mistakes, once they have
occurred, is often extremely costly.
Typical uses of business modelling and simulation can be in the following areas:
For the past several decades, the design, analysis and control of transport systems
were carried out mostly by field engineers (civil, structural and traffic engineers) and
operations research (OR) scientists (Ashford and Covault, 1978; Hamzawi, 1986;
Ashford 1987). A large number of Logistics and Transportation (L&T) systems have
evolved over time and become fairly huge and complex. The primary goals of an L&T
business enterprise are to store, distribute and/or transport freight of varying size,
form and shape from its origin to its destination at the lowest cost in order to deliver
the right quantities at the right time to its customers who are geographically dispersed;
however, the underlying logistics and transport systems become extremely complex and often require expensive administrative, information and decision support systems (Ashford and Clark, 1975).
L&T networks are quite complex and involve a very large number of entities
and resources
Existing simulation software does not support all the modelling/analysis features required.
There is unfamiliarity with simulation technology in the logistics and transport industry.
In general, L&T problems appropriate for simulation studies are divided into
three major categories:
1. New design
2. Evaluation of alternative designs
3. Refinement and redesign of existing operations
Accordingly, simulation models in L&T domains are built for the following
purposes:
• Off-line control
• Real-time satellite/telecommunication control
• Off-line scheduling
• Exception handling
• Real-time monitoring
Depending on the level of detail specified to generate the desired results, the simulation analyst may decide to represent some or all of the entities, resources and activities in a logistics system.
In the majority of cases, simulation models are developed to find the best locations for warehouses, analyse transportation modes between plants, and analyse the flow of material and customers. The input data required for these models include the following:
Number of plants
Number and location of warehouses
Number of customers
Customer demand to warehouse
Part numbers produced at different plants
Bill of materials
Transportation times
Between plants and warehouses
Between warehouses and customers
It should be mentioned that customer demand, transportation times and so on are stochastic in nature and vary over time. Accordingly, these data elements correspond to probability distributions generated using information collected over long periods of time.
References
Note: In this course I am expecting that you have some basic background in
statistics and probability. If you do not, for the purpose of maximising your
learning experience I suggest you study a book or two on these two subjects.
As the word deterministic implies, events that are determined are the type of events that will occur with 100% probability! Oops – I thought we were not talking about probability yet! Maybe I had better rephrase the title of this section, or maybe the title of this chapter should be: “Our world of Probabilistic events”.
In this section we will briefly touch on the basics of probability theory, and you are advised to enrich your knowledge by further reading the references mentioned earlier. I hope this short introduction will encourage you to read more about probability theory.
$P(S) = 1$
$P(E) \geq 0$
If $E_1, E_2, E_3, \ldots, E_n$ are events where $E_i \cap E_j = \emptyset$ for $i \neq j$, then $P(E_1 \cup E_2 \cup \ldots \cup E_n) = P(E_1) + P(E_2) + \cdots + P(E_n)$
This theorem implies that the probability of an event occurring in a sample space
is equal to 1 minus the probability of that event not occurring. Tossing a fair coin
where the sample space is S = {H, T}, P(S) = 1 so P(H) = 1-P(T) = 1/2.
Theorem 2: if $E \subseteq F$, then $P(E) \leq P(F)$.
$P(E \mid F) = \frac{P(E \cap F)}{P(F)}$, provided $P(F) > 0$; equivalently, $P(E \cap F) = P(E \mid F) \cdot P(F)$ or $P(E \cap F) = P(F \mid E) \cdot P(E)$.
Example: A manufacturer in China produces two brands of Volleyballs (indoor (i)
and beach (b)) and sells each type in packs of 6. A random quality control exercise
requires an operator to open a pack and test the balls for any defect. The operator will
then report the number of defects and the type of the ball.
The sample space (type, no. of defects) in this example will be:
S = {(i,0), (i,1), (i,2), (i,3), (i,4), (i,5), (i,6), (b,0), (b,1), (b,2), (b,3), (b,4), (b,5), (b,6)}
P(i,0) = 0.38   P(b,0) = 0.35
P(i,1) = 0.10   P(b,1) = 0.06
P(i,2) = 0.05   P(b,2) = 0.02
P(i,3) = 0.01   P(b,3) = 0.01
P(i,4) = P(i,5) = P(i,6) = 0.005   P(b,4) = P(b,5) = P(b,6) = 0.005
The probability that a beach volleyball pack is selected and at most 2 of the balls are defective is: P(b,0) + P(b,1) + P(b,2) = 0.35 + 0.06 + 0.02 = 0.43.
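A small sketch of the same calculation, representing the sample space as a Python dictionary (the probabilities are those given above):

```python
# Joint probabilities P(type, number of defects) from the quality-control example.
p = {
    ("i", 0): 0.38, ("i", 1): 0.10, ("i", 2): 0.05, ("i", 3): 0.01,
    ("i", 4): 0.005, ("i", 5): 0.005, ("i", 6): 0.005,
    ("b", 0): 0.35, ("b", 1): 0.06, ("b", 2): 0.02, ("b", 3): 0.01,
    ("b", 4): 0.005, ("b", 5): 0.005, ("b", 6): 0.005,
}

# P(beach pack selected and at most 2 defective balls)
prob = sum(v for (ball_type, defects), v in p.items()
           if ball_type == "b" and defects <= 2)
print(f"P(beach, at most 2 defective) = {prob:.2f}")  # 0.43
```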
A statistician’s job is to collect data and analyse it in order to understand a trend and predict behaviour. In the real world, even though the conditions for measurement may remain the same, the results can vary. The science of statistics is to determine the pattern despite this variability: it is about recognising the pattern of random behaviour and minimising the errors in the interpretation of the data with respect to noise (anomalies in the data). The variability in data is the recipe for uncertainty.
The best thing to do now is to introduce a few concepts that you need to get
familiar with and know to be able to become a discrete event simulation expert.
Have you ever wondered why goal keepers dive in the wrong direction when
trying to save penalties4? Who says there is no Maths in sports! The reason is very
simple; the professional goal keeper conducts a statistical inference. Before the
match, a good/professional goal keeper would watch 100 penalties that the opposition
star has taken. He or she realises that the opposition penalty taker has directed 80
(repeated occurrence) of the 100 penalties to the goal keepers’ right hand side and
only 20 (repeated occurrence) to the left. If you were the goal keeper how would you
dive (left or right) if this penalty taker steps behind the ball? I bet you are wondering
which players would be better penalty takers – the ones who make this prediction
difficult! But How?
Figure 3.2a: Random variable distribution for the penalty shooter. Figure 3.2b: Draw the random variable distribution expected from a fair die.
4 I should apologise if you are not interested in football and its rules, but I feel compelled here to explain the penalty rule: a penalty is taken on the referee's instruction (by blowing the whistle). The goalkeeper can only dive when the player touches the ball with his foot; therefore, he/she has little time to guess the direction of the penalty kick. A keeper therefore normally makes an instant decision on the direction of his/her dive, so it is normally random and based on the keeper's experience and preferences. Imagine if I had to explain the off-side rule here!
Continuous random values can assume any real value, so instead of being
countable they can be in a continuous range, for example the length of time T that an
operator answers a telephone call in a call centre.
$f_X(a) = \frac{dF_X(a)}{da}$
In order to better explain the randomness of random variable and the behaviour of
a distribution function we use measures such as mean, variance and standard
deviation.
$E(X) = \sum_{x \in S} x f(x) = u_1 f(u_1) + u_2 f(u_2) + \cdots + u_k f(u_k)$
Uniform-Discrete: Imagine throwing a fair die several times (Figure 3.2b) and counting the number of times each number comes up. In the long term you will observe that the number of times each side shows is about 1/6 of the total throws. What would the probability mass function of a uniform discrete random variable such as lottery numbers look like?
[Figure: probability mass function of the discrete uniform distribution over the values 1 to 6, with pmf = 1/6 for each value.]
$P(X = k) = f(k) = \frac{1}{b - a + 1}$ for $k = a, a+1, \ldots, b$
$E(N) = \frac{a + b}{2}$
$V(N) = \frac{(b - a + 1)^2 - 1}{12}$
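For instance, applying these formulas to the fair die of Figure 3.2b (a = 1, b = 6) gives:
$E(N) = \frac{1 + 6}{2} = 3.5, \qquad V(N) = \frac{(6 - 1 + 1)^2 - 1}{12} = \frac{35}{12} \approx 2.92$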
In simulation software packages you can also have a non-uniform discrete probability function, which you can use, for example, to define the percentages of different job types entering a system, batch sizes, the disassembly of an artefact and other things. By the way, what would a non-uniform discrete probability mass function or cumulative distribution function look like?
Bernoulli: concerns experiments with two possible outcomes, for example tossing a coin (k = 0 for Tail and k = 1 for Head). The random variable N has a Bernoulli distribution provided:
$P(X = k) = f(k) = \begin{cases} p & \text{for } k = 1 \\ 1 - p & \text{for } k = 0 \end{cases}$ with $0 \leq p \leq 1$
$P(X = 1) = p, \quad P(X = 0) = 1 - p$
$E(N) = p$
$V(N) = p(1 - p)$
Binomial: arises when you carry out an experiment that has two possible outcomes n times.
$P(X = k) = f(k) = \frac{n!}{k!(n-k)!} p^k (1-p)^{n-k}$ for $k = 0, 1, \ldots, n$
$E(N) = np$
$V(N) = np(1 - p)$
For example, the probability of obtaining exactly 6 successes in 10 trials with p = 0.5 (such as 6 heads in 10 tosses of a fair coin) is:
$P(6) = \frac{10!}{6! \, 4!} (0.5)^6 (0.5)^4 \approx 0.20$
[Figure: example probability mass function plots over the values 1, 2, 3, 4, 5, 6, …]
Poisson: deals with the random number of events that occur in a given time. For example, the average number of people that may call a call centre is 9 people. The random variable N follows a Poisson distribution if there is a $\lambda > 0$ so that the probability mass function can be expressed as:
$P(X = k) = f(k) = \frac{\lambda^k e^{-\lambda}}{k!}$ for $k = 0, 1, 2, \ldots$
$E(N) = V(N) = \lambda$
Exponential: often used for the time between events (for example, the inter-arrival times of a Poisson process). Its probability density function is:
$f(x) = \begin{cases} \frac{1}{\beta} e^{-x/\beta} & \text{for } x \geq 0 \\ 0 & \text{otherwise} \end{cases}$
where the parameter of the distribution is $\beta = 1/\lambda$
$P(X > x) = e^{-x/\beta}$
$E(X) = \beta = 1/\lambda$
$V(X) = \beta^2 = 1/\lambda^2$
For example, at a busy airport, aircraft arrive at a single runway according to a Poisson process with a mean rate of 10 per hour. What is the probability of the runway waiting more than 8 minutes for the first aircraft to arrive?
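A worked sketch of this example: a Poisson arrival rate of 10 per hour corresponds to an exponential inter-arrival time with mean $\beta = 60/10 = 6$ minutes, so
$P(X > 8) = e^{-8/6} \approx 0.26$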
Figure 3.7: Probability density function for exponential distribution with mean
$f(x) = \frac{1}{\sigma \sqrt{2\pi}} e^{-(x - \mu)^2 / (2\sigma^2)}$
$E(X) = \mu$
$V(X) = \sigma^2$
Figure 3.8: Probability density function for Normal distribution with mean of
$f(x) = \begin{cases} \frac{2(x - a)}{(m - a)(b - a)} & \text{for } a \leq x \leq m \\ \frac{2(b - x)}{(b - m)(b - a)} & \text{for } m < x \leq b \\ 0 & \text{otherwise} \end{cases}$ for $x \in [a, b]$
$E(X) = (a + m + b)/3$
$V(X) = (a^2 + m^2 + b^2 - ma - ab - mb)/18$
Figure 3.9: Probability density function for Triangular distribution
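As an illustrative sketch (not required by the module), the continuous distributions above can be sampled with numpy and their sample means compared against the theoretical values; the parameters chosen here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n = 100_000

# Exponential with mean (scale) beta = 2
expo = rng.exponential(scale=2.0, size=n)
# Normal with mean 5 and standard deviation 1.5
norm = rng.normal(loc=5.0, scale=1.5, size=n)
# Triangular with minimum a = 2, mode m = 4, maximum b = 9
tria = rng.triangular(left=2.0, mode=4.0, right=9.0, size=n)

print(f"Exponential: sample mean {expo.mean():.3f}  vs theory 2.000")
print(f"Normal:      sample mean {norm.mean():.3f}  vs theory 5.000")
print(f"Triangular:  sample mean {tria.mean():.3f}  vs theory {(2 + 4 + 9) / 3:.3f}")
```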
The purpose of introducing Markov Chains at this stage is for you to appreciate one of the most important subjects in Discrete Event Simulation projects and models: the queuing principles.
Markov processes are powerful tools for describing and analysing dynamic
systems that are probability based. Markov processes constitute the fundamental
theory underlying the concept of queuing systems. Each queuing system can be
mapped onto an instance of a Markov process and then mathematically evaluated in
terms of this process.
$P(X_{t_{n+1}} = s_{n+1} \mid X_{t_n} = s_n, X_{t_{n-1}} = s_{n-1}, \ldots, X_{t_0} = s_0) = P(X_{t_{n+1}} = s_{n+1} \mid X_{t_n} = s_n)$
$P(X_{n+1} = s_{n+1} \mid X_n = s_n, X_{n-1} = s_{n-1}, \ldots, X_0 = s_0) = P(X_{n+1} = s_{n+1} \mid X_n = s_n)$
When one continues the Markov Chain, its evolution from state s0 to sn is step by
step and according to a transition probability. There are many applications for Markov
chains such as genetic programming and many other dynamic and evolutionary
processes in which the probability of state transition is known. The one-step transition
probabilities are usually summarized in a non-negative, stochastic transition matrix P:
A Discrete Time Markov Chain state transition can be expressed in the diagram
below:
[Diagram: a two-state Discrete Time Markov Chain with states 0 and 1; from state 0 the chain stays in 0 with probability 3/4 and moves to 1 with probability 1/4, while from state 1 it moves to 0 with probability 1/2 and stays in 1 with probability 1/2.]
$P_1 = \begin{pmatrix} 0.75 & 0.25 \\ 0.5 & 0.5 \end{pmatrix}$
The probabilities of weather conditions, given the weather on the preceding day, can be represented by the state transition matrix:
$P = \begin{pmatrix} 0.9 & 0.1 \\ 0.5 & 0.5 \end{pmatrix}$
Reading the matrix, the probability of a day being sunny and the following to be
sunny is 0.9. The probability of sunny to rainy will be the remaining 0.1. Can you
decipher the second row?
If on day 0 the weather is sunny, then $X(0) = (1 \;\; 0)$, meaning the probability of the day being sunny is 100% and rainy 0%.
Day 1
$X(1) = X(0) \cdot P = (1 \;\; 0) \begin{pmatrix} 0.9 & 0.1 \\ 0.5 & 0.5 \end{pmatrix} = (1 \times 0.9 + 0 \times 0.5 \;\;\; 1 \times 0.1 + 0 \times 0.5) = (0.9 \;\; 0.1)$
Day 2
$X(2) = X(1)P = X(0)P^2 = (0.9 \;\; 0.1) \begin{pmatrix} 0.9 & 0.1 \\ 0.5 & 0.5 \end{pmatrix} = (0.9 \times 0.9 + 0.1 \times 0.5 \;\;\; 0.9 \times 0.1 + 0.1 \times 0.5) = (0.86 \;\; 0.14)$
$X(n) = X(n-1)P$
$X(n) = X(0)P^n$
In the long run of this chain, as n goes to infinity, the steady state will be $(p_0 \;\; p_1) = (0.833 \;\; 0.167)$; in other words, if you want to bet on a future day's weather, you had better put your money on a sunny day!
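A minimal sketch of the same calculation with numpy, stepping the weather chain forward until it settles (the matrix values are those of the example):

```python
import numpy as np

P = np.array([[0.9, 0.1],   # sunny -> sunny, sunny -> rainy
              [0.5, 0.5]])  # rainy -> sunny, rainy -> rainy

x = np.array([1.0, 0.0])    # day 0: sunny with probability 1

for day in range(1, 31):
    x = x @ P
    if day in (1, 2, 30):
        print(f"Day {day:2d}: P(sunny) = {x[0]:.3f}, P(rainy) = {x[1]:.3f}")
# Converges to roughly (0.833, 0.167), the steady state quoted above.
```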
All of us have experienced queues, especially the ones who are not privileged or
considered as an immediate priority by the service provider. Queues represent waiting
to be served either by a person or a machine. Queues form because there is a difference between arrival rates and processing times. If the world around us were a deterministic one, then service providers would be able to design their production or service processes in such a way that no queues would form and no waiting time would be incurred. For example, if the inter-arrival time between two jobs is 2 minutes and the processing time is 1.5 minutes, there would be no queues. But if the time between arrivals were random, between 1 minute and 5 minutes, and the processing time fixed, at times you would observe queues forming. We will try this (or have tried it already in the lab) using the simulation software, with a single resource (machine or a person), one queue and random inter-arrivals.
Therefore, to measure the number of jobs in a queue or the waiting times, there are three key pieces of data that need to be known, denoted (A/B/m). The A indicates the distribution of inter-arrivals (e.g. Poisson for the number of arrivals or Exponential for the time between arrivals). The B indicates the distribution of the processing time, and m is the number of servers (defined as resources in the Arena software package). The Markovian
queues are then described as M/M/1 for random arrival rates, random processing times
with a single server queues. M/M/c denotes random arrival rates, random processing
time with c servers.
If $\lambda$ is the average arrival rate and $\mu$ is the average service rate of each of the $c$ servers, then the utilisation factor can be estimated as $\rho = \frac{\lambda}{c\mu}$.
Notation: M/M/1
Probability of 0 jobs at the workstation: $p(0) = 1 - \rho$
Expected no. of jobs waiting in queue: $L_q = \frac{\rho^2}{1 - \rho}$
Expected no. of jobs at the workstation: $L = \frac{\rho}{1 - \rho}$
Expected queuing time: $W_q = \frac{\rho}{\mu(1 - \rho)}$
Expected throughput time: $W = \frac{1}{\mu(1 - \rho)}$
Example: A security and metal detection machine at an airport has a service rate that follows an Exponential distribution with a mean of 10 passengers per minute. Passengers arrive at the machine at an Exponential rate of 8 per minute. The queuing rule is FCFS. Find the expected machine utilisation, passenger throughput time and average waiting time.
Machine utilisation: $\rho = \frac{\lambda}{c\mu} = \frac{8}{10} = 0.8$
Probability that the machine would be idle: $p(0) = 1 - 0.8 = 0.2$
Throughput time: $W = \frac{1}{\mu(1 - \rho)} = \frac{1}{10(1 - 0.8)} = 0.5$ min
Waiting time: $W_q = \frac{\rho}{\mu(1 - \rho)} = \frac{0.8}{10(1 - 0.8)} = 0.4$ min
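A small sketch that codes the M/M/1 formulas above and reproduces the airport security example (λ = 8, μ = 10 per minute); the function and variable names are illustrative only.

```python
def mm1_metrics(lam: float, mu: float) -> dict:
    """Standard M/M/1 steady-state measures for arrival rate lam and service rate mu."""
    rho = lam / mu                      # utilisation
    return {
        "utilisation": rho,
        "p0": 1 - rho,                  # probability the server is idle
        "Lq": rho**2 / (1 - rho),       # expected number waiting in queue
        "L": rho / (1 - rho),           # expected number at the workstation
        "Wq": rho / (mu * (1 - rho)),   # expected waiting time
        "W": 1 / (mu * (1 - rho)),      # expected throughput time
    }

for name, value in mm1_metrics(lam=8, mu=10).items():
    print(f"{name:12s} {value:.2f}")
# utilisation 0.80, p0 0.20, Wq 0.40 min, W 0.50 min -- matching the worked example.
```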
Within a discrete event simulation, there are two concepts of time and state that
are of paramount importance. Nance (1987) identifies the following primitives which
permit precise delineation of the relationship between these fundamental concepts
(see Figure 4.1):
An instant is a value of system time at which the value of at least one attribute
of an object can be altered.
An interval is the duration between two successive instants.
A span is the contiguous succession of one or more intervals.
The state of an object is the enumeration of all attribute values of that object
at a particular instant.
These definitions provide the basis for some widely used (and, historically, just as
widely misused) simulation concepts:
Discrete event simulation models are considered in the class of abstract, dynamic,
descriptive, and numerical models.
One of the primary reasons for using simulation is that the model of the real-
world system is too complicated to study using the stochastic processes models.
Examples of such random inputs include arrivals of orders to a job shop, times
between arrivals to a service facility, times between machine breakdowns, and so on.
The major sources of complexity are the interrelationships between different elements within the system and how they process the input to achieve the desired output, the so-called prevailing logic. Each simulation model input has a correspondent both in the real-world system and in the simulation program, as shown in figure 4.3 [7].
If we let $x_{t_1}$ represent a given value at time $t_1$, then we define a first-order stationary process as one that satisfies the following equation:
$f_X(x_{t_1}) = f_X(x_{t_1 + \tau})$
The physical significance of this equation is that our density function, $f_X(x_{t_1})$, is completely independent of $t_1$ and thus any time shift, $\tau$. [8]
The most important result of this statement, and the identifying characteristic of
any first-order stationary process, is the fact that the mean is a constant, independent
of any time shift. Below we show the result for a random process, X, that is a discrete-
time signal, x[n].
$\bar{X} = m_x[n] = E[x[n]] = \text{constant (independent of } n\text{)}$
$f_X(x_{t_1}, x_{t_2}) = f_X(x_{t_1 + \tau}, x_{t_2 + \tau})$
From this equation we see that the absolute time does not affect our functions,
rather it only really depends on the time difference between the two variables. Looked
at another way, this equation can be described as:
These random processes are often referred to as strict sense stationary (SSS) when
all of the distribution functions of the process are unchanged regardless of the time
shift applied to them.
- Let Z be a single random variable with a known distribution function, and set $Z_0 = Z_1 = \cdots = Z$. Note that in a realisation of this process the first element, $Z_0$, may be random, but after that there is no randomness. The process $\{Z_i, i = 0, 1, 2, \ldots\}$ is stationary if Z has a finite variance.
Output data in simulation fall between these two types of process. Simulation outputs are identically distributed and mildly correlated (depending, for example in a queuing system, on how large the traffic intensity $\rho$ is). An example could be the delay process of the customers in a queuing system.
Unlike in queuing theory where steady state results for some models are easily
obtainable, the steady state simulation is not an easy task. The opposite is true for
obtaining results for the transient period (i.e., the warm-up period).
Gathering steady state simulation output requires statistical assurance that the
simulation model reaches the steady state. The main difficulty is to obtain
independent simulation runs with exclusion of the transient period. The two
techniques commonly used for steady state simulation are the Method of Batch
means, and the Independent Replication.
Neither of these two methods is superior to the other in all cases. Their performance depends on the magnitude of the traffic intensity. The other available technique is the Regenerative Method, which is mostly valued for its nice theoretical properties; however, it is rarely applied in actual simulation for obtaining steady state output numerical results.
Suppose you have a regenerative simulation consisting of m cycles of sizes n1, n2, …, nm, respectively, with cycle sums y1, y2, …, ym. Then:
Estimate = $\sum y_i / \sum n_i$, where the sums are over i = 1, 2, …, m (the number of cycles).
The 100(1-α/2)% confidence interval using the Z-table (or T-table, for m less than, say, 30) is:
The Batch Means method involves only one very long simulation run which is suitably subdivided into an initial transient period and n batches. Each of the batches is then treated as an independent run of the simulation experiment, while no observation is made during the transient period, which is treated as a warm-up interval. Choosing a large batch interval size would effectively lead to independent batches and hence independent runs of the simulation; however, since the number of batches is then small, one cannot invoke the central limit theorem to construct the needed confidence interval. On the other hand, choosing a small batch interval size would effectively lead to significant correlation between successive batches, and therefore one cannot apply the results in constructing an accurate confidence interval.
Suppose you have n equal batches of m observations each. The mean of each batch is:
The 100(1-α/2)% confidence interval using the Z-table (or T-table, for n less than, say, 30) is:
Suppose you have n replications of m observations each. The mean of each replication is:
The 100(1-α/2)% confidence interval using the Z-table (or T-table, for n less than, say, 30) is:
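Although the interval formulas themselves are not reproduced above, a hedged sketch of the independent-replication calculation is shown below: each replication mean is treated as one observation and a t-based confidence interval is formed. The replication means here are invented for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical mean waiting times (minutes) from n independent replications,
# each computed after discarding the warm-up period.
replication_means = np.array([4.1, 3.8, 4.6, 4.3, 3.9, 4.4, 4.0, 4.2])

n = len(replication_means)
grand_mean = replication_means.mean()
std_err = replication_means.std(ddof=1) / np.sqrt(n)

alpha = 0.05
half_width = stats.t.ppf(1 - alpha / 2, df=n - 1) * std_err  # T-table since n < 30

print(f"Mean waiting time: {grand_mean:.2f} +/- {half_width:.2f} min "
      f"({100 * (1 - alpha):.0f}% confidence)")
```

The same code applies to the Batch Means method if the batch means (after the warm-up interval) are used in place of the replication means.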
In simulation, experimental design provides a way of deciding before the runs are
made which particular configurations to simulate so that the desired information can
be obtained with the least amount of simulating.
1. We have the opportunity to control factors, such as customer arrival rates, that are in reality uncontrollable. Thus, we can investigate many more kinds of contingencies than we could in a physical experiment with the system.
If a model has only one factor, the experimental design is conceptually simple: we just run the simulation at various values, or levels, of the factor, perhaps forming a confidence interval for the expected response at each of the factor levels. For
quantitative factors, a graph of the response as a function of the factor level may be
useful. In the case of terminating simulations, we would make some number n of
independent replications at each factor level. At the minimum there would be two
factor levels, thus needing 2n replications.
For example, consider the simulation of an inventory model: we take as input the reorder-point parameter s and the order-size parameter d, and produce as output the average total cost per month, a random variable. We could thus in principle define the:
Gradient estimation: One of the goals of simulation is to find how changes in the
input parameters affect the output performance measures. If the parameters vary
continuously, we are essentially asking a question about the partial derivatives of the
expected response function with respect to the input parameters. The vector of these
partial derivatives is called the gradient of the expected response function, and its dimension is equal to the number of input parameters considered. The gradient is interesting in its own right, since it gives the sensitivity of the simulation’s expected response to small changes in the input parameters. It is also an important ingredient in many mathematical programming methods that we might try to use to find optimal values of the input parameters, since many such methods rely on the partial derivatives to determine a direction in which to search for the optimum.
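As a hedged sketch of one way such a gradient might be estimated, the code below applies central finite differences to a generic simulation response; `simulate` is a stand-in for any model that returns an estimate of the expected response, and the step size is arbitrary.

```python
import numpy as np

def simulate(params: np.ndarray, n_reps: int = 50, seed: int = 0) -> float:
    """Stand-in for a simulation: returns an estimate of the expected response.

    Here it is a noisy quadratic purely for illustration.
    """
    rng = np.random.default_rng(seed)
    noise = rng.normal(scale=0.01, size=n_reps)
    return float(np.mean((params[0] - 3.0) ** 2 + 2.0 * (params[1] - 1.0) ** 2 + noise))

def gradient_estimate(params: np.ndarray, h: float = 0.1) -> np.ndarray:
    """Central finite-difference estimate of the gradient of the expected response."""
    grad = np.zeros_like(params, dtype=float)
    for i in range(len(params)):
        step = np.zeros_like(params, dtype=float)
        step[i] = h
        # Common random numbers (same seed) reduce the variance of the difference.
        grad[i] = (simulate(params + step, seed=1) - simulate(params - step, seed=1)) / (2 * h)
    return grad

print(gradient_estimate(np.array([4.0, 2.0])))  # roughly [2.0, 4.0] for this toy response
```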
d (λ / w) = λ / w² - λw [7]
Model verification is substantiating that the model is transformed from one form
into another, as intended, with sufficient accuracy. This requires an accurate
modelling construct that handles the transition from one state into another.
In this chapter the steps for a successful simulation project were described. Also
techniques for experimental design and simulation output analysis were discussed.
The chapter was concluded by explaining the approaches to model verification,
validation and testing.
References
[1] Nance, R.E. and Overstreet, C.M. (1987). “Diagnostic Assistance Using Digraph
Representations of Discrete Event Simulation Model Specifications”, Transactions
of the Society for Computer Simulation, Vol.4, No.1, pp. 33-57, January.
[5] Zeigler, B.P. (1976). Theory of Modelling and Simulation, John Wiley and Sons,
New York, NY.
[6] Balci, O. (1987). CS 4150: Modelling and Simulation Class Notes, Department of
Computer Science, Virginia Tech, Blacksburg, VA, pp. 10-13, Spring.
[8] www.cnx.org/content/m10684/latest
Chapter 5
The flowchart view accommodates the model's graphics, including the process flowchart, animations and other drawing elements. The spreadsheet view, if active, displays model data for any selected module. It provides an easy way to enter and edit model data and set relevant parameters. Most model data can be entered and edited through the flowchart view, but the spreadsheet view gives access to much more data at the same time, arranged in a way convenient for editing.
To the left of the Arena window is the project bar, which hosts various panels
containing the objects which are used as building blocks in Arena models. Figure 5.2
shows the basic process panel displayed. Above that are buttons for the advanced
process and advanced transfer panels. Arena displays only one panel at a time.
Clicking on the advanced process button will hide the basic process panel and display
the modules in the advanced process panels.
[Figure: the Arena window layout, showing the Project Bar, the Model Window Flowchart View, the Model Window Spreadsheet View and the Status Bar.]
In order to remove a panel, right click anywhere in the panel and select detach.
Similarly, you can add a panel by right clicking in the project bar, selecting attach
then the name of the panel from the displayed dialogue. You may want to attach and
detach a few panels now.
Above the model window are the tools bars. These are mainly shortcuts to the
menu items just as in most windows applications.
Once you become familiar with the Arena modelling environment, the next
question is probably, how do we build a model in this environment? Well, before you
learn how, there are a few things you need to look at such as some of the basic
terminologies that are used in simulation in general and Arena in particular, what the
building blocks are and some description of the most basic building blocks. We will
start with a quick review of some of the basic concepts and terminologies in the next
section.
5.4.1 Entities
In every simulation model, entities are objects that undergo processes and move
along the system. Kelton et al (2010) describe entities as the dynamic objects in the
simulation. Thus they are usually created, move around for a while, and then are
disposed of as they leave the system. They further noted that in as much as all entities
have to be created, it is possible to have entities that are not disposed but keep
circulating in the model or system.
In any case, however, entities represent the "real" things in a simulation. There can be many entity types in a typical system, especially if there are different types of parts that are processed in the modelled system.
5.4.2 Attributes
Attributes are common characteristics of entities but with specific values that can
differ from one entity to another. For example, in a hospital system all patients may
have an attribute called Arrival Time but the exact value of this arrival time attribute
for each patient will depend on the time that patient arrived into the system.
The key thing to note about attributes is that their values are tied to a specific entity or group of entities and would always remain the same until updated at some later point.
5.4.3 Variables
Variables are used to store information or values that describe or reflect some
characteristics of your system, irrespective of the number, state or type of entities
around. The information variables are available to all entities and not specific to any.
There are two types of variables in Arena: Built-in variables and User-defined
variables. Some examples of Arena’s built-in variables are Work-In-Process (WIP),
current simulation time, current number in queue, etc. User-defined variables depend on the system modeller and need to be built into the model.
Entities can access and change the value of variables but they do not take up the
values as they do with an attribute. Note, however, that the value of a variable may be assigned to an attribute at any time. For example, if you are interested in knowing the day of the week on which a product arrives into your system, you may have an attribute called Arrival Day and a variable called DayOfWeek. DayOfWeek will be incremented by 1 after every 24 hours of simulation time and would vary from 1 through 7 for each day of the week. Hence each time a product arrives you can assign its Arrival Day attribute with the following expression:
Arrival Day = DayOfWeek
In this way, if the product arrived when the variable DayOfWeek is 2, then the product's (entity's) Arrival Day value will be 2.
5.4.4 Resources
Resources are facilities or persons in a system that provide services to the system entities. Resources usually have capacities, and entities seize units of the resource
when they are available and must be released when processing is over. It is possible
for an entity to seize various units of different resources at the same time. An
example of this is for an entity patient to require the resources: doctor, bed and a
It must also be noted that a resource can also serve one or more than one dynamic
entity at the same time depending on its capacity. Entities will always wait in a queue
when a required resource is not available.
5.4.5 Queues
Entities normally compete with each other for resources. When the resource
required is not available, the entities need a place to wait until the required unit of the
resource is available for them to seize. This waiting place is called a Queue.
In Arena, queues have names and can also have capacities to represent, for
example, limited floor space for a buffer or storage. There are a number of rules that
determine how a resource serves entities waiting in a queue. Arena by default applies
the First-Come-First-Served (FCFS), or First-In-First-Out (FIFO) rule to all queues.
Other queuing rules are: Last In-First-Out (LIFO), Lowest Attribute Value First
(LVF), Highest Attribute Value First (HVF) or other criteria which might influence
the way entities can be served in the queue.
5.4.6 Transporters
5.4.7 Conveyors
Conveyors are similar to Transporters in that they are also used to transfer entities
in the system. Conveyors however, are devices that move entities from one station to
another in one direction only, such as escalators and horizontal (roller or belt)
unidirectional conveyors (figure 5.4).
The statistical accumulators are types of variables that “watch” (observe) what
goes on during a simulation run. They are “passive” elements in the model, they do
not participate but just observe. Most of them are built into Arena and are used
automatically but they may also be user-defined for special cases. Some examples of
statistical accumulators are:
All of the above need to be initialized to 0 at the start of the simulation. As the
simulation progresses, Arena updates all of them and at the end of the simulation run,
it uses them to calculate the output performance measures.
Time Persistent Statistics are those that result from taking the (time) average,
minimum and maximum of a plot of some attribute or variable during the simulation,
where the x-axis is continuous time. Time persistent statistics are also known as
continuous-time statistics.
Tally statistics, sometimes called discrete-time statistics, are those that result from taking the average, minimum, or maximum of a list of numbers. An example of this is the average and maximum total time in system. These statistics are observed at discrete points in time.
Counter statistics are accumulated sums of a specified statistic. They are usually simple counts of how many times something happened during the simulation. An example of a counter statistic is a count of the number of entities that have entered a process. Counter statistics could also be accumulations of numbers that are not equal to 1, such as accumulating the wait time of each entity at a particular process to obtain the total waiting time at that process. This is a sum of all individual wait times and not an average.
Select a flowchart module by clicking it, and Arena displays the module's parameters in the spreadsheet view.
Figure 5.6: Placing a module in the model window and ways to edit data
Data modules are primarily used to define the characteristics of various system
elements such as queues, resources, variables and entities. They are also used to create
variables and expressions. Some data modules in the basic process panel are Entity,
Queue, Resource, Variable, Schedule and Set. Refer to the reference text for further
discussion on data modules.
To define a data module, click once on the module's icon in the Project bar to
activate its spreadsheet. Double-click in the designated space to add a new row. (Each
row in the spreadsheet represents a separate module.) Then edit the data as you would
in a standard worksheet.
Data and flowchart modules differ in several ways. First, data modules exist only
in spreadsheet form, while flowchart modules exist both as an object in the model
workspace and as a row in the spreadsheet. Second, data modules can be added or
deleted via the spreadsheet, while flowchart modules can only be added or deleted by
placing the object or removing the object from the model workspace.
With only three modules in Arena, you can build and run a very simple
simulation model. These modules are the Create, Process and Dispose modules found
in the Basic Process Panel. We want to introduce you to these basic modules before
we start to do some basic modelling. We present a very detailed treatment of these
modules and different ways in which they may be used in a simulation model. Similar
treatment of all other modules in the Basic Process Panel and others in the Advanced
Process and Advanced Transfer Panels are presented in chapter 7 where we introduce
the module by module approach to learning Arena.
The main purpose of the Create Module is to provide a starting point for entities
in a simulation model. In other words, this module is used to create entities into the
simulation model. Entities can be created in four (4) major ways:
Figure 5.7 shows the module shape and its dialog. The Name field represents a
unique identifier or name that should be given to the module. This name is displayed
on the module shape. It is helpful to use names that are descriptive of the type of
entities that the module creates, for example "create parts", "create products", "create patients", "create customers" etc.
The Entity Type field is the name that would be given to the entities that would be
created from this particular instance of the module. This could be for example Parts,
Customers, Patients, Part 1, Customer 1, Product 1 etc. Arena sets this value to Entity
1 by default.
The group of fields labelled Time Between Arrivals determine the way in which
and the rate at which the entities are created. When Type is Random (Expo) then the
Value field represents the mean value of the exponential distribution and Units
represents the time units in Hours, Minutes, Seconds or Days. As can be seen from the
insert in figure 5.7, Random (Expo) is just one method of creating entities.
When Type is Constant, the dialog view is the same as in figure 5.7. The Value
field may be for example 30 and the Units minutes. This means that Arena should
create 1 entity (i.e. if the Entities per Arrival field is 1) every 30 minutes starting from
time 0.0 (i.e. if First Create Time is 0.0).
When Type is Expression, the dialog view remains as in figure 5.7 except that the
Value field changes to Expression and Arena gives you a drop down list of standard
expressions to choose from or to specify your own expression using the Arena’s
expression builder (figure 5.9). For example, you could build the expression:
DayOfWeek×5
If DayOfWeek = 2, then
DayOfWeek×5 = 2×5 = 10
The Entities per Arrival field refers to the number of entities that will enter the system with each arrival. This may also be a single value or specified as an expression.
The Maximum Arrivals field also refers to the maximum number of entities that
this module will generate. When this value is reached, the creation of new entities by
this module ceases. The value of this field may also be an expression as described
above.
Finally we have the field First Creation which refers to the time for the first
entity to arrive into the system. When Type is Schedule then this field does not apply
because the start of creation will be determined by the schedule.
The Process Module is the main processing method in the simulation model. With
module shape and dialog as shown in figure 5.10, the Process Module can be used for
both standard and “submodel” processes. When the process type is Standard as shown
in figure 5.10, there are four possible actions that can be taken. The first option is a
delay. When modelling a process that does not require the use of a resource, then this
may be an appropriate option.
The next option of Seize Delay is used when the process is such that an entity has
to seize one or more resources, delay them but will not release them until a later time
in the simulation period. When this option is selected Arena displays a different
dialog view as in figure 5.11 with an option to add resources. It can be seen from the
Resource dialog in figure 5.11 that the Type field may be either a Resource or a Set of
resources. When there is only one resource available for the process, then the type
would be resource and the Resource Name field would be the name of the resource for
example Machine, Doctor, Nurse, Cashier etc.
On the other hand when there is a group (or a defined set) of resources available
to the entity, then the type field should be Set. Selecting the type, Set changes the
resource dialog view to figure 5.12. The Set Name field requires a unique name or identifier since there may be more than one resource set in a real model. The
Selection Rule field contains options such as Cyclical, Random, Preferred Order,
Specific Member, Largest Remaining Capacity or Smallest Number Busy. If you have
for example four (4) machines in a work area that do the same thing, you may want to
use them one after the other (cyclically) or just at random whenever a new entity
arrives at the process. However, if you have a senior nurse amongst a group of nurses who is the only one able to decide on a patient's condition, then when a patient (or entity) arrives he or she needs to first see that specific member of the group (or set of nurses). Therefore an appropriate selection rule will be the Specific Member option.
There is in fact not a right or wrong selection here. It only depends upon what
situation you are trying to model.
The Save Attribute field requires an attribute name that will be used to store the index number into the set of the member that is chosen. This attribute can later be
referenced with the Specific Member selection rule. This applies only when Selection
Rule is other than Specific Member. It does not apply when Selection Rule is Specific
Member. If Action is specified as Delay Release, the value specified defines which
member (the index number) of the set to be released. If no attribute is specified, the
entity will release the member of the set that was last seized.
The Quantity field thus refers to the number of resources of a given name or from
a given set that will be seized or released. For sets, this value specifies only the
number of a selected resource that will be seized or released (based on the resource’s
capacity), not the number of members of a set to be seized or released.
When the Action Seize Delay was selected as shown in figure 5.11, Arena added
another field labelled Priority. This requires the priority value of the entity waiting at
this module for the specified resource(s). It is used when one or more entities from
other modules are waiting for the same resource(s). A classic example of using this
option is when a Doctor sees both a minor category of patients and emergency
patients. You may have one process module for the minor category patients’ process
and another module for the emergency patients and make sure they seize the same
resource, the doctor. Now in order to let the emergency patients have the Doctor
whenever they need him or her, you set the priority in the emergency patient process
module to high (1) and that for the minor category patient process to medium (2).
Note that this field does not apply when Action is Delay or Delay Release, or when
the process Type is Submodel. We have so far been looking at the Delay and Seize
Delay Actions. The next we want to consider is the Seize Delay Release Action.
The Seize Delay Release Action means that a resource(s) will be allocated (or
seized) followed by a process delay and then the allocated resource(s) will be
released. The fields required for this Action are the same as for the Seize Delay Action in figure 5.11. This is the most common Action in discrete event systems, for example machines processing parts, cashiers serving customers, or a doctor seeing a patient. Note, however, that for a patient the Action on a bed resource would rather be Seize Delay, since he or she would release the bed only when about to leave the system after discharge, that is, later in the process.
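To make the seize, delay and release pattern and the effect of the Priority field more concrete, here is a small Python sketch. It is an analogy only, not Arena code, and the patient names and service times are invented: waiting entities are ranked by priority so that the emergency patient seizes the doctor before a minor-category patient who arrived earlier.

    import heapq

    # Entities waiting to seize the doctor: (priority, arrival_order, name, service_time_minutes).
    waiting = [
        (2, 1, "minor-category patient", 10.0),   # arrived first, medium priority (2)
        (1, 2, "emergency patient", 25.0),        # arrived later, high priority (1)
    ]
    heapq.heapify(waiting)

    tnow = 0.0
    while waiting:
        priority, order, name, service_time = heapq.heappop(waiting)  # seize: highest priority first
        tnow += service_time                                          # delay: the process time
        print(f"{name} released the doctor at t = {tnow} min")        # release

Running the sketch shows the emergency patient served first even though the minor-category patient was already waiting, which is exactly the behaviour the Priority field is meant to produce.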
Before we finish with the Standard Process Type, let's look at the set of fields at the bottom of the dialog box. As shown in figure 5.10, the “Delay Type” refers to a list of standard probability distributions that you can select from to describe the nature of your process delay in this module. There is also the option to build your own expression using the Arena Expression Builder (see figure 5.9 and corresponding section). Whichever option you select from the list, Arena will provide the fields necessary to specify its parameters. For example, when a triangular distribution is selected as in figure 5.10, Arena provides fields for the minimum value, the modal (most likely) value and the maximum value. For more on the statistical distributions used in Arena, refer to Kelton et al., 2010.
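To get a feel for what a TRIA(minimum, mode, maximum) delay produces, the short sketch below draws samples from a triangular distribution. This is a Python analogy, not Arena syntax, and the parameter values 5, 7 and 10 are just an example.

    import random

    # Hypothetical process delay: TRIA(5, 7, 10) minutes (minimum, most likely, maximum).
    # Note: random.triangular takes its arguments as (low, high, mode).
    samples = [random.triangular(5, 10, 7) for _ in range(10000)]

    print("observed min  :", round(min(samples), 2))
    print("observed mean :", round(sum(samples) / len(samples), 2))  # theoretical mean = (5 + 7 + 10) / 3
    print("observed max  :", round(max(samples), 2))

The observed mean should be close to 7.33 minutes, which is the average of the three parameters.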
The other important field on this dialog is Allocation. This determines how the
processing time and process costs will be allocated to the entity. The process may be
considered to be value added, non-value added, transfer, wait or other and the
associated cost will be added to the appropriate category for the entity and process. By definition, a value added process or time is one which transforms a product or service, causing it to be worth more, for example the process of spraying a car in a manufacturing system. If, on the other hand, the process or time spent does not add any value to the product, then it is a non-value added process or time. The time spent moving the product around the system is allocated as transfer, and the time during which the entity has to wait for another step or event is allocated as wait. If the time does not fit any of the above descriptions then it may be given the allocation other.
Now, going back to the Type field, you will realise that we have only been
dealing with the standard process type till now. We will now look at the submodel
Type. Submodel indicates that the logic will be hierarchically defined in a "submodel"
that can include any number of logic modules. It is important to note that all the logic
that would be defined in the submodel should be understood as taking place within the
process that is represented by this particular instance of the process module.
When the Type Submodel is selected, the dialog view changes completely to what
is shown in figure 5.13. Notice the change in the module shape (a small downward
arrow at the top right corner of the module shape) to indicate that this is a submodel
process.
Arena displays the new dialog view with two pieces of information, one on how
to define the submodel logic and the other on how to close the submodel. To begin
your submodel, you first have to click the OK button to accept the submodel Type
selection and to close the dialog box. Now right click the module shape and select
“Edit Submodel” from the menu list that pops up as in figure 5.13. This will open a submodel view containing an entry point and an exit point, between which the submodel logic is defined.
As shown in figure 5.15, we have defined the logic between the entry and exit points of our submodel. In this case we decided to update a variable and split the entities before they reach the exit point.
You might have realised that one option that is common to both the Standard
Type process dialog and Submodel Type process dialog as shown in figures 5.10 and
5.13 respectively is the Report Statistics check box. This option mainly specifies
whether or not statistics will automatically be collected and stored in the report
database for this process. Checking the box enables statistics collection and vice
versa. Arena by default will check this option each time you add a new process
module.
The Dispose Module (figure 5.16) is intended to be the exit point of the model
where all entities leave the system. The Name field is the unique identifier for the
module. The Record Entity Statistics check box determines whether or not the
incoming entity’s statistics will be recorded. Statistics include value added time, non-
value added time, wait time, transfer time, other time, total time, value added cost,
non-value added cost, wait cost, transfer cost, other cost, and total cost.
Arena uses this module to calculate how many entities have left the system
(Number out) and how many are currently in process (Work-In-Process, WIP).
Entities that have been put into temporary batches must be split before being disposed
else Arena will give an error when the entity is being disposed of. Similarly, all
entities must release any previously seized resources before being disposed. The
effect of unreleased resources is an accumulation of waiting entities at the process
where that resource is needed.
With a good understanding of flowchart and data modules as the building blocks
in Arena, you are now ready to build your first simulation model.
If you think of simulation as a journey towards reality, you can start from
anywhere so long as you are aiming to capture what happens in the real world. The
closer you get to it the better. In chapter 3, we looked at the major concepts in
simulation modelling including the concepts of systems. We realised that the key
things in the definition of a system are its scope and level. These refer to the
boundaries and levels of detail of the system (Stuart, 1998).
To start with, consider the simple single-process system shown in figure 5.17. It
starts with parts entering the hypothetical system, going through a process and then
exiting the system. We need only three flowchart modules in Arena to model the logic
of this system.
To do this we have to first create the arriving parts, send them off to the
processing area where they will take some time as they are being worked upon. After
the process, the parts are then sent out of the system through the exit point.
System
To create entities or parts in Arena, we use the Create flowchart module. Drag
and drop a Create flow chart module into your model window flowchart view and
double click the module to display the property dialogue box. Fill in the required
information as shown in the figure 5.18.
Figure 5.18: The Create Property Dialogue Box for Model 5.1
The module name, Parts Arrive to System, is mainly for identification purposes. It will uniquely identify this instance of the Create module within the model. As with other modules, it is helpful to make the name descriptive of the process it represents, for ease of identification and clarity. The Entity Type is specified as Part to show that the items entering the system are parts. The entity type could equally be Patients in a
Part to show that what come into the system are parts. This could be Patients in a
healthcare system or Customers in banking or other business systems. Note that once
the Create Module is selected, Arena displays the alternative view in the spreadsheet
view for the selected module as shown in figure 5.19.
In a similar way, add a process module to your create module. Arena should
automatically connect these two modules for you if you have your auto-connect
option on. Double-click the module and update its data as shown in figure 5.20. Refer
to section 5.6.2 for a detailed treatment of the process module and the different ways in
which it may be used.
Finally, add a Dispose Module to your model and double-click on it to open its
dialog box as shown in figure 5.21. Ensure that the Record Entity Statistics box is
checked so that Arena collects statistics on the entities before they are disposed of.
Before running the model, we need to set the run conditions. That is to tell Arena
how long to run for, what kinds of statistics to collect, what kind of report to generate and so on. This is done in the Run Setup dialog by selecting Setup from the Run
menu.
There are five (5) tabs in this dialog, namely Reports, Run Control, Run Speed,
Project Parameters and Replication Parameters. At this stage we will only briefly
look at two of the tabs, Replication Parameters and Project Parameters.
Figure 5.23: Run Setup dialog with Replication Parameters tab displayed
The dialog is shown in figure 5.23 with the Replication Parameters tab displayed.
The first item at the top left of the display is “Number of Replications”. This is the number of simulation runs to execute. For example, if your model runs for 100 hours
and the “Number of Replications” is set to 10, then Arena will execute your 100 hour
run over and over again for 10 times. This helps generate sufficient data for
statistically valid analysis. This value must always be an integer greater than or equal
to 1.
The “Date and Time” field is for associating a specific calendar date and time with the simulation start time of zero. If this field is not specified, Arena will
start from midnight of the current date. For example, if the current date and time on
the computer clock is "Feb 10, 2015 08:45:32", then the Start Date and Time will be
automatically set to "Feb 10, 2015 00:00:00".
The “Replication Length” is simply how long a simulation run should last and is
the time used to evaluate the system. This value may be any real value greater than or equal
to 0.0. If no value is specified, the simulation model will run infinitely unless stopped
by some other means. Other methods of stopping a simulation run are by specifying
the maximum batches on a Create type module, specifying a terminating condition (as
described below) or defining a limit on a counter, as specified in a Statistic module or
Counters element.
The “Hours Per Day” field refers to the number of hours the model runs in each
day. This value depends upon the number of hours the real system operates in a day.
The default value for this field is 24 hours per day but can be any expression greater
than 0. Note that the number of hours per day specified will affect the number of slots
shown on the graphical schedule editor for any resource, arrival or other schedules.
This field is useful for excluding part of the day from the statistics when your entire facility does not operate for the full 24 hours.
Arena needs to use a uniform unit for all time values collected in the simulation.
This is done with the setting of the “Base Time Units” field. This is the time units for
reporting, status bar, simulation time (TNOW) and animated plots. All time delays,
replication length, and warm-up period times will be converted to this base time unit.
“Time Units” defines the units of time used for the warm-up period and
replication length. These are used to convert the warm-up period and simulation run
length to the base time unit specified.
Now set your “Replication Length” field to 100 as shown in figure 5.24 and leave
the rest at their default settings.
The other tab we will look at is the Project Parameters tab. This tab provides
general information about the simulation project such as “Project Title”, “Analyst
Name” and “Project description” as shown in figure 6.24. Additionally, it also enables
you to choose which types of statistics may be collected. As shown, the entities,
resources, and queues boxes have been checked, hence Arena will only collect
statistics on these objects during the simulation.
Running the models in Arena is rather easy. This is done by clicking the Go
button on the Standard toolbar as shown in figure 5.25. Alternatively, you may run the
model by clicking Go in the Run menu or by pressing the F5 function key on the keyboard. The option Check Model in the Run menu, the F4 key or the check ( ) sign on
the Run Interaction toolbar may be used to check the model for errors before running.
However, if you begin to run the model without checking for errors, Arena will
automatically do the checking before running the model. If there is an error in your model, Arena will stop and display an error message to help you locate and fix it.
Once the model is without errors, it will begin to run and you can watch the
entities moving from module to module as shown in figure 5.26. Notice that each of
the modules has an animated counter. That to the right end of the Create module
keeps track of the number of entities leaving that module. The counter below the
process module keeps track of the number of entities in process at this module and the
counter to the right end of the dispose module keeps track of how many entities have
left the system through this module. Arena uses all of these variables to calculate its
statistics.
You can choose to stop the run at any time using the Stop button shown in figure
5.25. If you do not stop the run, Arena will continue forever unless you have a
terminating condition specified in one way or another. In our case we specified only
one replication with length of 100 hours so the simulation will surely stop after 100
hours and by default, Arena will display the dialog shown in figure 5.27.
The dialog in figure 5.27 gives you the option to view the model report (this could
be changed in the setup to display the report without prompting). Clicking yes will
display either the view in figure 5.28 or 5.29 depending on which report type you
have selected to display in run setup dialog. Let us now have a look at some parts of
Arena’s reports.
If you’ve been building the model along with us then just click on the “yes”
button on the dialog to open the report. If the default report type has not been changed
then Arena will display the “Category Overview” report as shown in figure 5.29. To
change the report type, go back to the model, click on the “Run” menu and select
Setup. Click on the Reports tab and then pull down the “Default Report” field. The
second in the list (Category Overview) and the last (SIMAN Summary Report (.out
file)) are the ones we are considering here.
The summary report is normally divided into different categories (e.g. tallies,
discrete-change variables, counters and outputs), each one providing a specific type of
statistic.
“Tally Variables” display the tallies recorded in your model. Tally statistics
include entity and process costs and times.
The “Outputs” section displays statistics for the final value of a given variable in the model. Included in this category are resource costs, total process costs and times, and work-in-process information.
The “Counters” section displays statistics for any counters identified in your
model. The number of entities into and out of the system is included in this category.
Note that there may be more or fewer categories of statistics depending on the types
defined in your model.
The “Half Width” column shown in the report is the 95% Confidence Interval
range around the average. This is included to help you determine the reliability of the
results from your replication. This column may contain a value (a real number) or may be reported as “Insufficient” or “Correlated”.
“Insufficient” means that there is insufficient data to accurately calculate the half
width of the variable. This is because the formula used to calculate half width requires
the samples to be normally distributed. That assumption may be violated if a small number (fewer than 320) of samples is recorded in the category. Running the
simulation for a longer period of time should correct this.
“Correlated” means that the data collected for the variable are not
independently distributed. The formula used to calculate half width also requires the
samples to be independently distributed. Data that is correlated (the value of one
observation strongly influences the value of the next observation) results in an invalid
confidence interval calculation. Running the simulation for a longer period of time
should correct this as well.
If a value is returned in the Half Width category, this value may be interpreted by
saying "in 95% of repeated trials, the sample mean would be reported as within the
interval sample mean ± half width". The half width can be reduced by running the
simulation for a longer period of time.
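The way such a half width can be computed from replication results may be sketched as follows. This is a simplified Python illustration under the usual normality assumption, not Arena's internal code, and the replication values are invented: the half width is a t-multiplier times the standard error of the replication means, so more replications (or longer runs) shrink it.

    import math
    import statistics

    # Hypothetical average cycle times (minutes) reported by 10 replications.
    replication_means = [41.2, 39.8, 42.5, 40.1, 43.0, 38.7, 41.9, 40.6, 42.2, 39.5]

    n = len(replication_means)
    mean = statistics.mean(replication_means)
    std_err = statistics.stdev(replication_means) / math.sqrt(n)

    t_value = 2.262                 # t multiplier for 95% confidence with n - 1 = 9 degrees of freedom
    half_width = t_value * std_err

    print(f"mean = {mean:.2f}, 95% half width = {half_width:.2f}")
    # The result would be reported as: mean plus or minus half width.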
The “Category Overview” report has more detailed information than the
“Summary Report”. As shown in figure 5.29, this displays a summary of the key
performance indicators on the first page. It is however organized into the following
sections: Key Performance Indicators, Activity Area, Conveyor, Entity, Process,
Queue, Resource, Transporters, Station, Tank, and user specified. The statistics
reported are a summary of values across all replications.
To view any item in the report, you only need to click on the item in the Reports
Panel or select the item in the reports tree structure. For example to view statistics on
the entities in the model we clicked on the “Entity” item in the tree structure and it
displayed statistics shown in figure 5.30 below.
The explanations for the various columns of the report are the same as presented
under the summary reports. The Half Width column, for example, may either contain a value (a real number) or be reported as “Insufficient” or “Correlated”, as explained above.
In the next chapter, we will present a detailed description of all the modules found in Arena’s “Basic Process Panel”. This will help you understand the various uses of the modules and solve future modelling problems in Arena
more easily.
In chapter 5, we introduced you to the Arena simulation software and took you
through the fundamentals of modelling using this software.
For quite a while now I have trained students in the use of discrete event simulation using Arena. My observation is that Arena is indeed a powerful and
flexible tool, but students usually find it difficult to grasp the fullness of its power and
to be able to use the tool for solving problems. This is the motivation for writing this
chapter on the module by module approach to learning simulation with Arena.
It is anticipated that apart from teaching the student all possible uses of all
modules in the Basic Process panel, this will also serve as a quick reference for
students in solving problems that require the use of some of these modules.
Note that the Create, Dispose and Process Modules have been discussed in chapter 5 and are therefore not included in this chapter even though they are part of
the Basic Process panel.
The Basic Process panel contains the most common modelling constructs that are
very accessible, easy to use and with reasonable flexibility. This panel contains eight
(8) flowchart modules and six (6) data modules. These modules are shown in table 7.1
below.
Note that the Name field in this module dialog serves the same purpose as we
have already described in previous modules. Arena provides four (4) options in the
Type field as shown. Thus the decision types possible with this module are 2-way by
Chance, 2-way by Condition, N-way by Chance and N-way by Condition. Before we
look at these options in detail, it should be noted that the Decide Module has basically
two exit points. The first which is to the right end of the module shape is the “True”
exit whilst the second which is at the bottom end of the module shape is the “False” or
“Else” exit. These are the only exits available when Type is either 2-way by Chance or
2-way by Condition. When Type is N-way by Chance or N-way by Condition, there will always be a number of “True” exit points equal to the number of chance values or conditions specified. All of these will be lined up vertically at the right end of the module shape as shown in figure 6.2, but there will only be one “False” or “Else” exit, which will always be at the bottom end of the module shape. We will now look at each of these options in turn.
The 2-way by Chance Type is the basic and default option for Arena. This has the
dialog view shown in figure 6.1. An example of this is to say that 50% of all entities
that enter this module require inspection whilst the remaining 50% don’t. You specify
this by assigning the value 50 to the Percent True field as shown and this will tell
Arena to send 50% of all entities that come into the module through the “True” exit
and everything else goes out through the “False” or “Else” exit. Note that the Percent
True value can be anything from 0 to 100.
The Type N-way by Chance is similar to the above except that it provides a means of specifying more than one chance or probability. When this is selected, the
dialog view changes to that shown in figure 6.2. Clicking on the “Add” button on the
Decide dialog displays the Conditions dialog in which the chance or probability value
can be specified. Notice that there are three (3) exit points to the right of the module
shape, equal to the number of percentages or conditions specified. Thus, counting from the top, 10% of entities will leave through the first exit, 50% through the second exit and 15% through the third exit, while the remaining 25% (100 minus the 75 specified, i.e. 10 + 50 + 15) will leave through the “False” or “Else” exit at the bottom of the module shape.
Figure 6.2: Decide Module shape and dialog with N-way by Chance
Selecting Variable in the If field, changes the dialog’s view since additional
parameters have to be specified. This view is shown in figure 6.4. This is exactly the
same as when you select Attribute in the same field. Arena makes all variables or
attributes defined in your model available in the Named field list for you to select
from. There is also a means of specifying an evaluator (i.e. >, <, =, etc.). The Value field requires an expression that will be compared with the attribute or variable.
When the condition is based on a Variable Array (1D), a Row field is added as in
figure 7.5. 1D means a one dimensional array which may be specified as Variable1
(10), where “variable1” is the name of the variable and “10” is the array size. An
example of this is having 10 different components in your system and wanting to keep
track of the number of each component that has entered a process. One approach will
be to define 10 different variables for each component type (i.e. from 1 through 10).
An easier approach will be to use a one dimensional array (see the section on the
Variable data module to learn how to define variables and arrays in Arena). We may
define our variable array as “NumberOfComponents (10)”. With this, Arena will create 10 separate storage places (as 10 separate rows), one for each of the 10 component types in your system. To use this in your model, you may use the following assign statement:
NumberOfComponents(ComponentType) = NumberOfComponents(ComponentType) + 1
If the arriving entity’s ComponentType attribute is, say, 5, Arena will check row number 5 of the “NumberOfComponents” array, add 1 to that value and use the result as the new value for that same row in the array. The
same steps would be carried out for any value of “ComponentType” hence with one
statement you can update the number of components for each component type
irrespective of which component arrives at which time.
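The logic of that assign statement can be mimicked outside Arena as follows. This is a Python sketch of the idea only, not Arena code; since Arena arrays are 1-indexed, the sketch reserves index 0.

    # One storage slot per component type (types 1..10); index 0 is unused
    # to mirror Arena's 1-based array indexing.
    number_of_components = [0] * 11

    def record_arrival(component_type):
        # Equivalent in spirit to:
        # NumberOfComponents(ComponentType) = NumberOfComponents(ComponentType) + 1
        number_of_components[component_type] += 1

    record_arrival(5)
    record_arrival(5)
    record_arrival(2)
    print(number_of_components[5], number_of_components[2])  # prints: 2 1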
Note however that assignments as above are done in the Assign Module. We will
talk about this in section 6.2.7.
Let’s now look at another example of using one dimensional variable array in a
Decide Module as in figure 6.5. We have selected the one dimensional array named
“NumberOfComponents” and row number equal to “ComponentType”. Our desired
evaluator is “greater than or equal to” (>=) and the test value is 300. This will instruct
Arena to check if the value of the one dimensional array named
NumberOfComponents at row number “ComponentType” is greater than or equal to
300. If this condition is true, then Arena will send the entity out through the “True”
exit otherwise it is sent through the “False” or “Else” exit. Remember that
“ComponentType” can be any integer from 1 through 10. What will happen in this module during the simulation run is that an entity will be allowed to proceed in one direction (through the “False” or “Else” exit) so long as the count for its component type is less than 300. As soon as the count reaches 300, Arena redirects all following entities of that type through the other exit (the “True” exit, possibly to a different process).
The Variable Array (2D) option is very similar to Variable Array (1D). The only
difference is that it has an extra dimension to its array definition as shown in figure
7.6. Whilst Variable Array (1D) has only rows, Variable Array (2D) has rows and
columns. These are defined as “Variable1 (Row, Column)”, so that “Variable1 (1, 1)” means row 1, column 1 of the Variable1 two-dimensional array.
To illustrate the use of this, consider figure 6.7. Assume a system that receives
orders from customers with each order involving 10 different component types. In
order to keep track of how many components of each type are in each order, you may want to use the 2D variable array as shown. If there are, for example, 20 orders, then the variable array may be defined as “NumberOfComponents (20, 10)”, with one row per order and one column per component type. The values of the array may then be accessed using dynamic arguments such as NumberOfComponents(OrderNumber, ComponentType).
Therefore, the condition specified in figure 6.7 requires checking if the value of
the two dimensional array named NumberOfComponents at row number
“OrderNumber” and column number “ComponentType” is greater than or equal to
300.
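Continuing the analogy, a two-dimensional array indexed by order number and component type behaves like a table of counters, as the short Python sketch below suggests. It is illustrative only; the sizes of 20 orders by 10 component types come from the example above.

    # 20 orders x 10 component types, with an unused row 0 and column 0
    # to keep the 1-based indexing used by Arena.
    number_of_components = [[0] * 11 for _ in range(21)]

    def record_component(order_number, component_type):
        # Analogous to incrementing NumberOfComponents(OrderNumber, ComponentType).
        number_of_components[order_number][component_type] += 1

    record_component(3, 7)
    record_component(3, 7)
    print(number_of_components[3][7])  # prints: 2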
Figure 6.7: Decide Module dialog with 2-way by Condition – Variable Array
(2D) example
Figure 6.8: Decide Module dialog with 2-way by Condition – Expression view
The N-way by Condition option can be treated in the same way as we did the N-
way by Chance. As shown in figure 6.9, the only difference is that we are now using
multiple conditions instead of the multiple probabilities we used in N-way by Chance.
Notice here again that the number of exit points as shown on the module shape
corresponds to the number of conditions specified in the dialog. The If field could also
be any of the items shown in the list in figure 6.3. You can specify as many conditions
as necessary by clicking on the “Add” button to display the “Conditions” dialog.
The Batch Module is used for grouping or batching entities within the simulation
model. Entities can be permanently or temporarily grouped in the simulation.
Temporary batches must later be split using the Separate module (Section 6.2.3).
Figure 6.10 shows the Batch Module shape and dialog box.
The Save Criterion field specifies how the representative entity’s user-defined attribute values are assigned from those of the entities that form the batch.
This module can be used in only two ways. That is to either duplicate an
incoming entity into multiple entities or to split a previously batched entity. The
module shape and dialogue are shown in figure 6.12. When “Type” is “Duplicate Original”, Arena allows you to make any number of duplicate copies of the incoming entity, or none at all: no duplicate copies will be created when “# of Duplicates” is less than or equal to zero. Note otherwise that the total number of entities exiting the module will always be “# of Duplicates” plus one. The “Percent Cost to Duplicate (0-100)” field
is best explained with the following example from the Arena help file.
When “Type” is “Split Existing Batch”, the temporary representative entity that
was formed is disposed and the original entities that formed the group are recovered.
Note that to use this option, you must have previously created a temporary batch using the Batch module (section 6.1.2), else Arena will find nothing to split. The
entities after splitting proceed sequentially from the module in the same order in
which they originally were added to the batch. With the split batch option, Arena
provides the “Member Attributes” field as shown in figure 6.12 (b). This field
determines what attribute values of the representative entity should be passed on to
the original entities after the batch is split. The three options, as shown, are to retain the original entity values (those held before they were batched), to take all the values of the representative entity, or to take only selected values. For the third option, Arena provides extra dialogs as shown in figure 6.12 (c) to enable you to select the specific attributes.
The purpose of this module is to assign new values to variables, entity attributes,
entity types, entity pictures, or other system variables. Multiple assignments can be
made with a single Assign module. This module can be used anywhere in the model
where it is required to define or reassign a value for variables or attributes.
Figure 6.13 shows the module shape and dialog. Clicking on the “Add” button
displays the “Assignments” dialog in which the values of the attributes or variables
may be assigned. Notice that the “Type” field of the assignment dialog provides a list
of all the possible assignments that can be made with the module. You may want to
experiment with all of these types to find out how to use them. As in other modules,
the “New Value” field can be a constant value or an expression.
Note that if there are multiple assignments, Arena performs them in the order in which they appear in the list. This is important when assigning a value
to a variable and using that variable in an expression within the same Assign module.
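The order-of-assignment point can be seen in a short analogy (plain Python, not Arena; the variable names are made up): using a variable in a later assignment after it has been updated in the same module gives a different result than using its old value.

    # Assignments are performed top to bottom, as in an Assign module.
    wip = 4
    wip = wip + 1          # first assignment: wip is now 5
    batch_size = wip * 2   # second assignment sees the updated value -> 10, not 8
    print(wip, batch_size)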
The Record module is used to collect statistics in the simulation model. The
module shape and dialog are shown in figure 6.14. The main part of the dialog is the
“Type” field. As shown, there are five types of records; count, entity statistics, time
interval, time between and expression. We will briefly explain these record types.
Count: use this type when you only need to count the number of entities going
through the record module. Arena will increase this statistic by the value specified in the “Value” field each time an entity enters the module. A negative value will
decrease the count. This can therefore be used to count the number of entities leaving
a process or going into a process.
Entity Statistics: these statistics include VA Cost, NVA Cost, Wait Cost, Transfer
Cost, Other Cost, Total Cost, VA Time, NVA Time, Wait Time, Transfer Time, Other
Time and Total Time. Using this record type will record these statistics each time an entity enters the module.
Time Interval: this type helps record the time spent by an entity in a process or in
the system. When using Time Interval to collect interval statistics, an attribute is
required to hold the value of the start time of the required interval. For example to
determine the time spent by an entity in a process, set the attribute to the current
simulation time, TNOW (using the assign module), before the entity enters the
process and then record the time interval after the process using the attribute’s value as the start time of the interval. What Arena does is subtract the attribute’s value from the current simulation time at which the entity enters the record module (a short sketch of this arithmetic is given after the record types below).
Time Between: this option is used to track and record the time between entities
entering the record module. An example of this would be to track the rate at which
parts are entering into a process by putting the record module with type Time Between
just before the process.
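For those who like to see the arithmetic, the Time Interval record described above boils down to "current simulation time minus saved start time". The following Python sketch mimics that calculation; it is illustrative only and the attribute and function names are invented.

    def record_time_interval(entity, tnow):
        # Mimic a Record module of type Time Interval.
        # entity['arrival_time'] plays the role of the attribute holding the
        # start of the interval (set earlier, e.g. by an Assign module).
        return tnow - entity['arrival_time']

    entity = {'arrival_time': 12.5}             # attribute set when the entity was created
    print(record_time_interval(entity, 47.0))   # 34.5 time units spent so far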
The Entity Module is a data module. It is used to define the various types of entity
in the model and their initial picture and cost and time values. The module and its
parameters are shown in figure 6.15. When the Entity Module is selected by clicking
on the icon shown, Arena typically displays all Entities that have been defined in the
model and the initial values of every parameter. For example, there are four Entities (Entity 1, Entity 2, Entity 3 and Entity 4) defined in the model from which figure 6.15 was taken. Arena has a default list of Entity pictures that is displayed each time you click on any row in the “Initial Picture” column. Initial costing information and
holding costs are also defined for the entities. Each time you create an entity using the
Create Module, Arena automatically updates this module by adding the last entity
created to the list but you can also directly create a new entity here by double clicking
the space just below the last row.
This is also a data module. It is used to define all Queues in the model and to
change the ranking rule for members of a specified queue. The default ranking rule for
all queues is First-In-First-Out (FIFO) unless otherwise specified in this module.
There is an additional field that allows the queue to be defined as shared (not available
in Arena Basic Edition). A shared queue is one that may be used in multiple places
within the simulation model and can only be used for seizing resources.
You may add new Queues to this module by double-clicking below the last row.
By default Arena gives the names Queue 1, Queue 2, Queue 3 and so on to the
queues. However each time you add a process to your model that requires a resource
Arena will automatically add its queue to the Queue data module with the name
“ProcessName.Queue” (where “ProcessName” is the name of the specific process
module). Similarly, if you define a queue in any flowchart module for example in a
Seize Module, Arena will also add this to the list of queues in your Queue data
module.
An important part of the Queue spreadsheet view shown in figure 6.16 is the
“Type” column. As shown, the type of queue may be FIFO, Last-In-First-Out (LIFO),
Lowest Attribute Value (LAV) or Highest Attribute Value (HAV).
Figure 6.17: The Queue Data Module Spreadsheet view with Type LAV
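To illustrate the Lowest Attribute Value rule, the sketch below (Python, purely illustrative; the entity and attribute names are invented) orders waiting entities by a chosen attribute so that the entity with the smallest value is taken from the queue first.

    # Waiting entities with a 'due_date' attribute used for ranking.
    queue = [
        {'name': 'part 1', 'due_date': 50},
        {'name': 'part 2', 'due_date': 20},
        {'name': 'part 3', 'due_date': 35},
    ]

    # Lowest Attribute Value (LAV): the smallest due_date is served first;
    # sorting with reverse=True would give Highest Attribute Value (HAV) instead.
    queue.sort(key=lambda e: e['due_date'])
    next_entity = queue.pop(0)
    print(next_entity['name'])  # prints: part 2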
This data module defines all the resources in the simulation model, and their
costing information and availability. All resources may either have a fixed capacity
that does not vary over the simulation run or may be based on a schedule defined in
the schedule module. All the resource states and failure patterns may also be defined
in this module. The Resource Module icon and spreadsheet view are shown in figure
6.18.
By default, Arena sets the “Type” column to “Fixed Capacity”. This means that
the resource will remain at the capacity specified in the “Capacity” column
throughout the simulation. There is however a more flexible option as shown in the
“Type” drop down menu in figure 6.18. If “Type” is set to “Based on Schedule”,
Arena adds two new columns for the “Schedule Name” and “Schedule Rule” as
shown in figure 6.19. The former is the name of a specific schedule which Arena will
apply to the specified resource and the latter is an instruction telling Arena what to do
when the resource capacity change (decrease) is about to occur whilst the resource is
busy. The three applicable rules are Wait, Ignore and Preempt, as shown.
The Wait option will wait until the on-going process is completed and the entities
release their units of the resource before starting the actual capacity decrease. Thus if
for example a member of staff is supposed to go on break for 1 hour from 12 to 1pm but a customer arrives at 11:58am, this rule requires that the staff member waits and attends to the customer before starting the break.
With Ignore, the resource starts the time duration of the schedule change or
failure immediately, but allows the busy resource to finish processing the current
entity before effecting the capacity change.
This data module, shown in figure 6.20, is used to define a variable’s dimensions
and initial values. Values of Variables can be referenced in other modules (e.g. the
Decide Module, section 6.2.1), can be reassigned with the Assign Module, and can be
used in any expression.
The three methods for manually editing the Initial Values of a Variable Module
are the standard spreadsheet interface, the module dialog and the two-dimensional spreadsheet interface.
To use the standard spreadsheet interface, first click on the module icon, then in
the module spreadsheet, right-click on the Initial Values cell and select the Edit via spreadsheet… menu item.
To use the two-dimensional (2D) spreadsheet interface, just click on the Initial
Values cell in the module spreadsheet to display the spreadsheet view. Note that to
see a two-dimensional spreadsheet view, you need to define the number of rows and
columns of the variable as shown for variable 4 in figure 6.20 (a).
To use the module dialog, select the module icon as before, then in the module
spreadsheet, right-click on any cell and select the Edit via dialog… menu item. This
displays the dialog view shown in figure 6.20 (b). Click on the “Add” button to add a
new value for the variable. The values for two-dimensional arrays should be entered
one column at a time. Array elements not explicitly assigned are assumed to have the
last entered value.
Figure 6.20: Variable Data Module Spreadsheet view with snapshots of array
views
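If it helps to visualise the "one column at a time" rule mentioned above, the sketch below (Python, purely illustrative, with invented values) shows how a flat list of initial values typed column by column maps onto a rows-by-columns array.

    rows, cols = 2, 3
    # Values as they would be typed into the dialog: column 1 first, then column 2, ...
    typed_values = [10, 20,   # column 1 (rows 1 and 2)
                    30, 40,   # column 2
                    50, 60]   # column 3

    # Rebuild the 2D array in row/column form.
    array = [[typed_values[c * rows + r] for c in range(cols)] for r in range(rows)]
    print(array)  # [[10, 30, 50], [20, 40, 60]]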
The Schedule data module is normally used in conjunction with the Resource
Module to define the availability of resources or with the Create Module to define an
arrival schedule. A schedule may also be used and referenced to factor time delays
based on the simulation time. This module is only used for duration formatted
schedules. Calendar formatted schedules are defined by selecting the Calendar
Schedules, Time Patterns command from the Edit menu. Figure 6.21 shows the Schedule data module icon and spreadsheet view.
The final module we will look at in the basic process panel is the Set module.
This data module’s icon and spreadsheet view are shown in figure 6.22 below. It is
mainly used to define various types of sets, including resource, counter, tally, entity
type and entity picture. Resource sets can also be used in the Process (and Seize,
Release, Enter and Leave of the Advanced Process and Advanced Transfer panels)
modules. The types of sets that may be defined in this module are shown in the pop-
up menu shown in figure 6.22. Counter and Tally sets can be used in the Record
module. Other types of sets, for example Queue sets, can also be defined, but not in this module; to do so, use the Advanced Set module in the Advanced Process panel.
In this chapter, we have been focusing on explaining in detail the main functions
of all the flowchart and data modules in the basic process panel.
If you have understood the material in this chapter, you should now be reasonably familiar with the Decide, Batch, Separate, Assign and Record flowchart modules, as well as the Entity, Queue, Resource, Variable, Schedule and Set data modules.
There are several other Modules in the Advanced Process and Advanced Transfer
panels. In the next chapter, we will take you through the process of modelling a
reverse logistics system. This will further provide more examples of the use of some
of the modules treated in this chapter and also introduce you to some new modules
and concepts in Arena.
We have so far laid enough foundation for tackling a real simulation problem.
Since the objective of this course is to help you understand the use of simulation for modelling and analysing systems, we will now go through an example together, step by step.
It is important to realise that Arena’s modelling concepts are always the same
irrespective of the kind of system you are modelling. The only difference is that you
need to understand how to interpret the elements of your particular system in order to
know the modelling features you can use in Arena5.
For example an entity in Arena is a generic concept. You need to understand that
if you are modelling a banking system then your entities may be customers, data or
financial transactions (or physical money). If you are modelling a manufacturing
system, your entities may be parts, or if you are modelling a healthcare system, your
entities may be patients. And since we are going to be modelling the return of
products in our Reverse Logistics system, our entities will be returned products.
In section 7.2 I will present the problem definition and define the limits of our
model. We will then continue to develop a modelling approach to our problem,
defining what modules we may need and how much detail we intend to capture in our
model. This will then lead us to building, running and viewing the results of our basic model.
In section 7.2 we will enhance the model by remodelling the resources more realistically, taking into account failures, schedules and resource states.
Section 7.3 adds further enhancements to the model for display purpose,
introducing entity pictures, resource pictures, variables and plots. We will finally end
5
I am not really biased towards Arena. If you would like to use any other Discrete Event Simulation
software package feel free to do so. The principles are the same.
Dekker et al (2004) [1] observed that business activities of IBM, one of the major
players in the electronic industry, involve several types of “reverse” product flows.
They identified the following elements of the reverse logistic flow of IBM’s business
market;
2. Strategy
ii. The goal was to manage the recyclability and reusability of returned
items to maximise the total value recovered.
3. Process
i. Remarketable equipment may be refurbished and put back into the market.
iv. Equipment that does not yield sufficient value as a whole is sent to a
dismantling centre in order to recover valuable components such as hard discs, cards, boards, etc., which can be reused.
vi. In 2000, IBM reported the processing of 51,000t of used equipment, of which
only a residual of 3.2% was landfilled.
Problem formulation
The units that are returned to this facility are named Products A through D. All
the products are collected and sorted at a separate facility outside the scope of this
model. Product A arrives at a rate described by an exponential distribution with a
mean of 6 (all times are in minutes). These Products are transferred upon arrival to the
Product A prep area where they are prepared for inspection. The prep process follows
a TRIA (5, 7, 10) distribution. The product is then transferred to the inspection area.
Figure 7.1: Flow diagram of the reverse logistics system, showing the arrivals of Products A to D, their Prep processes (e.g. TRIA(5, 7, 10) for Product A and TRIA(3, 5, 9) for Product C), Inspection, Refurbishment (TRIA(3, 4, 5), TRIA(6, 7, 10), WEIB(8.9, 10.5) and TRIA(2, 8, 15) for Products A to D) with return to the market, and Dismantling with recovered components sent for reuse or recycling.
At the inspection area functional checks are performed on the products to check if
they could be reused. The total process time for this operation depends on the product
type: TRIA (1,2,3) for Product A, TRIA (2,3,4) for Product B, WEIB (2.5, 5.3) for
Product C, TRIA (6,7,8) for Product D. After inspection 30% of Products are sent for
refurbishment, 20% are sent for remanufacturing which is outside the scope of this
model, 35% are sent for dismantling and subsequent recycling and the remaining 15%
are rejected and disposed for recycling.
At the refurbishment area, products that are still salvageable are repaired using
additional supply of parts and then returned into the market. The processing times in
this area are TRIA (3,4,5), TRIA (6,7,10), WEIB (8.9,10.5) and TRIA (2,8,15) for
Products A, B, C, and D respectively. Upon arriving in this area, the products have to
wait for an appropriate part to be available. The parts are supplied in batches of 4 and
come at the rate of twice a day.
It must be repeated at this stage that using Arena to build a model is only one
component of a simulation project. For a refresher of the discussion on the entire
simulation project see chapters 4 to 6 of this book. The point to note here is that in the
real world, there will not be a readily defined problem with data available or supplied as in our example. Often this information does not exist in the company and must be painstakingly collected. In fact, input data collection, validation and
verification can take up to 70% of the total simulation project time and effort.
The focus of this section is on the approach to developing and modelling a typical
system. This is the stage where you, as a modeller, having understood the problem and clearly stated your goals, have to define your system and collect and analyse the data you require to specify your input parameters. At this stage you need to take a global view
of your system and develop the best way to represent it. To do this you may need to
segment your system into stations or sub models and decide on which Arena modules
you might require.
Assuming all previous steps have been carried out, we will now break our system
down into identifiable sections convenient for efficient modelling. The main building blocks of our model will therefore be the following modules (the number of instances required is shown in brackets):
• Create (4)
• Assign (4)
• Process (7)
• Decide (2)
• Record (3)
• Dispose (3)
Each Create module will represent the arrival of each Product type. Each Assign
module will also be used to assign Attributes to each product type. Four Process
modules will represent the Prep process for each product type, one for the Inspection
process, one for Refurbishment process and the last for the dismantling process. One
Decide module will be used after the inspection process to split the products for
refurbishment, dismantling and recycling. The other Decide module will be used to
separate recovered components from those to be recycled after dismantling. Before each stream of products is disposed of, we will use a Record module to collect statistics on the time they spent in the system. Finally the three Dispose modules will be used to
dispose products to market, recovered components and recycling.
Remember that arrival rates and Prep times are unique for each product type as
shown in figure 7.1. We will use two attributes, called Inspection Time and Refurbishment Time, to assign the different times spent at the inspection and
refurbishment processes for the various products types. The time for the dismantling
process is constant for all products. We will call our resources Prep A through D,
Inspector, RefTechnician and DisTechnician respectively for the Prep processes,
inspection, refurbishment and dismantling. We are now ready to build our model.
Start Arena and open a new model window (If you are unable to do this, refer to
section 3.1 of Kelton et al (2010)). Place the required flowchart modules as mentioned
above in the flowchart view of the model window.
Your model window should now look somewhat like figure 7.2. It was explained
in chapter 5 how to place modules in the flowchart view by dragging and dropping. As you drag and drop your modules, Arena should connect them automatically if you have the auto-connect option on. To turn this option on or off, open the Object menu and click on Auto-Connect.
Let us start by updating the Create modules, so double click the Create 1 module
to display its dialog. Set its name to Create Product A and Entity type to Product A.
In the time between arrivals area, set the type to Random (Expo) with a mean value of
8 and select minutes for the Units. Let us assume for now that the products arrive in
singles so we set the Entities per Arrival to 1. Leave the Max Arrivals to the default
setting of Infinite and zero for the First Creation. After all that, your create dialog
should look like figure 7.3.
Similarly, the Create Modules for Products B, C and D are shown in figure 8.4
below.
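To see what "Random (Expo) with a mean of 8 minutes" actually produces, the short sketch below simply generates a handful of arrival instants. It is a Python analogy, not Arena code, and the number of arrivals generated is arbitrary.

    import random

    MEAN_INTERARRIVAL = 8.0  # minutes, as entered in the Create module

    tnow = 0.0
    arrival_times = []
    for _ in range(5):
        tnow += random.expovariate(1.0 / MEAN_INTERARRIVAL)  # exponential inter-arrival time
        arrival_times.append(round(tnow, 2))

    print(arrival_times)  # e.g. [3.1, 11.8, 12.6, 25.0, 31.4] -- irregular gaps averaging 8 minutes

Notice that the gaps between successive arrivals are highly irregular, which is exactly the behaviour the exponential distribution is meant to capture.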
Note that the main purpose of the Create Module is to provide a starting
point for entities in a simulation model. For further details, refer to chapter 6.
The above instances therefore are the starting points for all the products that
come into our system. In the Create Modules, we specified the Entity Types to
be Products A through D. This is not the only information we need about each
product. We may also want to know what time the products arrived in our
system, how much time that product will spend at the inspection area, and so on. The pieces of information that are specific to each product are known as Attributes and are assigned to the entities by the Assign Module, which we will discuss next.
We want to know the arrival time for each product. We also would like to
know how much time that product spends at the inspection and refurbishment
processes. For these, we define the following attributes, Arrival Time,
Inspection Time and Refurbishment Time. Let’s now open each of the Assign
Modules and enter the information required. Double click the Assign 1 Module
and enter information as shown in figure 7.4.
New assignments are added by clicking on the “Add” button which displays the
assignments dialog as shown above. This module is used for assigning new values to
variables, entity attributes, entity types, entity pictures, or other system variables.
Multiple assignments can be made with a single Assign module. Following the above
steps, assign the corresponding attributes to Products B, C and D as shown in figure
7.5. TNOW is a standard Arena reserved variable that provides the current simulation
time.
The next thing is to edit our Process Modules to update their parameters. We will start
with the four Prep Processes. Double click on the Process 1 module to open its
dialog. The completed dialog is given in figure 7.6.
In a similar procedure as above, update the remaining Prep Process modules. The
completed modules are shown below.
The inspection process is very similar to the Prep processes except that it uses an
expression for the delay instead of the triangular distribution type we have been using
so far. This is the reason why we defined the inspection time attribute in the assign
module. When you choose the delay type to be an expression, you can define any
expression (sums or differences, products or ratios of variables and attributes) and
Arena will evaluate that and use the resulting value as the process time for the entity
(product). In this case we only wanted to use a value we had pre-assigned to the
entity’s inspection time attribute.
Decisions or choices in Arena are modelled using the Decide Module. The
problem definition states that following inspection, 30% of products are sent for
refurbishment, 55% for dismantling and 15% for recycling. The Decide Module
includes options for making decisions based on one or more conditions or on one or more probabilities. Since we have values given for the percentages, we will use the option based on probabilities, which Arena refers to as “by chance”.
Now double-click on the Decide 1 Module to display its dialogue. Since we have
three possible outcomes, we will select the N-way by Chance option from the Type combo box in the Decide dialog. The Add button displays another dialog in which you specify the percentage for each outcome.
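Routing "by chance" amounts to a weighted random choice, as the Python sketch below indicates. It is illustrative only, using the 30/55/15 split from the problem description, and the destination names are just labels.

    import random

    destinations = ["Refurbishment", "Dismantling", "Recycling"]
    weights = [30, 55, 15]  # percentages from the Decide module

    # Route 1000 hypothetical products and count where they went.
    counts = {d: 0 for d in destinations}
    for _ in range(1000):
        counts[random.choices(destinations, weights=weights)[0]] += 1

    print(counts)  # roughly 300 / 550 / 150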
The next step in our process logic is to update the Process 6 Module. This is very
similar to the inspection process module presented above except that we use the
attribute Refurbishment Time instead of Inspection Time. The module name also
changes to Refurbishment process and the resource to RefTechnician. The completed
dialog should appear as shown below.
Repeat the above steps to complete the Process 7 Module. Check with the
completed dialog in figure 7.13 below.
At the end of the dismantling process, the products will be turned into
components. From the problem definition we realise that the number of components
in each product varies and is represented by a triangular distribution of parameters, 6,
9 and 12. This is akin to assuming that there are a random number of components in
each product. We model this by using a Separate Module from the Basic Process
panel.
Now double click on the Separate 1 Module to open its dialog box. Enter Components in the “Name” field and leave the “Type” field at its default value (i.e. Duplicate Original). Also leave the “Percent Cost to Duplicate” field at its default value. The last thing is to enter the value TRIA(6, 9, 12) in the “Number to Duplicate” field. The completed dialog should be as shown in figure 7.14. You should realise that the total number of entities leaving this module will be the number of duplicates plus the original entity.
There is another decision to be made after the dismantling process where 40% of
components are recovered for remanufacturing and the remaining 60% sent to
recycling. We will use another Decide Module and complete it as shown in figure
7.15. This has fewer values to specify since it’s only 2-way by chance.
Figure 7.16 shows the completed record dialog for recording the cycle times of products that have been refurbished. In order to determine the cycle time,
which is the time from when the products arrive into our system to when they exit
from the system, we use the Time Interval type from the record module. This is the
main reason why we defined the Arrival Time attribute soon after the products were
created. In this module, we selected the Arrival Time attribute as the reference for
computing the cycle time (time interval). Arena makes this attribute available in the
drop down list under Attribute Name in the record dialog because we had previously
defined it. The cycle times observed for the entities will be recorded into a tally called
“Record Refurbished”.
Fill in the remaining record modules in the same way as above but with the names
Record Recovered Components and Record Recycled. Notice that Arena
automatically uses the module name you supply as the tally name.
The final set of modules we have to fill in is the Dispose Modules. This module is
intended as the ending point for entities in a simulation model. Entity statistics may be recorded at this point, just before the entities are removed from the model.
The model can be run now but before we do so, there are a few things we need to
specify as to how the model should run. One of these is to tell Arena when to stop the
simulation. Without this the simulation will run forever because Arena doesn’t know
when to stop. This and other parameters necessary for the runtime behaviour of your simulation, as well as information on the generated report, can be set by selecting “Setup” from Arena’s “Run” menu. We will have a look at only two of the five tabs on
the run setup dialog. Figure 7.18 shows the run setup dialog with the Project
Parameters tab selected.
This tab allows you to specify project information such as title, analyst name and
project description. It also allows you to specify what aspects of your model you want
to collect statistics on. We have checked resources, queues and processes for statistics collection. There will therefore be no statistics generated for the other components of the model in the report produced after the run.
The other tab we will look at in the run setup dialog is the “Replication
parameters” tab. This is displayed in figure 7.19 and is where we specify the run length for the simulation. Based on our system description, we have set the replication
length to 32 hours (four consecutive 8-hour shifts). We also changed the base time
units to minutes and left the remaining fields at the default setting.
There are four different types of products coming into our system and it would be
nice to differentiate between them in the animation (the pictures that represent the
products). We will do this by assigning different pictures to their entities using the
entity data module. Bring up the Basic Process panel and click on the Entity data
module. The first two columns of your spreadsheet view with the entity data module
selected will look like figure 7.20. Arena automatically assigns an initial picture to
every entity you create. In this case Arena assigns the Picture.Report picture to all the products, which makes them look the same during the simulation run.
Now when you click on the initial picture cell of the Product A, Arena gives you
a drop down list of all entity pictures that are currently available for use. You can
select different pictures from the list to represent each of your products. We used red, yellow, green and blue balls respectively to represent products A through D. See
figure 7.21.
After all these, your completed model should look somewhat like figure 7.22.
Arena cannot run a model with errors; hence the next task before running the
model is to check for the errors in the model. This can be done by clicking on the
check button ( ) on the Run Interaction toolbar, the check model command from the
Run menu or by using the F4 key on the keyboard. If the model is without an error,
Arena displays the message box in figure 7.23 otherwise you will receive an error
message with find and edit buttons that will help you locate and fix the error. If there
are no errors in your model, then you can run your simulation by clicking on the Go
button ( ) on the standard toolbar or just by pressing the F5 key on the keyboard or
by selecting the Go command from the Run menu.
6
If you are running example 7.1 using Arena's academic/demo version, you will find that a warning/error pops up after a while declaring that "you have exceeded 150 entities" allowed for the academic/demo version of Arena. This is because the model is generating more entities than the demo version allows, either because parts are waiting behind resources to be processed or because too many entities are being created. Do not worry: by changing the time between arrivals or reducing the processing times you can resolve this at this stage.
Later on you will be using the Balking example to monitor the number of entities behind queues and
will be able to troubleshoot this sort of problem.
[This was intentional for you to experience this sort of error in Arena]
However, your individual assignments are designed in a way that you should not have this issue at all.
Therefore, if you encounter this error whilst completing your individual assignment; this means that
there is a mistake in your model. You need to detect and resolve this mistake in your modelling
approach.
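To see why the limit is reached, consider the rough sketch below (plain Python, not Arena). If entities arrive faster than the bottleneck resource can process them, the number of entities held in the model grows steadily until the demo cap is hit; the arrival and completion rates used here are made up and are not the parameters of example 7.1.

# Illustrative only: why entities accumulate when arrivals outpace processing.
arrivals_per_hour = 20       # hypothetical: entities created per hour, all products combined
completions_per_hour = 12    # hypothetical: entities the bottleneck resource finishes per hour

in_system = 0
for hour in range(1, 25):                                # simulate 24 hours of operation
    in_system += arrivals_per_hour                       # new entities enter the model
    in_system -= min(in_system, completions_per_hour)    # the resource works off what it can
    if in_system > 150:
        print(f"Demo-version limit of 150 entities exceeded after {hour} hours")
        break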
For detailed discussions on the various parts of the Arena report refer to section
5.7.4. Notice that Arena provides, in the report view, the name of the project, the
number of replications simulated and the time units for all time values in the report.
This time unit is taken from the “Base Time Unit” field on the “Replication
Parameters” tab in the “Run Setup” dialog.
Figure 7.25 shows the Queue summary data displayed in the simulation report.
The report lists all queues in the model, the time spent by products waiting in
each queue and the number of entities waiting at any given time. If you look at the average
values for the waiting time in queue and the number waiting in queue, you will notice that
the dismantling process has a much longer waiting time and queue length than the
other processes. This is obviously a source of concern; either the process does not
have enough capacity to handle its work or there is a great deal of variability at this
process.
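For clarity, these two statistics are computed differently: the waiting time in queue is a simple average over the entities that passed through, whereas the number waiting in queue is a time average (the area under the queue-length curve divided by the run length). The short sketch below (plain Python, with made-up sample data rather than output from our model) illustrates both calculations.

# Sketch of the two queue statistics in the report (illustrative data, not model output).
waits = [4.2, 0.0, 11.5, 7.3, 2.8]            # per-entity waiting times in minutes (made up)
avg_waiting_time = sum(waits) / len(waits)

history = [(0, 0), (5, 1), (9, 2), (14, 1), (20, 0)]   # (time, queue length) whenever it changes
run_length = 30.0                              # minutes

area = 0.0
for (t0, n), (t1, _) in zip(history, history[1:]):
    area += n * (t1 - t0)                      # queue held n entities for (t1 - t0) minutes
area += history[-1][1] * (run_length - history[-1][0])  # last segment up to the end of the run

avg_number_in_queue = area / run_length        # time-average number waiting
print(avg_waiting_time, avg_number_in_queue)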
Validation is difficult for several reasons. One is that validation implies that the
simulation behaves just like the real system, which may not even exist, so it is
impossible to tell. Even if the system exists, it may not be possible to capture all of its
complexity in the model, hence there is bound to be some variation between model
data and real-system data. A more realistic goal in validation is to ensure that the
simulation is good enough to be used to make decisions about the system.
Obviously, the difficulty of validation grows with the complexity of the system
being modelled. With our current model it is fairly easy to validate by simply
cross-checking against the information given in the problem description.
Let us therefore assume that, as part of this validation process, you showed the
above results to your manager or client along with the assumptions of running the model
24 hours a day with only one resource at each process and no breaks during the 24
hours. Your manager’s first response is that your assumptions were wrong, and he
makes some suggestions for enhancing the model.
The next step will therefore be to enhance our model by making the necessary
changes based on the new information received. This is the subject for the next
section on Enhancing the Model.
Your manager realises that the system actually operates two shifts a day and he
suggests having three (3) technicians for the dismantling process during the first shift
and four (4) for the second shift to see the impact on the queue at that process.
The manager also noted that there is a failure problem at the inspection process.
An inspection device required by the inspector periodically breaks down. Historical
data on these failures show that the mean uptime (the time from the end of one
failure to the start of the next) is 180 minutes and the mean repair time is 5 minutes,
both exponentially distributed.
In the next few sections, we will incorporate the above changes into our model
with the introduction of some new Arena concepts.
Arena automatically defines four resource states, namely Idle, Busy, Inactive
and Failed. Thus, throughout the simulation period a resource can only be in one of
these states. Arena keeps track of the time each resource in the system spends in each of
these states in order to report the required statistics. A resource is said to be Idle if
none of its units has been seized by any entity; that is to say, the resource is completely
free, doing nothing. On the other hand, as soon as an entity seizes the resource its state
changes to Busy, because it is no longer free. When the resource is not available to
be used, for example a bank staff member on a break, Arena will set its state to Inactive;
this is the case when a resource’s capacity is reduced to zero (0). Finally, the state of the
resource is changed to Failed when it is not available because of a
breakdown.
When a failure occurs, Arena will make the entire resource unavailable and none
of its defined capacity can be seized by any entity.
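As a rough illustration of this bookkeeping, the sketch below (plain Python, not Arena’s internal logic) encodes the four states together with one plausible rule for deriving the current state from a resource’s capacity, the number of units seized and whether it has failed.

# Sketch of Arena's four resource states (illustrative derivation rule, not Arena code).
from enum import Enum

class ResourceState(Enum):
    IDLE = "Idle"          # no unit of the resource is seized
    BUSY = "Busy"          # at least one unit has been seized by an entity
    INACTIVE = "Inactive"  # capacity reduced to zero, e.g. staff on a break
    FAILED = "Failed"      # unavailable because of a breakdown

def current_state(capacity: int, units_seized: int, failed: bool) -> ResourceState:
    if failed:
        return ResourceState.FAILED      # a failure makes the whole resource unavailable
    if capacity == 0:
        return ResourceState.INACTIVE
    if units_seized > 0:
        return ResourceState.BUSY
    return ResourceState.IDLE

print(current_state(capacity=3, units_seized=2, failed=False))   # ResourceState.BUSY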
a. Resource schedules
Our initial assumption that the system works 24 hours a day was obviously not
right. We will begin to implement the new changes by changing the “Hours per Day”
field in the “Run Setup” dialog to 16 hours, to correspond to the two 8-hour shifts in a
day (Arena will prompt you that some calendar-related features require the hours per
day to be 24; ignore this). We will also change the “Replication Length” to 10
and the “Time Units” to Days.
If you built Model 7-1 then open it now and click on the Resource data module in
the Basic Process panel. This should display all the resources in the model in the
spreadsheet view. We will schedule the dismantling process resource to have a capacity
of 3 for the first 8 hours and a capacity of 4 for the last 8 hours. To do this, change
the “Type” column for the DisTechnician to “Based on Schedule”, enter Dismantling
Schedule for the “Schedule Name” and select Ignore for the “Schedule Rule” column.
Your spreadsheet view should look like figure 7.26 below. Recall that when
the schedule rule is Ignore, the resource’s capacity is decreased at the set time but the
work being done on the current entity will be completed. Note also that these are not
the only columns in this view; there are others, as will be seen later.
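Conceptually, the Dismantling Schedule is just a lookup from simulated time to capacity. The sketch below (plain Python, only an illustration of the idea rather than how Arena stores schedules) returns a capacity of 3 during the first 8-hour shift of each 16-hour day and 4 during the second, as described above.

# Sketch of the Dismantling Schedule as a capacity lookup (illustration, not Arena internals).
MINUTES_PER_DAY = 16 * 60            # two 8-hour shifts per day

def dismantling_capacity(sim_time_minutes: float) -> int:
    """Scheduled capacity of the dismantling technicians at a given simulated time."""
    minute_of_day = sim_time_minutes % MINUTES_PER_DAY
    return 3 if minute_of_day < 8 * 60 else 4    # 3 in the first shift, 4 in the second

# With the Ignore rule, a capacity decrease takes effect at the scheduled time,
# but the work already in progress on the current entity is still completed.
print(dismantling_capacity(100))     # first shift  -> 3
print(dismantling_capacity(500))     # second shift -> 4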
We now need to define the details of the schedule. This can be done by using
the spreadsheet schedule editor or by using a dialog option. We will focus on
the former for now.
Select the Schedule data module in the Basic Process panel to display the
Schedule spreadsheet view at the bottom of the screen. Double-click in the
spreadsheet view to add a new schedule, called Schedule 1 by default. Click in the
“Name” field and select Dismantling Schedule from the drop-down list. You will have
the view shown in figure 7.27.
Click on the Durations field for that row to display the Graphical Schedule
Editor. The horizontal axis represents the simulation time; notice that it
displays only 16 hours in a day, as we specified in the Run Setup dialog. The vertical
axis represents the resource capacity. The schedule is filled in simply by clicking at the
required capacity and dragging horizontally over the period required. You can also erase
your selection by clicking on the zero-capacity line and dragging horizontally. Figure
7.28 shows the editor filled in for a capacity of 3 for the first 8 hours and 4 for the last 8
hours.
b. Resource failures
Resource failures are defined in much the same way as resource schedules,
except that there is no graphical editor for them. As mentioned in section 7.2.2 above,
the complete Resource data module spreadsheet view is shown in figure 7.29.
The parameters for the failure are specified in the Failure Data Module. This
module is found in the Advanced Process Panel. Click on the “Name” field and select
Inspector Failure. Change the “Type” field to Time and the “Up Time” and “Down
Time” values to EXPO(180) and EXPO(5) respectively. Set both time units to
minutes. Your final view should look like figure 7.31.
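If you want a feel for what such a failure pattern looks like, the sketch below (plain Python; Arena draws these samples itself during the run) generates alternating uptimes and downtimes from exponential distributions with the means given above.

# Sketch of the time-based failure pattern: uptimes ~ EXPO(180) min, downtimes ~ EXPO(5) min.
import random

random.seed(1)
MEAN_UPTIME = 180.0    # minutes from the end of one repair to the next failure
MEAN_DOWNTIME = 5.0    # minutes to repair

t = 0.0
events = []
while t < 960:                                    # one 16-hour day of simulated time
    up = random.expovariate(1.0 / MEAN_UPTIME)    # expovariate takes the rate = 1/mean
    down = random.expovariate(1.0 / MEAN_DOWNTIME)
    events.append((t + up, "Inspector fails"))
    events.append((t + up + down, "Inspector repaired"))
    t += up + down

for time, what in events[:6]:
    print(f"{time:8.1f} min  {what}")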
c. Model results
Table 7.1 shows a comparison of the average waiting time in queue and the
average number in queue for Model 7-1 and Model 7-2. Recall also the changes
made to Model 7-1, as summarised in table 7.2. The impact of these changes is quite
clear. In general, differences may be attributed to the increase in the run length
from 32 hours (in Model 7-1) to 160 hours (i.e. 16 hours x 10 days in Model 7-2).
In particular, however, the differences in results at the dismantling and inspection
processes can be attributed to the increase in capacity and the modelling of
failure at those processes respectively.
We see that the waiting time in queue at the dismantling process has reduced
after the capacity was increased in Model 7-2. This makes sense, since more
products can now be dismantled than in the previous model. On the other hand, the waiting
time in queue at the inspection process increased in the new model. This is
explained by the fact that the Inspector resource was not always available, due to the
periodic failures.
Table 7.2: Difference in parameters for Model 7-1 and Model 7-2
Your final animation may not look exactly like ours, since we will not take you
through every detail of how we did it; we will cover just the main steps and leave you
to figure out the rest yourself.
To start with, open Model 7-2 and scroll down to a blank part of the model window. You
may want to copy our style by laying an ellipse over a square, as we did. To do this,
make sure you have your “Draw” toolbar displayed; if not, right-click on any toolbar
and choose “Draw” from the pop-up list. It looks like figure 7.33 below. Now try
drawing the shapes using the “Polygon” and “Ellipse” buttons and changing the “Line
Styles” and “Fill Patterns”.
Figure 7.33: Arena’s “Draw” toolbar
Before we start talking about entities and resources, let’s quickly look at how to
animate the queues in the model. You may have noticed by now that Arena
automatically animates queues whenever you use a module that has a built-in queue,
for example the Process module. When we want to animate the entire system as
shown in figure 7.32, we need to be able to move the queues wherever we wish.
Fortunately, Arena makes it possible to cut the queue objects from the modules and
paste them where needed. That is exactly what we did, as shown in figure 7.34: we cut
the queue from the Product A Process module and pasted it into our animation where
we wanted it.
We create and edit entity pictures in Arena’s Entity Picture Placement window,
accessed by selecting “Entity Pictures” from the Edit menu. A snapshot of this
window is shown in figure 7.36 below. The left side of the window represents all the
entity pictures currently available for use in a model, whilst the right side represents
one of Arena’s picture library files (machines.backup.plb).
The “Add” button allows you to create your own entity picture. It provides a blank
picture space; double-clicking this opens a picture editor where you can create your
entity picture. The “Copy” button makes a copy of an existing picture, which you can
then change whilst preserving the original. The “Delete” button removes a selected
entity picture from the list. What we have done is to represent our products A through D
with coloured balls carrying the corresponding letters. To do this, we copied the existing
balls in the list, double-clicked to open the picture editor and placed the letters on top of
the balls.
The corresponding buttons on the right-hand side have the same functionality.
Entity pictures may be moved between the two sides by clicking on the entity picture to
move on one side, clicking on the destination location on the other side, and then using
the arrow buttons ( , ) to perform the move.
Arena provides a picture ID and value or name, which we changed to Product “A”,
etc. The names you specify here will be made available in the list of available
pictures when you are assigning pictures in the Assign module. You may also do the
assignment by changing the initial picture in the Entity data module to the name you
gave to your entity. Now try to create your entities and assign them before we
begin to look at resource pictures.
To add a resource picture to your animation, first click on the Resource button ( )
on the “Animate” toolbar. The Resource Picture Placement window (figure
7.37) will be displayed. This is similar to the Entity Picture Placement window we just
discussed, and the buttons have similar functionality. Recall our discussion in section
7.2.1 on the resource states: Arena allows you here to specify four different resource
pictures to represent the four possible resource states (Idle, Busy, Inactive and
Failed).
Arena will always make the list of all resources in your model available when you
click on the identifier combo box as shown. Notice that you can move pictures from
the library on your left to the states list by using the arrow buttons as in the entity
placement window discussed.
In the same way, add animations for all your resources arranging them in your
animation environment as we did or in your own way.
To complete this current model, we will now add some variables and plots to our
animation. As shown in figure 7.32, we will add variables for the number of products
going out into the market after refurbishment, number out to remanufacturing and
number out to recycling. We will also add a plot for the number in queue at the
dismantling process.
The Dispose module keeps track of the numbers of products going out and has a
default animation of these variables next to the module shapes. Copy these variable
animations from each of the three Dispose modules and paste them into your animation
environment as we did in figure 7.32. You may resize a variable by highlighting it
and dragging its handles. You may also reformat the text by double-clicking on the
variable to display its Variables dialog. An alternative and more general approach is
to click the “Variable” button ( ) on the “Animate” toolbar to open the “Variable”
window as shown in figure 7.38.
When you click on the expression field, Arena displays a list of all the
expressions in your model that you may animate. You may also create your own
expression by right-clicking the field and selecting the “Build Expression” option.
This will open a new window where you may build the expression you desire.
Finally, let’s add a plot for the number of products in queue at the dismantling
process. Click the “Plot” button ( ) on the “Animate” toolbar to display the plot
window shown in figure 7.39. Clicking the “Add” button opens the “Plot
Expression” window, which allows you to select or build the expression(s) you wish
to plot. In our case we selected the number in queue for the dismantling process (i.e.
NQ(Dismantling Process.Queue)) as shown.
We set the maximum to 60, hoping that the queue length will not exceed that. We
set the “History Points” to 5000 and the “Time Range” for our display to 9600 base
time units (minutes in our model; check this in the Run Setup dialog), which
corresponds to the full replication of 10 days of 16 hours each.
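To make these plot settings concrete, the sketch below (plain Python, with a made-up queue trace rather than our model’s output) records up to 5000 (time, number-in-queue) history points over a 9600-minute window, which is what the plot will display.

# Sketch of the plot settings: at most 5000 history points over a 9600-minute range.
import random

MAX_HISTORY_POINTS = 5000
TIME_RANGE_MINUTES = 9600            # 10 days x 16 hours x 60 minutes

random.seed(2)
history = []                         # (time, number in queue) pairs, like NQ(...) over time
nq = 0
t = 0.0
while t < TIME_RANGE_MINUTES and len(history) < MAX_HISTORY_POINTS:
    t += random.expovariate(1.0 / 5.0)            # a queue-length change roughly every 5 minutes
    nq = max(0, nq + random.choice((-1, 1)))      # the queue grows or shrinks by one entity
    history.append((t, nq))

print(len(history), "history points recorded; peak queue length =", max(n for _, n in history))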
We have so far been gradually building our imaginary reverse logistics model by
trying to make it more and more realistic. Up until now, we have assumed that entities
(or products) in our model move from point to point without any time delay; that is to
say, they disappear from one point and appear at another. This obviously does not
happen in any real-world system, but it is what happens when you connect your
modules with the Connector.
In this section, we will introduce two new Arena concepts that will enable us to
model entity transfers more realistically by specifying the time it takes to transfer
entities from point to point in the system. After discussing these new concepts of
Stations and Transfers, we will then add further enhancements to Model 7-3 to create
Model 7-4.
7.5.1 Stations
In Arena, a station represents a physical or logical location in the system, such as an
arrival area or one of the prep areas. Arena provides a special flowchart module called
Station for modelling this concept. This module may be used to define a single station
as well as a set of stations. In this example, however, we will only present the
single-station application.
Recall from section 7.2.1 that we initially divided the entire modelling problem
into the following steps:
5. Prep A station
6. Prep B station
7. Prep C station
8. Prep D station
9. Inspection station
With these we will be able to send any entity (or product) in the system to any of
the stations by using the Route module and specifying the unique identifier (Name) of
the station.
The Station module shape and dialog are shown in figure 7.40 below.
7.5.2 Routes
The Route module transfers an entity to a specified station, or the next station in
the station visitation sequence defined for the entity. A delay time to transfer to the
next station may be defined.
When an entity enters the Route module, its Station attribute (Entity.Station) is set
to the destination station. The entity is then sent to the destination station, using the
route time specified.
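Outside Arena, you can think of stations and routes roughly as in the sketch below (plain Python, purely illustrative): each station is registered under a unique name, and routing an entity sets its Station attribute, applies the route time and then delivers the entity to the named station’s logic. The station name Prep A and the 5-minute route time are the ones used later in this example; everything else is a stand-in.

# Sketch (not Arena) of the Station/Route idea: routing sets the entity's Station
# attribute, applies the route time, then delivers the entity to the named station.
from dataclasses import dataclass

@dataclass
class Entity:
    name: str
    station: str = ""        # analogue of Arena's Entity.Station attribute
    clock: float = 0.0       # simulated time carried by the entity, in minutes

stations = {}                # registry of station names -> handler functions

def station(name):
    """Register a handler function under a unique station name."""
    def register(handler):
        stations[name] = handler
        return handler
    return register

def route(entity, destination, route_time):
    entity.station = destination          # set the destination station attribute
    entity.clock += route_time            # the transfer takes route_time minutes
    stations[destination](entity)         # deliver the entity to that station's logic

@station("Prep A")
def prep_a(entity):
    print(f"{entity.name} arrived at {entity.station} at t = {entity.clock:.1f} min")

route(Entity("Product A 1"), destination="Prep A", route_time=5.0)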
The Route Module shape and dialog are shown in figure 7.41 below. The
module’s “Name”, “Route Time” and “Units” fields are similar to those already
discussed in other modules. When you click on the “Destination Type” field, Arena
gives you a drop-down list of various ways of specifying destinations as can be seen
in the figure. In this example, we will only use the Station option and this requires that
we specify the name of the station in the “Station Name” field.
Let us update Model 7-3. Note that the Station and Route modules are found in the
“Advanced Transfer” panel in Arena. If you don’t have this panel, attach it now by
going to the File menu, Template Panel, Attach, and looking for the file
AdvancedTransfer.tpo
Now open Model 7-3 and let’s begin to modify it. Remove the connectors after
each of the Assign modules and add a Station and a Route module to each. Before we
begin to define our stations, you should note that our addition of stations and routes
will affect both the model and the animation. For example, if we want the animation to
show the products arriving at some point before being sent to the Prep areas, then we
should define that point as a station, which in this case we call “Product A Arrival
Station” for all Product A entities. Now double-click on the Station module you have
added to update its parameters. We gave this module the name “Product A Arrival
Station”, set the “Station Name” field to “ProductA” and left all other fields at their
default values. In a similar way, double-click on the Route module and set its “Name”
field to “Route to Prep A”, “Route Time” field to 5, “Units” field to “Minutes”,
“Destination Type” field to Station (the default value) and “Station Name” to Prep A.
Your completed station and route modules should look like figure 7.42.
Continue to update the station and route modules you have added to the
remaining Assign modules as above. Remember to use the product letters (B, C and D)
respectively in place of A when updating the remaining modules. When you are done
with this part of your model, it should look like figure 7.43.
All we have done so far is to break our model down at various locations and add a
station module to define the location and a route module to transfer the product from
that location to another after processing is finished.
The resulting logic for the inspection, refurbishment and dismantling
stations is shown in figures 7.45 and 7.46. You should also note that, as you define
your stations, Arena keeps a list of them and will give you a drop-down list at any
point where you need to define a station or select a previously defined one.
You may have realised by now that there are only three exit points in our model;
that is, products are either sent to market, sent for remanufacturing or sent for
recycling. We have again modelled these points as stations, mainly for the sake of the
animation: we want to be able to see where the exits are located in our animation and
to see the products moving there after processing. The logic for these is shown in
figure 7.47.
Station animation is quite straightforward. You will need the “Animate Transfer”
toolbar to be able to proceed. If it is not displayed, right-click on any toolbar
and select “Animate Transfer” from the pop-up list.
Now, repeat the above process to put a route between all your station
animations. Note that you only need a route between stations for which you have
specified an entity transfer in your model logic. It is also important to know that a
route from, say, station “A” to station “B” is different from one from station “B” to
station “A”; that is, if entities move in both directions then you have to animate routes
for both directions.
When you have properly placed all your station and route animations, your
final animation view should look something like figure 7.50. Figure 7.51 shows
a run-time snapshot of the completed model.
Be reminded again that there is more to simulation modelling than just using
Arena. In this chapter, we have tried to take you through some of the key stages of
the simulation modelling process. There are many more features and concepts in
Arena that cannot feasibly be covered in this course. However, if you have followed
closely and have taken in all the material in the last three chapters, then you should be
able to build models with considerable detail. I especially recommend that you follow
the same line and bring more features such as Transporters and Conveyors into your
models. What you may have to do is practise more, read more of the reference text
and also consult the Arena help files in order to further develop your modelling skills.