Simulation Systems and Languages
Systems,
Languages, and
Modeling: An
Introduction
Simulation Languages – What and
Why?
A simulation language is a programming language or toolset
specialized for building simulation models. These languages offer
built-in features such as time handling, event scheduling, and
random variate generation, which simplify the process of
simulation coding. While general-purpose languages like C++ or
Java can be used for simulations, simulation languages save
developers from having to reinvent common logic elements like
queues and clocks. This results in faster model development and
allows automatic collection of statistics such as wait times and
resource utilization with minimal effort. Using a simulation
language is like working with a specialized toolkit—it reduces low-
level coding and lets developers focus more on the core logic of
the model.
General vs. Special-Purpose
Languages
General-purpose languages such as C, Python, and Java are standard
programming languages that do not include built-in support for simulation. Their
main advantage lies in their flexibility and widespread use, but they require
developers to implement all simulation mechanisms—such as event scheduling
and resource management—from scratch. In contrast, special-purpose
simulation languages like GPSS or SIMSCRIPT are designed specifically for
simulation tasks. These languages provide convenient constructs for handling
events like arrivals and queues, as well as for collecting statistics, which
streamlines the modeling process. However, they tend to be less flexible when
used for non-simulation tasks. For example, simulating a queuing system in a
special-purpose language might involve using a built-in QUEUE block, whereas in
C, the developer would need to manually program an event scheduler. Reflecting
current trends, many modern simulation tools now blend the strengths of both
approaches—offering visual interfaces and domain-specific features alongside
the ability to write custom code, as seen in tools like Simul8 or AnyLogic.
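To make the contrast concrete, here is a minimal sketch of the event scheduler a developer would have to hand-roll in a general-purpose language (Python is used in place of C, and the `EventScheduler` class and its method names are illustrative, not any real library's API):

```python
import heapq

class EventScheduler:
    """Minimal hand-rolled event scheduler of the kind a general-purpose
    language forces you to write yourself (illustrative sketch)."""
    def __init__(self):
        self.clock = 0.0      # simulation time
        self._events = []     # heap of (time, seq, action)
        self._seq = 0         # tie-breaker for events at the same time

    def schedule(self, delay, action):
        heapq.heappush(self._events, (self.clock + delay, self._seq, action))
        self._seq += 1

    def run(self):
        while self._events:
            time, _, action = heapq.heappop(self._events)
            self.clock = time  # jump straight to the next event
            action(self)

log = []
sim = EventScheduler()
sim.schedule(2.0, lambda s: log.append(("arrival", s.clock)))
sim.schedule(5.0, lambda s: log.append(("departure", s.clock)))
sim.run()
print(log)  # [('arrival', 2.0), ('departure', 5.0)]
```

In a special-purpose language like GPSS, this bookkeeping is hidden behind built-in blocks; here every piece of it is the modeler's responsibility.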
Examples of Simulation Languages
& Tools
• GPSS (General Purpose Simulation System) – Early discrete-event
simulation language (block diagram style) for queuing networks
• SIMSCRIPT – Another early simulation language; introduced object-
oriented features and is used for process-oriented models
• Arena (SIMAN) – Commercial DES tool (graphical), built on SIMAN
simulation language for process modeling in industries
• MATLAB/Simulink – Environment for continuous system simulation
(e.g. engineering systems) with block diagrams solving equations
• Python with SimPy – Using a general language (Python) with a
simulation library (SimPy) to model discrete events in code
• AnyLogic – Modern tool supporting multiple paradigms (discrete-
event, agent-based, system dynamics) in one platform
Types of Simulation Models &
Languages (Overview)
• Different problems require different simulation
approaches. Simulation languages often specialize in
one of these paradigms:
1. Discrete-Event Simulation (DES) – System changes at distinct
event times (common for queuing and process flows)
2. Continuous Simulation – System state evolves continuously
over time, often described by equations (common in
engineering and physics models)
3. Agent-Based Simulation (ABS) – Simulates individual agents
(entities with behaviors) interacting, often event-driven but
focused on emergent behaviors
Types of Simulation Models &
Languages (Overview)
• Hybrid approaches combine these (e.g. a factory
might use DES for workflow + ABS for workers’
behavior)
Discrete-Event Simulation (DES)
• Discrete-Event Simulation: Models a system as
a sequence of events in time. Nothing changes
between events; changes occur
instantaneously at event times
• Examples of events: a customer arrival, service
completion, machine breakdown, etc. The
simulation jumps from one event to the next
(advancing a simulation clock to each event)
• Characteristics: State variables change at discrete points (event
occurrences). Efficient for queuing systems, inventory systems, etc.,
where changes are event-triggered
• Real-world use: Evaluating waiting lines at a bank, throughput in a
manufacturing line, network packet traffic, etc.
• Languages/Tools: GPSS, Arena, SimPy, AnyLogic (process modeling
mode) all support DES with constructs for queues, servers, events

Figure: In DES languages like GPSS, models are built using block
diagrams. Each block (e.g., GENERATE, QUEUE, DEPART) represents events
or processes affecting entities. This visual approach makes it easier
to model a queuing system’s logic.
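The "jump from event to event" idea can be sketched with a single-server queue, where time advances only at arrivals and service completions; with one server the event logic collapses to a simple recurrence. The function name and parameter values below are illustrative, and the exponential assumptions make this an M/M/1-style example:

```python
import random

def simulate_queue(arrival_rate, service_rate, n_customers, seed=42):
    """Single-server queue, next-event style: the clock jumps between
    arrivals and service completions (illustrative sketch)."""
    rng = random.Random(seed)
    t, waits, server_free_at = 0.0, [], 0.0
    for _ in range(n_customers):
        t += rng.expovariate(arrival_rate)   # next arrival event
        start = max(t, server_free_at)       # wait if the server is busy
        waits.append(start - t)
        server_free_at = start + rng.expovariate(service_rate)
    return sum(waits) / len(waits)

# Utilization ~67%; queueing theory predicts an average wait near 1.33
avg_wait = simulate_queue(arrival_rate=1.0, service_rate=1.5,
                          n_customers=10_000)
print(round(avg_wait, 2))
```

Nothing is computed between events: the model state (here, when the server frees up) is simply carried forward to the next event time.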
Continuous Simulation
• Continuous Simulation: Models systems with
continuously changing state over time, often described
by differential equations or calculated in small time steps
• How it works: Time progresses in fine increments (or via
solving equations) rather than jumping event-to-event.
The model continuously updates state (e.g., position,
temperature)
• Examples: Physical processes like the flight of a rocket (governed
by physics equations), population growth modeled by differential
equations, chemical reaction kinetics, etc.
• System Dynamics (a form of continuous simulation): uses
stock-and-flow diagrams to model accumulations (stocks) and rates
(flows) – useful for high-level policy models (e.g., epidemiology,
ecology)
• Languages/Tools: Simulink (MathWorks) for engineering systems,
Modelica (equation-based language), Vensim or Stella for system
dynamics. These provide solvers for continuous equations

Figure: A simple stock-and-flow diagram (continuous model). The stock
(center box) accumulates a quantity, with an inflow adding to it and
an outflow removing from it. Such diagrams are used in continuous
simulation (system dynamics) to model things like water in a tank,
population in a region, etc.
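A minimal sketch of fixed-step continuous simulation using Euler's method: a single stock grows at a rate proportional to its level, and the numerical result can be checked against the closed-form exponential solution. All names and parameter values here are illustrative:

```python
import math

def euler_stock(initial, growth_rate, dt, t_end):
    """Fixed-step Euler integration of one stock with a proportional
    inflow: dS/dt = growth_rate * S (toy stock-and-flow sketch)."""
    stock, t = initial, 0.0
    while t < t_end:
        stock += growth_rate * stock * dt   # flow accumulates into the stock
        t += dt
    return stock

final = euler_stock(initial=100.0, growth_rate=0.05, dt=0.01, t_end=10.0)
exact = 100.0 * math.exp(0.05 * 10.0)   # analytic solution for comparison
print(round(final, 2), round(exact, 2))
```

Smaller `dt` values bring the Euler result closer to the exact solution, at the cost of more steps; dedicated tools like Simulink or Modelica ship with higher-order, adaptive solvers for exactly this reason.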
Agent-Based Simulation (ABS)
• Agent-Based Simulation: Models a system from the
perspective of individual agents, each with their own
behavior rules and interactions
• Agents can be people, animals, vehicles, or any
autonomous entities. They make decisions, move, and
interact with each other and the environment
• Emergent behavior: System-level patterns “emerge”
from many agents’ local interactions (e.g., traffic jams
emerging from many driver agents, or crowd behaviors in
an evacuation)
• Time in ABS is often advanced in steps or events, so ABS
can be seen as a type of discrete simulation focused on
individuals
• Real-world uses: Modeling epidemics (each agent is an individual
person who can infect others), crowd movement in a stadium, market
simulations with individual traders
• Languages/Tools: NetLogo (user-friendly for ABS, often used in
education), Repast and Mason (Java-based agent libraries), AnyLogic
(agent-based mode), even games like SimCity use agent-based principles

Figure: Overhead view of pedestrians crossing a street – analogous to
agents moving in a shared environment. In an agent-based model, each
pedestrian would be an agent following rules (like “don’t bump into
others”), and crowd dynamics emerge from these individual actions.
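The epidemic example can be sketched as a toy agent-based loop: each step, every infected agent meets a few random agents and may infect susceptible ones, and system-level spread emerges from these local rules. All agent counts and probabilities below are invented for illustration:

```python
import random

def run_epidemic(n_agents=200, n_infected=5, p_meet=0.02, p_infect=0.5,
                 steps=50, seed=1):
    """Toy agent-based epidemic sketch: local interactions per step,
    with infection spreading through chance encounters."""
    rng = random.Random(seed)
    infected = [False] * n_agents
    for i in rng.sample(range(n_agents), n_infected):
        infected[i] = True
    for _ in range(steps):
        newly = []
        for i in range(n_agents):
            if not infected[i]:
                continue
            for j in range(n_agents):   # each infected agent's encounters
                if j != i and rng.random() < p_meet and rng.random() < p_infect:
                    newly.append(j)
        for j in newly:
            infected[j] = True          # state updates between steps
    return sum(infected)

total = run_epidemic()
print(total)
```

No agent "knows" the epidemic curve; the S-shaped spread is an emergent property of many independent encounter rules, which is the defining trait of ABS.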
Comparison of Simulation Language
Types
• Discrete-Event (DES) – Time progression: jumps to the next scheduled
event; time advances event-to-event. Typical applications: queues and
processes (e.g. factories, checkout lines, network packets). Example
languages/tools: GPSS, Arena (SIMAN), SimPy (Python). Key features:
built-in event scheduling, entities and queue handling, stats
collection (wait times, etc.)
• Continuous – Time progression: advances in small time steps or by
solving continuous equations. Typical applications: physical systems
(engineering, physics), system dynamics (population, economy models).
Example languages/tools: Simulink, Modelica, Vensim/Stella. Key
features: solvers for differential equations, continuous time
tracking, integration methods (e.g. Euler)
• Agent-Based (ABS) – Time progression: often step-based or
event-driven; agents act autonomously each step. Typical applications:
social systems (crowds, epidemics), ecological and market simulations.
Example languages/tools: NetLogo, Repast, AnyLogic (agent mode). Key
features: agents with attributes & behavior rules; focus on
interactions; can include random decision-making per agent
Note: Some platforms (e.g. AnyLogic) support hybrid models combining these paradigms,
letting modelers use the right tool for each part of a complex system
Describing Simulation Models vs.
Experiments
In simulation studies, it's important to distinguish between the model and
the experiments conducted on that model. The model description defines
the system’s logic and rules—essentially explaining what the system is and
how it behaves—without specifying any particular scenario or setup. On
the other hand, the experiment description outlines how the model is used
in a specific study, including initial conditions, input parameters, simulation
duration, number of runs, and the outputs to be observed. Separating
these two aspects allows a single model to be reused across many different
experiments, making it easier to test various scenarios and improving
clarity. This way, the assumptions and structure of the model can be
discussed independently from the way it is used for analysis. A helpful real-
world analogy is that the model is like a recipe, while each experiment is
like trying that recipe under different conditions—such as adjusting
ingredient amounts or oven settings—to see how the final result changes.
Means of Describing a Simulation
Model
• Verbal or Textual Description: Start with a plain
language narrative or pseudocode of how the system
operates (e.g., “Customers arrive, join a queue if server is
busy…”). This helps in conceptualizing the model
• Visual Diagrams: Flowcharts or block diagrams can map
out the model logic. For example, a flowchart for a bank
service: Arrival -> Queue -> Service -> Departure
• Mathematical Formulation: Define equations or rules
(e.g., service time distribution = exponential(5 min), state
variables, etc.). Continuous models often use differential
equations; discrete models might use state transition tables
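A formulation like "service time distribution = exponential(5 min)" translates directly into random-variate generation. A quick stdlib sketch (the sample count and seed are arbitrary choices):

```python
import random
import statistics

rng = random.Random(0)
mean_service = 5.0   # "service time ~ exponential(5 min)"
# expovariate takes a rate, i.e. 1 / mean
samples = [rng.expovariate(1.0 / mean_service) for _ in range(100_000)]
print(round(statistics.mean(samples), 2))  # sample mean should be close to 5
```

The mathematical formulation is thus not just documentation: it is the specification that the simulation code's random inputs must implement.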
Means of Describing a Simulation
Model
• Simulation Language Code: Ultimately, the model
can be described in a formal simulation language or
programming code. For instance, a GPSS model is
described by a series of block statements (GENERATE,
QUEUE, DEPART, etc.), while a Simulink model is drawn
as interconnected blocks
• Example: Model of a simple checkout system: one
could write a short description (“One server, customers
arrive ~Poisson(λ), service ~Normal(μ,σ)…”) and/or
draw a small queue diagram. This model description is
independent of any specific experiment we’ll run on it
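The checkout description above can be written as a parameterized function whose arguments stay unbound until an experiment supplies them. Everything here (the function name, the `max(0, ·)` truncation of the Normal service time) is an illustrative sketch, not a standard API:

```python
import random

def checkout_model(arrival_rate, service_mean, service_sd,
                   n_customers, seed=0):
    """Model description as code: one server, Poisson arrivals
    (exponential interarrival times), Normal service times.
    No scenario is fixed here: that is the experiment's job."""
    rng = random.Random(seed)
    t, waits, free_at = 0.0, [], 0.0
    for _ in range(n_customers):
        t += rng.expovariate(arrival_rate)          # next arrival
        start = max(t, free_at)                     # queue if server busy
        waits.append(start - t)
        service = max(0.0, rng.gauss(service_mean, service_sd))
        free_at = start + service
    return sum(waits) / len(waits)                  # average wait
```

An experiment then becomes just a call such as `checkout_model(arrival_rate=0.5, service_mean=1.0, service_sd=0.2, n_customers=1000)`: the model stays the same while the parameters vary.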
Means of Describing Simulation
Experiments
• Experimental Setup: To describe a simulation
experiment, specify all conditions under which the
model will run:
• Initial state of the model (e.g., starting with an empty queue,
or initial stock levels)
• Input parameters values for this run (e.g., arrival rate = 10
customers/hour, 2 servers staffed)
• Simulation run length (e.g., simulate 8 hours of operation,
or 1000 events, etc.) and number of replications
(independent runs for statistical confidence)
Means of Describing Simulation
Experiments
• What to record: Define the output metrics to collect (e.g., average wait
time, maximum queue length, throughput)
• Scenario variations: An experiment description might include varying
certain parameters to compare scenarios. For example, experiment A: 2
servers, experiment B: 3 servers – to see impact on wait times
• Reproducibility: A well-described experiment often notes the random
number seed or ensures conditions so that others can reproduce the
results. In fact, there are even standardized formats (like SED-ML,
Simulation Experiment Description Markup Language) to encode simulation
experiments for reproducibility
• Example: Experiment on the checkout model: “Run the checkout model for
1000 customers, with 2 checkout counters, and measure average waiting
time. Repeat for 5 replications and report the mean and 95% confidence
interval.” – This fully specifies how to conduct the experiment on the model
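The "mean and 95% confidence interval over 5 replications" step can be computed as below. The replication values are invented for illustration, and the t critical value is hard-coded for 4 degrees of freedom; use the appropriate value for other replication counts:

```python
import statistics

def mean_ci95(results, t_crit=2.776):
    """Mean and 95% CI half-width from independent replications.
    t_crit defaults to the Student-t value for 4 degrees of
    freedom, i.e. 5 replications."""
    m = statistics.mean(results)
    s = statistics.stdev(results)            # sample standard deviation
    half = t_crit * s / len(results) ** 0.5  # CI half-width
    return m, half

# e.g. average waiting times (minutes) from 5 replications
reps = [4.2, 3.8, 4.5, 4.0, 4.1]
m, h = mean_ci95(reps)
print(f"mean = {m:.2f}, 95% CI = [{m - h:.2f}, {m + h:.2f}]")
```

Reporting the interval rather than a single number is what turns one simulation outcome into a statistically defensible result.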
Example – Model vs. Experiment
in Practice
• Scenario: Imagine a supermarket manager wants to evaluate if adding
a second checkout counter will reduce customer wait times
• Model (Checkout System): Customers arrive randomly (based on
real data or assumed distribution), queue if the counter is busy, get
serviced by a cashier, then leave. This model encapsulates the store’s
service process
• Experiment 1 (Single Server): Use the model with 1 cashier, run
the simulation for, say, 8 hours of store time, and record average wait
time and queue length. This mimics the current system
• Experiment 2 (Two Servers): Use the same model but change the
number of cashiers to 2 (a parameter change). Run the simulation
under identical conditions (8 hours, same arrival pattern), collect the
same outputs
Example – Model vs. Experiment
in Practice
• Comparison: By analyzing results from Experiment 1
vs. Experiment 2, the manager can see quantitatively
how much wait times improve with an extra cashier.
Here the model remained the same (customer behavior
and service mechanism), but the experiments differed
in a key parameter (number of servers) to test a what-if
question
• Takeaway: Clearly separating the model description
(what the system does) and the experiment details
(how we test it) makes such analysis possible and
clearer to communicate
Tools & Tips for Describing Models
and Experiments
• Documentation: It’s good practice to document your
simulation model (assumptions, logic) and each
experiment scenario. This can be done in a report or
even within the simulation software (comments/notes)
• Simulation Software Support: Many simulation tools
provide interfaces for setting up experiments. For
example, Arena has an “Experiment Designer” where
you can specify number of runs and output measures;
AnyLogic allows defining experiments (scenarios)
separately from the model logic in its project tree
Tools & Tips for Describing Models
and Experiments
Scripts & Markup: In advanced cases, experiments can
be described via scripts or markup languages:
• Example: Using Python, you might write a loop to run your
model function with different parameters (that script is
effectively describing a set of experiments programmatically).
• Example: SED-ML (an XML format) allows scientists to formally
specify which model to run, what parameter changes to make, and what
outputs to produce – ensuring others can understand and repeat the
experiment
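The "loop over parameters" idea might look like the script below; the `checkout_model` function is a hypothetical stand-in (a small multi-server queue), not a real tool's API, and all parameter values are invented:

```python
import random

def checkout_model(n_servers, arrival_rate=10.0, hours=8.0, seed=0):
    """Hypothetical stand-in for a simulation model function:
    returns the average wait (hours) in a multi-server queue."""
    rng = random.Random(seed)
    t, waits = 0.0, []
    free_at = [0.0] * n_servers
    while t < hours:
        t += rng.expovariate(arrival_rate)                   # next arrival
        k = min(range(n_servers), key=lambda i: free_at[i])  # earliest-free server
        start = max(t, free_at[k])
        waits.append(start - t)
        free_at[k] = start + rng.expovariate(12.0)           # ~5 min service
    return sum(waits) / len(waits)

# The experiment script: same model, different parameter values
results = {n: checkout_model(n_servers=n) for n in (1, 2, 3)}
for n, w in results.items():
    print(f"{n} server(s): avg wait = {w * 60:.1f} min")
```

The dictionary comprehension is, in effect, a machine-readable experiment description: it names the factor being varied and the levels being compared.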
Tools & Tips for Describing Models
and Experiments
• Consistency: Ensure the model is validated before
trusting experiment results – a flawed model will yield
flawed conclusions no matter how well experiments are
described. (We’ll touch on validation next in design
principles.)
• Recap: Think of model and experiment as separate but
connected: first build a credible model, then design
targeted experiments to get answers you need. Good
descriptions of both parts make a simulation study
rigorous and transparent.
Principles of Simulation Study
Design (Overview)
• To conduct effective simulations, follow key principles of
simulation study design. These guide you from the problem
idea to a validated model and insightful results
• We will cover 5 fundamental steps/principles:
1. Problem Definition – clearly define what you’re trying to solve or learn
2. Model Development – build a valid representation of the system
(conceptual and then coded model)
3. Data Collection & Input Analysis – use real-world data to inform your
model and ensure inputs are realistic
4. Verification & Validation – make sure the model is implemented
correctly and accurately represents reality
5. Experimentation & Analysis – run simulation experiments, analyze
results, and draw conclusions
Principle 1 – Problem Definition
• Define the problem and objectives clearly: Before modeling, be explicit
about what question you want the simulation to answer or what problem to solve
• Determine the scope: What part of the real-world system will be included? What
level of detail? (e.g., are we simulating one store or an entire supply chain?)
• Identify performance metrics and criteria for success: e.g., “reduce average
wait time to <5 minutes” or “evaluate throughput under different setups”
• Example: Hospital ER Simulation – Problem: Excessive patient wait times.
Objective: Test if adding another doctor shift reduces wait times below 30
minutes on average. Scope: ER from patient check-in to admission/release
(exclude other departments).
• A well-defined problem prevents wasted effort – it ensures the model built and
experiments run will actually provide the answers or decisions needed
Principle 2 – Model Development
• Conceptual Modeling: Start with a conceptual model – a simplified abstract representation of
the system. This could be a flow diagram or written description of how the system works. Engage
domain experts to validate this conceptual model (“Does this diagram make sense for how your
ER operates?”)
• Choose modeling approach: Decide if it’s a discrete-event process, continuous, agent-based,
etc., based on the nature of the system (often the problem itself suggests the paradigm)
• Build the model (coding/implementation): Translate the conceptual model into a
computational model using a simulation language or software. This includes defining all
processes, resources, and randomness in code or blocks
• Break the model into components if possible (modularity makes understanding and debugging
easier)
• Document assumptions: Note any simplifications (e.g., “We assume patients leave if not seen
in 4 hours” or “We ignore minor paperwork time”). These assumptions should be realistic and
acceptable to stakeholders
• Example: For the ER, after conceptualizing patient flow, implement it in, say, Arena or SimPy:
set up modules for patient arrival, triage, doctor service, etc. Include probability distributions for
arrivals and service times based on hospital data
Principle 3 – Data Collection & Input
Analysis
• Garbage in, garbage out: A model is only
as good as the data driving it. Collect real-
world data to inform model parameters:
• Arrival rates (patients per hour, customers per day,
etc.)
• Service time distributions (maybe from historical
records or timing studies)
• Resource availability (e.g., number of servers,
schedules)
Principle 3 – Data Collection & Input
Analysis
• Input analysis: Analyze raw data to derive model inputs. For example, fit a
statistical distribution to interarrival times (are they Poisson? exponential?), or use
empirical distributions. Determine averages, variability, seasonality, etc., from data
• If real data isn’t available, use reasonable estimates or literature values, and
note these as assumptions. In such cases, consider doing a sensitivity analysis
later to see if results change under different input assumptions
• Data validation: Ensure the data makes sense and is representative. E.g., if
simulating an ER, use data from a typical week, not an outlier day, for arrival
patterns
• Example: For our ER model, gather hospital logs: patient arrival timestamps (to
model the arrival process), treatment durations by acuity level (to model service times).
If analysis shows, say, service times roughly follow a lognormal distribution with
mean 1 hour, that goes into the model. If some data is missing (e.g., triage time),
make an informed assumption (like a fixed 5 minutes) and be ready to test
variations of that in experiments.
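A minimal sketch of the input-analysis step, fitting an exponential arrival process by estimating its rate from the mean interarrival time. The timestamps are invented for illustration; a real analysis would also check goodness of fit (e.g., a chi-square or Kolmogorov-Smirnov test) before accepting the distribution:

```python
import statistics

# Hypothetical arrival timestamps (minutes) pulled from a log
timestamps = [0.0, 3.1, 4.9, 10.2, 11.0, 15.8, 21.4, 22.0, 27.5, 30.1]
interarrivals = [b - a for a, b in zip(timestamps, timestamps[1:])]

mean_gap = statistics.mean(interarrivals)
rate = 1.0 / mean_gap   # exponential fit: rate = 1 / mean interarrival
print(f"mean interarrival = {mean_gap:.2f} min, "
      f"fitted arrival rate = {rate:.3f} per min")
```

The fitted `rate` is then exactly the parameter fed into the model's arrival process, closing the loop from raw data to model input.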
Principle 4 – Verification & Validation
Verification (Building the model right): Check that
the simulation model is implemented correctly
according to the design. Debug the program:
• Does the logic flow as expected? (e.g., watch a few
entities flow through the system to ensure they follow
the correct path)
• Are there errors in the code or unexpected behaviors
(like patients getting “stuck”)?
• Use trace runs or animation if available to visually verify
process flow
Principle 4 – Verification & Validation
• Validation (Building the right model): Ensure the
model’s output behavior matches reality (to an
acceptable degree):
• Compare simulation results with real system data that wasn’t
used in building the model. For instance, if historically the
average ER wait time is 45 min with one doctor, does the
model roughly reproduce that?
• Seek expert feedback: do clinicians or managers find the
model credible? (“Face validity”)
• Perform sensitivity analysis: change input values slightly
and see if outputs react plausibly (robustness check)
Principle 4 – Verification & Validation
• Often an iterative process: if the model is not
valid, revisit model structure or data assumptions
and improve it
• Example: After coding the ER model, run it with current
staffing and patient load and see if simulated average
wait = ~45 min (as actual data says). If simulation says
2 hours, something’s off – maybe the service process
was modeled too simply. Debug (verification) or adjust
assumptions (validation) until the model reasonably
reflects the real ER performance. Only then proceed to
use the model for experiments confidently.
Principle 5 – Experimentation &
Analysis
• With a verified, valid model, now systematically experiment to find
answers to the defined problem
• Design experiments: As covered earlier, plan simulation runs to test
different scenarios or policies. Use proper experimental design: e.g., if
multiple factors (staff level, patient arrival rate), consider a design that
varies each factor (like a factorial experiment) to understand their effects
• Run multiple replications to account for randomness and obtain
confidence intervals on metrics. This ensures results are statistically
significant and not just luck of one simulation run
• Analyze results: Use statistical analysis to compare scenarios. Plot
results for clarity (e.g., a bar chart of average wait time for 1 vs 2 vs 3
servers with error bars). Look for trade-offs (maybe 3 servers has
diminishing returns, etc.)
Principle 5 – Experimentation &
Analysis
• Draw conclusions & make recommendations: Relate results back to
the real-world problem. “Simulation suggests adding a second doctor
shift would cut wait times by 50% during peak hours.” Also discuss any
limitations (e.g., “We didn’t model nurse availability, which could also
impact results.”)
• Iteration: Sometimes results might prompt refining the model or
running additional experiments. A simulation study is rarely strictly linear –
you might loop back if something unexpected is observed (e.g., if adding
a server didn’t improve things as much as expected, maybe the model is missing
something or the system has another bottleneck)
• Communication: Finally, report findings in a clear manner to
stakeholders. The value of the simulation is realized when its insights
inform decisions (e.g., management approves hiring another staff
because the model showed it’s beneficial)
Conclusion and Takeaways
• We explored what simulation systems and languages
are, and saw that different types of simulation
languages (discrete-event, continuous, agent-based)
are suited for different kinds of problems. Each has its
strengths – understanding their differences helps in
choosing the right tool for a given task
• We learned the importance of separating the model
(the representation of the system) from the
experiments (how we use that model to ask
questions). This clarity enables reusing models and
systematically exploring scenarios
Conclusion and Takeaways
• Key principles of simulation study design were discussed: from
clearly defining the problem, through careful model building with real
data, to verifying/validating and finally experimentation. Following these
steps leads to credible and useful simulation outcomes
• Beginner’s insight: Simulation is both an art and science – it
requires creative abstraction of real systems and rigorous analysis. As
you practice, always tie theory to practice: use real-world examples (like
we did, with supermarkets, hospitals, etc.) to ground your understanding
• Next steps: With these fundamentals, you are equipped to delve
deeper – perhaps try a simple simulation in a language like
Python/SimPy or a tool like Arena. Remember, the ultimate goal is to use
simulations to gain insight and improve real-world systems! Good luck
with your simulation journey.