Chapter 7


Verification and Validation of Simulation Models
Overview
7.1 Verification and Validation
7.2 Verification of Simulation Models
7.3 Calibration and Validation of Models
From modeling to simulation: model building, verification, and validation
7.1 Verification and Validation
Verification
• Verification of a model is the process of confirming that it is correctly implemented
with respect to the conceptual model: it matches the specifications and assumptions
deemed acceptable for the given purpose of application.
• It deals with building the model correctly, with the input and logical structure
implemented as designed.
• Verification pertains to the computer program prepared for the simulation model.
• Is the computer program performing properly?
• Verification is concerned with building the model right.
• If the input parameters and logical structure of the model are correctly represented
in the computer, verification has been completed.
• Verification answers the question "Have we built the model right?".
7.1 Verification and Validation
Validation
• Validation is concerned with building the right model: the correct model, an
accurate representation of the real system.
• Validation is the determination that a model is an accurate representation of
the real system.
• Validation is usually achieved through the calibration of the model, an iterative
process of comparing the model to actual system behavior and using the
discrepancies between the two, and the insights gained, to improve the model.
• This process is repeated until model accuracy is judged acceptable.
• Validation answers the question "Have we built the right model?".
7.2 Verification of Simulation Models
• Verification makes sure the conceptual model is translated into a
computerized model properly and faithfully.
• Typically, three intuitive mechanisms can be used.
1. Common sense
• Have the computerized representation checked by someone other than its developer.
• Make a flow diagram that includes each logically possible action the system can take when
an event occurs, and follow the model logic for each action of each event type.
• Closely examine the model output under a variety of settings of the input parameters.
• A debugger is an essential component of successful simulation model building; it allows the
analyst or the programmer to test the simulation program interactively.
• If one is modeling a certain queueing system, simulation results can be compared with
theoretical results, which serve as upper or lower bounds (see the sketch after this list).
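For instance, the following minimal Python sketch checks a simulated M/M/1 queue against queueing theory: the estimated mean waiting time in queue should be close to the known value Wq = lam / (mu * (mu - lam)). The function name and parameter values are illustrative, not part of the original material.

```python
import random

def simulate_mm1_wait(lam, mu, n_customers=200_000, seed=42):
    """Estimate the mean waiting time in an M/M/1 queue via Lindley's recursion."""
    rng = random.Random(seed)
    wait = 0.0          # waiting time of the current customer
    total_wait = 0.0
    for _ in range(n_customers):
        interarrival = rng.expovariate(lam)   # exponential interarrival time
        service = rng.expovariate(mu)         # exponential service time
        # Lindley's recursion: W_{n+1} = max(0, W_n + S_n - A_{n+1})
        wait = max(0.0, wait + service - interarrival)
        total_wait += wait
    return total_wait / n_customers

lam, mu = 0.8, 1.0
print("simulated Wq   ~", round(simulate_mm1_wait(lam, mu), 3))
print("theoretical Wq =", lam / (mu * (mu - lam)))
```

If the two values disagree noticeably, the program (or its random-variate generation) deserves a closer look.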
7.2 Verification of Simulation Models
2. Through documentation:
• By describing in detail what the simulation programmer has done in
designing and implementing the program, one can often identify
problems.
3. Trace:
• Go through a few steps in the actual program to see if the behavior is
reasonable (a short sketch follows).
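A minimal trace sketch, assuming a hypothetical single-server queue: the state is printed after every event so that its evolution can be checked by hand against the model logic.

```python
# Hypothetical event list: (event type, simulation clock time)
events = [("arrival", 1.0), ("arrival", 2.5), ("departure", 3.0), ("arrival", 4.2)]

queue_length = 0
for event_type, clock in events:
    # Update the state, then print it so each step can be verified by hand
    queue_length += 1 if event_type == "arrival" else -1
    print(f"t={clock:4.1f}  {event_type:9s}  queue_length={queue_length}")
```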
7.3 Calibration and Validation of Models
• Validation is the overall process of comparing the model and its behavior to the real
system and its behavior.
• Calibration is the iterative process of comparing the model to the real system,
making adjustments or changes to the model, comparing the revised model to reality,
and so on.
• The comparison of the model to reality is carried out by subjective and objective tests.
 A subjective test involves talking to people who are knowledgeable about the system
and forming a judgment about the model.
 Objective tests involve one or more statistical tests to compare some model output
with the corresponding output of the real system.
• Therefore, calibration is the iterative process of comparing the model with the real system,
revising the model if necessary, and comparing again, until the model is accepted
(validated), as in the sketch below.
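A minimal sketch of that loop in Python, assuming a single tunable parameter; run_model, real_mean, and the simple proportional adjustment rule are hypothetical stand-ins, not a prescribed calibration method.

```python
def calibrate(run_model, real_mean, rate=1.0, tol=0.05, max_iter=50):
    """Compare model output with reality, revise the model, and compare again."""
    for _ in range(max_iter):
        discrepancy = run_model(rate) - real_mean   # model output vs. real system
        if abs(discrepancy) <= tol:
            return rate                             # model accepted (validated)
        rate += 0.1 * discrepancy                   # revise the model parameter
    raise RuntimeError("calibration did not converge")
```

In practice the revision step is guided by the insights gained from the discrepancies, not by a fixed update rule.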
7.3 Calibration and Validation of Models
• Naylor and Finger formulated a three-step approach to the validation
process:
1. Build a model that has high face validity.
2. Validate model assumptions.
3. Compare the model input-output transformations to the corresponding
input-output transformations for the real system.
7.3 Calibration and Validation of Models
Face Validity
Build a model that appears reasonable on its face to model users who are
knowledgeable about the real system being simulated.
Validation of Model Assumptions
Model assumptions fall into two categories: structural assumptions and data
assumptions.
 Structural assumptions deal with such questions as how the system operates
and what kind of model should be used: queueing, inventory, reliability, and others.
 Data assumptions: what kind of input data does the model need? What are the
parameter values for the input data model? (A small sketch follows.)
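As a small illustration of checking a data assumption, the sketch below estimates the rate parameter of an exponential interarrival-time model from hypothetical observations, using the maximum-likelihood estimate lambda_hat = n / sum(t_i); the data values are placeholders.

```python
# Hypothetical observed interarrival times (placeholder values)
interarrival_times = [1.1, 0.7, 1.4, 0.9, 1.2, 0.8]

# MLE of the rate of an exponential distribution: lambda_hat = n / sum(t_i)
lam_hat = len(interarrival_times) / sum(interarrival_times)
print(f"estimated arrival rate: {lam_hat:.2f} per unit time")
```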
7.3 Calibration and Validation of Models
Validating Input-Output Transformations
View the model as a black box.
Feed the input at one end and examine the output at the other end.
Use the same input for the real system and compare its output with the
model output.
If they fit closely, the black box seems to be working fine.
Otherwise, something is wrong (an objective comparison is sketched below).
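A minimal sketch of such an objective comparison, assuming matched sets of real and simulated responses (the values below are placeholders) and a standard two-sample t-test from scipy:

```python
from scipy import stats

real_output  = [4.3, 4.1, 4.6, 4.4, 4.2, 4.5, 4.3, 4.4]   # observed system responses
model_output = [4.2, 4.5, 4.4, 4.1, 4.6, 4.3, 4.2, 4.5]   # simulated responses

# Two-sample t-test; H0: the real and simulated means are equal
t_stat, p_value = stats.ttest_ind(real_output, model_output)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
if p_value > 0.05:
    print("No significant difference detected: the black box seems to work.")
else:
    print("Significant difference: something is wrong with the model.")
```

Failing to reject the null hypothesis does not prove the model correct; it only fails to expose a discrepancy for this particular input.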
Model Validation
• Model validation is a necessary requirement for model application.
• To do a reliable validation, several steps must be taken, and each of
them may be a source of errors that influence the final result.
Validation: Errors
• As a general rule, if there are discrepancies between observed and
simulated data, the technical structure of a model should be the last
factor to suspect.
1. Model inadequate
2. Lack of calibration
3. Errors in the code
4. Errors in the use
5. Errors in the experimental data
Possible Solutions
1. Model Inadequacy
• Are all the important processes for a given environment included?
• Are the processes modeled correctly?
• Was the range of data used to develop model components for
process simulation wide enough to include our conditions?
2. Lack of Calibration
• Perform calibration until the model reaches its accepted form.
Possible Solutions

3. Errors in the Code
• The following steps can be undertaken to check the code (a sketch follows this list):
 Do calculations using, for instance, a spreadsheet and compare them with the model results.
 Verify that simulation results are within known physical and biological reality.
 Run simulations with highly contrasting inputs.
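These checks can be automated as simple assertions. The sketch below is illustrative only; every value in it is a placeholder rather than output from a real model.

```python
# (1) Spreadsheet-style hand check: recompute a simple statistic independently
def mean_of(values):
    return sum(values) / len(values)

assert abs(mean_of([2.0, 4.0, 6.0]) - 4.0) < 1e-9

# (2) Physical-reality check: waiting times can never be negative
simulated_waits = [0.0, 1.2, 0.4, 2.1]   # placeholder simulation output
assert all(w >= 0.0 for w in simulated_waits)

# (3) Contrasting inputs: a lightly loaded system should wait far less
#     than a heavily loaded one (placeholder results)
light_load_wait, heavy_load_wait = 0.1, 4.0
assert light_load_wait < heavy_load_wait

print("All code sanity checks passed.")
```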

4. Errors in the Experimental Data
• The experimental data used to test model predictive capabilities are affected by
experimental error, which can be large.
• Only a large number of experimental data points allows a meaningful evaluation of
model performance in statistical terms.
References
• Jerry Banks, John S. Carson II, Barry L. Nelson, and David M. Nicol,
Discrete-Event System Simulation.
• https://www.eg.bucknell.edu
