Chapter 4: Verification and Validation of Simulation Models
One of the most important and difficult tasks facing a model developer is the verification and
validation of the simulation model. Because end users are often skeptical of simulation results, it is the
job of the model developer to work closely with the end users throughout the period of development
and validation to reduce this skepticism and to increase the model's credibility.
The goal of the validation process is twofold:
1. To produce a model that represents true system behavior closely enough for the model to be
used as a substitute for the actual system for the purpose of experimenting with the system.
2. To increase to an acceptable level the credibility of the model, so that the model will be used by
managers and other decision makers.
1. Verification is concerned with building the model right. It compares the conceptual model to the
computer representation that implements that conception. It asks the questions: Is the model
implemented correctly in the computer? Are the input parameters and logical structure of the
model correctly represented?
2. Validation is concerned with building the right model. It determines whether a model is an
accurate representation of the real system. It is usually achieved through the calibration of the model.
The first step in model building consists of observing the real system and the interactions among
its various components and collecting data on its behavior. Operators, technicians, repair and
maintenance personnel, engineers, supervisors, and managers understand certain aspects of the system
which may be unfamiliar to others. As model development proceeds, new questions may arise,
and the model developers will return to this step of learning true system structure and behavior.
The second step in model building is the construction of a conceptual model – a collection of
assumptions on the components and the structure of the system, plus hypotheses on the values of
model input parameters, illustrated by the following figure.
The third step is the translation of the conceptual model into a computer-recognizable form – the
operational (computerized) model.
Figure: Model building, verification, and validation.
Real System <-> Conceptual Model: calibration and validation
Conceptual Model -> Operational Model (computerized representation): model verification
Conceptual Model:
1. Assumptions on system components
2. Structural assumptions, which define the interactions between system components
3. Input parameters and data assumptions
The purpose of model verification is to assure that the conceptual model is reflected accurately in
the computerized representation. The conceptual model quite often involves some degree of
abstraction about system operations, or some amount of simplification of actual operations.
Many common-sense suggestions can be given for use in the verification process:
1. Have the computerized representation checked by someone other than its developer.
2. Make a flow diagram that includes each logically possible action a system can take when an
event occurs, and follow the model logic for each action for each event type.
3. Closely examine the model output for reasonableness under a variety of settings of the input
parameters.
4. Have the computerized representation print the input parameters at the end of the simulation, to
be sure that these parameter values have not been changed inadvertently.
5. If the computerized representation is animated, verify that what is seen in the animation
imitates the actual system.
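As an illustration of suggestions 3 and 4, the following sketch (Python, with made-up parameter
values and a hypothetical single-server queue model) echoes the input parameters after a run and
applies a simple reasonableness check: the number of arrivals must equal the number of departures,
and the average delay cannot be negative.

import random

def run_model(mean_interarrival, mean_service, num_customers, seed=1):
    random.seed(seed)
    clock = 0.0
    server_free_at = 0.0
    arrivals = departures = 0
    total_delay = 0.0
    for _ in range(num_customers):
        clock += random.expovariate(1.0 / mean_interarrival)   # next arrival time
        arrivals += 1
        start = max(clock, server_free_at)     # service begins when the server is free
        total_delay += start - clock           # time spent waiting in queue
        server_free_at = start + random.expovariate(1.0 / mean_service)
        departures += 1
    return {"arrivals": arrivals, "departures": departures,
            "avg_delay": total_delay / num_customers}

params = {"mean_interarrival": 2.0, "mean_service": 1.5, "num_customers": 1000}
out = run_model(**params)

# Suggestion 4: echo the input parameters at the end of the run, so that
# inadvertent changes are easy to spot.
print("Input parameters used:", params)

# Suggestion 3: examine the output for reasonableness.
assert out["arrivals"] == out["departures"], "customers lost or double-counted"
assert out["avg_delay"] >= 0.0, "negative delay is impossible"
print("Average delay in queue:", out["avg_delay"])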
An interactive run controller or debugger is also an important aid in verification:
(a) The simulation can be monitored as it progresses.
(b) Attention can be focused on a particular line of logic, or on multiple lines of logic that
constitute a procedure or a particular entity.
(c) Values of selected model components can be observed. When the simulation has paused, the
current value or status of variables, attributes, queues, resources, counters, etc. can be observed.
(d) The simulation can be temporarily suspended, or paused, not only to view information but
also to reassign values or redirect entities.
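A minimal sketch of this pause-and-inspect idea, assuming a hand-written event-scheduling loop in
Python (real simulation packages provide their own interactive run controllers): the run is suspended
at a chosen simulation time so that the current state can be printed, and values could be reassigned,
before the run resumes.

import heapq

def simulate(until, pause_at=None):
    # Hypothetical state layout for a single-server queue.
    state = {"clock": 0.0, "in_system": 0, "server_busy": False}
    fel = [(1.0, "arrival")]                  # future event list: (time, event type)
    while fel and state["clock"] < until:
        time, event = heapq.heappop(fel)
        if pause_at is not None and time >= pause_at:
            # Suspend the run: inspect (or reassign) the current state here.
            print("paused before t =", time, "state =", state)
            pause_at = None                   # resume; pause only once
        state["clock"] = time
        if event == "arrival":
            state["in_system"] += 1
            heapq.heappush(fel, (time + 1.0, "arrival"))        # next arrival
            if not state["server_busy"]:
                state["server_busy"] = True
                heapq.heappush(fel, (time + 0.8, "departure"))  # start service
        else:                                  # departure
            state["in_system"] -= 1
            if state["in_system"] > 0:
                heapq.heappush(fel, (time + 0.8, "departure"))  # serve next customer
            else:
                state["server_busy"] = False
    return state

print(simulate(until=10.0, pause_at=5.0))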
Figure: Iterative process of calibrating a model. The initial model is compared to the real system,
revised (first revision of the model), compared again, revised again (second revision), and so on.
Verification and validation, although conceptually distinct, usually are conducted
simultaneously by the modeler.
Validation is the overall process of comparing the model and its behavior to the real system and
its behavior.
Calibration is the iterative process of comparing the model to the real system, making
adjustments to the model, comparing again, and so on.
The figure above shows the relationship of model calibration to the overall validation
process. The comparison of the model to reality is carried out by a variety of tests.
Tests are subjective and objective. Subjective tests usually involve people who are
knowledgeable about one or more aspects of the system making judgments about the
model and its output.
Objective tests always require data on the system's behavior plus the corresponding data
produced by the model.
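For instance, an objective test might compare the average delays observed in the real system with
those produced by independent model replications; a minimal sketch using a two-sample t test
follows (the delay values below are made up for illustration).

from scipy import stats

# Average delays observed in the real system (one value per observation period)
system_delays = [4.1, 3.8, 5.2, 4.6, 4.9]
# Average delays produced by the model (one value per independent replication)
model_delays = [4.4, 5.0, 3.9, 4.7, 5.3, 4.2]

t_stat, p_value = stats.ttest_ind(system_delays, model_delays, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")

# A small p-value is evidence that model and system behave differently; a large
# p-value means this test found no evidence of model inadequacy.
if p_value < 0.05:
    print("Reject H0: model output is inconsistent with the system data.")
else:
    print("No evidence of model inadequacy at the 5% level.")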
A possible criticism of the calibration phase, were it to stop at this point, is that the model has
been validated only for the one data set used; that is, the model has been "fit" to one data
set.
Validation Process
As an aid in the validation process, Naylor and Finger [1967] formulated a three-step approach
which has been widely followed:
1. Face Validity
The first goal of the simulation modeler is to construct a model that appears reasonable
on its face to model users and others who are knowledgeable about the real system being
simulated.
The users of a model should be involved in model construction from its conceptualization
to its implementation to ensure that a high degree of realism is built into the model
through reasonable assumptions regarding system structure, and reliable data.
Another advantage of user involvement is the increase in the model's perceived validity, or
credibility, without which managers will not be willing to trust simulation results as the
basis for decision making.
Sensitivity analysis can also be used to check a model's face validity.
The model user is asked whether the model behaves in the expected way when one or more
input variables are changed.
Based on experience and observations of the real system, the model user and model
builder would probably have some notion, at least, of the direction of change in model
output when an input variable is increased or decreased.
The model builder must attempt to choose the most critical input variables for testing if it
is too expensive or time-consuming to vary all input variables.
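A minimal sketch of such a sensitivity check, using an illustrative single-server FIFO queue computed
with the Lindley recurrence (the arrival and service rates are made up): increasing the arrival rate
should not decrease the average delay in queue.

import random

def avg_delay(arrival_rate, service_rate, n=50_000, seed=42):
    rng = random.Random(seed)
    delay = 0.0          # delay in queue of the current customer
    total = 0.0
    for _ in range(n):
        interarrival = rng.expovariate(arrival_rate)
        service = rng.expovariate(service_rate)
        # Lindley recurrence for a single-server FIFO queue:
        # next delay = max(0, current delay + service - interarrival)
        delay = max(0.0, delay + service - interarrival)
        total += delay
    return total / n

low = avg_delay(arrival_rate=0.5, service_rate=1.0)
high = avg_delay(arrival_rate=0.8, service_rate=1.0)
print(f"average delay at the lower arrival rate:  {low:.2f}")
print(f"average delay at the higher arrival rate: {high:.2f}")

# Face-validity expectation: heavier traffic should mean longer delays.
assert high > low, "model output moved in the wrong direction"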
2. Validation of Model Assumptions
Model assumptions fall into two general classes: structural assumptions and data assumptions.
a) Structural assumptions involve questions of how the system operates and usually involve
simplifications and abstractions of reality.
For example, consider the customer queuing and service facility in a bank. Customers may form
one line, or there may be an individual line for each teller. If there are many lines, customers may
be served strictly on a first-come, first-served basis, or some customers may change lines if one
is moving faster. The number of tellers may be fixed or variable. These structural assumptions
should be verified by actual observation during appropriate time periods together with
discussions with managers and tellers regarding bank policies and actual implementation of these
policies.
b) Data assumptions should be based on the collection of reliable data and correct statistical
analysis of the data. In the bank example, data were collected on:
1) Inter-arrival times of customers during several 2-hour periods of peak loading ("rush-hour"
traffic)
2) Service times of customers during the same time periods
The collected data are then analyzed by identifying an appropriate probability distribution, estimating
its parameters, and validating the assumed statistical model by goodness-of-fit tests, such as the
chi-square test and the K-S test, and by graphical methods.
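A minimal sketch of this last step, assuming the inter-arrival times have already been collected into an
array (synthetic data stand in for real observations here): fit an exponential distribution and apply the
K-S test.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
interarrival_times = rng.exponential(scale=2.0, size=200)   # placeholder for real data

# Estimate the distribution parameter (mean inter-arrival time) from the data.
mean_iat = interarrival_times.mean()

# K-S test against an exponential distribution with the fitted mean.
# (Strictly speaking, estimating the parameter from the same data makes this test
# conservative; a chi-square test or a Lilliefors-type correction can be used instead.)
d_stat, p_value = stats.kstest(interarrival_times, "expon", args=(0, mean_iat))
print(f"D = {d_stat:.3f}, p = {p_value:.3f}")

if p_value < 0.05:
    print("Reject the exponential assumption at the 5% level.")
else:
    print("No evidence against the exponential assumption.")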
3. Validating Input-Output Transformations
In this phase of the validation process the model is viewed as an input-output transformation.
That is, the model accepts the values of input parameters and transforms these inputs into
output measures of performance. It is this correspondence that is being validated.
Instead of validating the model's input-output transformation by predicting the future, the
modeler may use past historical data that has been reserved for validation purposes; that
is, if one data set has been used to develop and calibrate the model, it is recommended that a
separate data set be used as the final validation test.
Thus, accurate "prediction of the past" may replace prediction of the future for the purpose of
validating the model.
A necessary condition for the validation of the input-output transformation is that some version
of the system under study exists, so that system data under at least one set of input conditions
can be collected and compared to model predictions.
If the system is in the planning stage and no system operating data can be collected, complete
input-output validation is not possible.
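As a sketch of an input-output validation test (with illustrative numbers), the average delay produced
by several independent model replications, Y2, can be compared with the average delay observed in
the real system, Z2, using a one-sample t test and a confidence interval.

from statistics import mean, stdev
from scipy import stats

y2_replications = [4.9, 5.4, 4.6, 5.1, 5.8, 4.7]   # model response, one per replication
z2_observed = 4.3                                   # average delay observed in the system

# One-sample t test of H0: E(Y2) = Z2.
t_stat, p_value = stats.ttest_1samp(y2_replications, popmean=z2_observed)
print(f"mean Y2 = {mean(y2_replications):.2f}, Z2 = {z2_observed:.2f}")
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")

# A 95% confidence interval for E(Y2) - Z2 shows not only whether the difference
# is statistically significant but also how large it might be.
n = len(y2_replications)
half_width = stats.t.ppf(0.975, df=n - 1) * stdev(y2_replications) / n ** 0.5
diff = mean(y2_replications) - z2_observed
print(f"95% CI for E(Y2) - Z2: ({diff - half_width:.2f}, {diff + half_width:.2f})")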
Validation increases the modeler's confidence that the model of the existing system is accurate.
Changes in the computerized representation of the system, ranging from relatively minor
to relatively major, include:
1. Minor changes of single numerical parameters, such as the speed of a machine or the arrival
rate of customers.
2. Minor changes of the form of a statistical distribution, such as the distribution of a service time.
3. Major changes in the logical structure of a subsystem, such as a change in queue discipline.
4. Major changes involving a different design for the new system, such as a
computerized inventory control system replacing a non-computerized system.
If the change to the computerized representation of the system is minor, such as in items
one or two, these changes can be carefully verified and output from the new model can be
accepted with considerable confidence. Only partial validation of substantial model changes, as in
items three and four, may be possible.
When using artificially generated data as input data, the modeler expects the model to
produce event patterns that are compatible with, but not identical to, the event patterns
that occurred in the real system during the period of data collection.
Thus, in the bank model, artificial input data {X1n, X2n; n = 1, 2, ...} for inter-arrival and
service times were generated, and replicates of the output data Y2 were compared to what
was observed in the real system. An alternative to generating input data is to use the
actual historical record, {An, Sn; n = 1, 2, ...}, to drive the simulation model and then to
compare model output to system data.
To implement this technique for the bank model, the data A1, A2, ... and S1, S2, ... would have
to be entered into the model into arrays, or stored in a file to be read as the need arose.
To conduct a validation test using historical input data, it is important that all the input data
(An, Sn, ...) and all the system response data, such as the average delay (Z2), be collected
during the same time period. Otherwise, the comparison of model responses to system responses
could be misleading, because the responses (Y2 and Z2) depend on the inputs (An and Sn) as well as
on the structure of the system, or model.
Implementation of this technique could be difficult for a large system because of the need
for simultaneous data collection of all input variables and those response variables of
primary interest.
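For the small bank model, however, the technique is straightforward; a minimal sketch (with short
made-up arrays for An and Sn and a made-up observed Z2) drives a single-server FIFO queue with the
historical inputs and compares the resulting average delay Y2 to Z2.

# Historical inputs and the observed system response; short made-up values.
A = [1.2, 0.7, 2.3, 0.4, 1.9, 0.8, 1.5]   # inter-arrival times An
S = [1.0, 1.6, 0.9, 1.4, 0.7, 1.2, 1.1]   # service times Sn
z2_observed = 0.9                          # average delay recorded in the system

arrival = 0.0
server_free_at = 0.0
delays = []
for a, s in zip(A, S):
    arrival += a                           # actual arrival instant of this customer
    start = max(arrival, server_free_at)   # FIFO single-server discipline
    delays.append(start - arrival)         # delay in queue for this customer
    server_free_at = start + s

y2_model = sum(delays) / len(delays)
print(f"model average delay Y2 = {y2_model:.2f}")
print(f"system average delay Z2 = {z2_observed:.2f}")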
In a Turing test, persons knowledgeable about system behavior are asked to compare model output
to system output; for example, five real reports of system performance and five "fake" reports
produced from simulation output data. The ten reports are randomly shuffled and given to the
engineers, who are asked to decide which reports are fake and which are real.
If an engineer identifies a substantial number of the fake reports, the model builder questions the
engineer and uses the information gained to improve the model.
If the engineers cannot distinguish between fake and real reports with any consistency, the
modeler will conclude that this test provides no evidence of model inadequacy.