
Verification & Validation of Simulation Models

(in business management context)

A Thesis Proposal by

Deepesh Gotherwal
2021FPM05
OM & QT Area

13th September 2024

Thesis Advisory Committee


Prof. Pritam Ranjan (Chair)
Prof. Bhavin J. Shah (Member)
Prof. Saurabh Chandra (Member)

Indian Institute of Management Indore


Verification & Validation of Simulation Models

Abstract

Simulation models have been applied extensively to operations management (OM)
and operations research (OR) problems in domains such as manufacturing, project
management, defense, healthcare, supply chains, and networks (Jahangirian et al.,
2010). The main objective of a simulation model is to imitate a real-world system
in order to analyze, and find solutions to, a problem that is generally difficult to
solve analytically (Banks, 1999). Law and Kelton (1991) called simulation a method
of last resort. In the OM-OR literature on manufacturing and business management,
simulation is claimed to be the second most popular technique, after mathematical
modeling (Amoako-Gyampah and Meredith, 1989; Pannirselvam et al., 1999;
Jahangirian et al., 2010). Moreover, the field of simulation has gained momentum
with advances in computing power, which have facilitated the widespread adoption
of simulation methods and tools (Mourtzis, 2020).
While building a simulation model is not in itself a cumbersome task, building a
reliable and efficient one is a challenge. Verification and Validation (V&V) of a
simulation model refers to establishing its validity, credibility, and usability (Sargent,
2020), and it remains an important concern (Harper et al., 2021). Numerous
scholars have offered definitions of the terms validation of a simulation model and
verification of a simulation model. These definitions, although they vary slightly,
generally convey the same meaning. The purpose of verification is to ensure that
the conceptual model (an abstraction of the real system) has been correctly
implemented in software (Sargent, 2020), whereas validation checks how closely the
conceptual model represents the real system (Sargent, 2020). It is essential to
acknowledge that a simulation model is developed to analyze a system for a specific
purpose (or purposes); therefore, its validity should be assessed solely with respect
to those purposes (Sargent, 2020). Put differently, verification aims at a flawless
computer code, whereas validation aims at a reasonable conceptual model (Kleijnen,
1995). Model verification techniques help in debugging the computer simulation
code, while validation techniques check the appropriateness of the assumptions made
in the conceptual model and assess its agreement with the real-world system.
The literature on V&V of simulation models is vast. To analyze it quantitatively
and uncover research gaps, we used two complementary analytical methodologies.
Data for the analysis were retrieved from Web of Science and Scopus. The initial
search returned a total of 17,224 documents; after applying the required filters, this
number was reduced to 11,303. The abstract of each of these papers was carefully
reviewed, and finally, 300 papers were selected.
First, we conducted a scientometric analysis, which provided insights into the
evolution of the literature. The scientometric study helped us identify key authors
and their research networks, prominent keywords and their co-occurrences,
bibliographically coupled documents, and sources. Scientometric methods utilize
extensive bibliometric data to provide a quantitative assessment of the literature
and to identify systematic, literature-wide findings. By establishing connections
between concepts in the literature, these tools can uncover insights that a manual
review might miss (Su and Lee, 2010). For the descriptive analysis, we used the
bibliometrix package in R (Aria and Cuccurullo, 2017), and for generating
bibliometric maps, we used the VOSviewer tool (Van Eck and Waltman, 2010).
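The keyword co-occurrence counting that underlies such bibliometric maps can be illustrated with a minimal sketch. The keyword lists below are invented toy data, and the function name is ours; real tools such as VOSviewer operate on full bibliographic exports, but the core counting step works like this:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_counts(keyword_lists):
    """Count how often each pair of keywords appears in the same document."""
    pairs = Counter()
    for kws in keyword_lists:
        # Sort so each unordered pair is counted under one canonical key.
        for a, b in combinations(sorted(set(kws)), 2):
            pairs[(a, b)] += 1
    return pairs

# Toy corpus: author keywords from three hypothetical papers.
docs = [
    ["validation", "simulation", "verification"],
    ["simulation", "validation"],
    ["simulation", "calibration"],
]
counts = cooccurrence_counts(docs)
print(counts[("simulation", "validation")])  # 2
```

Pairs with high counts become the strong links in a co-occurrence map, revealing which concepts the literature discusses together.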
Second, we used topic modeling to identify and analyze important themes. Topic
modeling is a statistical process that uses unsupervised machine learning techniques
to identify clusters of similar words within a large text corpus. Many techniques
are available for topic modeling, such as Latent Semantic Analysis (LSA),
Probabilistic Latent Semantic Analysis (pLSA), and Latent Dirichlet Allocation
(LDA). LDA is the most commonly used technique (Vayansky and Kumar, 2020),
and it is the one we adopted. LDA enabled us to explore the areas of research that
attract the most attention, identify underlying thematic patterns, and choose the
most relevant publications for each field of study.
Data on the real system under study (either output alone or input-output) play a
vital role in validating a simulation model (Kleijnen, 1995). Over time, multiple
statistical methodologies have been developed, contingent upon the availability of
such data. We have organized the evolution of these methodologies (not tied to any
particular simulation technique or application domain) into three categories: (a) no
real data, (b) output real data, and (c) input-output real data. Validation tests
such as the naive, novel, bootstrap, and neural-network tests are most appropriate
when both input and output data on the real system are available. If only output
data are available, comparing the outputs of the simulated and real systems is the
only viable option; tests such as analysis of variance, factor analysis, the
Kolmogorov-Smirnov test, the chi-square test, and Student's t-test have been
prescribed. If no data on the real system are available, sensitivity analysis is the
only suggested validation procedure; the literature discusses both numerical
sensitivity and pattern sensitivity.
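The output-only case can be sketched concretely. The synthetic samples below are toy stand-ins for real-system and simulation outputs, generated with an assumed common distribution so that the tests should not reject agreement:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
real = rng.normal(10.0, 2.0, size=200)       # observed real-system outputs (toy)
simulated = rng.normal(10.0, 2.0, size=200)  # outputs of a hypothetical simulation model

# Kolmogorov-Smirnov test compares the full output distributions.
ks = stats.ks_2samp(real, simulated)

# Student's t-test compares the output means.
tt = stats.ttest_ind(real, simulated)

# Large p-values mean the tests fail to reject agreement between the
# simulated and real outputs, supporting (not proving) validity.
print(ks.pvalue, tt.pvalue)
```

Failing to reject is only evidence consistent with validity; as the literature stresses, such tests cannot prove that the model is correct.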
After a thorough and systematic literature review, we identified three research
gaps that we propose to address in further study.

1. Schruben-Turing test: To test the validity of a simulation model, the outputs
of the real system and of the simulation model are presented, unlabeled, to an
expert. If the expert can segregate the two sets of outputs, the simulation model
is believed to be unreliable (Kleijnen, 1995). Despite the clear nature of the test,
our understanding is that this validity testing is conducted manually by the
expert. Given the advances in machine learning and artificial intelligence, an
automated technique for segregating the two sets of outputs can be developed
for validity testing.
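One way such automation might look, as a sketch only: replace the human expert with a classifier trained to distinguish real from simulated output records. The features and data below are invented; the idea is that cross-validated accuracy near chance (0.5) means the classifier cannot segregate the two sets, which is evidence of validity:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Hypothetical output records (two summary features per run);
# here real and simulated are drawn from the same distribution.
real = rng.normal([10.0, 2.0], 0.5, size=(100, 2))
simulated = rng.normal([10.0, 2.0], 0.5, size=(100, 2))

X = np.vstack([real, simulated])
y = np.array([0] * 100 + [1] * 100)  # 0 = real, 1 = simulated

# Cross-validated accuracy of a classifier playing the "expert".
acc = cross_val_score(LogisticRegression(), X, y, cv=5).mean()

# Accuracy near 0.5 => the outputs are statistically indistinguishable.
print(round(acc, 2))
```

Any classifier could play this role; the research question is how to formalize the pass/fail threshold and the choice of output features.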

2. Pattern sensitivity analysis: Hekimoğlu and Barlas (2016) addressed the issue
of pattern sensitivity and developed a regression-based pattern sensitivity
analysis for system dynamics models. Although that paper fills a significant gap,
several areas could be explored further to enhance the robustness of pattern
sensitivity analysis:

• Develop a methodology for handling input parameters that follow
non-uniform distributions or are categorical in nature.
• Develop an algorithm that can segregate the inputs that produce different
output behavior modes. This could save considerable time and increase
accuracy.
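The second bullet can be illustrated with a toy sketch. The one-parameter model and the crude pattern feature below are our own assumptions, not the method of Hekimoğlu and Barlas (2016); the point is only to show what "segregating inputs by output behavior mode" means:

```python
from collections import defaultdict

def simulate(r, steps=20, x0=1.0):
    """Toy first-order model x_{t+1} = r * x_t: growth if r > 1, decay if r < 1."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1])
    return xs

def behavior_mode(xs):
    """Crude pattern feature: compare the final value to the initial value."""
    if xs[-1] > xs[0] * 1.01:
        return "growth"
    if xs[-1] < xs[0] * 0.99:
        return "decay"
    return "steady"

# Sweep the input parameter and segregate inputs by the mode they produce.
modes = defaultdict(list)
for r in [0.8, 0.9, 1.0, 1.1, 1.2]:
    modes[behavior_mode(simulate(r))].append(r)

print(dict(modes))  # {'decay': [0.8, 0.9], 'steady': [1.0], 'growth': [1.1, 1.2]}
```

A real algorithm would replace the hand-coded mode test with learned behavior-pattern measures and scale to many input dimensions.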

3. Inverse problem: This concerns validating a simulation model when the output,
but not the input, of the real system is available. The situation is akin to an
inverse problem in the area of computer experiments (Bhattacharjee et al., 2019;
Ranjan et al., 2008, 2016). Let g(x) denote the simulator response for a given
input x. The objective of the inverse problem is then to find the x (or set of x's)
that generates a desired (pre-specified) output g0. Take g0 to be the real-system
output and g(x) the simulation model output. If a feasible inverse solution exists,
the simulation model can be said to be validated. We propose to develop suitable
methodology for discrete-event simulation (DES), system dynamics (SD),
agent-based simulation (ABS), and hybrid models.
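The inverse-problem formulation can be sketched with a toy simulator. The quadratic g below is an invented stand-in for an expensive simulation model; the proposed research concerns doing this efficiently for DES, SD, ABS, and hybrid models, not this trivial case:

```python
from scipy.optimize import minimize_scalar

def g(x):
    """Toy simulator response (hypothetical stand-in for a real model)."""
    return x ** 2 + 1.0

g0 = 5.0  # observed real-system output

# Inverse problem: find x minimizing the discrepancy |g(x) - g0|.
res = minimize_scalar(lambda x: abs(g(x) - g0), bounds=(0, 10), method="bounded")
x_star = res.x

# A feasible solution (discrepancy near zero) is evidence of validity.
print(round(x_star, 3), round(g(x_star), 3))  # x ≈ 2, since g(2) = 5
```

For expensive stochastic simulators, direct optimization like this is impractical, which is why the computer-experiments literature uses sequential designs and surrogates (Ranjan et al., 2008, 2016).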

Bibliography

Amoako-Gyampah, K., & Meredith, J. R. (1989). The operations management
research agenda: An update. Journal of Operations Management, 8(3), 250-262.

Aria, M., & Cuccurullo, C. (2017). bibliometrix: An R-tool for comprehensive
science mapping analysis. Journal of Informetrics, 11(4), 959-975.

Banks, J. (1999). Introduction to simulation. Winter Simulation Conference
Proceedings, 1, 7-13.

Bhattacharjee, N. V., Ranjan, P., Mandal, A., & Tollner, E. W. (2019). A history
matching approach for calibrating hydrological models. Environmental and
Ecological Statistics, 26(1), 87-105.

Harper, A., Mustafee, N., & Yearworth, M. (2021). Facets of trust in simulation
studies. European Journal of Operational Research, 289(1), 197-213.

Hekimoğlu, M., & Barlas, Y. (2016). Sensitivity analysis for models with multiple
behavior modes: A method based on behavior pattern measures. System Dynamics
Review, 32(3-4), 332-362.

Jahangirian, M., Eldabi, T., Naseer, A., Stergioulas, L. K., & Young, T. (2010).
Simulation in manufacturing and business: A review. European Journal of
Operational Research, 203(1), 1-13.

Kleijnen, J. P. C. (1995). Verification and validation of simulation models.
European Journal of Operational Research, 82(1), 145-162.

Kleijnen, J. P. C. (1995). Statistical validation of simulation models. European
Journal of Operational Research, 87(1), 21-34.

Law, A. M., & Kelton, W. D. (1991). Simulation Modeling and Analysis (2nd ed.).
McGraw-Hill, New York.

Mourtzis, D. (2020). Simulation in the design and operation of manufacturing
systems: State of the art and new trends. International Journal of Production
Research, 58(7), 1927-1949.

Pannirselvam, G. P., Ferguson, L. A., Ash, R. C., & Siferd, S. P. (1999).
Operations management research: An update for the 1990s. Journal of Operations
Management, 18(1), 95-112.

Ranjan, P., Bingham, D., & Michailidis, G. (2008). Sequential experiment design
for contour estimation from complex computer codes. Technometrics, 50, 527-541.

Ranjan, P., Thomas, M., Teismann, H., & Mukhoti, S. (2016). Inverse problem for
time-series valued computer model via scalarization. Open Journal of Statistics,
6, 528-544.

Sargent, R. G. (2020). Verification and validation of simulation models: An
advanced tutorial. In 2020 Winter Simulation Conference (WSC) (pp. 16-29). IEEE.

Su, H. N., & Lee, P. C. (2010). Mapping knowledge structure by keyword
co-occurrence: A first look at journal papers in Technology Foresight.
Scientometrics, 85(1), 65-79.

Van Eck, N. J., & Waltman, L. (2010). Software survey: VOSviewer, a computer
program for bibliometric mapping. Scientometrics, 84(2), 523-538.

Vayansky, I., & Kumar, S. A. (2020). A review of topic modeling methods.
Information Systems, 94, 101582.
