
SOFTWARE PROCESS IMPROVEMENT AND PRACTICE

Softw. Process Improve. Pract. 2009; 14: 165–180


Published online 7 May 2009 in Wiley InterScience
(www.interscience.wiley.com) DOI: 10.1002/spip.411

Research Section

A Customizable Pattern-based Software Process Simulation Model: Design, Calibration and Application

Vahid Garousi (1), Keyvan Khosrovian (1) and Dietmar Pfahl (1, 2, 3), *
(1) Department of Electrical and Computer Engineering, Schulich School of Engineering, University of Calgary, Canada
(2) Simula Research Laboratory, Software Engineering Department, Lysaker, Norway
(3) Department of Informatics, University of Oslo, Norway

* Correspondence to: Dietmar Pfahl, Simula Research Laboratory, Software Engineering Department, P.O.Box 134, 1325 Lysaker, Norway. E-mail: [email protected]
Software process analysis and improvement relies heavily on empirical research. Empirical research requires measurement, experimentation, and modeling. However, whatever evidence is gained via empirical research is strongly context dependent. Thus, it is hard to combine results and capitalize upon them for the purpose of improvement in evolving development environments. The process simulation model GENSIM 2.0 addresses these challenges. GENSIM 2.0 is a generic process simulation tool representing V-model type software development processes. Compared to existing process simulation models in the literature, the novelty of GENSIM 2.0 is twofold. Firstly, its model structure is customizable to organization-specific processes. This is achieved by using a limited set of generic structures (macro-patterns). Secondly, its model parameters can be easily calibrated to available empirical data and expert knowledge. This is achieved by making the internal model structures explicit and by providing guidance on how to calibrate model parameters. This article outlines the structure of GENSIM 2.0, gives examples on how to calibrate the model to available empirical data, and demonstrates its usefulness through two application scenarios. The first scenario illustrates how GENSIM 2.0 helps in finding effective combinations of verification and validation techniques under given time and effort constraints. The second scenario shows how the simulator supports finding the best combination of alternative verification techniques. Copyright © 2009 John Wiley & Sons, Ltd.
KEY WORDS: customization; GENSIM 2.0; reuse; simulation; software process

1. INTRODUCTION AND MOTIVATION

Empirical research is essential for developing theories of software development, transforming the art of software development into an engineering discipline and hence improving overall performance of software development activities. Engineering disciplines, on the other hand, require provision of evidence on the efficiency and effectiveness of tools and techniques in varying application contexts. In the software engineering domain, the number of tools and techniques is constantly growing, and ever more contexts emerge in which a tool or technique might be applied.
The application context of a tool or technique is defined, firstly, by organizational aspects such as process organization, resource allocation, staffing profiles, and management policies, and, secondly, by the set of all other tools and techniques applied in a development project.

Since most activities in software development are strongly human-based, the actual efficiency and effectiveness of a tool or technique can only be determined through real-world experiments. Controlled experiments are a means for assessing local efficiency and effectiveness, e.g. the typical defect detection effectiveness of an inspection or test technique applied to a specific type of artifact, by a typical class of developers (with adequate training and experience levels), regardless of the other techniques and entities involved in the development process. Global efficiency and effectiveness of a tool or technique, on the other hand, relate to its impact on the overall performance of the development project, i.e. total project duration, total project effort consumption (or cost), and quality of the end product delivered to the customers, while considering all other entities involved in the development process and their mutual influences. Typically, global efficiency and effectiveness are evaluated through case studies.

Controlled experiments and case studies are expensive. Support for making decisions as to which experiments and case studies to perform would be helpful. Currently, these decisions are taken based on expert opinion, mostly relying on experience and intuition. This kind of decision-making has two major drawbacks. Firstly, numerous mutual influences between entities involved in a process make it hard for an expert to estimate to what extent a locally efficient and effective tool or technique positively complements other locally efficient and effective tools or techniques applied in other activities of the chosen development process. Secondly, for the same reasons as mentioned above, it is hard for an expert to estimate how sensitively overall project performance dimensions will react to variations in local efficiency or effectiveness of a single tool or technique. The second point is particularly important if a decision has to be made whether assumed improvements are worthwhile to be empirically investigated within various contexts.

In order to assist decision makers in situations as described above and help minimize the drawbacks of the current expert-based decision-making approaches, one can provide experts with a software process simulation system that generates estimates of the impact of local process changes on overall project performance. For example, assume that the defect detection effectiveness of unit testing technique S is locally 10% better than that of unit testing technique T in a given context, as derived from case studies or laboratory experiments with the tool or technique, or from what the vendors of the tool claim. Through simulation we may find out that using technique S instead of technique T yields an overall positive impact of 2% on end product quality (plus effects on project duration and effort), or we may find out that the change yields an overall positive impact of 20%. If simulations indicate that it has only 2% overall impact or less, it may not be worthwhile to run additional experiments to explore the actual advantage of technique S over technique T (in a specific context).

With the help of a simulation model and tool, even more complex situations could be investigated. For example, one could assess the overall effectiveness of varying combinations of different development, verification, and validation techniques. Also, one could ask how much workforce should be allocated to development, verification, and validation activities in order to achieve defined time, cost, and quality goals. One can go a step further and use process simulators to analyze how better developer skills improve project performance. This, in turn, can be used to assess whether and to what extent investment in training or in hiring better qualified developers would pay off. A core element of the proposed solution is the simulation framework GENSIM 2.0 (GENeric SIMulator, Version 2.0), which is an enhanced version of an older research prototype, GENSIM (Pfahl et al., 2001).

The rest of the article is structured as follows: Section 2 discusses related work and how the research presented in this article differs from existing research. Section 3 presents an overview of the structure of GENSIM 2.0 and its reusable process structure components, describes how its reusable and easily adaptable model structure reflects generic software process structures, what model parameters are used for input, output, and model calibration, and how the model is implemented. Section 4 presents two application scenarios related to different types of problems decision-makers face when planning and performing software development projects. Section 5 discusses the results and the validity of the model. Finally, Section 6 provides conclusions about the current state of research and suggests further steps.

2. RELATED WORK

The idea of using software process simulators for predicting project performance or evaluating processes is not new. Beginning with pioneers like Kellner and Hansen (1989), Mi and Scacchi (1990), Abdel-Hamid and Madnick (1991), Gruhn and Saalmann (1992), and Bandinelli et al. (1995), dozens of process simulation models have been developed for various purposes¹. However, all known models have at least one of the following shortcomings:

• (S1) The model is too simplistic to actually capture the full complexity of real-world processes.
• (S2) The model structure and calibration are not comprehensively reported and thus cannot be independently and easily adapted and used by others.
• (S3) The model captures a specific real-world development process with sufficient detail but fails to offer mechanisms to represent complex product and resource models. This has typically been an issue for models using the System Dynamics (SD) (Forrester, 1961) modeling environments.
• (S4) The model structure captures a specific real-world development process (and associated products and resources) in sufficient detail, but is not (easily) adaptable to new application contexts due to lack of design for reusability and lack of guidance for re-calibration.

¹ For an overview of software process simulation work done in the past 15 to 20 years, refer to Zhang et al. (2008). Currently, the second stage of a systematic review is being conducted by researchers at National ICT Australia (NICTA) that will offer a more comprehensive overview of work done in the field of software process simulation. The first stage of the systematic review, presenting first results, has been published by Zhang et al. (2008).

GENSIM (Pfahl et al., 2001), a predecessor of GENSIM 2.0, is an example of a process simulation tool containing a model with the first shortcoming (S1). GENSIM is intended to be used for purely educational purposes. It models a basic waterfall-like development process with three phases (design, implementation, and test) and mostly focuses on managerial aspects related to the performance of software development projects. Even though GENSIM is a good learning aid to familiarize software engineering students with managerial concepts and issues of software development projects, it has a simplified and limited scope which makes it unsuitable for comprehensive analyses of development processes in real-world environments requiring detailed modeling of development activities and resources.

An example of a model having the second shortcoming (S2) is reported by Raffo et al. (2004). The goal of building the model described by Raffo et al. (2004) is to facilitate quantitative assessment of financial benefits when applying independent verification and validation (IV&V) techniques in software development projects and determining the optimal alternatives regarding those benefits. IV&V techniques can be applied during all phases of development by one or more persons, independent from the developers of a system. The model described by Raffo et al. (2004) captures a NASA project following the IEEE 12207 software development process standard in such a way that many different IV&V configurations can be evaluated by simulation. Simulation results are then used to answer questions such as: What would be the costs and benefits associated with implementing a given IV&V technique on a selected software project? How would the utilization of a particular combination of IV&V techniques affect the development phase of the project? Despite providing descriptions and snapshots of the overall structure of the simulation model, the implementation of the model has not been made available for public use and therefore cannot be reused by others. This fact even limits the usefulness of the published experimental results, because the internal model mechanisms that generate the results cannot be reproduced by others.

In the study by Pfahl and Lebsanft (2000) the authors report experience with a tool called PSIM (Project SIMulator) having the fourth shortcoming (S4). The development and application of PSIM was part of a project conducted by Siemens Corporate Research within a Siemens business unit. Its purpose was to assess the feasibility of SD modeling and its benefits in planning, controlling, and improving software development processes in a real-world environment. The simulation model contained in PSIM captures a development process comprising the phases ‘high level design’, ‘low level design’, ‘implementation’, ‘unit test’, and ‘system test’.

The model structure is very detailed and model calibration benefited from the availability of different information sources (e.g. empirical data and expert knowledge). Although the model consists of five submodels, the underlying design did not aim at reusability and thus it is very difficult to adapt the model to the needs of other development organizations.

While shortcomings (S2) and (S3) can easily be resolved by publishing model source files and fully exploiting the modeling constructs offered by commercial process simulation environments such as Extendsim (http://www.imaginethatinc.com) and Vensim (http://www.vensim.com), the fourth issue has not yet been satisfactorily resolved, neither by researchers proposing proprietary process simulation modeling environments [e.g. Little-JIL (Wise, 2006)] nor by researchers using commercial process simulation environments or tools [e.g. Ur et al. (2007)].

A first attempt to define a set of core structures of process simulation models, which could be regarded as a set of basic building blocks of any process simulator, was made by Senge in the early 1990s (Senge, 1990). He identified ten ‘Systems Archetypes’, i.e. generic process structures which embody typical behavior patterns of individuals and organizations, e.g. ‘limits to growth’ and ‘spiral of success’. In the context of business processes, Russell et al. (2006) present a list of 43 workflow control-flow patterns, e.g. the sequence pattern and the parallel split pattern. The sequence pattern is the fundamental building block of workflow processes and is used to construct a series of consecutive activities, which execute in turn one after the other. The parallel split pattern constructs the divergence of a branch into two or more parallel branches, each of which executes concurrently. Although the archetypes presented by Senge (1990) and the control-flow patterns by Russell et al. (2006) are certainly good means for understanding individual and organizational behavior modes, they are either too generic or too qualitative to be directly applicable for the modeling of software development processes.

The study by Russell et al. (2006) also presented a comprehensive evaluation of 14 process-aware information systems to determine the extent to which those tools support the proposed 43 control-flow patterns. Two of the evaluated products were TIBCO Business Studio (formerly known as Staffware Process Suite) and IBM WebSphere MQ Workflow. All the tools evaluated are general purpose and none offers specific modeling features for software development processes. The Vensim tool was not included in that evaluation (Russell et al., 2006).

More recently, following the approach taken by Senge but having software development processes in mind, Madachy (2006) suggested a core set of reusable model structures and behavior patterns. This set comprises several very specific micro-patterns (and their implementations) suited for SD process simulation models. Madachy’s micro-patterns (e.g. s-shaped growth, reinforcing feedback, oscillation, delay) are well-thought-out reusable process structures, with very specific purpose and focused scope. They can be interpreted as a bottom-up approach to support reusability of process simulation structure. However, there exist no guidelines that help modelers combine individual micro-patterns to capture more complex, software development-specific process structures.

Although it could be argued that other tools might already support the rapid creation of executable simulation processes from process fragments, no systematic comparison of existing tools with respect to their strengths in supporting process simulation exists. It was decided to use Vensim in this work for two major reasons: (a) its capability of working with external dynamic link libraries (DLLs) to support organization-specific heuristics, e.g. developer allocation algorithms, and (b) its ability to provide rich analysis features during a simulation, e.g. graphing of variables. Our evaluation of TIBCO Business Studio, as an example of another tool, revealed that it is not capable of the above features. In summary, note that the contribution of this work is not the creation of only a process model, but rather a simulation modeling framework which is open, flexible, and can be extended by other software engineering researchers and practitioners.

Emerging from suggestions made several years ago (Angkasaputra and Pfahl, 2004), the work presented in this article complements Madachy’s micro-patterns with a modeling-language-independent approach providing an initial set of reusable and adaptable macro-patterns of software development processes. The suggested macro-patterns are described in more detail in Section 3. The implementation of the macro-patterns using an SD modeling language is exemplified by the research prototype GENSIM 2.0.

Besides capturing key structural and behavioral aspects of software development processes of the V-model type, GENSIM 2.0 provides a blueprint on how to integrate detailed product and resource models. In GENSIM 2.0, each instance of a process artifact type and resource type, i.e. roles involved in software development, is modeled individually. Since GENSIM 2.0 is the starting point of a long-term research program supporting the integration of results from empirical software engineering research conducted worldwide, this article also presents examples on how to calibrate GENSIM 2.0 to empirical data from different sources. No other publicly available software simulation model has all the above capabilities.

There exist manual and automatic methods for learning and configuring software processes in the literature [e.g. Cook and Wolf (1998)]. The work by Cook and Wolf (1998) presents a method to discover software processes from event-based data available during a software development project, e.g. meetings and module compilation logs. The goal of this paper, however, is not to mine nor configure software processes, but rather to provide a core solution for flexible simulation of generic software development processes, and to assist developers and managers in answering various process-related questions, e.g. “How do workforce size and skill variations impact project performance (project duration, effort consumption, code quality)?” Other example questions which can be answered by GENSIM 2.0 are discussed in Section 4.

3. THE GENSIM 2.0 MODEL

Inspired by the idea of frameworks in software engineering, customizable software process simulation models can be constructed using generic and reusable structures (Raffo et al., 2003; Armbrust, 2005; Müller and Pfahl, 2007) referred to as macro-patterns. This section describes two macro-patterns of software development processes, the key parameters associated with these patterns, and their implementation in GENSIM 2.0. Moreover, examples that illustrate how GENSIM 2.0 can be calibrated to empirical data collected from specific software development environments are presented. Due to space limitations, not all details can be presented in this article. Further details on implementation and calibration of GENSIM 2.0 can be found in the work by Khosrovian et al. (2007a) and Khosrovian et al. (2007b), respectively.

3.1. Generic Process Structures (Macro-Patterns)

The left-hand side of Figure 1 shows the macro-pattern that GENSIM 2.0 employs for development activities (comprising initial development and rework) and their associated verification (and re-verification) activities. As shown in the figure, it is assumed that a software artifact is verified, e.g. inspected, right after its development has been completed. However, since not all artifacts are verified in every software development project, this type of activity can be defined as optional in GENSIM 2.0. Associated with activities are input/output products and resources. In addition, each artifact, activity, and resource is characterized by attributes representing states. ‘Learning’ is an example attribute related to resources (e.g. developers), while ‘maturity’ is an example of a state attribute related to artifacts and activities. The right-hand side of Figure 1 shows state-transitions for development (top) and verification (bottom) maturity. The state transition diagram for development defines that as soon as the target size of the artifact that has to be developed becomes greater than zero, the development activity transitions into the In Progress state. After development of an artifact is finished, if verification is planned to be carried out, the artifact is handed to the verification team and the development activity transitions into the Complete state. The same transition happens when rework of artifacts is finished and they are handed to validation teams for testing activities. Hence, in the diagram, state transitions of the development activity are specified using the Total V&V status, which represents the state of all V&V activities together. After the verification activity is finished, the development activity transitions into the In Progress state again, as the artifact has to be reworked. This transition happens similarly in the situation where validation activities are finished and artifacts have to be reworked as a result. Whenever the development activity of an artifact is finished and no more verification and validation needs to be carried out, the development activity of the artifact is finalized.

Figure 1. Macro-pattern for development/verification activity pairs (with state-transition charts)
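To make the state-transition logic described above concrete, the following is a minimal sketch of the development-activity maturity states; it is not part of GENSIM 2.0's Vensim implementation. The state names In Progress, Complete, and Finalized and the triggering conditions follow the description of Figure 1, while the additional Not Started state, the class, and the method names are hypothetical.

```python
from enum import Enum, auto

class DevState(Enum):
    NOT_STARTED = auto()   # assumed initial state, not named in the paper
    IN_PROGRESS = auto()   # artifact is being developed or reworked
    COMPLETE = auto()      # handed over to verification/validation teams
    FINALIZED = auto()     # no further V&V or rework pending

class DevelopmentActivity:
    """Hypothetical sketch of the development/rework activity for one artifact."""

    def __init__(self) -> None:
        self.state = DevState.NOT_STARTED

    def set_target_size(self, size: float) -> None:
        # As soon as the target size of the artifact becomes greater than zero,
        # the development activity transitions into In Progress.
        if size > 0 and self.state is DevState.NOT_STARTED:
            self.state = DevState.IN_PROGRESS

    def development_finished(self, vv_planned: bool) -> None:
        # When development (or rework) finishes, the artifact is handed to the
        # verification/validation teams (Complete); if no V&V is planned at all,
        # the activity is finalized directly.
        if self.state is DevState.IN_PROGRESS:
            self.state = DevState.COMPLETE if vv_planned else DevState.FINALIZED

    def vv_finished(self, defects_to_rework: int, more_vv_pending: bool) -> None:
        # When a verification or validation activity finishes and defects must be
        # reworked, development goes back to In Progress; otherwise, if no more
        # V&V needs to be carried out, the development activity is finalized.
        if self.state is DevState.COMPLETE:
            if defects_to_rework > 0:
                self.state = DevState.IN_PROGRESS
            elif not more_vv_pending:
                self.state = DevState.FINALIZED
```

In a simulation, a driver loop would update these states once per time step based on the combined status of all V&V activities of the artifact (the Total V&V status mentioned above).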

The state transition diagram of the verification activity defines that the verification activity transitions into the In Progress state as soon as the development activity of an artifact is finished. Whenever the verification activity is finished, depending on the number of detected defects, it is either finalized or transitions into the Completed but to be repeated state. If the number of defects detected is below a certain threshold, the verification activity is finalized; otherwise it transitions into the Completed but to be repeated state to indicate that, due to the great number of detected defects, the artifact has to be verified once more. However, since this policy (which implies the enforcement of quality thresholds) might not be followed in all organizations, it is optional. If thresholds are not used, every verification activity is carried out at most once.
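The quality-threshold policy can be summarized in a few lines. The sketch below is illustrative only; the function name is hypothetical, and the per-size-unit threshold mirrors the 'Code doc quality threshold per size unit' input parameter listed later in Table 1.

```python
def needs_reverification(defects_found: int,
                         artifact_size: float,
                         threshold_per_size_unit: float,
                         thresholds_enabled: bool = True) -> bool:
    """Decide whether a verified artifact must be verified once more.

    Illustrative sketch: if quality thresholds are not enforced, every
    verification activity is carried out at most once; otherwise the artifact
    is re-verified when the number of defects found exceeds the threshold
    scaled by the artifact size.
    """
    if not thresholds_enabled:
        return False
    return defects_found > threshold_per_size_unit * artifact_size
```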
The left-hand side of Figure 2 shows the macro-pattern capturing validation activities in a software development process, assuming that certain artifacts, e.g. software code or specification documents, are inputs to test case development activities. Test case development activities cannot begin if these artifacts are not in place. Output of a test case development activity is a collection of test cases in the form of a test suite. If the actual development process requires that a test case verification activity has to be modeled, the test case development activity can be extended to include both test case development and verification activities by using the development/verification macro-pattern shown in Figure 1. The developed test suite, along with other necessary artifacts, i.e. software code artifacts, is the input to the validation activity. The output of the validation activity is a log of all detected defects. This log is fed back to the development activity to trigger rework of software code or other development artifacts (not shown in Figure 2). Test case development and validation activities, like any other activity, use resources.

The right-hand side of Figure 2 shows two state transition diagrams specifying the maturity states of the test case development (top) and artifact validation (bottom) activities. The test case development activity transitions into the In Progress state whenever software code or specification artifacts are available. Determining whether or not test cases can be derived directly from the specification artifacts before the code artifacts are ready, i.e. Specification artifact is greater than zero while Code to validate is still zero, depends on managerial policies and the nature of the specific testing activity itself. Whenever there are no more test cases to develop in order to test the code artifacts, the test case development activity is finalized.

The macro-patterns shown in Figures 1 and 2 can be applied to different development phases of a generic development process. Each development/rework/verification subprocess can be complemented by an optional validation (i.e. testing) subprocess.

Figure 2. Macro-pattern for test case development/validation activity pairs (with state-transition charts)

Figure 3. Use of macro-patterns in GENSIM 2.0 to model an instance of a V-model type software development process

Figure 3 shows an example of such a generic process (the well-known V-model) with three levels of refinement, i.e. requirements development and verification (system level), design development and verification (subsystem level), code development and verification (module level), and their validation (test) counterparts. Verification activities could involve, e.g., requirement, design, and code inspections (CI). On each level, one or more artifacts are developed, verified, and validated. In this example process, only development activities are mandatory. Depending on the organizational policies and the type of product under development, verification and validation activities are optional. If defects are detected during verification or validation, rework has to be done (through the development activity in the macro-pattern of Figure 1). On code level, rework is assumed to be mandatory no matter by which activity defects are found, while rework of design and requirements artifacts, due to defects found in V&V activities other than design and requirements verification, respectively, may be optional depending on the organizational or project policies.

3.2. Model Implementation

GENSIM 2.0 was implemented using the SD simulation modeling tool Vensim, a mature commercial tool widely used by SD modelers. Vensim offers three features in support of reuse and interoperability: views, subscripts, and the capability of working with external DLLs.

In order to capture the main dimensions of project performance (duration, effort, and quality) together with different artifact/activity/resource states, the generic process instance shown in Figure 3 is implemented in four separate views: (a) product flow (process), (b) defect flow (product quality), (c) workforce allocation (developers, techniques, tools), and (d) project states. A detailed description of each of these views can be found in the work by Khosrovian et al. (2007a).

Specific product types are associated with different refinement levels of the development process, i.e. system, subsystem, and module. If a new refinement level is required, e.g. design shall be split into high-level design and low-level design, existing views can easily be reused. The subscripting mechanism provided by Vensim allows for modeling individual work products as product type instances. For example, if one system consists of several subsystems, and each subsystem of several modules, then each of these products would be identifiable via its subscript value. Vensim's subscripting mechanism is also used to distinguish defect origins and the ISO 9126 quality characteristics that they potentially affect.

Vensim's capability of working with external DLLs was utilized to extract organization-specific heuristics from the SD model and incorporate them into external DLL libraries, where they can be changed easily without affecting the model structure. The algorithm that allocates developers to development and V&V activities is an example of such a heuristic. The DLL's main allocation function takes as input the head count and skill levels of the available workforce, the workload of all activities, and the minimum skill level required for developers in order to be assigned to different activities. Without going into full detail, the heuristic works as follows. Developers are classified according to the number of different types of activities they are able to conduct. Ability to conduct an activity is given whenever the related skill level of a developer is higher than the minimum required level. First, all developers that are able to work only on one artifact type are assigned to the related activity. Next, those developers that can work on two artifact types are assigned to the related activities proportional to the number of waiting artifacts per type. This procedure is continued until all developers that can be allocated are assigned to activities. The formal definition of the algorithm used by the developer assignment heuristic can be found in the work by Khosrovian et al. (2007a).
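The formal definition of the heuristic is given by Khosrovian et al. (2007a); the following is only a simplified re-statement of the prose above. The data structures and function name are hypothetical: developers are sorted by how many activity types they qualify for and then assigned, most-constrained first, to the activity with the largest waiting workload.

```python
from typing import Dict, List

def allocate_developers(skills: Dict[str, Dict[str, float]],
                        min_skill: Dict[str, float],
                        waiting_work: Dict[str, float]) -> Dict[str, List[str]]:
    """Simplified sketch of the developer-allocation heuristic.

    skills:       developer -> {activity: skill level}
    min_skill:    activity  -> minimum skill level required
    waiting_work: activity  -> number of artifacts waiting for that activity
    Returns a mapping activity -> list of assigned developers.
    """
    # A developer is able to conduct an activity if his/her skill level for it
    # is higher than the minimum required level.
    able = {dev: [act for act in waiting_work
                  if dev_skills.get(act, 0.0) > min_skill.get(act, 0.0)]
            for dev, dev_skills in skills.items()}

    assignment: Dict[str, List[str]] = {act: [] for act in waiting_work}
    load = dict(waiting_work)

    # Most-constrained developers first: those able to work on only one
    # activity type, then two, and so on.
    for dev in sorted(able, key=lambda d: len(able[d])):
        options = [act for act in able[dev] if load.get(act, 0.0) > 0]
        if not options:
            continue
        # Assign to the activity with the largest remaining waiting workload,
        # approximating the proportional assignment described in the text.
        target = max(options, key=lambda act: load[act])
        assignment[target].append(dev)
        load[target] = max(load[target] - 1.0, 0.0)
    return assignment
```

In GENSIM 2.0 the corresponding logic lives in an external DLL, so organization-specific allocation policies can be swapped without touching the SD model itself.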
The following list summarizes the most important assumptions underlying the model corresponding to the process instance shown in Figure 3. These assumptions are explicit and thus can be modified as needed to adapt the model to other organization-specific processes:

• A downstream activity can only begin when working on its input artifact has been completely finished in the upstream activities.
• Working on different modules and subsystems can be done independently.
• The rates at which development/rework and V&V activities are carried out depend on the efficiencies of the chosen techniques, the head count and average skill level of the assigned development team, and the learning status.
• Defect injection rates depend on the head count and average skill level of the assigned developer team, the development/rework rates, and the learning status.
• Defect detection rates depend on the effectiveness of the chosen V&V techniques, the head count and average skill level of the assigned team, and the learning status.
• Learning happens during development/rework and V&V activities.
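The exact rate equations are internal to the model and are not reproduced in this article. Purely as an illustration of the dependencies listed above (technique efficiency, head count, average skill, and learning status), a development or V&V rate could be composed multiplicatively as in the sketch below. The functional form and function name are assumptions, not the published GENSIM 2.0 equations.

```python
def activity_rate(base_rate_per_person_day: float,
                  head_count: int,
                  avg_skill: float,
                  learning_status: float) -> float:
    """Illustrative (assumed) composition of an activity rate.

    base_rate_per_person_day: calibrated technique efficiency, e.g. 0.6 KLOC/PD
    avg_skill, learning_status: factors in [0, 1] scaling team productivity
    """
    return base_rate_per_person_day * head_count * avg_skill * learning_status
```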
3.3. Model Parameters

GENSIM 2.0 has a large number of parameters. Parameters can represent model inputs and outputs, or they are used to calibrate the model to expert knowledge and empirical data specific to an organization, process, technique, or tool.

Table 1 shows a selected list of 17 (out of 28) parameters related to the macro-pattern code development (including rework) and verification. Corresponding parameters exist for the requirements specification and design-related subprocesses. The list of all 28 parameters in GENSIM 2.0 and their descriptions can be found in the work by Khosrovian et al. (2007a).

Table 1. A subset of model parameters

#  | Parameter name                                     | Attribute of | Type       | View
1  | Verify code or not                                 | Process      | Input      | C-P
2  | Number of modules per subsystem                    | Product      | Input      | C-P
3  | Code doc quality threshold per size unit           | Project      | Input      | C-Q
4  | Required skill level for code dev                  | Project      | Input      | C-W
6  | Developers' skill level for code development       | People       | Input      | C-W
8  | Maximum code verification effectiveness            | Process      | Calibrated | C-P
9  | Maximum code verification rate per person per day  | Process      | Calibrated | C-P
12 | Minimum code fault injection rate per size unit    | Product      | Calibrated | C-Q
14 | Code rework effort for code faults detected in CI  | Product      | Calibrated | C-Q
16 | Code rework effort for code faults detected in IT  | Product      | Calibrated | C-Q
18 | Initial code development rate per person per day   | People       | Calibrated | C-W
19 | Initial code verification rate per person per day  | People       | Calibrated | C-W
20 | Code doc size (actual)                             | Product      | Output     | C-P
22 | Code development rate (actual)                     | Process      | Output     | C-P
24 | Code faults undetected                             | Product      | Output     | C-Q
26 | Code faults corrected                              | Product      | Output     | C-Q
28 | Code verification effort (actual)                  | Process      | Output     | C-W

CI, code inspection; UT, unit test; IT, integration test; ST, system test.

Input parameters represent project-specific information such as estimated product sizes and developer skills, as well as project-specific policies that define which verification and validation activities should be performed. Input parameters also specify whether requirements and design documents should be reworked if defects are found in code (by different V&V activities) that actually originate from design or requirements defects.

Calibration parameters represent organization-specific information. An example of how model parameters are calibrated is given in Section 3.4.

Output parameters represent values that are calculated by the simulation engine based on the dynamic cause–effect relationships between input and calibration parameters. The type of output values which are of interest depends on the simulation goal. Typically, project performance variables such as product quality, project duration, and effort are of interest.
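When re-implementing or extending the model, it can help to tag each parameter with its role. The small sketch below uses hypothetical names; the example entries are taken from Table 1 and the three kinds mirror the Input/Calibrated/Output distinction described above.

```python
from dataclasses import dataclass
from enum import Enum

class ParamKind(Enum):
    INPUT = "Input"            # project-specific information and policies
    CALIBRATED = "Calibrated"  # organization-specific empirical values
    OUTPUT = "Output"          # computed by the simulation engine

@dataclass
class ModelParameter:
    name: str
    attribute_of: str  # Process, Product, Project, or People
    kind: ParamKind

# Example entries taken from Table 1 (code development/verification macro-pattern).
PARAMETERS = [
    ModelParameter("Verify code or not", "Process", ParamKind.INPUT),
    ModelParameter("Maximum code verification effectiveness", "Process", ParamKind.CALIBRATED),
    ModelParameter("Code faults undetected", "Product", ParamKind.OUTPUT),
]
```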
3.4. Model Calibration

The possibility to calibrate GENSIM 2.0 is essential for the validity of simulation results. Generally, parameters can be calibrated by using expert judgment or empirical data. Empirical data can either be gathered from organization-specific measurement and experimentation or from published data that are assumed to have been derived from sufficiently similar contexts.

Table 2 shows snapshots of two literature-based calibrations (A and B) of model parameters related to code defect injection, detection, and correction. For the parameters shown, three sources were used: Frost and Campo (2007), Wagner (2006), and Damm et al. (2006). Frost and Campo (2007) provide an example of a defect containment matrix from which values for the calibration parameters fault injection rate and verification effectiveness can be derived; Table 2 shows only the code-related parameters. Wagner (2006) provides much data on typical (average) verification and validation rates, and rework efforts for defects of various document types. According to Wagner, defect rework efforts vary depending on the type of defect detection activity. He observed that the later defects are detected, the higher the defect rework effort will be. For example, as shown in Table 2 (Calibration A), a code defect detected in system test (ST) requires about 2.4 times as much rework effort as a code defect detected during unit test (UT). Unfortunately, Wagner does not state clearly in which context his numbers are valid. Since there exist other studies (e.g. Damm et al., 2006) which report a much larger difference between rework efforts for code defects found in UT as compared to integration test (IT) and ST, Table 2 shows an alternative calibration. Calibration B applies factors of 2.5 and 13 to the correction effort for defects found in UT in order to calculate the rework effort for code defects detected in IT and ST, respectively.

Table 2. Examples of coding-related calibration parameters

Calibration parameter                              | Calibration A                             | Calibration B
Minimum code fault injection rate per size unit   | 14.5 Defect/KLOC (Frost and Campo, 2007)  | same as A
Maximum code verification effectiveness           | 0.53 (Frost and Campo, 2007)              | same as A
Max. code verification rate per person per day    | 0.6 KLOC/PD (Wagner, 2006)                | same as A
Code rework effort for code faults detected in CI | 0.34 PD/Def. (Wagner, 2006)               | same as A
Code rework effort for code faults detected in UT | 0.43 PD/Def. (Wagner, 2006)               | same as A
Code rework effort for code faults detected in IT | 0.68 PD/Def. (Wagner, 2006)               | 1.08 PD/Def. (Damm et al., 2006; Wagner, 2006)
Code rework effort for code faults detected in ST | 1.05 PD/Def. (Wagner, 2006)               | 5.62 PD/Def. (Damm et al., 2006; Wagner, 2006)

KLOC, kilo lines of code; PD, person-day; Def., defect.
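Calibration B's rework efforts for IT and ST follow directly from Wagner's UT value and the factors reported by Damm et al. (2006), as the short check below illustrates (values in person-days per defect).

```python
# Rework effort per code defect found in unit test (Wagner, 2006), in PD/defect.
ut_rework = 0.43

# Calibration B applies the factors 2.5 (IT) and 13 (ST) from Damm et al. (2006)
# to the UT value; Table 2 lists the resulting values as 1.08 and 5.62 PD/defect
# (small differences stem from rounding of the published UT figure).
it_rework_b = ut_rework * 2.5   # approximately 1.08 PD/defect
st_rework_b = ut_rework * 13    # approximately 5.6 PD/defect

print(f"IT: {it_rework_b:.2f} PD/defect, ST: {st_rework_b:.2f} PD/defect")
```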

Many other calibrations based on data from published sources were made in GENSIM 2.0 but cannot be shown here due to space limitations. A complete description of the calibration of GENSIM 2.0 can be found in the work by Khosrovian et al. (2007b).

4. MODEL APPLICATION

Software process simulation in general – and GENSIM 2.0 in particular – can support software decision-makers in many ways. The following list of possible applications is not exhaustive but gives an idea of the diversity of questions that could be addressed:

• What combinations (and intensity levels) of development, verification, and validation techniques should be applied in a given context to achieve defined time, quality, or cost goals?
• How do workforce size and skill variations impact project performance (project duration, effort consumption, code quality)?
• Does investment in training pay off for specific development contexts and goals?
• Do investments in improving development, verification, and validation techniques pay off for specific development contexts and goals?
• What are the promising areas of research for improving development, verification, and validation techniques?

To demonstrate the applicability and usefulness to relevant problems faced by software decision-makers, the remainder of this section summarizes results of a model application related to the first question listed above in two different scenarios. Underlying assumptions of the application are as follows:

• A project has a given set of features (which are assumed to define the size of work products) and a target deadline.
• The project manager wants to know which verification and validation techniques should be combined to hold the deadline (priority 1) and deliver high-quality code (priority 2).
• The developer team and their skills are given.
• The requirements, design, code implementation methods, and tools are given.

4.1. Scenario 1 – Impact of V&V Activities on Project Performance (using Calibrations A and B)

This application scenario investigates the impact of different combinations of V&V activities on project duration, product quality, and effort for both calibrations (A and B) of GENSIM 2.0. Verification activity types include requirement inspections (RI), design inspections (DI), and code inspections (CI). Validation activity types include UT, IT, and ST. In Scenario 1, exactly one type of technique with given efficiency and effectiveness measures is available per V&V activity. A V&V technique is either applied to all artifacts of the related type (e.g. requirements, design, and code documents) or it is not applied at all. Figure 4 shows simulation results for model calibrations A (above) and B (below). The related data set can be found in the work by Khosrovian et al. (2007c).

Data points on the right-hand side represent (quality, duration) result value pairs, where quality is measured as the total number of undetected (remaining) code faults.

Figure 4. Effort versus duration and quality versus duration (Scenario 1 – calibrations A and B)

Since the effectiveness of V&V activities for calibrations A and B is assumed to be identical (according to the second row in Table 2, average code verification effectiveness equals 53%), the estimated product quality, expressed as the number of undetected defects at project end, is approximately the same. Filled triangles in Figure 4 represent nondominated (quality, duration) solutions, i.e. simulation results for which no other combination of V&V activities yields both a shorter project duration and a lower number of undetected defects at project end. Obviously, for both calibrations A and B, there exists a trade-off between quality and duration. Thus, if the goal of a decision-maker is to explore which combinations of V&V activities should be applied to achieve the target duration, and there are several eligible V&V combinations, the decision-maker could pick the nondominated solution with the lowest number of undetected defects. The diagrams on the left-hand side of Figure 4 indicate that there is a strong positive correlation between duration and effort, implying that a decision-maker would always start with the leftmost nondominated (quality, duration) result value pair and then move right as long as the estimated project duration is still within the target deadline.
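Identifying the nondominated (quality, duration) pairs is a standard Pareto-filtering step. A minimal sketch over (duration, undetected-defects) result pairs could look as follows; the function name is hypothetical and the data values are placeholders, not GENSIM 2.0 results.

```python
from typing import List, Tuple

def nondominated(results: List[Tuple[float, float]]) -> List[Tuple[float, float]]:
    """Return the (duration, undetected_defects) pairs that are not dominated.

    A result dominates another if it is no worse in both dimensions and
    strictly better in at least one (shorter duration, fewer undetected defects).
    """
    front = []
    for r in results:
        dominated = any(o[0] <= r[0] and o[1] <= r[1] and o != r for o in results)
        if not dominated:
            front.append(r)
    return front

# Placeholder example (not actual simulation output):
print(nondominated([(1300, 40), (1290, 55), (1350, 35), (1310, 60)]))
```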
of V&V activities should be applied to achieve the that performing verification activities during the
target duration, in case there are several eligible constructive phases in V-model type development
V&V combinations, a decision-maker could pick processes does not only improve quality but also
the nondominated solution with the lowest number saves time and effort, particularly if the rework
of undetected defects. The diagrams on the left- effort for defects found in late validation activities
hand side of Figure 4 indicate that there is a strong such as integration and system test is high (as in the
positive correlation between duration and effort, case of calibration B).
implying that a decision-maker would always start When looking in detail at the case where val-
with the leftmost nondominated (quality, duration) idation activities are always performed (for both

Figure 5. Effort versus duration and quality versus duration (Scenario 1 – calibrations A and B)

When looking in detail at the case where validation activities are always performed (for both calibrations), one observes another interesting phenomenon. It turns out that, for calibration B, performing all verification activities not only yields the best quality and consumes the least effort (as is also the case for calibration A) but also finishes first. This phenomenon is visualized by the spider diagrams of Figure 5.

Figure 5 shows the details of the situation when UT, IT, and ST are always performed (indicated by value 1 in those columns). The left-hand side shows the results for the eight possible cases of performing (1) or skipping (0) the three inspection activities for calibration A; the right-hand side shows the corresponding results for calibration B. The measurement unit for time is days, for effort it is person-days (PD), and for quality it is the number of undetected defects (UD). The data for each calibration are sorted in ascending order with regard to duration. In the spider diagrams (bottom), values related to duration, effort, and quality are shown as relative numbers, with 1.00 equaling the maximum value of the performance dimension per calibration.

The results of Scenario 1 suggest that the best combination of V&V activities is sensitive to the context in which these activities are applied. Thus, for example, it does not make sense to make general recommendations regarding the adoption of verification techniques without taking the specifics of validation activities into consideration. The complexity of the situation is analyzed further in Scenario 2.

4.2. Scenario 2 – Impact of V&V Activities on Project Performance (using Calibration B only)

This scenario uses only model calibration B to investigate the impact of different combinations of verification activities and techniques on project duration, product quality, and effort. Different from Scenario 1, this scenario assumes that all validation activities UT, IT, and ST are always performed, while verification activities RI, DI, and CI can be performed or not (they are optional). Moreover, if a verification activity is performed, one of two alternative techniques, S-type or T-type, can be applied. Compared to S-type verification techniques, T-type techniques are 10% more effective (i.e. find 10% more of all defects contained in the related artifact) and 25% less efficient (i.e. 25% less document size can be verified per person-day). Table 3 summarizes the assumed local effectiveness and efficiency of S-type and T-type verification techniques.

Table 3. Assumed local effectiveness and efficiency of S-type and T-type verification techniques

Technique |               | Requirements inspection (RI) | Design inspection (DI) | Code inspection (CI)
S-type    | Effectiveness | 75%                          | 76%                    | 53%
          | Efficiency    | 8 pages/PD                   | 30 pages/PD            | 0.6 KLOC/PD
T-type    | Effectiveness | 82.5%                        | 83.6%                  | 58.3%
          | Efficiency    | 6 pages/PD                   | 22.5 pages/PD          | 0.45 KLOC/PD

KLOC, kilo lines of code; PD, person-day.
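The T-type rows of Table 3 follow from the S-type rows via the stated +10% effectiveness and -25% efficiency adjustments, as this short check illustrates.

```python
# S-type local effectiveness (percent) and efficiency for RI, DI, CI (Table 3).
s_effectiveness = {"RI": 75.0, "DI": 76.0, "CI": 53.0}   # percent
s_efficiency    = {"RI": 8.0, "DI": 30.0, "CI": 0.6}     # pages/PD, pages/PD, KLOC/PD

# T-type techniques find 10% more of the defects and verify 25% less
# document size per person-day.
t_effectiveness = {k: v * 1.10 for k, v in s_effectiveness.items()}  # 82.5, 83.6, 58.3
t_efficiency    = {k: v * 0.75 for k, v in s_efficiency.items()}     # 6, 22.5, 0.45

print(t_effectiveness, t_efficiency)
```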

Table 4. Simulation results of Scenario 2

Row | RI | DI | CI | RI-tech. | DI-tech. | CI-tech. | Duration (Day) | Effort (PD) | Quality (UD)
1   | 0  | 1  | 1  | –        | T        | S        | 1281*          | 24163       | 49
2   | 1  | 1  | 1  | S        | T        | S        | 1296           | 21341       | 39
3   | 1  | 1  | 1  | T        | T        | S        | 1297           | 20996       | 37
4   | 0  | 1  | 1  | –        | T        | T        | 1299           | 24068       | 47
5   | 1  | 1  | 1  | S        | T        | T        | 1302           | 21189       | 36
6   | 1  | 1  | 1  | T        | T        | T        | 1306           | 20881*      | 35*
7   | 1  | 1  | 1  | S        | S        | S        | 1308           | 22160       | 44
8   | 1  | 1  | 1  | T        | S        | T        | 1313           | 21610       | 39
9   | 1  | 1  | 1  | T        | S        | S        | 1313           | 21769       | 42
10  | 1  | 1  | 1  | S        | S        | T        | 1323           | 21989       | 41
11  | 0  | 1  | 1  | –        | S        | S        | 1326           | 25697       | 58
12  | 0  | 1  | 1  | –        | S        | T        | 1333           | 25395       | 54
13  | 1  | 0  | 1  | T        | –        | S        | 1417           | 27409       | 77
14  | 1  | 0  | 1  | S        | –        | T        | 1432           | 28156       | 78
15  | 1  | 0  | 1  | T        | –        | T        | 1435           | 27147       | 73
16  | 0  | 1  | 0  | –        | T        | –        | 1448           | 27827       | 90
17  | 1  | 0  | 1  | S        | –        | S        | 1449           | 28818       | 86
18  | 1  | 1  | 0  | S        | S        | –        | 1465           | 25545       | 83
19  | 1  | 1  | 0  | S        | T        | –        | 1466           | 24569       | 76
20  | 1  | 1  | 0  | T        | S        | –        | 1466           | 25065       | 81
21  | 0  | 1  | 0  | –        | S        | –        | 1468           | 29790       | 104
22  | 1  | 1  | 0  | T        | T        | –        | 1469           | 24209       | 75
23  | 1  | 0  | 0  | T        | –        | –        | 1563           | 32507       | 132
24  | 1  | 0  | 0  | S        | –        | –        | 1571           | 34080       | 142
25  | 0  | 0  | 1  | –        | –        | T        | 2138           | 37386       | 116
26  | 0  | 0  | 1  | –        | –        | S        | 2177           | 38223       | 126
27  | 0  | 0  | 0  | –        | –        | –        | 2704           | 48584       | 232

The simulation of all possible combinations generates 3³ = 27 different results, as shown in Table 4. Gray-shaded rows correspond to the eight cases shown in Figure 4 (right-hand side). Minimum values per dimension are indicated by an asterisk (*). Rows 1, 2, 3, 5, and 6 dominate the data set in the sense that for each of these five rows, there is no other row that has better performance values in all three performance dimensions. By looking at these five nondominated rows, one observes that there are strong trade-offs between duration and effort, and between duration and quality (correlation coefficients equal −0.93 and −0.98, respectively), while there is an almost perfect positive correlation between effort and quality (correlation coefficient equals 0.98). This is interesting as it indicates that improving quality does not necessarily require higher effort consumption. It does, however, cost time – at least in the specific project context to which the simulation results apply, i.e. for the products to be developed, the used process (V-model type, no reinspection and retest), the available workforce (head count and skills), and the chosen workforce allocation strategy.
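The reported correlation coefficients for the five nondominated rows can be reproduced from Table 4 with a plain Pearson correlation; the sketch below uses the duration, effort, and quality values of rows 1, 2, 3, 5, and 6.

```python
from statistics import mean

def pearson(x, y):
    # Pearson correlation: covariance term divided by the product of the
    # square roots of the summed squared deviations.
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return num / (sx * sy)

# Nondominated rows 1, 2, 3, 5, 6 of Table 4.
duration = [1281, 1296, 1297, 1302, 1306]
effort   = [24163, 21341, 20996, 21189, 20881]
quality  = [49, 39, 37, 36, 35]

print(round(pearson(duration, effort), 2))   # about -0.93: duration/effort trade-off
print(round(pearson(duration, quality), 2))  # about -0.98: duration/quality trade-off
print(round(pearson(effort, quality), 2))    # about  0.98: effort and quality move together
```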

If any of these context factors changes, the results might look different. This is also true for the optimal selection of S-type and T-type verification techniques. It is interesting to see that neither are pure S-type configurations always better than pure T-type configurations, nor are pure T-type configurations always better than pure S-type configurations. While there is a tendency that simulation results with pure T-type verification activities dominate results with pure S-type verification activities, a mix of both types yields the best results if reducing project duration is of importance.

5. DISCUSSION

The scenarios presented in the previous section exemplify how GENSIM 2.0 can be applied for decision support in the context of V-model-like software development processes. Many other practical scenarios could have been presented, addressing various aspects of the questions listed at the beginning of Section 4. Two observations can be made with regard to the simulation results in Scenarios 1 and 2. First, the results are only partly consistent with empirical evidence about the effects of performing V&V activities. While code quality can always be improved by adding V&V activities, it is not always true that adding V&V activities in earlier development is better than adding them in later phases. For example, while it is true that adding DIs is almost always better than adding CIs ceteris paribus, it is not true that adding RIs is better than adding DIs or CIs ceteris paribus. Secondly, the results show that the effects of V&V activities on project duration and effort consumption are complex. They depend on the combinations and types of V&V techniques and the overall project context. Thus, it is not feasible to generalize the results produced by GENSIM 2.0. Rather, one has to carefully adjust the simulator to the actual project conditions and then evaluate the various options at hand. Many factors influence the outcomes of simulation runs. The complex behavior depends on the values of input parameters and calibration parameters, as well as on the structure of the underlying development process and specific nonlinear relationships between model variables representing workforce learning during project execution and the chosen workforce allocation heuristic.

To what extent the presented results are reliable depends on the validity of the simulation model. Model validity is mainly affected by three factors: (a) proper implementation of cause–effect structures, (b) proper representation of real-world attributes by model parameters, and (c) proper calibration. As all underlying assumptions (and thus limitations) of GENSIM 2.0 are explicit, it is always possible to judge to what extent the model structure and parameters correspond to an actual organizational process. Moreover, the design of GENSIM 2.0 allows for easy adaptation where and when needed. Calibration to available organization-specific data and expert knowledge is simple, as demonstrated by the example shown in Section 3.4 using an external source of information, i.e. data published in scientific articles.

Since GENSIM 2.0 is a core simulation model framework for software development processes, the validity of any model adapted from GENSIM 2.0 to a specific process in use should be analyzed according to the following three validity criteria:

1. Structural validity: Does the specific model capture the building blocks and elements of the corresponding real-world process?
2. Parametric validity: Are the model parameters (i.e. those discussed in Section 3.3) suitable for the instrumentation and analysis of the real-world process?
3. Empirical validity: Has the model been properly calibrated using data from the corresponding real-world process?

The purpose of GENSIM 2.0 is to model the core elements of the V-model process type. It was shown that GENSIM 2.0 captures essential building blocks of the V-model process, and that all its parameters can be calibrated successfully. Since GENSIM 2.0 does not represent a real-world instance of a specific V-model type process, it was calibrated using data from the literature (Damm et al., 2006; Wagner, 2006). Additional validity evaluations of models based on GENSIM 2.0 must be performed on a case-by-case basis (corresponding to the modeled real-world process) using the existing model validation techniques explained in the system dynamics literature (e.g. Barlas, 1989, 1994).

6. CONCLUSIONS AND FUTURE WORK

GENSIM 2.0 is a customizable and publicly available software process simulation model. Different from most SD software process simulation models, GENSIM 2.0 allows for detailed modeling of work products, activities, developers, techniques, tools, defects, and other entities by exploiting the subscripting mechanisms of Vensim. Moreover, the possibility to use external DLL libraries gives the opportunity to extract potentially time-consuming algorithms from the SD model and thus speed up model execution.

Future work on GENSIM 2.0 will address some of its current limitations. For example, currently it is not possible to represent incremental software development processes easily. Mechanisms will be added to the model which allow for concurrent execution of development cycles.

GENSIM 2.0 is part of a long-term research program that aims at combining results from empirical studies and company-specific measurement programs with process simulation. At the time of writing this article, GENSIM 2.0 is being calibrated to data from a German research institute and its industrial partners. Once completely calibrated to these data, simulations will be performed to explore which combination of V&V activities (and applied techniques) is most suitable to achieve certain product quality goals under given resource and time constraints. The quality goals will be defined according to the standard ISO 9126.

Also, there is a need for systematic studies to compare existing process-aware tools with respect to their strengths in supporting software process simulation. Vensim was used in this work because of its reuse and interoperability features, i.e. views, subscripts, and the capability of working with external DLLs. However, the suitability of other tools, such as TIBCO Business Studio, should be systematically studied [for the list of the other 13 tools, the reader is referred to the work by Russell et al. (2006)].

ACKNOWLEDGEMENTS

Keyvan Khosrovian and Dietmar Pfahl were supported by Discovery Grant no. 327665-06 of the Canadian Natural Sciences and Engineering Research Council (NSERC). Vahid Garousi was supported by the New Faculty Award no. 200600673 funded by Alberta Ingenuity.

REFERENCES

Abdel-Hamid TK, Madnick SE. 1991. Software Project Dynamics – An Integrated Approach. Prentice-Hall: Englewood Cliffs, NJ, USA.

Angkasaputra N, Pfahl D. 2004. Making software process simulation modeling agile and pattern-based. Proceedings of the Software Process Simulation Modeling Workshop; 222–227. DOI: 10.1049/ic:20040462.

Armbrust O. 2005. Simulation-based software process modeling and evaluation. In Handbook of Software Engineering & Knowledge Engineering, Advanced Topics, Vol. 3, Chang SK (ed). World Scientific: Singapore; 333–364.

Bandinelli S, et al. 1995. Modeling and improving an industrial software process. IEEE Transactions on Software Engineering 21(5): 440–453. DOI: 10.1109/32.387473.

Barlas Y. 1989. Multiple tests for validation of system dynamics type of simulation models. European Journal of Operational Research 42: 59–87. DOI: 10.1016/0377-2217(89)90059-3.

Barlas Y. 1994. Model validation in system dynamics. Proceedings of the International System Dynamics Conference, Sterling, UK; 1–10.

Cook JE, Wolf AL. 1998. Discovering models of software processes from event-based data. ACM Transactions on Software Engineering and Methodology 7(3): 215–249. DOI: 10.1145/287000.287001.

Damm L, Lundberg L, Wohlin C. 2006. Faults-slip-through: a concept for measuring the efficiency of the test process. Software Process: Improvement and Practice 11(1): 47–59. DOI: 10.1002/spip.253.

Forrester JW. 1961. Industrial Dynamics. Productivity Press: Cambridge.

Frost A, Campo M. 2007. Advancing defect containment to quantitative defect management. CrossTalk – The Journal of Defense Software Engineering 12(20): 24–28.

Gruhn V, Saalmann A. 1992. Software process validation based on FUNSOFT nets. European Workshop on Software Process Technology, Trondheim, Norway; 223–226.

Kellner MI, Hansen GA. 1989. Software process modeling: a case study. Proceedings of the Hawaii International Conference on System Sciences, Hawaii; 175–188.

Khosrovian K, Pfahl D, Garousi V. 2007a. A Customizable System Dynamics Simulation Model of Generic Software Development Processes. Schulich School of Engineering, University of Calgary: Calgary.

Khosrovian K, Pfahl D, Garousi V. 2007b. Calibrating a Customizable System Dynamics Simulation Model of Generic Software Development Processes. Schulich School of Engineering, University of Calgary: Calgary.

Khosrovian K, Pfahl D, Garousi V. 2007c. Application Scenarios of a Customizable System Dynamics Simulation Model of Generic Software Development Processes. Schulich School of Engineering, University of Calgary: Calgary.

Madachy R. 2006. Reusable model structures and behaviors for software processes. SPW/ProSim, Vol. 3966, Wang Q, et al. (eds). Springer-Verlag: Heidelberg, Germany; 222–233.

Mi P, Scacchi W. 1990. A knowledge-based environment for modeling and simulating software engineering processes. IEEE Transactions on Knowledge and Data Engineering 2(3): 283–294. DOI: 10.1109/69.60792.

Müller M, Pfahl D. 2007. Simulation methods. In Advanced Topics in Empirical Software Engineering, Singer J, Shull F, Sjøberg D (eds). Springer: London, England; 117–153. DOI: 10.1007/978-1-84800-044-5_5.

Pfahl D, Klemm M, Ruhe G. 2001. A CBT module with integrated simulation component for software project management education and training. Journal of Systems and Software 59(3): 283–298. DOI: 10.1016/S0164-1212(01)00069-3.

Pfahl D, Lebsanft K. 2000. Knowledge acquisition and process guidance for building system dynamics simulation models: an experience report from software industry. International Journal of Software Engineering and Knowledge Engineering 10(4): 487–510.

Raffo D, Spehar G, Nayak U. 2003. Generalized simulation models: what, why and how? Proceedings of the Software Process Simulation Modeling Workshop, Portland, OR, USA.

Raffo DM, et al. 2004. Using software process simulation to assess the impact of IV&V activities. Proceedings of the Software Process Simulation Modeling Workshop, Edinburgh, Scotland; 197–205.

Russell N, et al. 2006. Workflow control-flow patterns: a revised view. Business Process Management (BPM) Center Report, BPM-06-22.

Senge P. 1990. The Fifth Discipline. Doubleday: New York.

Ur S, Yom-Tov E, Wernick P. 2007. An open source simulation model of software testing. Hardware and Software, Verification and Testing, Proceedings of the International Haifa Verification Conference, Haifa, Israel; 124–137.

Wagner S. 2006. A literature survey of the quality economics of defect-detection techniques. Proceedings of the International Symposium on Empirical Software Engineering, Rio de Janeiro, Brazil; 194–203. DOI: 10.1145/1159733.1159763.

Wise A. 2006. Little-JIL 1.5 Language Report. Technical report UM-CS-2006-51, Department of Computer Science, University of Massachusetts, Massachusetts.

Zhang H, Kitchenham B, Pfahl D. 2008. Reflections on 10 years of software process simulation modeling: a systematic review. Proceedings of the International Conference on Software Process, Leipzig, Germany; 345–356. DOI: 10.1007/978-3-540-79588-9_30.
