SPM Unit-5

1. The document discusses process automation and the importance of establishing an automated project environment, covering the stages of a project environment from prototyping through development to maintenance.
2. Key aspects of automating the project environment include round-trip engineering for consistency, change management tools for handling iterations, and configuration baselines with a configuration control board for change requests.
3. The core component of change management is the software change order, which is used to authorize, track, and disposition changes. It includes fields for title, description, estimates, resolution details, assessment method, and final disposition.

Uploaded by vamsi kiran

UNIT V

Process Automation: Automation Building blocks, The Project Environment.


Project Control and Process Instrumentation: The Seven Core Metrics, Management Indicators,
Quality Indicators, Life-Cycle Expectations, Pragmatic Software Metrics, Metrics Automation.
Project Estimation and Management: COCOMO Model, Critical Path Analysis,
PERT Technique, Monte Carlo Approach
PROCESS AUTOMATION
Introductory Remarks:
The environment must be a first-class artifact of the process.
Process automation & change management are critical to an iterative process. If change is
expensive, the development organization will resist it.
Round-trip engineering & integrated environments promote change freedom & effective
evolution of technical artifacts.
Metrics automation is crucial to effective project control.
External stakeholders need access to environment resources to improve interaction with the
development team & add value to the process.
There are three levels of process, each of which requires a certain degree of automation for
the corresponding process to be carried out efficiently:
Metaprocess (line of business): the automation support for this level is called an infrastructure.
Macroprocess (project): the automation support for a project’s process is called an
environment.
Microprocess (iteration): the automation support for generating artifacts is generally called a
tool.
5.1.Automation Building blocks

1.Management:

 Several opportunities are now available for automating the project planning and control
activities of the management workflow.
 For creating planning artifacts, tools such as software cost estimation tools and Work
Breakdown Structure (WBS) tools are useful.
 Workflow management software provides flexible tools to improve the way you work.
 Automation support can also improve insight into metrics.
2.Environment:
 Automating the development process and developing an infrastructure to support the
different project workflows are essential activities of the engineering stage of the life
cycle.
 The environment that provides process automation is a tangible artifact that is critical to
the life cycle of the system being developed.
 The top-level WBS recognizes the environment as a first-class workflow.
 Integrating their own environment and infrastructure for software development is one of
the main tasks for most software organizations.
3.Requirements:
 Requirements management is a systematic approach to identifying, documenting,
organizing, and tracking the changing requirements of a system.
 It is also responsible for establishing and maintaining agreement between the user or
customer and the project team on the changing requirements of the system.
 If the process demands strong traceability between requirements and design, the
architecture is likely to evolve in a way that optimizes requirements traceability rather
than design integrity.
4.Design:
 A workflow design is a visual depiction of each step involved in a workflow from start
to end.
 It lays out every task sequentially and provides clarity into how data moves from one
task to the next.
 Workflow design tools allow us to depict the tasks graphically, along with the
performers, timelines, data, and other aspects crucial to execution.
 Visual modeling is the primary support required for the design workflow.
 A visual model is used for capturing design models, representing them in a human-
readable format, and translating them into source code.
5.Implementation:
 The main purpose of the implementation workflow is to write and initially test the
software; it relies primarily on the programming environment (editor, compiler,
debugger, etc.).
 It should also include substantial integration with change management tools, visual
modeling tools, and test automation tools to support productive iteration.
 Implementation is the main focus of the construction phase; it transforms a design model
into an executable one.
6. Assessment and Deployment:
 Workflow assessment is the first step in identifying outdated software processes and
replacing them with more effective ones.
 It generally combines domain expertise, qualitative and quantitative information
gathering, proprietary tools, and more.
 It requires every tool discussed above, along with some additional capabilities to support
test automation and test management.
 Defect tracking is another tool that supports assessment.

5.2. The Project Environment:


The project environment artifacts evolve through three discrete states:
(1) the prototyping environment, (2) the development environment, and (3) the maintenance
environment.
1. The prototyping environment includes an architecture test bed for prototyping project
architectures to evaluate trade-offs during the inception & elaboration phases of the life cycle.
2. The development environment should include a full suite of development tools needed
to support the various process workflows & round-trip engineering to the maximum extent
possible.
3. The maintenance environment should typically coincide with a mature version of the
development environment.
There are four important environment disciplines that are critical to the management context
& the success of a modern iterative development process.

1.Round-Trip Engineering

 Tools must be integrated to maintain consistency & traceability.
 Round-trip engineering is the term used to describe this key requirement for
environments that support iterative development.
 As the software industry moves toward maintaining different information sets for the
engineering artifacts, more automation support is needed to ensure efficient & error-free
transition of data from one artifact to another.
 Round-trip engineering is the environment support necessary to maintain consistency
among the engineering artifacts.

2.Change Management
Change management must be automated & enforced to manage multiple iterations & to
enable change freedom.
Change is the fundamental primitive of iterative development.
2.1.Software Change Orders
2.2.Configuration Baseline

2.3.Configuration Control Board (CCB)

2.1.Software Change Orders

 The atomic unit of software work that is authorized to create, modify, or obsolesce
components within a configuration baseline is called a software change order (SCO).
 The basic fields of the SCO are title, description, metrics, resolution, assessment, and
disposition.
• Title. The title is suggested by the originator and is finalized upon acceptance by the
configuration control board (CCB). This field should include a reference to an
external software problem report if the change was initiated by an external person (such as a
user).
• Description. The problem description includes the name of the originator, date of origination,
CCB-assigned SCO identifier, and relevant version identifiers of related support software.
The textual problem description should provide as much detail as possible, along with attached
code excerpts, display snapshots, error messages, and any other data that may help to isolate
the problem or describe the change needed.
• Metrics. The metrics collected for each SCO are important for planning, for scheduling,
and for assessing quality improvement. Change categories are type 0 (critical bug), type 1
(bug), type 2 (enhancement), type 3 (new feature), and type 4 (other), as described later in this
section. Upon acceptance of the SCO, initial estimates are made of the amount of breakage
and the effort required to resolve the problem. The breakage item quantifies the volume of
change, and the rework item quantifies the complexity of change.

• Resolution. This field includes the name of the person responsible for implementing the
change, the components changed, the actual metrics, and a description of the change. Although
the level of component fidelity with which a project tracks change references can be tailored,
in general, the lowest level of component references should be kept at approximately the level
of allocation to an individual. For example, a "component" that is allocated to a team is not a
sufficiently detailed reference.
• Assessment. This field describes the assessment technique as either inspection, analysis,
demonstration, or test. Where applicable, it should also reference all existing test cases and
new test cases executed, and it should identify all different test configurations, such as
platforms, topologies, and compilers.
• Disposition. The SCO is assigned one of the following states by the CCB:
• Proposed: written, pending CCB review
• Accepted: CCB-approved for resolution
• Rejected: closed, with rationale, such as not a problem, duplicate, obsolete change, resolved
by another SCO
• Archived: accepted but postponed until a later release
• In progress: assigned and actively being resolved by the development organization
• In assessment: resolved by the development organization; being assessed by a test
organization
• Closed: completely resolved, with the concurrence of all CCB members
A priority and release identifier can also be assigned by the CCB to guide the prioritization and
organization of concurrent development activities.
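The SCO fields and disposition states described above can be sketched as a simple record; the Python names below (SoftwareChangeOrder, ChangeType, Disposition) are illustrative only, not part of any standard change management tool:

```python
from dataclasses import dataclass
from enum import Enum

class ChangeType(Enum):
    CRITICAL_BUG = 0   # type 0: must be fixed before release
    BUG = 1            # type 1
    ENHANCEMENT = 2    # type 2
    NEW_FEATURE = 3    # type 3
    OTHER = 4          # type 4

class Disposition(Enum):
    PROPOSED = "proposed"            # written, pending CCB review
    ACCEPTED = "accepted"            # CCB-approved for resolution
    REJECTED = "rejected"            # closed, with rationale
    ARCHIVED = "archived"            # accepted but postponed
    IN_PROGRESS = "in progress"      # actively being resolved
    IN_ASSESSMENT = "in assessment"  # being assessed by a test organization
    CLOSED = "closed"                # completely resolved

@dataclass
class SoftwareChangeOrder:
    title: str
    description: str
    change_type: ChangeType
    breakage_sloc: int = 0         # estimated volume of change
    rework_hours: float = 0.0      # estimated complexity of change
    resolver: str = ""
    assessment: str = ""           # inspection, analysis, demonstration, or test
    disposition: Disposition = Disposition.PROPOSED

# A hypothetical SCO moving from proposed to accepted after CCB review
sco = SoftwareChangeOrder("Fix login timeout", "Session expires early", ChangeType.BUG)
sco.disposition = Disposition.ACCEPTED
```

Tracking SCOs in a structured form like this is what makes the metrics of the next chapter (change traffic, breakage, rework) directly extractable from the evolving artifacts.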
2.2.Configuration Baseline
A configuration baseline is a named collection of software components & supporting
documentation that is subjected to change management & is upgraded, maintained, tested,
statused, & obsolesced as a unit.
There are generally two classes of baselines:
1. External product release
2. Internal testing release
Three levels of baseline releases are required for most systems:
1. Major release (N)
2. Minor release (M)
3. Interim (temporary) release (X)
A major release represents a new generation of the product or project.
A minor release represents the same basic product but with enhanced features, performance,
or quality.
Major & minor releases are intended to be external product releases that are persistent &
supported for a period of time.
An interim release corresponds to a developmental configuration that is intended to be
transient.
Once software is placed in a controlled baseline, all changes are tracked such that a distinction
must be made for the cause of the change.

Change categories are:
Type 0: A critical failure (must be fixed before release)
Type 1: A bug or defect that either does not impair the usefulness of the system or can be
worked around
Type 2: A change that is an enhancement rather than a response to a defect
Type 3: A change that is necessitated by an update to the environment
Type 4: Changes that are not accommodated by the other categories

2.3.Configuration Control Board (CCB)
A CCB is a team of people that functions as the decision authority on the content of
configuration baselines.
A CCB includes:
1. Software managers
2. Software architecture managers
3. Software development managers
4. Software assessment managers
5. Other stakeholders who are integral to the maintenance of the controlled software
delivery system
3.Infrastructure
The organization infrastructure provides the organization’s capital assets, including two key
artifacts: policy & environment.
I Organization Policy
II Organization Environment
I Organization Policy:
A policy captures the standards for project software development processes.
The organization policy is usually packaged as a handbook that defines the life cycles & the
process primitives, such as:
 Major milestones
 Intermediate Artifacts
 Engineering repositories
 Metrics
 Roles & Responsibilities

II Organization Environment
The environment captures an inventory of tools, which are the building blocks from which
project environments can be configured efficiently & economically.

4.Stakeholder Environment
Many large-scale projects include people in external organizations that represent other
stakeholders participating in the development process. They might include:
 Procurement agency contract monitors
 End-user engineering support personnel
 Third-party maintenance contractors
 Independent verification & validation contractors
 Representatives of regulatory agencies & others
These stakeholder representatives also need access to development resources so that they can
contribute value to the overall effort. They are typically given this access through an on-line
environment.
An on-line environment accessible by the external stakeholders allows them to participate in
the process as follows:
 Accept & use executable increments for hands-on evaluation.
 Use the same on-line tools, data & reports that the development organization uses to
manage & monitor the project.
 Avoid excessive travel, paper interchange delays, format translations, paper shipping
costs & other overhead costs.

UNIT-5 CHAPTER-2
Project Control and Process Instrumentation:
The seven core Metrics, Management indicators, quality indicators, life cycle
expectations, pragmatic Software Metrics, Metrics automation.

INTRODUCTION:
 Software metrics instrument the activities and products of the software development
process.
 Hence, the quality of the software products and the progress of the development process
can be assessed using software metrics.
Need for Software Metrics:
 Software metrics are needed for calculating the cost and schedule of a software product
with reasonable accuracy.
 Software metrics are required for making an accurate estimation of progress.
 The metrics are also required for understanding the quality of the software product.
5.1. The seven core Metrics:

INDICATORS:
An indicator is a metric or a group of metrics that provides an understanding of the software
process, the software product, or the software project. A software engineer collects measures
and produces metrics from which the indicators are derived.
Two types of indicators are:
(i) Management indicators.
(ii) Quality indicators.
1.Management Indicators
The management indicators i.e., technical progress, financial status and staffing progress are
used to determine whether a project is on budget and on schedule. The management
indicators that indicate financial status are based on earned value system.
2.Quality Indicators
 The quality indicators are based on the measurement of changes that occur in the
software.
 Software metrics instrument the activities and products of the software
development/integration process.
 Metrics values provide an important perspective for managing the process.
 The most useful metrics are extracted directly from the evolving artifacts.
 There are seven core metrics used in managing a modern process.

5.2 MANAGEMENT INDICATORS:
1. Work and progress
2. Budgeted cost and expenditures
3. Staffing and team dynamics
1.Work and progress
This metric measures the work performed over time. Work is the effort to be accomplished
to complete a certain set of tasks. The various activities of an iterative development project
can be measured by defining a planned estimate of the work in an objective measure, then
tracking progress (work completed over time) against that plan.
The default perspectives of this metric are:
Software architecture team: use cases demonstrated.
Software development team: SLOC under baseline change management, SCOs closed.
Software assessment team: SCOs opened, test hours executed, and evaluation criteria met.
Software management team: milestones completed.
The figure below shows the expected progress for a typical project with three major
releases.

2.Budgeted cost and expenditures

This metric measures cost incurred over time. Budgeted cost is the planned expenditure
profile over the life cycle of the project.
To maintain management control, measuring cost expenditures over the project life cycle is
always necessary.
Tracking financial progress usually takes on an organization-specific format. Financial
performance can be measured by the use of an earned value system, which provides highly
detailed cost and schedule insight.
The basic parameters of an earned value system, expressed in units of dollars, are as follows:
Expenditure plan: the planned spending profile for a project over its planned schedule.
Actual progress: the technical accomplishment relative to the planned progress underlying
the spending profile.
Actual cost: the actual spending profile for a project over its actual schedule.
Earned value: the value that represents the planned cost of the actual progress.
Cost variance: the difference between the actual cost and the earned value.
Schedule variance: the difference between the planned cost and the earned value.
Of all the parameters in an earned value system, actual progress is the most subjective
assessment. Because most managers know exactly how much cost they have incurred and
how much schedule they have used, the variability in making accurate assessments is centered
in the actual progress assessment. The default perspectives of this metric are cost per month,
full-time staff per month, and percentage of budget expended.
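Following the definitions above, the two variances are simple differences. The sketch below follows this text's sign convention (actual or planned cost minus earned value); note that some earned value references use the opposite convention (earned value minus cost), so treat the signs as a local choice:

```python
def earned_value_variances(planned_cost, actual_cost, earned_value):
    """Compute cost and schedule variance from earned value parameters (dollars)."""
    cost_variance = actual_cost - earned_value       # positive: spending ahead of progress
    schedule_variance = planned_cost - earned_value  # positive: progress behind plan
    return cost_variance, schedule_variance

# Example: $100K was planned, $90K has been spent, but only $80K worth of
# planned work has actually been accomplished (the earned value)
cv, sv = earned_value_variances(100_000, 90_000, 80_000)
```

Here both variances are positive, signaling a project that is both over cost and behind schedule relative to its expenditure plan.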

3.Staffing and team dynamics
This metric measures the personnel changes over time, which involves staffing additions and
reductions over time.
An iterative development should start with a small team until the risks in the requirements
and architecture have been suitably resolved.
Depending on the overlap of iterations and other project-specific circumstances, staffing can
vary.
An increase in staff can slow overall project progress, as new people consume the productive
time of existing people while coming up to speed.
Low attrition of good people is a sign of success. The default perspectives of this metric are
people per month added and people per month leaving.

These three management indicators together cover technical progress, financial status, and
staffing progress.

Fig: staffing and Team dynamics

5.3. QUALITY INDICATORS:


1.Change traffic and stability:
This metric measures the change traffic over time. The number of software change orders
opened and closed over the life cycle is called change traffic. Stability specifies the
relationship between opened versus closed software change orders. This metric can be
collected by change type, by release, across all releases, by team, by components, by
subsystems, etc.
The below figure shows stability expectation over a healthy project’s life cycle

Fig: Change traffic and stability

2.Breakage and modularity


This metric measures the average breakage per change over time. Breakage is defined as the
average extent of change, which is the amount of software baseline that needs rework,
measured in source lines of code, function points, components, subsystems, files, or other
units.
Modularity is the average breakage trend over time. This metric can be collected by rework
SLOC per change, by change type, by release, by components, and by subsystems.

3.Rework and adaptability:


 This metric measures the average rework per change over time. Rework is defined as
the average cost of change which is the effort to analyze, resolve and retest all changes
to software baselines.

 Adaptability is defined as the rework trend over time. This metric provides insight
into rework measurement.

 All changes are not created equal. Some changes can be made in a staff-hour, while
others take staff-weeks. This metric can be collected by average hours per change, by
change type, by release, by components, and by subsystems.

4.MTBF and Maturity:


 This metric measures the defect rate over time. MTBF (mean time between failures)
is the average usage time between software faults.

 It is computed by dividing the cumulative test hours by the number of type 0 and
type 1 SCOs. Maturity is defined as the MTBF trend over time.
 Software errors can be categorized into two types: deterministic and nondeterministic.
Deterministic errors are also known as Bohr-bugs, and nondeterministic errors are
also called Heisen-bugs.
 Bohr-bugs are a class of errors caused when the software is stimulated in a certain
way, such as coding errors.
 Heisen-bugs are software faults that are coincidental with a certain probabilistic
occurrence of a given situation, such as design errors.
 This metric can be collected by failure counts, test hours until failure, by release, by
components, and by subsystems.
 These four quality indicators are based primarily on the measurement of software
change across evolving baselines of engineering data.
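The MTBF computation defined above is a one-line calculation over the SCO metrics; the function and variable names in this sketch are illustrative:

```python
def mtbf(test_hours, type0_count, type1_count):
    """Mean time between failures: cumulative test hours divided by the
    number of type 0 (critical bug) and type 1 (bug) SCOs."""
    failures = type0_count + type1_count
    if failures == 0:
        return float("inf")  # no faults observed yet
    return test_hours / failures

# Example: 500 test hours with 2 critical bugs and 8 bugs gives an MTBF of 50 hours
```

Tracking this value release over release gives the maturity trend: a rising MTBF indicates a maturing baseline.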

4.LIFE-CYCLE EXPECTATIONS:

5.PRAGMATIC SOFTWARE METRICS:
6.METRICS AUTOMATION:
 Many opportunities are available to automate the project control activities of a software
project.

 A Software Project Control Panel (SPCP) is essential for managing against a plan. This
panel integrates data from multiple sources to show the current status of some aspect of
the project.

 The panel can support standard features and provide extensive capability for detailed
situation analysis.

 SPCP is one example of metrics automation approach that collects, organizes and
reports values and trends extracted directly from the evolving engineering artifacts.
SPCP:
To implement a complete SPCP, the following are necessary.
 Metrics primitives - trends, comparisons and progressions
 A graphical user interface.
 Metrics collection agents
 Metrics data management server
 Metrics definitions - actual metrics presentations for requirements progress,
implementation progress, assessment progress, design progress, and other progress
dimensions.
 Actors - monitor and administrator.
Monitor defines panel layouts, graphical objects and linkages to project data.
Specific monitors called roles include software project managers, software development
team leads, software architects and customers.

Administrator installs the system, defines new mechanisms, graphical objects and
linkages. The whole display is called a panel. Within a panel are graphical objects, which
are types of layouts such as dials and bar charts for information. Each graphical object
displays a metric. A panel contains a number of graphical objects positioned in a
particular geometric layout.

A metric shown in a graphical object is labeled with the metric type, summary level, and
instance name (line of code, subsystem, server1).

Metrics can be displayed in two modes: value, referring to a given point in time, and graph,
referring to multiple and consecutive points in time.
Metrics can be displayed with or without control values. A control value is an existing
expectation, either absolute or relative, that is used for comparison with a dynamically
changing metric. Thresholds are examples of control values.
The fundamental metric classes are trend, comparison, and progress.

The format and content of any project panel are configurable to the software project
manager's preference for tracking metrics of top-level interest. The basic operation of an
SPCP can be described by the following top-level use case.
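As a rough illustration of these ideas, the sketch below models a panel of graphical objects, each displaying one metric in value mode against an optional control value (threshold). All class names and metric labels here are invented for illustration, not taken from any real SPCP product:

```python
class GraphicalObject:
    """One metric display on the panel: metric type, summary level, instance name."""
    def __init__(self, metric_type, summary_level, instance_name, control_value=None):
        self.label = f"{metric_type}/{summary_level}/{instance_name}"
        self.control_value = control_value
        self.history = []  # consecutive points in time ("graph" mode data)

    def update(self, value):
        self.history.append(value)

    def value_mode(self):
        """Render the latest point in time, flagged if it exceeds the control value."""
        latest = self.history[-1]
        over = self.control_value is not None and latest > self.control_value
        return f"{self.label}: {latest}" + (" [over threshold]" if over else "")

class Panel:
    """A layout of graphical objects, configured by the monitor actor."""
    def __init__(self, objects):
        self.objects = objects

    def render(self):
        return [obj.value_mode() for obj in self.objects]

# Example: a rework metric for one subsystem, with a 40-hour threshold
rework = GraphicalObject("rework-hours", "subsystem", "server1", control_value=40)
rework.update(55)
panel = Panel([rework])
```

A real SPCP would add the collection agents and data management server listed above; the point of the sketch is only how panels, graphical objects, metrics, and control values relate.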

Chapter 3 -Project Estimation and Management

COCOMO model, Critical Path Analysis, PERT technique, Monte Carlo approach

1.COCOMO model (constructive cost model):

COCOMO (Constructive Cost Model) is a regression model based on LOC, i.e., the number
of lines of code. It is a procedural cost estimate model for software projects and is often used
as a process for reliably predicting the various parameters associated with a project, such as
size, effort, cost, time, and quality.
The Constructive Cost Model (COCOMO) is an algorithmic software cost estimation model
developed by Barry W. Boehm. The model uses a basic regression formula with parameters
that are derived from historical project data and current project characteristics.
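In its basic form, COCOMO estimates effort and schedule from size alone. The sketch below uses the published basic-model coefficients for the three project modes; treat its outputs as rough planning figures calibrated on Boehm's historical data, not as predictions for any particular project:

```python
# Basic COCOMO (Boehm, 1981): effort = a * KLOC^b (person-months),
# development time = c * effort^d (months).
COEFFICIENTS = {
    "organic":      (2.4, 1.05, 2.5, 0.38),  # small teams, familiar problems
    "semidetached": (3.0, 1.12, 2.5, 0.35),  # intermediate size and complexity
    "embedded":     (3.6, 1.20, 2.5, 0.32),  # tight hardware/operational constraints
}

def basic_cocomo(kloc, mode="organic"):
    """Estimate effort (person-months), duration (months), and average staff size."""
    a, b, c, d = COEFFICIENTS[mode]
    effort = a * kloc ** b
    duration = c * effort ** d
    avg_staff = effort / duration
    return effort, duration, avg_staff

# Example: a 32 KLOC organic-mode project
effort, duration, staff = basic_cocomo(32)
```

For 32 KLOC in organic mode this yields roughly 91 person-months over about 14 months, i.e. an average team of six to seven people. The intermediate and detailed COCOMO variants refine the effort figure with cost-driver multipliers.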

2.Critical Path Analysis
Critical path analysis (CPA) is a project management technique that requires mapping out
every key task that is necessary to complete a project. It includes identifying the amount
of time necessary to finish each activity and the dependencies of each activity on any
others.
Also known as the critical path method, CPA is used to set a realistic deadline for a
project and to track its progress along the way.

Purpose of Critical Path Analysis


In the late 1950s, James Kelley of Remington Rand and Morgan Walker
of DuPont developed a project management technique called the critical path method
(CPM).
Critical path analysis identifies the sequence of crucial and interdependent steps that
comprise a work plan from start to finish. It also identifies non-critical tasks. These may
also be important, but if they hit an unexpected snag they will not hold up any other tasks
and thus jeopardize the execution of the entire project.

The concept of a critical path recognizes that completion of some tasks in a project is
dependent on the completion of other tasks. Some activities cannot start until others are
finished. Inevitably, that presents the risk of bottlenecks.

How to Use CPA


CPA detects and defines all of the critical and noncritical tasks involved in a work plan and
identifies both the minimum and the maximum amount of time associated with each. It also
notes the dependencies among activities, which tells planners the amount of float or slack
time that can be associated with each in order to arrive at a reasonable overall deadline date.
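The standard way to compute float and the critical path is a forward pass (earliest start/finish) followed by a backward pass (latest start/finish). This sketch assumes tasks are supplied in dependency order (a topological order), and the example task names are invented:

```python
def critical_path(tasks):
    """tasks: {name: (duration, [predecessor names])}, listed so that every
    predecessor appears before its successors."""
    # Forward pass: earliest start (ES) and earliest finish (EF)
    es, ef = {}, {}
    for name, (dur, preds) in tasks.items():
        es[name] = max((ef[p] for p in preds), default=0)
        ef[name] = es[name] + dur
    project_duration = max(ef.values())

    # Backward pass: latest finish (LF) and latest start (LS)
    ls, lf = {}, {}
    for name, (dur, preds) in reversed(list(tasks.items())):
        successors = [s for s, (_, ps) in tasks.items() if name in ps]
        lf[name] = min((ls[s] for s in successors), default=project_duration)
        ls[name] = lf[name] - dur

    # Float (slack) = LS - ES; zero-slack tasks form the critical path
    slack = {name: ls[name] - es[name] for name in tasks}
    critical = [name for name in tasks if slack[name] == 0]
    return critical, project_duration, slack

# Example: B and C both follow A; D needs both B and C
plan = {"A": (3, []), "B": (2, ["A"]), "C": (4, ["A"]), "D": (1, ["B", "C"])}
path, duration, slack = critical_path(plan)
```

In this example the critical path is A, C, D with a project duration of 8, while B has 2 units of float: it can slip by up to 2 without delaying the deadline.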

The project plan must be tracked through the course of a project to make sure every task is
on track and no adjustments need to be made. The timeline in a CPA is often expressed as
a Gantt chart, a type of bar chart that is designed to illustrate the key dependencies in a
complex project.

CPA is used widely in industries devoted to extremely complex projects, from aerospace
and defense to construction and product development. Today, project scheduling software
is used to automatically calculate dates for CPA, aiding in time efficiency, tracking
performance, and creating a unified workflow.

How Do You Analyze a Critical Path?


The core of analyzing a critical path is identifying both critical and noncritical tasks and
how to schedule these tasks most effectively. The goal is to reach the project deadline
with the lowest cost possible. Analyzing a critical path involves identifying which tasks
are dependent or independent from each other.

To create an optimal critical path, one can analyze if the time to complete tasks can be
reduced. For example, say a contractor is building a home. To reduce the number of days
it takes to build the frame, the contractor may choose to have more carpenters assigned to
the job. As a result, the overall project may be completed a day earlier.

It's worth noting that the contractor may have key questions to ask when analyzing the
critical path. Would the costs of this decision outweigh the savings of completing the
project a day earlier? Is there enough equipment to make this possible? Looking closely at
these interconnected variables is important for determining the critical path.

What Is an Example of Critical Path Analysis?


Consider the following example of critical path analysis used in the aerospace industry.
Say airline Company A has low profitability. Management has identified that excess
capacity is one reason behind its lower profitability levels. To increase the utilization of
aircraft, it may choose to increase daily utilization from 10 to 11 hours a day. Here, the
company finds that an extra hour will result in $100,000 in profit per aircraft annually.
The company could in turn schedule a greater number of flights for aircraft that would
have otherwise stood idle.

What Are the Benefits of Critical Path Analysis?


Critical path analysis (CPA) has a number of advantages, in particular for large and
complex tasks. Using CPA can improve the efficiency and clarity of a project, provide
accurate timescales, and provide estimates to stakeholders.

3.Project Evaluation and Review Technique (PERT)


Project Evaluation and Review Technique (PERT) is a procedure through which the
activities of a project are represented in their appropriate sequence and timing. It is a
scheduling technique used to schedule, organize, and integrate tasks within a project.
PERT is basically a mechanism for management planning and control which provides a
blueprint for a particular project. All of the primary elements or events of a project are
identified by PERT.
In this technique, a PERT chart is made which represents a schedule for all the specified
tasks in the project. The reporting level of the tasks or events in the PERT chart is similar
to that defined in the work breakdown structure (WBS).
Characteristics of PERT:
The main characteristics of PERT are as follows:
1. It serves as a base for obtaining the important facts for implementing decision-making.
2. It forms the basis for all the planning activities.
3. PERT helps management in deciding the best possible resource utilization method.
4. PERT takes advantage of the time network analysis technique.
5. PERT presents the structure for reporting information.
6. It helps the management in identifying the essential elements for the completion of
the project within time.
Advantages of PERT:
It has the following advantages:
1. PERT gives an estimate of the completion time of the project.
2. It supports the identification of activities with slack time.
3. The start and end dates of the activities of a specific project are determined.
4. It helps the project manager in identifying the critical path activities.
5. PERT makes a well-organized diagram for the representation of large amounts of
data.
Disadvantages of PERT:
It has the following disadvantages:
1. The complexity of PERT is high, which leads to problems in implementation.
2. The estimates of activity times in PERT are subjective, which is a major
disadvantage.
3. Maintenance of PERT is also expensive and complex.
4. The actual distribution of activity times may be different from the PERT beta
distribution, which causes wrong assumptions.
5. It underestimates the expected project completion time, as there is a chance that
other paths can become the critical path if their related activities are deferred.
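The beta distribution mentioned in the disadvantages above underlies PERT's three-point estimate, which combines optimistic, most likely, and pessimistic times. The sketch below uses the standard weighting:

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    """Expected time and standard deviation under the PERT beta assumption:
    te = (o + 4m + p) / 6,  sd = (p - o) / 6."""
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev

# Example: a task estimated at 2 days best case, 4 days most likely, 12 days worst case
te, sd = pert_estimate(2, 4, 12)
```

Here the expected duration is 5 days, pulled above the most likely value by the long pessimistic tail; the per-task standard deviations are what get combined along the critical path to estimate schedule risk.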

4.Monte Carlo approach

The Monte Carlo analysis is a risk management technique which project managers use to
estimate the impacts of various risks on the project cost and project timeline. Using this
method, one can easily find out what will happen to the project schedule and cost in case
any risk occurs. It is used at various times during the project life cycle to get an idea of the
range of probable outcomes under various scenarios.
Example

Suppose you are estimating the timeline of a project and have come up with a best-case
scenario and a worst-case scenario. If everything goes according to your plan, there will be
no delays with respect to tasks. As a result, you will complete the project in 12 months.

However, if anything goes haywire, the project completion time will increase up to a
maximum of 15 months. This is the worst-case scenario as far as business growth is
concerned. This is where Monte Carlo analysis comes into the picture, as it lets you find
quantified estimates:

 Chance of completing the project in 12 months: 15%
 Chance of completing the project in 13 months: 50%
 Chance of completing the project in 14 months: 80%
 Chance of completing the project in 15 months: 100%

Monte Carlo Analysis – How it Works

Let us understand this technique in a stepwise manner.

 All the tasks of a project are listed, and their duration estimates are fed into the Monte
Carlo simulation tool.
 Various timelines are shown by the tool, such as the probability of completing one
task in a specific number of days (as discussed in the example given above).
 Once the probable timelines of the various tasks are generated, a number of
simulations are run over these probabilities. The number of Monte Carlo simulations
typically ranges in the thousands, and each one generates an end date.
 Hence, the output of the Monte Carlo analysis is not a single value but a probability
curve. This curve depicts the probable completion dates of the various tasks and their
probability values.
 This curve enables project managers to come up with the most probable and realistic
schedule for project completion and to submit a credible report of the project timeline
to the clients and higher management.
 Similarly, the Monte Carlo technique is used to generate the costing or budget for a
project.
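The steps above can be sketched with a simple simulation. Here a triangular distribution over each task's (best, most likely, worst) estimate stands in for whatever distribution a real tool would use, the tasks are assumed to run sequentially, and the example task data is invented:

```python
import random

def simulate_schedule(tasks, runs=10_000, seed=1):
    """tasks: list of (best, most_likely, worst) durations for sequential tasks.
    Returns total-duration estimates at several confidence levels, i.e. points
    on the probability curve."""
    rng = random.Random(seed)
    totals = sorted(
        sum(rng.triangular(best, worst, likely) for best, likely, worst in tasks)
        for _ in range(runs)
    )
    # Read percentiles off the sorted totals
    return {p: totals[int(p / 100 * (runs - 1))] for p in (50, 80, 95)}

# Example: three sequential tasks, durations in months
estimates = simulate_schedule([(3, 4, 6), (2, 3, 5), (4, 5, 8)])
```

Reporting the 50th, 80th, and 95th percentile dates rather than a single number is exactly the "probability curve" deliverable described above: a manager can commit to the 80% date while knowing the 95% contingency.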

Monte Carlo Analysis – Benefits

 Offers objective and insightful data for decision making


 Empowers the project managers to create a more practical project schedule and
cost plan
 Better assessment regarding project milestones
 Better assessment of cost overruns and schedule overruns
 Better risk quantification

Monte Carlo Analysis – Limitations

 The initial estimates of the best-case, most likely, and worst-case scenarios are made
by the project manager.
 It works best when pertinent values are provided in the first place, so the evaluation
can go wrong if erroneous data is entered.
 The Monte Carlo simulation in project management works on an entire project rather
than on individual tasks, so everything has to be sorted out before using it.

