SPM Unit-5
1.Management:
Nowadays, several opportunities are available for automating the project planning and
control activities of the management workflow.
For creating planning artifacts, several tools are useful, such as software cost estimation
tools and Work Breakdown Structure (WBS) tools.
Workflow management software is an advanced platform that provides flexible tools
to improve the way you work efficiently.
Automation support can also improve insight into metrics.
2.Environment:
Automating the development process and developing an infrastructure for supporting the
different project workflows are essential activities of the engineering stage of the life
cycle.
The environment that provides process automation is a tangible artifact that is
critical to the life cycle of the system being developed.
The top-level WBS recognizes the environment as a first-class workflow.
Integrating their own environment and infrastructure for software development is one
of the main tasks for most software organizations.
3.Requirements:
Requirements management is a systematic approach for identifying, documenting,
organizing, and tracking the changing requirements of a system.
It is also responsible for establishing and maintaining agreement between the user or
customer and the project team on the changing requirements of the system.
If the process demands strong traceability between requirements and design, then the
architecture is likely to evolve in a way that optimizes requirements traceability rather
than design integrity.
4.Design:
Workflow design is a visual depiction of each step involved in a workflow from start
to end.
It lays out every task sequentially and provides complete clarity into how data moves
from one task to the next.
Workflow design tools allow us to depict the different tasks graphically, along with
the performers, timelines, data, and other aspects that are crucial to execution.
Visual modeling is the primary support required for the design workflow.
A visual model is used for capturing design models, representing them in a
human-readable format, and translating them into source code.
5.Implementation:
The main purpose of the implementation workflow is to write and initially test the
software; it relies primarily on the programming environment (editor, compiler, debugger,
etc.).
On the other hand, it should also include substantial integration with change
management tools, visual modeling tools, and test automation tools, which is required
to make the iteration productive.
Implementation is the main focus of the construction phase: it transforms a design
model into an executable one.
6. Assessment and Deployment:
Workflow assessment is the initial step for identifying outdated software processes and
replacing them with more effective processes.
This generally combines domain expertise, qualitative and quantitative information
gathering, proprietary tools, and much more.
It requires every tool discussed above, along with some additional capabilities to
support test automation and test management.
Defect tracking is another tool that supports assessment.
2.Change Management
Change management must be automated and enforced to manage multiple iterations and
to enable change freedom.
Change is the fundamental primitive of iterative development.
2.1.Software Change Orders
The basic fields of the SCO are title, description, metrics, resolution, assessment, and
disposition.
• Title. The title is suggested by the originator and is finalized upon acceptance by the
configuration control board (CCB). This field should include a reference to an
external software problem report if the change was initiated by an external person (such as a
user).
• Description. The problem description includes the name of the originator, date of origination,
CCB-assigned SCO identifier, and relevant version identifiers of related support software.
The textual problem description should provide as much detail as possible, along with attached
code excerpts, display snapshots, error messages, and any other data that may help to isolate
the problem or describe the change needed.
• Metrics. The metrics collected for each SCO are important for planning, for scheduling,
and for assessing quality improvement. Change categories are type 0 (critical bug), type 1
(bug), type 2 (enhancement), type 3 (new feature), and type 4 (other), as described later in this
section. Upon acceptance of the SCO, initial estimates are made of the amount of breakage
and the effort required to resolve the problem. The breakage item quantifies the volume of
change, and the rework item quantifies the complexity of change.
• Resolution. This field includes the name of the person responsible for implementing the
change, the components changed, the actual metrics, and a description of the change. Although
the level of component fidelity with which a project tracks change references can be tailored,
in general, the lowest level of component references should be kept at approximately the level
of allocation to an individual. For example, a "component" that is allocated to a team is not a
sufficiently detailed reference.
• Assessment. This field describes the assessment technique as either inspection, analysis,
demonstration, or test. Where applicable, it should also reference all existing test cases and
new test cases executed, and it should identify all different test configurations, such as
platforms, topologies, and compilers.
• Disposition. The SCO is assigned one of the following states by the CCB:
• Proposed: written, pending CCB review
• Accepted: CCB-approved for resolution
• Rejected: closed, with rationale, such as not a problem, duplicate, obsolete change, resolved
by another SCO
• Archived: accepted but postponed until a later release
• In progress: assigned and actively being resolved by the development organization
• In assessment: resolved by the development organization; being assessed by a test
organization
• Closed: completely resolved, with the concurrence of all CCB members
A priority and release identifier can also be assigned by the CCB to guide the prioritization and
organization of concurrent development activities.
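The SCO fields and disposition states described above can be sketched as a simple data structure. This is only an illustrative model; the class and field names are assumptions, not a standard schema:

```python
from dataclasses import dataclass
from enum import Enum

class Disposition(Enum):
    # The CCB-assigned states of an SCO.
    PROPOSED = "proposed"            # written, pending CCB review
    ACCEPTED = "accepted"            # CCB-approved for resolution
    REJECTED = "rejected"            # closed, with rationale
    ARCHIVED = "archived"            # accepted but postponed to a later release
    IN_PROGRESS = "in_progress"      # actively being resolved
    IN_ASSESSMENT = "in_assessment"  # resolved; being assessed by a test organization
    CLOSED = "closed"                # completely resolved with CCB concurrence

@dataclass
class SoftwareChangeOrder:
    title: str
    description: str
    change_type: int                 # 0=critical bug, 1=bug, 2=enhancement, 3=new feature, 4=other
    breakage: float = 0.0            # estimated volume of change
    rework_hours: float = 0.0        # estimated effort (complexity) of change
    resolver: str = ""               # person responsible for implementing the change
    disposition: Disposition = Disposition.PROPOSED

# A new SCO starts as Proposed; the CCB then moves it through the states.
sco = SoftwareChangeOrder(title="Fix login crash", description="...", change_type=0)
sco.disposition = Disposition.ACCEPTED
```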
2.2.Configuration Baseline
A configuration baseline is a named collection of software components and supporting
documentation that is subjected to change management and is upgraded, maintained, tested,
statused, and obsolesced as a unit.
There are generally two classes of baselines:
1. External product release
2. Internal testing release
Three levels of baseline releases are required for most systems:
1. Major release (N)
2. Minor release (M)
3. Interim (temporary) release (X)
A major release represents a new generation of the product or project.
A minor release represents the same basic product but with enhanced features,
performance, or quality.
Major and minor releases are intended to be external product releases that
are persistent and supported for a period of time.
An interim release corresponds to a developmental configuration that is
intended to be transient.
Once software is placed in a controlled baseline, all changes are tracked such
that a distinction can be made as to the cause of the change.
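If releases are numbered with the common N.M.X convention implied by the level names above (an assumption; the notes do not define an identifier format), the release level can be read off the identifier:

```python
def classify_release(release_id: str) -> str:
    # Interpret a release identifier of the form N, N.M, or N.M.X:
    # N = major release, M = minor release, X = interim (transient) build.
    parts = release_id.split(".")
    if len(parts) == 3 and parts[2] != "0":
        return "interim"
    if len(parts) >= 2 and parts[1] != "0":
        return "minor"
    return "major"

classify_release("4.0")    # major: a new generation of the product
classify_release("4.1")    # minor: same product, enhanced features/quality
classify_release("4.1.3")  # interim: a transient developmental configuration
```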
Change categories are:
Type 0: Critical failures (must be fixed before release)
Type 1: A bug or defect that either does not impair the usefulness of the system or
can be worked around
Type 2: A change that is an enhancement rather than a response to a defect
Type 3: A change that is necessitated by an update to the environment
Type 4: Changes that are not accommodated by the other categories
2.3.Configuration Control Board (CCB)
A CCB is a team of people that functions as the decision authority on the content of
configuration baselines.
A CCB includes:
1. Software managers
2. Software architecture managers
3. Software development managers
4. Software assessment managers
5. Other stakeholders who are integral to the maintenance of the controlled software
delivery system
3.Infrastructure
The organization infrastructure provides the organization’s capital assets, including
two key artifacts: policy and environment.
I Organization Policy
II Organization Environment
I Organization Policy:
A policy captures the standards for a project's software development processes.
The organization policy is usually packaged as a handbook that defines the life
cycles and the process primitives, such as:
Major milestones
Intermediate Artifacts
Engineering repositories
Metrics
Roles & Responsibilities
II Organization Environment
The environment captures an inventory of tools, which are the building blocks from which
project environments can be configured efficiently and economically.
4.Stakeholder Environment
Many large-scale projects include people in external organizations that represent
other stakeholders participating in the development process.
UNIT-5 CHAPTER-2
Project Control and Process Instrumentation:
The seven core Metrics, Management indicators, quality indicators, life cycle
expectations, pragmatic Software Metrics, Metrics automation.
INTRODUCTION:
Software metrics are used to instrument the activities and products of the software
development process.
Hence, the quality of the software products and the progress of the development
process can be determined using software metrics.
Need for Software Metrics:
Software metrics are needed for calculating the cost and schedule of a software product
with great accuracy.
Software metrics are required for making an accurate estimation of the progress.
The metrics are also required for understanding the quality of the software product.
5.1. The seven core Metrics:
INDICATORS:
An indicator is a metric or a group of metrics that provides an understanding of the
software process, software product, or software project. A software engineer
assembles measures and produces metrics from which the indicators can be derived.
Two types of indicators are:
(i) Management indicators.
(ii) Quality indicators.
1.Management Indicators
The management indicators, i.e., technical progress, financial status, and staffing progress,
are used to determine whether a project is on budget and on schedule. The management
indicators that indicate financial status are based on an earned value system.
2.Quality Indicators
The quality indicators are based on the measurement of the changes that occur in the
software.
Software metrics instrument the activities and products of the software
development/integration process.
Metrics values provide an important perspective for managing the process.
The most useful metrics are extracted directly from the evolving artifacts.
There are seven core metrics that are used in managing a modern process.
5.2 MANAGEMENT INDICATORS:
1. Work and progress
2. Budgeted cost and expenditures
3. Staffing and team dynamics
1.Work and progress
This metric measures the work performed over time. Work is the effort to be
accomplished to complete a certain set of tasks. The various activities of an iterative
development project can be measured by defining a planned estimate of the work in an
objective measure, then tracking progress (work completed over time) against that plan.
The default perspectives of this metric are:
Software architecture team: use cases demonstrated
Software development team: SLOC under baseline change management, SCOs closed
Software assessment team: SCOs opened, test hours executed, and evaluation criteria met
Software management team: milestones completed
The figure below shows the expected progress for a typical project with three major
releases.
2.Budgeted cost and expenditures
To maintain management control, measuring cost expenditures over the project life
cycle is always necessary.
The basic parameters of an earned value system, expressed in units of dollars, are
as follows:
Expenditure plan: the planned spending profile for a project over its planned schedule.
Actual progress: the technical accomplishment relative to the planned progress
underlying the spending profile.
Actual cost: the actual spending profile for a project over its actual schedule.
Earned value: the planned cost of the actual progress.
Cost variance: the difference between the actual cost and the earned value.
Schedule variance: the difference between the planned cost and the earned value.
Of all the parameters in an earned value system, actual progress is the most subjective.
Assessment: Because most managers know exactly how much cost they have incurred
and how much schedule they have used, the variability in making accurate assessments
is centered in the actual progress assessment. The default perspectives of this metric are
cost per month, full-time staff per month and percentage of budget expended.
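The earned value relationships above reduce to simple arithmetic. A small sketch with illustrative numbers (not taken from the notes):

```python
def earned_value(total_budget, percent_complete):
    # Earned value: the planned cost of the actual progress achieved.
    return total_budget * percent_complete

def cost_variance(actual_cost, ev):
    # Cost variance: actual cost minus earned value (positive = over budget).
    return actual_cost - ev

def schedule_variance(planned_cost_to_date, ev):
    # Schedule variance: planned cost minus earned value (positive = behind plan).
    return planned_cost_to_date - ev

# Illustrative project: $1M budget, assessed 40% complete.
ev = earned_value(1_000_000, 0.40)   # 400000.0
cv = cost_variance(450_000, ev)      # 50000.0  -> spending is running ahead of value earned
sv = schedule_variance(500_000, ev)  # 100000.0 -> progress is lagging the plan
```

Note that `percent_complete` is the assessed actual progress, which, as the notes stress, is the most subjective input in the system.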
3.Staffing and team dynamics
This metric measures the personnel changes over time, which involve staffing additions
and reductions.
An iterative development should start with a small team until the risks in the requirements
and architecture have been suitably resolved.
Depending on the overlap of iterations and other project-specific circumstances, staffing
can vary.
An increase in staff can slow overall project progress, as new people consume the
productive time of existing people while coming up to speed.
Low attrition of good people is a sign of success. The default perspectives of this metric
are people per month added and people per month leaving.
These three management indicators provide insight into technical progress, financial
status, and staffing progress.
5.3 QUALITY INDICATORS:
Adaptability is defined as the rework trend over time. This metric provides insight
into rework measurement.
All changes are not created equal: some changes can be made in a staff-hour, while
others take staff-weeks. This metric can be collected by average hours per change, by
change type, by release, by component, and by subsystem.
Maturity is defined as the MTBF (mean time between failures) trend over time. MTBF is
computed by dividing the test hours by the number of type 0 and type 1 SCOs.
Software errors can be categorized into two types: deterministic and nondeterministic.
Deterministic errors are also known as Bohr-bugs, and nondeterministic errors are also
called Heisen-bugs.
Bohr-bugs are a class of errors that occur when the software is stimulated in a certain
way, such as coding errors.
Heisen-bugs are software faults that are coincident with a certain probabilistic
occurrence of a given situation, such as design errors.
This metric can be collected by failure counts, test hours until failure, by release, by
component, and by subsystem.
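The maturity computation described above can be written directly (a minimal sketch; the numbers are illustrative):

```python
def mtbf(test_hours, type0_count, type1_count):
    # Maturity metric: mean time between failures, computed as the
    # test hours divided by the number of type 0 and type 1 SCOs.
    failures = type0_count + type1_count
    if failures == 0:
        return float("inf")  # no failures observed yet
    return test_hours / failures

mtbf(1200, 3, 9)  # 100.0 test hours per failure
```

An upward MTBF trend across releases indicates a maturing baseline; a flat or falling trend suggests quality is not keeping pace with change traffic.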
These four quality indicators are based primarily on the measurement of software
change across evolving baselines of engineering data.
5. Pragmatic Software Metrics
6.METRICS AUTOMATION:
Many opportunities are available to automate the project control activities of a software
project.
A Software Project Control Panel (SPCP) is essential for managing against a plan. This
panel integrates data from multiple sources to show the current status of some aspect of
the project.
The panel can support standard features and provide extensive capability for detailed
situation analysis.
SPCP is one example of metrics automation approach that collects, organizes and
reports values and trends extracted directly from the evolving engineering artifacts.
SPCP:
To implement a complete SPCP, the following are necessary.
Metrics primitives - trends, comparisons and progressions
A graphical user interface.
Metrics collection agents
Metrics data management server
Metrics definitions - actual metrics presentations for requirements progress and
other progress dimensions
The administrator installs the system and defines new mechanisms, graphical objects, and
linkages. The whole display is called a panel. Within a panel are graphical objects, which
are types of layouts, such as dials and bar charts, for information. Each graphical object
displays a metric. A panel contains a number of graphical objects positioned in a
particular geometric layout.
A metric shown in a graphical object is labeled with the metric type, the summary level,
and the instance name (e.g., lines of code, subsystem, server1).
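The panel/graphical-object structure described above might be modeled as follows (an illustrative sketch; the class and field names are assumptions, not part of any SPCP product):

```python
from dataclasses import dataclass

@dataclass
class GraphicalObject:
    # One chart (dial, bar chart, ...) displaying a single metric,
    # labeled with metric type, summary level, and instance name.
    metric_type: str    # e.g., "SLOC", "SCOs closed"
    summary_level: str  # e.g., "subsystem", "project"
    instance: str       # e.g., "server1"
    layout: str         # e.g., "dial", "bar_chart"

@dataclass
class Panel:
    # A panel is a set of graphical objects in a geometric layout.
    name: str
    objects: list

# A manager's panel tracking two metrics of top-level interest.
panel = Panel("manager_view", [
    GraphicalObject("SLOC", "subsystem", "server1", "bar_chart"),
    GraphicalObject("SCOs closed", "project", "all", "dial"),
])
```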
The format and content of any project panel are configurable to the software
project manager's preference for tracking metrics of top-level interest. The basic
operation of an SPCP can be described by the following top-level use case.
COCOMO model, Critical Path Analysis, PERT technique, Monte Carlo approach
2.Critical Path Analysis
Critical path analysis (CPA) is a project management technique that requires mapping out
every key task that is necessary to complete a project. It includes identifying the amount
of time necessary to finish each activity and the dependencies of each activity on any
others.
Also known as the critical path method, CPA is used to set a realistic deadline for a
project and to track its progress along the way.
The concept of a critical path recognizes that completion of some tasks in a project is
dependent on the completion of other tasks. Some activities cannot start until others are
finished. Inevitably, that presents the risk of bottlenecks.
The project plan must be tracked through the course of a project to make sure every task is
on track and no adjustments need to be made. The timeline in a CPA is often expressed as
a Gantt chart, a type of bar chart that is designed to illustrate the key dependencies in a
complex project.
CPA is used widely in industries devoted to extremely complex projects, from aerospace
and defense to construction and product development. Today, project scheduling software
is used to automatically calculate dates for CPA, aiding in time efficiency, tracking
performance, and creating a unified workflow.
To create an optimal critical path, one can analyze if the time to complete tasks can be
reduced. For example, say a contractor is building a home. To reduce the number of days
it takes to build the frame, the contractor may choose to have more carpenters assigned to
the job. As a result, the overall project may be completed a day earlier.
It's worth noting that the contractor may have key questions to ask when analyzing the
critical path. Would the costs of this decision outweigh the savings of completing the
project a day earlier? Is there enough equipment to make this possible? Looking closely at
these interconnected variables is important for determining the critical path.
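The core of CPA, finding the longest dependency-weighted path through the task graph, can be sketched in a few lines. This is an illustrative implementation, not an excerpt from any scheduling tool; the task names and durations are made up, echoing the home-building example above:

```python
def critical_path(tasks):
    """Find the longest path through a task dependency graph.

    tasks: {name: (duration, [dependencies])}
    Returns (total_duration, path). Assumes the graph is acyclic.
    """
    finish = {}  # earliest finish time per task (memoized)
    pred = {}    # predecessor on the longest path into each task

    def walk(name):
        if name in finish:
            return finish[name]
        duration, deps = tasks[name]
        start = 0
        pred[name] = None
        for d in deps:
            # A task cannot start until its latest-finishing dependency is done.
            if walk(d) > start:
                start = walk(d)
                pred[name] = d
        finish[name] = start + duration
        return finish[name]

    end = max(tasks, key=walk)  # task with the latest finish time
    path = []
    node = end
    while node is not None:
        path.append(node)
        node = pred[node]
    return finish[end], list(reversed(path))

# Hypothetical home-building tasks (durations in days).
tasks = {
    "foundation": (5, []),
    "frame": (10, ["foundation"]),
    "plumbing": (4, ["foundation"]),
    "roof": (6, ["frame"]),
}
critical_path(tasks)  # (21, ['foundation', 'frame', 'roof'])
```

Here plumbing has slack: it is not on the critical path, so delaying it a few days would not move the project end date, while any delay to foundation, frame, or roof would.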
3. PERT helps management in deciding the best possible resource utilization
method.
4. PERT takes advantage of the time network analysis technique.
5. PERT presents the structure for reporting information.
6. It helps the management in identifying the essential elements for the completion
of the project within time.
Advantages of PERT:
It has the following advantages:
1. Estimation of the completion time of the project is given by PERT.
2. It supports the identification of the activities with slack time.
3. The start and end dates of the activities of a specific project are determined.
4. It helps the project manager in identifying the critical path activities.
5. PERT makes a well-organized diagram for the representation of large amounts of
data.
Disadvantages of PERT:
It has the following disadvantages:
1. The complexity of PERT is high, which leads to problems in implementation.
2. The estimation of activity times is subjective in PERT, which is a major
disadvantage.
3. Maintenance of PERT is also expensive and complex.
4. The actual distribution of activity times may differ from the PERT beta
distribution, which causes wrong assumptions.
5. It underestimates the expected project completion time, as there is a chance that
other paths can become the critical path if their related activities are deferred.
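The beta distribution mentioned above underlies the classic PERT three-point estimate (the standard formula, which the notes reference but do not spell out):

```python
def pert_estimate(optimistic, most_likely, pessimistic):
    # Classic PERT three-point estimate based on the beta distribution:
    # expected time weights the most likely value 4x; the standard
    # deviation spans one sixth of the optimistic-pessimistic range.
    expected = (optimistic + 4 * most_likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev

# An activity estimated at 4 (best), 6 (most likely), 14 (worst) days.
pert_estimate(4, 6, 14)  # (7.0, 1.666...)
```

Disadvantage 4 above is precisely the risk that an activity's real duration distribution does not match this assumed beta shape.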
The Monte Carlo Analysis is a risk management technique, which project managers use to
estimate the impacts of various risks on the project cost and project timeline. Using this
method, one can easily find out what will happen to the project schedule and cost in case
any risk occurs. It is used at various times during the project life cycle to get an idea of
the range of probable outcomes under various scenarios.
Example:
Suppose you are estimating the timeline of a project and have come up with the best-case
scenario and the worst-case scenario. If everything goes according to your plan, there will
be no delays with respect to tasks; as a result, you will complete the project in 12 months.
However, if anything goes haywire, the project completion time can increase to a maximum
of 15 months; this is the worst-case scenario as far as business growth is concerned. This is
where Monte Carlo Analysis comes into the picture, as it lets you find quantified estimates.
Monte Carlo Analysis – How it Works
Estimated durations for all the tasks of a project are fed into the Monte Carlo
simulation tool.
The tool shows various timelines, such as the probability of completing a task in a
specific number of days (as discussed in the example given above).
Once the probable timelines of the various tasks are generated, a number of
simulations are run on these probabilities. The number of simulations typically
ranges in the thousands, and each one generates an end date.
Hence, the output of the Monte Carlo Analysis is not a single value but a
probability curve. This curve depicts the probable completion dates of various
tasks and their probability values.
This curve enables project managers to come up with the most probable schedule for
project completion and to submit a credible report of the project timeline to clients
and higher management.
Similarly, the Monte Carlo project management technique is used to generate the
costing or budget for a project.
Initial estimation of the best-case scenario, the most likely scenario, and the
worst-case scenario is done by the project manager.
It works best if you provide pertinent values in the first place; the evaluation can
go wrong if erroneous data is entered.
The Monte Carlo simulation in project management works for an entire project, instead
of for individual tasks, so everything has to be sorted out before using it.
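The steps above can be sketched with a simple simulation. Here, sequential tasks with triangular duration distributions stand in for a real scheduling model (an assumption for illustration only); reading a percentile off the sorted totals gives the probability curve:

```python
import random

def monte_carlo_schedule(tasks, runs=5000, seed=42):
    """Simulate total project duration for sequential tasks.

    tasks: list of (best, most_likely, worst) durations in months.
    Returns the sorted list of simulated totals, from which
    percentiles (the probability curve) can be read off.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    totals = []
    for _ in range(runs):
        # Draw one duration per task from a triangular distribution
        # and sum them, since the tasks are assumed sequential.
        total = sum(rng.triangular(best, worst, mode)
                    for best, mode, worst in tasks)
        totals.append(total)
    return sorted(totals)

# Three sequential tasks; the total ranges between 12 (best case)
# and 15 (worst case) months, matching the example above.
tasks = [(4.0, 4.3, 5.0), (4.0, 4.4, 5.0), (4.0, 4.3, 5.0)]
totals = monte_carlo_schedule(tasks)
p80 = totals[int(0.8 * len(totals))]  # duration achievable with ~80% confidence
```

Quoting `p80` to a client ("80% chance of finishing within this many months") is far more defensible than committing to the single best-case figure.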