SPM Unit 4-3
Topic-wise notes
Process Automation: Automation building blocks
Process automation, and change management in particular, is critical to an iterative
process. If change is too expensive, the development organization will resist it.
Many software development organizations are focused on evolving mature processes
to improve the predictability of software management and the performance of their
software lines of business (in terms of product quality, time to market, return on
investment, and productivity). While process definition and tailoring are necessary, a
significant level of process automation is also required in order for modern software
development projects to operate profitably.
Automating the development process and establishing an infrastructure for supporting
the various project workflows are important activities of the engineering stage
of the life cycle. They include the tool selection, custom tool smithing, and process
automation necessary to perform against the development plan with acceptable
efficiency. Evolving the development environment into the maintenance environment
is also crucial to any long-lived software development project.
Management
There are many opportunities for automating the project planning and control
activities of the management workflow. Software cost estimation tools and WBS tools
are useful for generating the planning artifacts. For managing against a plan,
workflow management tools and a software project control panel that can maintain
an on-line version of the status assessment are advantageous. This automation
support can considerably improve the insight of the metrics collection and reporting
concepts.
Environment
Configuration management and version control are essential in a modern iterative
development process.
Requirements
Conventional approaches decomposed system requirements into subsystem
requirements, subsystem requirements into component requirements, and component
requirements into unit requirements. The equal treatment of all requirements drained
away engineering hours from the driving requirements, then wasted that time on
paperwork associated with detailed traceability that was inevitably discarded later as
the driving requirements and subsequent design understanding evolved.
In a modern process, the system requirements are captured in the vision statement.
Lower levels of requirements are driven by the process—organized by iteration rather
than by lower-level component—in the form of evaluation criteria. These criteria are
typically captured by a set of use cases and other textually represented objectives. The
vision statement captures the contract between the development group and the buyer.
This information should be evolving but slowly varying across the life cycle, and
should be represented in a form that is understandable to the buyer. The evaluation
criteria are captured in the release specification artifacts, which are transient snapshots
of objectives for a given iteration. Evaluation criteria are derived from the vision
statement as well as from many other sources, such as make/buy analyses, risk
management concerns, architectural considerations, implementation constraints,
quality thresholds, and even shots in the dark.
Design
The tools that support the requirements, design, implementation, and assessment
workflows are usually used together.
Implementation
The implementation workflow relies primarily on a programming environment (editor,
compiler, debugger, linker, run time) but must also include substantial integration
with the change management tools, visual modeling tools, and test automation tools
to support productive iteration.
Assessment and Deployment
The assessment workflow requires all the tools just discussed as well as additional
capabilities to support test automation and test management.
Project Control and Process Instrumentation
The modern software development process tackles the central management issues of
complex software:
1. Getting the design right by focusing on the architecture first
2. Managing risk through iterative development
3. Reducing the complexity with component-based techniques
4. Making software progress and quality tangible through instrumented change
management
5. Automating the overhead and bookkeeping activities through the use of
round-trip engineering and integrated environments.
The goals of software metrics are to provide the development team and the
management team with the following:
• An accurate assessment of progress to date
• Insight into the quality of the evolving software product
• A basis for estimating the cost and schedule for completing the product with
increasing accuracy over time.
The basic characteristics of a good metric are as follows:
• They are simple, objective, easy to collect, easy to interpret, and hard to
misinterpret.
• Collection can be automated and nonintrusive.
• They provide for consistent assessments throughout the life cycle and are
derived from the evolving product baselines rather than from a subjective
assessment.
• They are useful to both management and engineering personnel for
communicating progress and quality in a consistent format.
• Their fidelity improves across the life cycle.
MANAGEMENT INDICATORS
There are three fundamental sets of management metrics: technical progress, financial
status, and staffing progress.
• By examining these perspectives, management can generally assess whether a
project is on budget and on schedule.
• Financial status is very well understood.
• Managers know their resource expenditures in terms of costs and schedule.
• The problem is to assess how much technical progress has been made.
• Conventional projects whose intermediate products were all paper documents
relied on subjective assessments of technical progress or measured the number of
documents completed.
• While these documents did reflect progress in expending energy, they were not
very indicative of useful work being accomplished.
• The management indicators include standard financial status based on an
earned value system, objective technical progress metrics tailored to the primary
measurement criteria for each major team of the organization, and staffing metrics that
provide insight into team dynamics.
WORK AND PROGRESS
The various activities of an iterative development project can be measured by defining
a planned estimate of the work in an objective measure, then tracking progress against
that plan. Each major organizational team should have at least one primary progress
perspective that it is measured against.
[Figure: cumulative work (percent complete) versus project schedule, tracked per
release (Release 1, Release 2, Release 3)]
• Expenditure plan: the planned spending profile for a project over its planned
schedule. For most software projects (and other labor-intensive projects), this
profile generally tracks the staffing profile.
• Actual progress: the technical accomplishment relative to the planned progress
underlying the spending profile. In a healthy project, the actual progress tracks
planned progress closely.
• Actual cost: the actual spending profile for a project over its actual schedule.
In a healthy project, this profile tracks the planned profile closely.
• Earned value: the value that represents the planned cost of the actual progress.
• Cost variance: the difference between the actual cost and the earned value.
Positive values correspond to over-budget situations; negative values correspond
to under-budget situations.
• Schedule variance: the difference between the planned cost and the earned
value. Positive values correspond to behind-schedule situations; negative values
correspond to ahead-of-schedule situations.
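These definitions can be made concrete with a small sketch; the function names and cumulative figures below are hypothetical, chosen only to illustrate the sign conventions:

```python
def cost_variance(actual_cost, earned_value):
    """Positive -> over budget; negative -> under budget."""
    return actual_cost - earned_value

def schedule_variance(planned_cost, earned_value):
    """Positive -> behind schedule; negative -> ahead of schedule."""
    return planned_cost - earned_value

# Hypothetical cumulative figures at one status assessment (units are arbitrary).
planned_cost = 600   # planned spending to date (expenditure plan)
actual_cost = 650    # actual spending to date
earned_value = 540   # planned cost of the progress actually achieved

print(cost_variance(actual_cost, earned_value))      # 110: over budget
print(schedule_variance(planned_cost, earned_value)) # 60: behind schedule
```

In a healthy project both variances stay near zero, matching the "tracks closely" expectations above.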
As an example, a book project can track the status of each part (a sequence of related
chapters) using the following states and earned values (the percent complete earned):
• 0 to 50%: content incomplete
• 50%: draft content; author has completed first draft text and art
• 65%: initial text baseline; initial text editing complete
• 75%: reviewable baseline; text and art editing complete
• 80%: updated baseline; cross-chapter consistency checked
• 90%: reviewed baseline; author has incorporated external reviewer comments
• 100%: final edit; editor has completed a final cleanup pass
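The earned-value scheme above can be rolled up into an overall progress number. A minimal sketch, assuming equally weighted parts (the state names mirror the list above; the sample statuses are hypothetical):

```python
# Earned value (percent complete) for each state in the scheme above.
STATE_VALUE = {
    "draft content": 50,
    "initial text baseline": 65,
    "reviewable baseline": 75,
    "updated baseline": 80,
    "reviewed baseline": 90,
    "final edit": 100,
}

def book_progress(part_states):
    """Overall percent complete, averaging equally weighted parts.
    States not in the table (content incomplete) earn 0."""
    earned = [STATE_VALUE.get(state, 0) for state in part_states]
    return sum(earned) / len(earned)

# Hypothetical status of a four-part book.
print(book_progress(["final edit", "reviewed baseline",
                     "draft content", "content incomplete"]))  # 60.0
```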
STAFFING AND TEAM DYNAMICS
An iterative development should start with a small team until the risks in the
requirements and architecture have been suitably resolved.
• Depending on the overlap of iterations and other project-specific
circumstances, staffing can vary.
• It is reasonable to expect the maintenance team to be smaller than the
development team for these sorts of developments.
• For a commercial product development, the sizes of the maintenance and
development teams may be the same. When long-lived, continuously improved
products are involved, maintenance is just continuous construction of new and
better releases.
• Tracking actual versus planned staffing is a necessary and well-understood
management metric.
• There is one other important management indicator of changes in project
momentum: the relationship between attrition and additions.
• Increases in staff can slow overall project progress as new people consume the
productive time of existing people in coming up to speed.
• Low attrition of good people is a sign of success.
• Engineers are highly motivated by making progress in getting something to
work; this is the recurring theme underlying an efficient iterative development
process.
• If this motivation is not there, good engineers will migrate elsewhere. An
increase in unplanned attrition—namely, people leaving a project
prematurely—is one of the most glaring indicators that a project is destined
for trouble. The causes of such attrition can vary, but they are usually
personnel dissatisfaction with management methods, lack of teamwork, or
probability of failure in meeting the planned objectives.
QUALITY INDICATORS
The four quality indicators are based primarily on the measurement of software
change across evolving baselines of engineering data (such as design models and
source code).
Rework is defined as the average cost of change, which is the effort to analyze,
resolve, and retest all changes to software baselines. Adaptability is defined as the
rework trend over time. For a healthy project, the trend expectation is decreasing or
stable.
Not all changes are created equal. Some changes can be made in a staff-hour, while
others take staff-weeks. This metric provides insight into rework measurement.
Rework trends that are increasing with time clearly indicate that product
maintainability is suspect.
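A sketch of how rework and adaptability could be computed from change data; the effort figures are hypothetical, and the least-squares slope is just one simple way to quantify the trend:

```python
def average_rework(change_costs):
    """Rework for one baseline: average effort (staff-hours) per change."""
    return sum(change_costs) / len(change_costs)

def adaptability_trend(rework_by_baseline):
    """Least-squares slope of rework across successive baselines.
    A positive slope means rework is rising and maintainability is suspect."""
    n = len(rework_by_baseline)
    mean_x = (n - 1) / 2
    mean_y = sum(rework_by_baseline) / n
    num = sum((x - mean_x) * (y - mean_y)
              for x, y in enumerate(rework_by_baseline))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

# Hypothetical staff-hours per change, grouped by released baseline.
baselines = [[8, 12, 10], [9, 11], [14, 18, 16]]
rework = [average_rework(b) for b in baselines]   # [10.0, 10.0, 16.0]
print(adaptability_trend(rework))                 # 3.0: increasing, a warning sign
```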
[Figure: rework and adaptability trends across released baselines over the project
schedule]
LIFE-CYCLE EXPECTATIONS
There is no mathematical or formal derivation for using the seven core metrics.
However, there were specific reasons for selecting them:
• The quality indicators are derived from the evolving product rather than from the
artifacts.
• They provide insight into the waste generated by the process. Scrap and rework
metrics are a standard measurement perspective of most manufacturing processes.
• They recognize the inherently dynamic nature of an iterative development process.
Rather than focus on static values, they explicitly concentrate on the trends or changes
with respect to time.
• The combination of insight from the current value and the current trend provides
tangible indicators for management action.
When metrics expose a problem, it is important to get underneath all the symptoms
and diagnose it. Metrics usually display effects; the causes require synthesis of multiple
perspectives and reasoning. For example, reasoning is still required to interpret the
following situations correctly:
• A low number of change requests to a software baseline may mean that the software
is mature and error-free, or it may mean that the test team is on vacation.
• A software change order that has been open for a long time may mean that the
problem was simple to diagnose and the solution required substantial rework, or it
may mean that a problem was very time-consuming to diagnose and the solution
required a simple change to a single line of code.
• A large increase in personnel in a given month may cause progress to increase
proportionally if they are trained people who are productive from the outset. It may
cause progress to decelerate if they are untrained new hires who demand extensive
support from productive people to get up to speed.
Value judgments cannot be made by metrics; they must be left to smarter entities such
as software project managers.
METRICS AUTOMATION
There are many opportunities to automate the project control activities of a software
project. For managing against a plan, a software project control panel (SPCP) that
maintains an on-line version of the status of evolving artifacts provides a key
advantage. This concept was first recommended by the Airlie Software Council
[Brown, 1996], using the metaphor of a project “dashboard.” The idea is to provide a
display panel that integrates data from multiple sources to show the current status of
some aspect of the project. For example, the software project manager would want to
see a display with overall project values, a test manager may want to see a display
focused on metrics specific to an upcoming beta release, and development managers
may be interested only in data concerning the subsystems and components for which
they are responsible. The panel can support standard features such as warning lights,
thresholds, variable scales, digital formats, and analog formats to present an
overview of the current situation. It can also provide extensive capability for detailed
situation analysis. This automation support can improve management insight into
progress and quality trends and improve the acceptance of metrics by the engineering
team.
To implement a complete SPCP, it is necessary to support user operations such as the following:
• Start the SPCP. The SPCP starts and shows the most current information that
was saved when the user last used the SPCP.
• Select a panel preference. The user selects from a list of previously defined
default panel preferences. The SPCP displays the preference selected.
• Select a value or graph metric. The user selects whether the metric should be
displayed for a given point in time or in a graph, as a trend. The default for values is
the most recent measurement available. The default for trends is monthly.
• Drill down to trend. The user points to a graphical object displaying a point in
time and drills down to view the trend for the metric.
• Drill down to point in time. The user points to a graphical object displaying
a trend and drills down to view the values for the metric.
• Drill down to lower level of indicators. The user points to a graphical object
displaying an indicator and drills down to view the breakdown of the next level of
indicators.
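A minimal sketch of the value/trend display choice and drill-down described above; the metric names, data, and in-memory storage are all hypothetical stand-ins for a real SPCP's integrated data sources:

```python
# Each metric keeps a monthly history (the default trend granularity).
HISTORY = {
    "cost_variance": [5, 20, 40, 110],
    "open_change_orders": [12, 18, 9, 7],
}

def show(metric, mode="value"):
    """mode='value': the most recent measurement (the default for values).
    mode='trend': the full monthly series, supporting drill-down between
    a point in time and its trend."""
    series = HISTORY[metric]
    return series[-1] if mode == "value" else list(series)

print(show("cost_variance"))           # 110
print(show("cost_variance", "trend"))  # [5, 20, 40, 110]
```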
Tailoring the Process
Key Points
▲ While specific process implementations will vary, the spirit underlying the
process is the same.
PROCESS DISCRIMINANTS
In tailoring the management process to a specific domain or project, there are two
dimensions of discriminating factors: technical complexity and management
complexity. The formality of reviews, the quality control of artifacts, the priorities of
concerns, and numerous other process instantiation parameters are governed by the
point a project occupies in these two dimensions.
A process framework is not a project-specific process implementation with a
well-defined recipe for success. Judgment must be injected, and the methods, techniques,
culture, formality, and organization must be tailored to the specific domain to
achieve a process implementation that can succeed. The following discussion about
the major differences among project processes is organized around six process
parameters: the size of the project and the five parameters that affect the process
exponent, and hence economies of scale, in COCOMO II. These are some of the
critical dimensions that a software project manager must consider when tailoring a
process framework to create a practical process implementation.
SCALE
Perhaps the single most important factor in tailoring a software process framework
to the specific needs of a project is the total scale of the software application. There
are many ways to measure scale, including number of source lines of code, number
of function points, number of use cases, and number of dollars. From a process
tailoring perspective, the primary measure of scale is the size of the team. As the
headcount increases, the importance of consistent interpersonal communications
becomes paramount. Otherwise, the diseconomies of scale can have a serious
impact on achievement of the project objectives.
Five people is an optimal size for an engineering team. Many studies indicate that
most people can best manage four to seven things at a time. A simple extrapolation
of these results suggests that there are fundamentally different management
approaches needed to manage a team of 1 (trivial), a team of 5 (small), a team of 25
(moderate), a team of 125 (large), a team of 625 (huge), and so on. As team size
grows, a new level of personnel management is introduced at roughly each factor of
5. This model can be used to describe some of the process differences among projects
of different sizes.
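The factor-of-5 model above can be sketched as a tiny calculation of the management layers implied by a given headcount (an extrapolation for illustration, not a formula from the text):

```python
def management_layers(team_size):
    """Levels of personnel management implied by a span of control of 5:
    a new layer is added roughly at each factor-of-5 growth in headcount."""
    layers = 0
    while team_size > 5 ** layers:
        layers += 1
    return layers

for size in (1, 5, 25, 125, 625):
    print(size, management_layers(size))  # 1->0, 5->1, 25->2, 125->3, 625->4
```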
Small projects (5 people) require very little management overhead, but team
leadership toward a common objective is crucial. There is some need to
communicate the intermediate artifacts among team members. Project milestones are
easily planned, informally conducted, and easily changed. There is a small number
of individual workflows. Performance depends primarily on personnel skills.
Process maturity is relatively unimportant. Individual tools can have a considerable
impact on performance.
For large and huge projects, in contrast, software process maturity and domain
experience are mandatory to avoid risks and ensure synchronization of expectations
across numerous stakeholders. A mature,
highly integrated, common environment across the development teams is necessary
to manage change, automate artifact production, maintain consistency among the
evolving artifacts, and improve the return on investment of common processes,
common tools, common notations, and common metrics.
STAKEHOLDER COHESION OR CONTENTION
A product that is funded, developed, marketed, and sold by the same organization
can be set up with a common goal (for example, profitability). A small, collocated
organization can be established that has a cohesive skill base and excellent day -to-day
communications among team members.
It is much more difficult to set up a large contractual effort without some contention
across teams. A development contractor rarely has all the necessary software or
domain expertise and frequently must team with multiple subcontractors, who have
competing profit goals. Funding authorities and users want to minimize cost,
maximize the feature set, and accelerate time to market, while development
contractors want to maximize profitability. Large teams are almost impossible to
collocate, and synchronizing stakeholder expectations is challenging. All these factors
tend to degrade team cohesion and must be managed continuously.
Process discriminators that result from differences in stakeholder cohesion:
• Artifacts
  Cohesive stakeholders: fewer and less detailed management artifacts required
  Contentious stakeholders: management artifacts paramount, especially the business
  case, vision, and status assessment
PROCESS FLEXIBILITY OR RIGOR
For a flexible process, a change can be authorized quickly; on the other hand, for a
very rigorous contract, it could take many months to authorize even a simple change.
Process discriminators that result from differences in process flexibility:
• Life-cycle phases
  Flexible process: tolerant of cavalier phase commitments
  Inflexible process: more credible basis required for inception phase commitments
• Artifacts
  Flexible process: changeable business case and vision
  Inflexible process: carefully controlled changes to business case and vision
• Workflow effort allocations
  Flexible process: (insignificant)
  Inflexible process: increased levels of management and assessment workflows
• Checkpoints
  Flexible process: many informal events for maintaining technical consistency
  Inflexible process: 3 or 4 formal events; synchronization among stakeholder teams,
  which can impede progress for days or weeks
PROCESS MATURITY
The process maturity level of the development organization, as defined by the
Software Engineering Institute’s Capability Maturity Model [SEI, 1993; 1993b; 1995], is
another key driver of management complexity. Managing a mature process (level 3 or
higher) is far simpler than managing an immature process (levels 1 and 2).
Organizations with a mature process typically have a high level of precedent
experience in developing software and a high level of existing process collateral that
enables predictable planning and execution of the process. This sort of collateral
includes well-defined methods, process automation tools, trained personnel,
planning metrics, artifact templates, and workflow templates. Tailoring a mature
organization’s process for a specific project is generally a straightforward task. Table
summarizes key differences in the process primitives for varying levels of process
maturity.
ARCHITECTURAL RISK RESOLUTION
The degree of technical feasibility demonstrated before commitment to full-scale
production is an important dimension of defining a specific project’s process. There are
many sources of architectural risk. Some of the most important and recurring sources
are system performance (resource utilization, response time, throughput, accuracy),
robustness to change (addition of new features, incorporation of new technology,
adaptation to dynamic operational conditions), and system reliability (predictable
behavior, fault tolerance). The degree to which these risks can be eliminated before
construction begins can have dramatic ramifications in the process tailoring. Table
summarizes key differences in the process primitives for varying levels of
architectural risk.
Process discriminators that result from differences in architectural risk (process
primitive: complete architecture feasibility demonstration vs. no architecture
feasibility demonstration)
DOMAIN EXPERIENCE
Process discriminators that result from differences in domain experience (process
primitive: experienced team vs. inexperienced team)
An analysis of the differences between the phases, workflows, and artifacts of two
projects on opposite ends of the management complexity spectrum shows how
different two software project processes can be. The following gross generalizations are
intended to point out some of the dimensions of flexibility, priority, and fidelity that
can change when a process framework is applied to different applications, projects,
and domains.
Table 14-7 illustrates the differences in schedule distribution for large and small
projects across the life-cycle phases. A small commercial project (for example, a 50,000
source-line Visual Basic Windows application, built by a team of five) may require
only 1 month of inception, 2 months of elaboration, 5 months of construction, and 2
months of transition. A large, complex project (for example, a 300,000 source-line
embedded avionics program, built by a team of 40) could require 8 months of inception
alone, with correspondingly longer elaboration, construction, and transition phases.
Schedule distribution across phases for small and large projects
Differences in workflow priorities between small, engineering-dominated projects and
large, production-dominated projects (ranked from highest to lowest priority):
• Rank 1: design (small) / management (large)
• Rank 2: implementation (small) / design (large)
• Rank 3: deployment (small) / requirements (large)
• Rank 4: requirements (small) / assessment (large)
• Rank 5: assessment (small) / environment (large)
• Rank 6: management (small) / implementation (large)
• Rank 7: environment (small) / deployment (large)
A large, one-of-a-kind, complex project typically has a single deployment
site. Legacy systems and continuous operations may pose several risks, but
in general these problems are well understood and have a fairly static set
of objectives.
Differences in artifacts between a small commercial project and a large, complex project:
• Work breakdown structure
  Small: 1-page spreadsheet with 2 levels of WBS elements
  Large: financial management system with 5 or 6 levels of WBS elements
• Business case
  Small: spreadsheet and short memo
  Large: 3-volume proposal including technical volume, cost volume, and related experience
• Vision statement
  Small: 10-page concept paper
  Large: 200-page subsystem specification
• Development plan
  Small: 10-page plan
  Large: 200-page development plan
• Release specifications and number of releases
  Small: 3 interim release specifications
  Large: 8 to 10 interim release specifications
• Architecture description
  Small: 5 critical use cases, 50 UML diagrams, 20 pages of text, other graphics
  Large: 25 critical use cases, 200 UML diagrams, 100 pages of text, other graphics
• Software
  Small: 50,000 lines of Visual Basic code
  Large: 300,000 lines of C++ code
• Release description
  Small: 10-page release notes
  Large: 100-page summary
• Deployment
  Small: user training course and sales rollout kit
  Large: transition plan and installation plan
• User manual
  Small: on-line help and 100-page user manual
  Large: 200-page user manual
• Status assessment
  Small: quarterly project reviews
  Large: monthly project management reviews