
Software Project Management

Process Automation Unit IV


Syllabus in the unit
Process Automation:
Automation building blocks
Process control and process instrumentation: The seven-core metrics, Management
indicators, Quality Indicators, Life Cycle expectations, Pragmatic Software metrics,
Metrics Automation
Tailoring the process: Process discriminants

Topic-wise notes
Process Automation: Automation building blocks
Process automation, and change management in particular, is critical to an iterative
process. If change is too expensive, the development organization will resist it.
Many software development organizations are focused on evolving mature processes
to improve the predictability of software management and the performance of their
software lines of business (in terms of product quality, time to market, return on
investment, and productivity). While process definition and tailoring are necessary, a
significant level of process automation is also required in order for modern software
development projects to operate profitably.
Automating the development process and establishing an infrastructure for supporting
the various project workflows are important activities of the engineering stage
of the life cycle. They include the tool selection, custom tool smithing, and process
automation necessary to perform against the development plan with acceptable
efficiency. Evolving the development environment into the maintenance environment
is also crucial to any long-lived software development project.
Management
There are many opportunities for automating the project planning and control
activities of the management workflow. Software cost estimation tools and WBS tools
are useful for generating the planning artifacts. For managing against a plan,
workflow management tools and a software project control panel that can maintain
an on-line version of the status assessment are advantageous. This automation
support can considerably improve the insight of the metrics collection and reporting
concepts.
Environment
Configuration management and version control are essential in a modern iterative
development process.
Requirements
Conventional approaches decomposed system requirements into subsystem
requirements, subsystem requirements into component requirements, and component
requirements into unit requirements. The equal treatment of all requirements drained
away engineering hours from the driving requirements, then wasted that time on
paperwork associated with detailed traceability that was inevitably discarded later as
the driving requirements and subsequent design understanding evolved.
In a modern process, the system requirements are captured in the vision statement.
Lower levels of requirements are driven by the process—organized by iteration rather
than by lower-level component—in the form of evaluation criteria. These criteria are
typically captured by a set of use cases and other textually represented objectives. The
vision statement captures the contract between the development group and the buyer.
This information should be evolving but slowly varying, across the life cycle, and
should be represented in a form that is understandable to the buyer. The evaluation
criteria are captured in the release specification artifacts, which are transient snapshots
of objectives for a given iteration. Evaluation criteria are derived from the vision
statement as well as from many other sources, such as make/buy analyses, risk
management concerns, architectural considerations, implementation constraints,
quality thresholds, and even shots in the dark.

Design
The tools that support the requirements, design, implementation, and assessment
workflows are usually used together.
Implementation
The implementation workflow relies primarily on a programming environment (editor,
compiler, debugger, linker, run time) but must also include substantial integration
with the change management tools, visual modeling tools, and test automation tools
to support productive iteration.
Assessment and Deployment
The assessment workflow requires all the tools just discussed as well as additional
capabilities to support test automation and test management.
Project Control and Process Instrumentation:
The modern software development process tackles the central management issues of
complex software
1. Getting the design right by focusing on the architecture first
2. Managing risk through iterative development
3. Reducing the complexity with component-based techniques
4. Making software progress and quality tangible through instrumented change
management
5. Automating the overhead and bookkeeping activities through the use of
round-trip engineering and integrated environments.

The goals of software metrics are to provide the development team and the
management team with the following:
• An accurate assessment of progress to date
• Insight into the quality of the evolving software product
• A basis for estimating the cost and schedule for completing the product with
increasing accuracy over time.

THE SEVEN CORE METRICS


Many different metrics may be of value in managing a modern process. Three
are management indicators and four are quality indicators.
MANAGEMENT INDICATORS

• Work and progress (work performed over time)


• Budgeted cost and expenditures (cost incurred over time)
• Staffing and team dynamics (personnel changes over time)
QUALITY INDICATORS
• Change traffic and stability (change traffic over time)
• Breakage and modularity (average breakage per change over time)
• Rework and adaptability (average rework per change over time)
• Mean time between failures (MTBF) and maturity (defect rate over time)

The attributes of seven core metrics are

• They are simple, objective, easy to collect, easy to interpret, and hard to
misinterpret.
• Collection can be automated and nonintrusive.
• They provide for consistent assessments throughout the life cycle and are
derived from the evolving product baselines rather than from a subjective
assessment.
• They are useful to both management and engineering personnel for
communicating progress and quality in a consistent format.
• Their fidelity improves across the life cycle.

MANAGEMENT INDICATORS
There are three fundamental sets of management metrics: technical progress, financial
status, and staffing progress.
• By examining these perspectives, management can generally assess whether a
project is on budget and on schedule.
• Financial status is very well understood.
• Managers know their resource expenditures in terms of costs and schedule.
• The problem is to assess how much technical progress has been made.
• Conventional projects whose intermediate products were all paper documents
relied on subjective assessments of technical progress or measured the number of
documents completed.
• While these documents did reflect progress in expending energy, they were not
very indicative of useful work being accomplished.
• The management indicators include standard financial status based on an
earned value system, objective technical progress metrics tailored to the primary
measurement criteria for each major team of the organization, and staffing metrics that
provide insight into team dynamics.
WORK AND PROGRESS
The various activities of an iterative development project can be measured by defining
a planned estimate of the work in an objective measure, then tracking progress against
that plan. Each major organizational team should have at least one primary progress
perspective that it is measured against.

Figure: Expected progress for a typical project with three major releases (percent of work completed versus project schedule, shown for Releases 1, 2, and 3).
The default perspectives of this metric are as follows:

•Software architecture team: use cases demonstrated


•Software development team: SLOC under baseline change management, SCOs
closed
•Software assessment team: SCOs opened, test hours executed, evaluation
criteria met
•Software management team: milestones completed

BUDGETED COST AND EXPENDITURES


To maintain management control, measuring cost expenditures over the project life
cycle is always necessary.
Through the judicious use of the metrics for work and progress, a much more objective
assessment of technical progress can be performed to compare with cost expenditures.
With an iterative development process, it is important to plan the near-term activities
(usually a window of time less than six months) in detail and leave the far-term
activities as rough estimates to be refined as the current iteration is winding down and
planning for the next iteration becomes crucial.
Tracking financial progress usually takes on an organization-specific format. One
common approach to financial performance measurement is use of an earned value
system, which provides highly detailed cost and schedule insight. Its major weakness
for software projects has traditionally been the inability to assess the technical
progress (% complete) objectively and accurately. While this will always be the case
in the engineering stage of a project, earned value systems have proved to be effective
for the production stage, where there is high-fidelity tracking of actuals versus plans
and predictable results. The other core metrics provide a framework for detailed and
realistic quantifiable backup data to plan and track against, especially in the
production stage of a software project, when the cost and schedule expenditures are
highest.
Modern software processes are amenable to financial performance measurement
through an earned value approach. The basic parameters of an earned value system,
usually expressed in units of dollars, are as follows:

• Expenditure plan: the planned spending profile for a project over its planned
schedule. For most software projects (and other labor-intensive projects), this
profile generally tracks the staffing profile.
• Actual progress: the technical accomplishment relative to the planned progress
underlying the spending profile. In a healthy project, the actual progress tracks
planned progress closely.
• Actual cost: the actual spending profile for a project over its actual schedule.
In a healthy project, this profile tracks the planned profile closely.
• Earned value: the value that represents the planned cost of the actual progress.
• Cost variance: the difference between the actual cost and the earned value.
Positive values correspond to over-budget situations; negative values correspond
to under-budget situations.
• Schedule variance: the difference between the planned cost and the earned
value. Positive values correspond to behind-schedule situations; negative values
correspond to ahead-of-schedule situations.
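
To make these relationships concrete, the following sketch (illustrative only; the dollar figures, the 35% progress assessment, and the function names are assumptions, not values from the text) computes earned value, cost variance, and schedule variance as defined above:

```python
# Illustrative sketch of the earned value relationships described above.
# All numbers are hypothetical; units are dollars (or any consistent cost unit).

def earned_value(budget_at_completion: float, percent_complete: float) -> float:
    """Earned value = planned cost of the actual progress achieved."""
    return budget_at_completion * percent_complete

def cost_variance(actual_cost: float, earned: float) -> float:
    """Positive -> over budget; negative -> under budget."""
    return actual_cost - earned

def schedule_variance(planned_cost_to_date: float, earned: float) -> float:
    """Positive -> behind schedule; negative -> ahead of schedule."""
    return planned_cost_to_date - earned

if __name__ == "__main__":
    budget_at_completion = 1_000_000   # total planned spending for the project
    planned_cost_to_date = 400_000     # spending profile value at this point in time
    actual_cost_to_date = 450_000      # what has actually been spent so far
    percent_complete = 0.35            # assessed technical progress

    ev = earned_value(budget_at_completion, percent_complete)                  # 350,000
    print("Earned value:", ev)
    print("Cost variance:", cost_variance(actual_cost_to_date, ev))            # +100,000 -> over budget
    print("Schedule variance:", schedule_variance(planned_cost_to_date, ev))   # +50,000 -> behind schedule
```
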
For example, the status of each part (a sequence of related chapters) can be tracked using
the following states and earned values (the percent complete earned):
• 0 to 50%: content incomplete
• 50%: draft content; author has completed first draft text and art
• 65%: initial text baseline; initial text editing complete
• 75%: reviewable baseline; text and art editing complete
• 80%: updated baseline; cross-chapter consistency checked
• 90%: reviewed baseline; author has incorporated external reviewer comments
• 100%: final edit; editor has completed a final cleanup pass
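
A minimal sketch of how such a progression might be rolled up into an overall percent complete (the part names and their current states are hypothetical; the weights mirror the list above):

```python
# Hypothetical roll-up of percent-complete earned values for a set of parts,
# using the progression states and weights listed above.

STATE_EARNED_VALUE = {
    "content incomplete": 0.0,    # 0 to 50% in the text; treated conservatively here
    "draft content": 0.50,
    "initial text baseline": 0.65,
    "reviewable baseline": 0.75,
    "updated baseline": 0.80,
    "reviewed baseline": 0.90,
    "final edit": 1.00,
}

parts = {                          # hypothetical part names and current states
    "Part I": "final edit",
    "Part II": "reviewed baseline",
    "Part III": "draft content",
    "Part IV": "content incomplete",
}

overall = sum(STATE_EARNED_VALUE[state] for state in parts.values()) / len(parts)
print(f"Overall percent complete: {overall:.0%}")   # (1.00 + 0.90 + 0.50 + 0.0) / 4 = 60%
```
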
STAFFING AND TEAM DYNAMICS
An iterative development should start with a small team until the risks in the
requirements and architecture have been suitably resolved.
• Depending on the overlap of iterations and other project-specific
circumstances, staffing can vary.
• It is reasonable to expect the maintenance team to be smaller than the
development team for these sorts of developments.
• For a commercial product development, the sizes of the maintenance and
development teams may be the same. When long-lived, continuously improved
products are involved, maintenance is just continuous construction of new and
better releases.
• Tracking actual versus planned staffing is a necessary and well-understood
management metric.
• There is one other important management indicator of changes in project
momentum: the relationship between attrition and additions.
• Increases in staff can slow overall project progress as new people consume the
productive time of existing people in coming up to speed.
• Low attrition of good people is a sign of success.
• Engineers are highly motivated by making progress in getting something to
work; this is the recurring theme underlying an efficient iterative development
process.
• If this motivation is not there, good engineers will migrate elsewhere. An
increase in unplanned attrition—namely, people leaving a project
prematurely—is one of the most glaring indicators that a project is destined
for trouble. The causes of such attrition can vary, but they are usually
personnel dissatisfaction with management methods, lack of teamwork, or
probability of failure in meeting the planned objectives.
QUALITY INDICATORS

The four quality indicators are based primarily on the measurement of software
change across evolving baselines of engineering data (such as design models and
source code).

CHANGE TRAFFIC AND STABILITY


Overall change traffic is one specific indicator of progress and quality. Change traffic is
defined as the number of software change orders opened and closed over the life cycle. This
metric can be collected by change type, by release, across all releases, by team, by
components, by subsystem, and so forth. Coupled with the work and progress metrics, it
provides insight into the stability of the software and its convergence toward stability (or
divergence toward instability). Stability is defined as the relationship between opened
versus closed SCOs. The change traffic relative to the release schedule provides insight into
schedule predictability, which is the primary value of this metric and an indicator of how
well the process is performing.
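
As a rough illustration (the SCO records below are invented for the example), stability can be tracked by tallying opened versus closed SCOs per period and watching the cumulative backlog:

```python
# Illustrative change-traffic tally: stability is viewed as the relationship
# between cumulative opened and cumulative closed SCOs over time.

from collections import Counter

# Hypothetical software change orders: (period opened, period closed or None if still open)
scos = [(1, 1), (1, 2), (2, 2), (2, 4), (3, 3), (3, None), (4, None), (4, 4)]

opened = Counter(o for o, _ in scos)
closed = Counter(c for _, c in scos if c is not None)

cum_open = cum_closed = 0
for period in range(1, 5):
    cum_open += opened[period]
    cum_closed += closed[period]
    backlog = cum_open - cum_closed
    print(f"Period {period}: opened={opened[period]} closed={closed[period]} open backlog={backlog}")
# A shrinking backlog suggests convergence toward stability; a growing one suggests divergence.
```
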
BREAKAGE AND MODULARITY

• Breakage is defined as the average extent of change, which is the amount of software
baseline that needs rework (in SLOC, function points, components, subsystems, files, etc.).
• Modularity is the average breakage trend over time. For a healthy project, the trend
expectation is decreasing or stable.
• This indicator provides insight into the benign or malignant character of software change.
• In a mature iterative development process, earlier changes are expected to result in
more scrap than later changes.
• Breakage trends that are increasing with time clearly indicate that product
maintainability is suspect.

REWORK AND ADAPTABILITY

Rework is defined as the average cost of change, which is the effort to analyze,
resolve, and retest all changes to software baselines. Adaptability is defined as the
rework trend over time. For a healthy project, the trend expectation is decreasing or
stable.
Not all changes are created equal. Some changes can be made in a staff-hour, while
others take staff-weeks. This metric provides insight into rework measurement.
Rework trends that are increasing with time clearly indicate that product
maintainability is suspect.
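
A small sketch of an adaptability check, assuming hypothetical rework effort and change counts per release; the point is only that the average rework per change should trend downward or stay flat:

```python
# Illustrative adaptability check: average rework (effort per change) should be
# decreasing or stable over time in a healthy project. Values are hypothetical.

rework_per_release = {                # (staff-hours of rework, SCOs closed) per release
    "Release 1": (400, 50),
    "Release 2": (360, 60),
    "Release 3": (280, 70),
}

averages = []
for release, (hours, changes) in rework_per_release.items():
    avg = hours / changes
    averages.append(avg)
    print(f"{release}: average rework per change = {avg:.1f} staff-hours")

trend = "decreasing or stable" if all(b <= a for a, b in zip(averages, averages[1:])) else "increasing"
print("Adaptability trend:", trend)   # an increasing trend would flag maintainability risk
```
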

MTBF AND MATURITY


MTBF is the average usage time between software faults. In rough terms, MTBF is computed by
dividing the test hours by the number of type 0 and type 1 SCOs. Maturity is defined as the MTBF
trend over time.
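
In code form, the rough computation described above might look like the following sketch (baseline names, test hours, and SCO counts are hypothetical):

```python
# MTBF in rough terms: test hours divided by the number of type 0 and type 1 SCOs.
# Maturity is the trend of this value across successive released baselines.

baselines = [
    # (baseline name, test hours executed, critical SCOs of type 0 and type 1)
    ("Baseline A", 200, 25),
    ("Baseline B", 500, 20),
    ("Baseline C", 900, 12),
]

for name, test_hours, critical_scos in baselines:
    mtbf = test_hours / critical_scos
    print(f"{name}: MTBF = {mtbf:.1f} test/usage hours per critical failure")
# An increasing MTBF across released baselines indicates growing maturity.
```
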
Early insight into maturity requires that an effective test infrastructure be established.
Conventional testing approaches for monolithic software programs focused on achieving complete
test coverage of every line of code, every branch.
Figure: Maturity expectation over a healthy project's life cycle (MTBF of released baselines versus project schedule).

In today’s distributed and componentized software systems, such complete test coverage
is achievable only for discrete components. Systems of components are more efficiently
tested by using statistical techniques. Consequently, the maturity metrics measure
statistics over usage time rather than product coverage.
Software errors can be categorized into two types: deterministic and nondeterministic.
Physicists would characterize these as Bohr-bugs and Heisen-bugs,
respectively. Bohr-bugs represent a class of errors that always result when the
software is stimulated in a certain way. These errors are predominantly caused by
coding errors, and changes are typically isolated to a single component. Heisen-bugs
are software faults that are coincidental with a certain probabilistic occurrence of a
given situation. These errors are almost always design errors (frequently requiring
changes in multiple components) and typically are not repeatable even when the
software is stimulated in the same apparent way. To provide adequate test coverage
and resolve the statistically significant Heisen-bugs, extensive statistical testing under
realistic and randomized usage scenarios is necessary.
Conventional software programs executing a single program on a single processor
typically contained only Bohr-bugs. Modern, distributed systems with numerous
interoperating components executing across a network of processors are vulnerable
to Heisen-bugs, which are far more complicated to detect, analyze, and resolve. The
best way to mature a software product is to establish an initial test infrastructure that
allows execution of randomized usage scenarios early in the life cycle and
continuously evolves the breadth and depth of usage scenarios to optimize coverage
across the reliability-critical components.
As baselines are established, they should be continuously subjected to test scenarios.
From this base of testing, reliability metrics can be extracted. Meaningful insight into
product maturity can be gained by maximizing test time (through independent test
environments, automated regression tests, randomized statistical testing, after-hours
stress testing, etc.). This testing approach provides a powerful mechanism for
encouraging automation in the test activities as early in the life cycle as practical. This
technique could also be used for monitoring performance improvements and
measuring reliability.

LIFE-CYCLE EXPECTATIONS

There is no mathematical or formal derivation for using the seven core metrics. However,
there were specific reasons for selecting them:

• The quality indicators are derived from the evolving product rather than from the artifacts.
• They provide insight into the waste generated by the process. Scrap and rework metrics are a standard measurement perspective of most manufacturing processes.
• They recognize the inherently dynamic nature of an iterative development process. Rather than focus on the value, they explicitly concentrate on the trends or changes with respect to time.
• The combination of insight from the current value and the current trend provides tangible indicators for management action.


METRIC | INCEPTION | ELABORATION | CONSTRUCTION | TRANSITION
Progress | 5% | 25% | 90% | 100%
  Architecture | 30% | 90% | 100% | 100%
  Applications | <5% | 20% | 85% | 100%
Expenditures | Low | Moderate | High | High
Effort | 5% | 25% | 90% | 100%
Schedule | 10% | 40% | 90% | 100%
Staffing | Small team | Ramp up | Steady | Varying
Stability | Volatile | Moderate | Moderate | Stable
  Architecture | Volatile | Moderate | Stable | Stable
  Applications | Volatile | Volatile | Moderate | Stable
Modularity | 50%-100% | 25%-50% | <25% | 5%-10%
  Architecture | >50% | >50% | <15% | <5%
  Applications | >80% | >80% | <25% | <10%
Adaptability | Varying | Varying | Benign | Benign
  Architecture | Varying | Moderate | Benign | Benign
  Applications | Varying | Varying | Moderate | Benign
Maturity | Prototype | Fragile | Usable | Robust
  Architecture | Prototype | Usable | Robust | Robust
  Applications | Prototype | Fragile | Usable | Robust

The default pattern of life-cycle metrics evolution

PRAGMATIC SOFTWARE METRICS


Measuring is useful, but it doesn’t do any thinking for the decision makers. It only
provides data to help them ask the right questions, understand the context, and make
objective decisions. Because of the highly dynamic nature of software projects, these
measures must be available at any time, tailorable to various subsets of the evolving
product (release, version, component, class), and maintained so that trends can be
assessed (first and second derivatives with respect to time). This situation has been
achieved in practice only in projects where the metrics were maintained on-line as an
automated by-product of the development/integration environment.
The basic characteristics of a good metric are as follows:

1. It is considered meaningful by the customer, manager, and performer. If any


one of these stakeholders does not see the metric as meaningful, it will not be used.
“The customer is always right” is a sales motto, not an engineering tenet. Customers
come to software engineering providers because the providers are more expert than
they are at developing and managing software. Customers will accept metrics that are
demonstrated to be meaningful to the developer.
2. It demonstrates quantifiable correlation between process perturbations and
business performance. The only real organizational goals and objectives are financial:
cost reduction, revenue increase, and margin increase.
3. It is objective and unambiguously defined. Objectivity should translate into
some form of numeric representation (such as numbers, percentages, ratios) as
opposed to textual representations (such as excellent, good, fair, poor). Ambiguity is
minimized through well-understood units of measurement (such as staff-month,
SLOC, change, function point, class, scenario, requirement), which are surprisingly
hard to define precisely in the software engineering world.
4. It displays trends. This is an important characteristic. Understanding the
change in a metric’s value with respect to time, subsequent projects, subsequent
releases, and so forth is an extremely important perspective, especially for today’s
iterative development models. It is very rare that a given metric drives the appropriate
action directly. More typically, a metric presents a perspective. It is up to the decision
authority (manager, team, or other information processing entity) to interpret the
metric and decide what action is necessary.
5. It is a natural by-product of the process. The metric does not introduce new
artifacts or overhead activities; it is derived directly from the mainstream engineering
and management workflows.
6. It is supported by automation. Experience has demonstrated that the most successful metrics are
those that are collected and reported by automated tools, in part because software tools require
rigorous definitions of the data they process.

When metrics expose a problem, it is important to get underneath all the symptoms
and diagnose it. Metrics usually display effects; the causes require synthesis of multiple
perspectives and reasoning. For example, reasoning is still required to interpret the
following situations correctly:

• A low number of change requests to a software baseline may mean that the software
is mature and error-free, or it may mean that the test team is on vacation.
• A software change order that has been open for a long time may mean that the
problem was simple to diagnose and the solution required substantial rework, or it
may mean that a problem was very time-consuming to diagnose and the solution
required a simple change to a single line of code.
• A large increase in personnel in a given month may cause progress to increase
proportionally if they are trained people who are productive from the outset. It may
cause progress to decelerate if they are untrained new hires who demand extensive
support from productive people to get up to speed.
Value judgments cannot be made by metrics; they must be left to smarter entities such
as software project managers.
METRICS AUTOMATION
There are many opportunities to automate the project control activities of a software
project. For managing against a plan, a software project control panel (SPCP) that
maintains an on-line version of the status of evolving artifacts provides a key
advantage. This concept was first recommended by the Airlie Software Council
[Brown, 1996], using the metaphor of a project “dashboard.” The idea is to provide a
display panel that integrates data from multiple sources to show the current status of
some aspect of the project. For example, the software project manager would want to
see a display with overall project values, a test manager may want to see a display
focused on metrics specific to an upcoming beta release, and development managers
may be interested only in data concerning the subsystems and components for which
they are responsible. The panel can support standard features such as warning lights,
thresholds, variable scales, digital formats, and analog formats to present an
overview of the current situation. It can also provide extensive capability for detailed
situation analysis. This automation support can improve management insight into
progress and quality trends and improve the acceptance of metrics by the engineering
team.
To implement a complete SPCP, it is necessary to define and develop the following:

• Metrics primitives: indicators, trends, comparisons, and progressions


• A graphical user interface: GUI support for a software project manager role and
flexibility to support other roles
• Metrics collection agents: data extraction from the environment tools that
maintain the engineering notations for the various artifact sets
• Metrics data management server: data management support for populating the
metric displays of the GUI and storing the data extracted by the agents
• Metrics definitions: actual metrics presentations for requirements progress
(extracted from requirements set artifacts), design progress (extracted from design set
artifacts), implementation progress (extracted from implementation set artifacts),
assessment progress (extracted from deployment set artifacts), and other progress
dimensions (extracted from manual sources, financial management systems,
management artifacts, etc.)
• Actors: typically, the monitor and the administrator

Specific monitors (called roles) include software project managers, software


development team leads, software architects, and customers. For every role, there is a
specific panel configuration and scope of data presented. Each role performs the same
general use cases, but with a different focus.
• Monitor: defines panel layouts from existing mechanisms, graphical objects,
and linkages to project data; queries data to be displayed at different levels of
abstraction
• Administrator: installs the system; defines new mechanisms, graphical objects,
and linkages; handles archiving functions; defines composition and decomposition
structures for displaying multiple levels of abstraction
The whole display is called a panel. Within a panel are graphical objects, which are
types of layouts (such as dials and bar charts) for information. Each graphical object
displays a metric. A panel typically contains a number of graphical objects positioned
in a particular geometric layout. A metric shown in a graphical object is labeled with
the metric type, the summary level, and the instance name (such as lines of code,
subsystem, server1). Metrics can be displayed in two modes: value, referring to a given
point in time, or graph, referring to multiple and consecutive points in time. Only
some of the display types are applicable to graph metrics.
Metrics can be displayed with or without control values. A control value is an existing
expectation, either absolute or relative, that is used for comparison with a dynamically
changing metric. For example, the plan for a given progress metric is a control value
for comparing the actuals of that metric. Thresholds are another example of control
values. Crossing a threshold may result in a state change that needs to be obvious to
a user. Control values can be shown in the same graphical object as the corresponding
metric, for visual comparison by the user.
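
A minimal sketch of this idea, assuming a simple progress metric with the plan as its control value and a 10% variance threshold (the function and threshold are illustrative, not part of any specific SPCP implementation):

```python
# Hypothetical sketch: an SPCP-style indicator derived by comparing a metric's
# actuals against its control values (the plan and a variance threshold).

def indicator_state(actual: float, planned: float, threshold_pct: float = 0.10) -> str:
    """Return a tertiary (green/yellow/red) state for a progress metric."""
    variance = (planned - actual) / planned if planned else 0.0
    if variance <= 0:
        return "green"            # at or ahead of plan
    if variance <= threshold_pct:
        return "yellow"           # within the threshold of plan
    return "red"                  # threshold crossed: a visible state change

monthly_plan = [10, 25, 45, 70]     # planned percent complete (control values)
monthly_actual = [10, 24, 38, 55]   # measured percent complete

for month, (plan, actual) in enumerate(zip(monthly_plan, monthly_actual), start=1):
    print(f"Month {month}: plan={plan}% actual={actual}% -> {indicator_state(actual, plan)}")
```
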
Indicators may display data in formats that are binary (such as black and white),
tertiary (such as red, yellow, and green), digital (integer or float), or some other
enumerated type (a sequence of possible discrete values such as sun..sat,
ready-aim-fire, jan..dec). Indicators also provide a mechanism that can be used to summarize a
condition or circumstance associated with another metric, or relationships between
metrics and their associated control values.
A trend graph presents values over time and permits upper and lower thresholds to
be defined. Crossing a threshold could be linked to an associated indicator to depict a
noticeable state change from green to red or vice versa. Trends support user-selected
time increments (such as day, week, month, quarter, year). A comparison graph
presents multiple values together, over time. Convergence or divergence among
values may be linked to an indicator. A progression graph presents percent complete,
where elements of progress are shown as transitions between states and an earned
value is associated with each state.
Metric information can be summarized following a user-defined, linear structure. (For
example, lines of code can be summarized by unit, subsystem, and project.) The
project is the top-level qualifier for all data belonging to a set (top-level context). Users
can define summary structures for lower levels, select the display level based on
previously defined structures, and drill down on a summarized number by seeing the
lower level details.
1. Project activity status. The graphical object in the upper left provides an
overview of the status of the top-level WBS elements. The seven elements
could be coded red, yellow, and green to reflect the current earned value
status. (In the example display, they are coded with white and shades of gray.) For
example, green would represent ahead of plan, yellow would indicate
within 10% of plan, and red would identify elements that have a greater
than 10% cost or schedule variance. This graphical object provides several
examples of indicators: tertiary colors, the actual percentage, and the current
first derivative (up arrow means getting better, down arrow means getting worse).

Figure: Example SPCP display for a top-level project situation.
2. Technical artifact status. The graphical object in the upper right provides
an overview of the status of the evolving technical artifacts. The Req light
would display an assessment of the current state of the use case models and
requirements specifications. The Des light would do the same for the design
models, the Imp light for the source code baseline, and the Dep light for
the test program.
3. Milestone progress. The graphical object in the lower left provides a
progress assessment of the achievement of milestones against plan and
provides indicators of the current values.
4. Action item progress. The graphical object in the lower right provides a
different perspective of progress, showing the current number of open and
closed issues.
The following top-level use case, which describes the basic operational concept for
an SPCP, corresponds to a monitor interacting with the control panel:

• Start the SPCP. The SPCP starts and shows the most current information that
was saved when the user last used the SPCP.

• Select a panel preference. The user selects from a list of previously defined
default panel preferences. The SPCP displays the preference selected.

• Select a value or graph metric. The user selects whether the metric should be
displayed for a given point in time or in a graph, as a trend. The default for values is
the most recent measurement available. The default for trends is monthly.

• Select to superimpose controls. The user points to a graphical object and requests
that the control values for that metric and point in time be displayed. In the case of
trends, the controls are shown superimposed with the metric.

• Drill down to trend. The user points to a graphical object displaying a point in
time and drills down to view the trend for the metric.

• Drill down to point in time. The user points to a graphical object displaying
a trend and drills down to view the values for the metric.

• Drill down to lower levels of information. The user points to a graphical


object displaying a point in time and drills down to view the next level of
information.

• Drill down to lower level of indicators. The user points to a graphical object
displaying an indicator and drills down to view the breakdown of the next level of
indicators.
Tailoring the Process
Key Points

▲ The process framework must be configured to the specific characteristics of the


project.

▲ The scale of the project—in particular, team size—drives the process


configuration more than any other factor.

▲ Other key factors include stakeholder relationships, process flexibility, process


maturity, architectural risk, and domain experience.

▲ While specific process implementations will vary, the spirit underlying the
process is the same.

PROCESS DISCRIMINANTS

In tailoring the management process to a specific domain or project, there are two
dimensions of discriminating factors: technical complexity and management
complexity. The formality of reviews, the quality control of artifacts, the priorities of
concerns, and numerous other process instantiation parameters are governed by the
point a project occupies in these two dimensions.
A process framework is not a project-specific process implementation with a well-
defined recipe for success. Judgment must be injected, and the methods, techniques,
culture, formality, and organization must be tailored to the specific domain to
achieve a process implementation that can succeed. The following discussion about
the major differences among project processes is organized around six process
parameters: the size of the project and the five parameters that affect the process
exponent, and hence economies of scale, in COCOMO II. These are some of the
critical dimensions that a software project manager must consider when tailoring a
process framework to create a practical process implementation.

SCALE

Perhaps the single most important factor in tailoring a software process framework
to the specific needs of a project is the total scale of the software application. There
are many ways to measure scale, including number of source lines of code, number
of function points, number of use cases, and number of dollars. From a process
tailoring perspective, the primary measure of scale is the size of the team. As the
headcount increases, the importance of consistent interpersonal communications
becomes paramount. Otherwise, the diseconomies of scale can have a serious
impact on achievement of the project objectives.

Five people is an optimal size for an engineering team. Many studies indicate that
most people can best manage four to seven things at a time. A simple extrapolation
of these results suggests that there are fundamentally different management
approaches needed to manage a team of 1 (trivial), a team of 5 (small), a team of 25
(moderate), a team of 125 (large), a team of 625 (huge), and so on. As team size
grows, a new level of personnel management is introduced at roughly each factor of
5. This model can be used to describe some of the process differences among projects
of different sizes.
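
The rough arithmetic behind this model can be sketched as follows (this is only an illustration of the factor-of-5 observation above, not a planning formula):

```python
# Rough arithmetic for the "factor of 5" observation: each ~5x growth in
# team size introduces roughly one more level of personnel management.

import math

for team_size in (1, 5, 25, 125, 625):
    levels = max(0, round(math.log(team_size, 5)))   # management levels beyond a single person
    print(f"Team of {team_size:>3}: ~{levels} level(s) of management hierarchy")
```
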

Trivial-sized projects require almost no management overhead (planning,


communication, coordination, progress assessment, review, administration). There is
little need to document the intermediate artifacts. Workflow is single-threaded.
Performance is highly dependent on personnel skills.

Small projects (5 people) require very little management overhead, but team
leadership toward a common objective is crucial. There is some need to
communicate the intermediate artifacts among team members. Project milestones are
easily planned, informally conducted, and easily changed. There is a small number
of individual workflows. Performance depends primarily on personnel skills.
Process maturity is relatively unimportant. Individual tools can have a considerable
impact on performance.

Moderate-sized projects (25 people) require moderate management overhead,


including a dedicated software project manager to synchronize team workflows and
balance resources. Overhead workflows across all team leads are necessary for
review, coordination, and assessment. There is a definite need to communicate the
intermediate artifacts among teams. Project milestones are formally planned and
conducted, and the impacts of changes are typically benign. There is a small number
of concurrent team workflows, each with multiple individual workflows.
Performance is highly dependent on the skills of key personnel, especially team
leads. Process maturity is valuable. An environment can have a considerable impact
on performance, but success can be achieved with certain key tools in place.

Large projects (125 people) require substantial management overhead, including a


dedicated software project manager and several subproject managers to synchronize
project-level and subproject-level workflows and to balance resources. There is
significant expenditure in overhead workflows across all team leads for
dissemination, review, coordination, and assessment. Intermediate artifacts are
explicitly emphasized to communicate engineering results across many diverse
teams. Project milestones are formally planned and conducted, and changes to
milestone plans are expensive. Large numbers of concurrent team workflows are
necessary, each with multiple individual workflows. Performance is highly
dependent on the skills of key personnel, especially subproject managers and team
leads. Project performance is dependent on average people, for two reasons:
1. There are numerous mundane jobs in any large project, especially in the
overhead workflows.

2. The probability of recruiting, maintaining, and retaining a large number of


exceptional people is small.

Process maturity is necessary, particularly the planning and control aspects of


managing project commitments, progress, and stakeholder expectations. An
integrated environment is required to manage change, automate artifact production,
and maintain consistency among the evolving artifacts.

Huge projects (625 people) require substantial management overhead, including


multiple software project managers and many subproject managers to synchronize
project-level and subproject-level workflows and to balance resources. There is
significant expenditure in overhead workflows across all team leads for
dissemination, review, coordination, and assessment. Intermediate artifacts are
explicitly emphasized to communicate engineering results across many diverse teams.
Project milestones are very formally planned and conducted, and changes to
milestone plans typically cause malignant replanning. There are very large numbers
of concurrent team workflows, each with multiple individual workflows. Performance
is highly dependent on the skills of key personnel, especially subproject managers and
team leads. Project performance is still dependent on average people.

Software process maturity and domain experience are mandatory to avoid risks and
ensure synchronization of expectations across numerous stakeholders. A mature,
highly integrated, common environment across the development teams is necessary
to manage change, automate artifact production, maintain consistency among the
evolving artifacts, and improve the return on investment of common processes,
common tools, common notations, and common metrics.
STAKEHOLDER COHESION OR CONTENTION

The degree of cooperation and coordination among stakeholders (buyers, developers,


users, subcontractors, and maintainers, among others) can significantly drive the
specifics of how a process is defined. This process parameter can range from cohesive
to adversarial. Cohesive teams have common goals, complementary skills, and close
communications. Adversarial teams have conflicting goals, competing or incomplete
skills, and less-than-open communications.

A product that is funded, developed, marketed, and sold by the same organization
can be set up with a common goal (for example, profitability). A small, collocated
organization can be established that has a cohesive skill base and excellent day-to-day
communications among team members.

It is much more difficult to set up a large contractual effort without some contention
across teams. A development contractor rarely has all the necessary software or
domain expertise and frequently must team with multiple subcontractors, who have
competing profit goals. Funding authorities and users want to minimize cost,
maximize the feature set, and accelerate time to market, while development
contractors want to maximize profitability. Large teams are almost impossible to
collocate, and synchronizing stakeholder expectations is challenging. All these factors
tend to degrade team cohesion and must be managed continuously.

PROCESS PRIMITIVE | FEW STAKEHOLDERS, COHESIVE TEAMS | MULTIPLE STAKEHOLDERS, ADVERSARIAL RELATIONSHIPS
Life-cycle phases | Weak boundaries between phases | Well-defined phase transitions to synchronize progress among concurrent activities
Artifacts | Fewer and less detailed management artifacts required | Management artifacts paramount, especially the business case, vision, and status assessment
Workflow effort allocations | Less overhead in assessment | High assessment overhead to ensure stakeholder concurrence
Checkpoints | Many informal events | 3 or 4 formal events; many informal technical walkthroughs necessary to synchronize technical decisions; synchronization among stakeholder teams, which can impede progress for weeks
Management discipline | Informal planning, project control, and organization | Formal planning, project control, and organization
Automation discipline | (insignificant) | On-line stakeholder environments necessary

Process discriminators that result from differences in stakeholder cohesion

PROCESS FLEXIBILITY OR RIGOR


The degree of rigor, formality, and change freedom inherent in a specific project’s
“contract” (vision document, business case, and development plan) will have a
substantial impact on the implementation of the project’s process. For very loose
contracts such as building a commercial product within a business unit of a software
company (such as a Microsoft application or a Rational Software Corporation
development tool), management complexity is minimal. In these sorts of development
processes, feature set, time to market, budget, and quality can all be freely traded off
and changed with very little overhead. For example, if a company wanted to eliminate
a few features in a product under development to capture market share from the
competition by accelerating the product release, it would be feasible to make this decision
in less than a week. The entire coordination effort might involve only the development
manager, marketing manager, and business unit manager coordinating some key
commitments.

On the other hand, for a very rigorous contract, it could take many months to

authorize a change in a release schedule. For example, to avoid a large custom


development effort, it might be desirable to incorporate a new commercial product
into the overall design of a next-generation air traffic control system. This sort of
change would require coordination among the development contractor, funding
agency, users (perhaps the air traffic controllers’ union and major airlines),
certification agencies (such as the Federal Aviation Administration), associate
contractors for interfacing systems, and others. Large-scale, catastrophic cost-of-
failure systems have extensive contractual rigor and require significantly different
management approaches. Table summarizes key differences in the process primitives
for varying levels of process flexibility.

PROCESS PRIMITIVE | FLEXIBLE PROCESS | INFLEXIBLE PROCESS
Life-cycle phases | Tolerant of cavalier phase commitments | More credible basis required for inception phase commitments
Artifacts | Changeable business case and vision | Carefully controlled changes to business case and vision
Workflow effort allocations | (insignificant) | Increased levels of management and assessment workflows
Checkpoints | Many informal events for maintaining technical consistency | 3 or 4 formal events; synchronization among stakeholder teams, which can impede progress for days or weeks
Management discipline | (insignificant) | More fidelity required for planning and project control
Automation discipline | (insignificant) | (insignificant)

PROCESS MATURITY

The process maturity level of the development organization, as defined by the Software
Engineering Institute’s Capability Maturity Model [SEI, 1993; 1993b; 1995], is
another key driver of management complexity. Managing a mature process (level 3 or
higher) is far simpler than managing an immature process (levels 1 and 2).
Organizations with a mature process typically have a high level of precedent
experience in developing software and a high level of existing process collateral that
enables predictable planning and execution of the process. This sort of collateral
includes well-defined methods, process automation tools, trained personnel,
planning metrics, artifact templates, and workflow templates. Tailoring a mature
organization’s process for a specific project is generally a straightforward task. Table
summarizes key differences in the process primitives for varying levels of process
maturity.

PROCESS PRIMITIVE | MATURE, LEVEL 3 OR 4 ORGANIZATION | LEVEL 1 ORGANIZATION
Life-cycle phases | Well-established criteria for phase transitions | (insignificant)
Artifacts | Well-established format, content, and production methods | Free-form
Workflow effort allocations | Well-established basis | No basis
Checkpoints | Well-defined combination of formal and informal events | (insignificant)
Management discipline | Predictable planning; objective status assessments | Informal planning and project control
Automation discipline | Requires high levels of automation for round-trip engineering, change management, and process instrumentation | Little automation or disconnected islands of automation

Process discriminators that result from differences in process maturity


ARCHITECTURAL RISK

The degree of technical feasibility demonstrated before commitment to full-scale
production is an important dimension of defining a specific project’s process. There are
many sources of architectural risk. Some of the most important and recurring sources
are system performance (resource utilization, response time, throughput, accuracy),
robustness to change (addition of new features, incorporation of new technology,
adaptation to dynamic operational conditions), and system reliability (predictable
behavior, fault tolerance). The degree to which these risks can be eliminated before
construction begins can have dramatic ramifications in the process tailoring. Table
summarizes key differences in the process primitives for varying levels of
architectural risk.
Process discriminators that result from differences in architectural risk

PROCESS PRIMITIVE | COMPLETE ARCHITECTURE FEASIBILITY DEMONSTRATION | NO ARCHITECTURE FEASIBILITY DEMONSTRATION
Life-cycle phases | More inception and elaboration phase iterations | Fewer early iterations; more construction iterations
Artifacts | Earlier breadth and depth across technical artifacts | (insignificant)
Workflow effort allocations | Higher level of design effort; lower levels of implementation and assessment | Higher levels of implementation and assessment to deal with increased scrap and rework
Checkpoints | More emphasis on executable demonstrations | More emphasis on briefings, documents, and simulations
Management discipline | (insignificant) | (insignificant)
Automation discipline | More environment resources required earlier in the life cycle | Less environment demand early in the life cycle

DOMAIN EXPERIENCE

The development organization’s domain experience governs its ability to converge on


an acceptable architecture in a minimum number of iterations. An organization that
has built five generations of radar control switches may be able to converge on an
adequate baseline architecture for a new radar application in two or three prototype
release iterations. A skilled software organization building its first radar application
may require four or five prototype releases before converging on an adequate baseline.
Table summarizes key differences in the process primitives for varying levels of
domain experience.
Process discriminators that result from differences in domain experience

PROCESS PRIMITIVE | EXPERIENCED TEAM | INEXPERIENCED TEAM
Life-cycle phases | Shorter engineering stage | Longer engineering stage
Artifacts | Less scrap and rework in requirements and design sets | More scrap and rework in requirements and design sets
Workflow effort allocations | Lower levels of requirements and design | Higher levels of requirements and design
Checkpoints | (insignificant) | (insignificant)
Management discipline | Less emphasis on risk management; less-frequent status assessments needed | More-frequent status assessments required
Automation discipline | (insignificant) | (insignificant)

EXAMPLE: SMALL-SCALE PROJECT VERSUS LARGE-SCALE PROJECT

An analysis of the differences between the phases, workflows, and artifacts of two
projects on opposite ends of the management complexity spectrum shows how different
two software project processes can be. The following gross generalizations are
intended to point out some of the dimensions of flexibility, priority, and fidelity that
can change when a process framework is applied to different applications, projects,
and domains.

Table 14-7 illustrates the differences in schedule distribution for large and small
projects across the life-cycle phases. A small commercial project (for example, a 50,000-source-line
Visual Basic Windows application built by a team of five) may require only 1 month of
inception, 2 months of elaboration, 5 months of construction, and 2 months of transition.
A large, complex project (for example, a 300,000-source-line embedded avionics program built
by a team of 40) could require 8 months of inception, 14 months of elaboration, 20 months
of construction, and 8 months of transition.

Schedule distribution across phases for small and large projects (inception and elaboration
make up the engineering stage; construction and transition make up the production stage):

DOMAIN | INCEPTION | ELABORATION | CONSTRUCTION | TRANSITION
Small commercial project | 10% | 20% | 50% | 20%
Large, complex project | 15% | 30% | 40% | 15%


Comparing the ratios of the life cycle spent in each phase highlights the obvious differences.
The biggest difference is the relative time at which the life-cycle architecture milestone occurs.
This corresponds to the amount of time spent in the engineering stage compared to the
production stage. For a small project, the split is about 30/70; for a large project, it is more like
45/55.
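
A quick check of these splits using the example durations above (engineering = inception + elaboration, production = construction + transition):

```python
# Quick check of the engineering/production split using the example durations above.

projects = {
    "Small commercial project": (1, 2, 5, 2),      # months per phase
    "Large, complex project":   (8, 14, 20, 8),
}

for name, (inception, elaboration, construction, transition) in projects.items():
    engineering = inception + elaboration
    production = construction + transition
    total = engineering + production
    print(f"{name}: engineering {engineering / total:.0%} / production {production / total:.0%}")
# Small: 3/10 vs 7/10 -> about 30/70.  Large: 22/50 vs 28/50 -> about 44/56, i.e. roughly 45/55.
```
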
One key aspect of the differences between the two projects is the leverage of the various
process components in the success or failure of the project. This reflects the importance of
staffing or the level of associated risk management. Table 14-8 lists the workflows in order of
their importance.
The following list elaborates some of the key differences in discriminators of success. None of
these process components is unimportant, although some of them are more important than
others.
• Design is key in both domains. Good design of a commercial product is a key
differentiator in the marketplace and is the foundation for efficient new product releases.
Good design of a large, complex project is the foundation for predictable, cost-efficient
construction.
• Management is paramount in large projects, where the consequences of planning
errors, resource allocation errors, inconsistent stakeholder expectations, and other out-of-
balance factors can have catastrophic consequences for the overall team dynamics.
Management is far less important in a small team, where opportunities for
miscommunications are fewer and their consequences less significant.
• Deployment plays a far greater role for a small commercial product because there is a
broad user base of diverse individuals and environments.
Differences in workflow priorities between small and large projects

RANK | SMALL COMMERCIAL PROJECT | LARGE, COMPLEX PROJECT
1 | Design | Management
2 | Implementation | Design
3 | Deployment | Requirements
4 | Requirements | Assessment
5 | Assessment | Environment
6 | Management | Implementation
7 | Environment | Deployment
A large, one-of-a-kind, complex project typically has a single deployment
site. Legacy systems and continuous operations may pose several risks, but
in general these problems are well understood and have a fairly static set
of objectives.

Another key set of differences is inherent in the implementation of the


various artifacts of the process. Table provides a conceptual example of these
differences.

Differences in artifacts between small and large projects

ARTIFACT | SMALL COMMERCIAL PROJECT | LARGE, COMPLEX PROJECT
Work breakdown structure | 1-page spreadsheet with 2 levels of WBS elements | Financial management system with 5 or 6 levels of WBS elements
Business case | Spreadsheet and short memo | 3-volume proposal including technical volume, cost volume, and related experience
Vision statement | 10-page concept paper | 200-page subsystem specification
Development plan | 10-page plan | 200-page development plan
Release specifications and number of releases | 3 interim release specifications | 8 to 10 interim release specifications
Architecture description | 5 critical use cases, 50 UML diagrams, 20 pages of text, other graphics | 25 critical use cases, 200 UML diagrams, 100 pages of text, other graphics
Software | 50,000 lines of Visual Basic code | 300,000 lines of C++ code
Release description | 10-page release notes | 100-page summary
Deployment | User training course; sales rollout kit | Transition plan; installation plan
User manual | On-line help and 100-page user manual | 200-page user manual
Status assessment | Quarterly project reviews | Monthly project management reviews
