CS706 Lecture Handouts (PDF Format)
LECTURE 01
Introduction
This course deals with a very important aspect of software engineering: quality assurance
of software products and services
We'll learn different aspects of software quality assurance in this course
In the first few lectures, we will discuss what software quality is, how it impacts software development and maintenance, and other basic concepts in SQA
In the second phase of this course, we'll discuss in detail the activities in each phase of the software development lifecycle, as they relate to software quality assurance
In the third part of this course, we'll discuss different topics related to software quality assurance. We'll look at quality assurance processes and some of the major process improvement programs from the quality assurance perspective
We'll also study some other topics, given our time constraints
What is quality?
Synonyms of Quality
Excellence, Superiority, Class, Eminence, Value, Worth
Antonym of Quality
Inferiority
Marketability of Quality
Software Quality
Sense of beauty
Sense of fitness for purpose
Sense of elegance that goes beyond the simple absence of overt flaws
Has well-formed requirements
Robust
So the term software quality assurance would mean that the software guarantees high quality
In this course, we'll learn the different processes, techniques, and activities which enable us, the software professionals, to provide that guarantee to ourselves and our clients
Achieving Software Quality
For a software application to achieve high quality levels, it is necessary to begin upstream
and ensure that intermediate deliverables and work products are also of high quality levels.
This means that the entire process of software development must itself be focused on quality
(Capers Jones)
A software defect is an error, flaw, mistake, failure, or fault in software that prevents it
from behaving as intended (e.g., producing an incorrect or unexpected result)
Software defects are also known as software errors or software bugs
Bugs can have a wide variety of effects, with varying levels of inconvenience to the user of the software. Some bugs have only a subtle effect on the program's functionality, and may thus lie undetected for a long time. More serious bugs may cause the software to crash or freeze, leading to a denial of service
Others qualify as security bugs and might, for example, enable a malicious user to bypass access controls in order to obtain unauthorized privileges
The results of bugs may be extremely serious
In 1996, the European Space Agency's US $1 billion prototype Ariane 5 rocket was destroyed less than a minute after launch, due to a bug in the on-board guidance computer program
In June 1994, a Royal Air Force Chinook crashed into the Mull of Kintyre, killing 29 people. An investigation uncovered sufficient evidence to suggest that it may have been caused by a software bug in the aircraft's engine control computer
In 2002, a study commissioned by the US Department of Commerce National Institute of Standards and Technology concluded that software bugs are so prevalent and detrimental that they cost the US economy an estimated US $59 billion annually, or about 0.6 percent of the gross domestic product
Errors of commission, Errors of omission, Errors of clarity and ambiguity, Errors of speed or capacity
Errors of Commission
Something wrong is done. A classic example at the code level would be going through a loop one time too many or branching on the wrong address
Errors of Omission
Something left out by accident. For example, omitting a parenthesis in nested expressions
Errors of Clarity and Ambiguity
Software defects can be found in any of the documents and work products including very
serious ones in cost estimates and development plans
However, there are seven major classes of software work products where defects have a
strong probability of triggering some kind of request for warranty repair if they reach the
field
Errors in Design,
Errors due to Bad fixes,
We'll discuss all of them in detail when we talk about the different processes of the software development life cycle
Defect Discovery
Both defect prevention and removal techniques are used by the best-in-the-class
companies
Defect prevention is very difficult to understand, study, and quantify. We'll talk about defect prevention in a later lecture
Both non-test and testing defect removal techniques must be applied
Code inspections
                        Req.      Design     Code       Document   Perf.
                        Defects   Defects    Defects    Defects    Defects
Reviews / Inspections   Fair      Excellent  Excellent  Good       Fair
Prototypes              Good      Fair       Fair       N/A        Good
Testing (all forms)     Poor      Poor       Good       Fair       Excellent
Correctness Proofs      Poor      Poor       Good       Fair       Poor
Defect removal efficiency is measured by accumulating defect statistics for errors found prior to delivery, and then for a predetermined period after deployment (usually one year)
US averages: 85%
Best projects in best US companies: 99%
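As a rough sketch of how this statistic, the defect removal efficiency, can be computed from the accumulated counts (the function and the example figures below are illustrative, using the 85% US average cited above):

    # Minimal sketch: defect removal efficiency (DRE) from accumulated defect counts.
    # Function name and example figures are illustrative, not from the handout.
    def defect_removal_efficiency(found_before_delivery, found_after_delivery):
        """DRE = defects removed before delivery / total defects found overall."""
        total = found_before_delivery + found_after_delivery
        return found_before_delivery / total if total else 1.0

    # 850 defects removed during development, 150 reported in the first year of use
    print(defect_removal_efficiency(850, 150))   # 0.85, i.e. the 85% US average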
Defect Seeding
Willful insertion of errors into a software deliverable prior to a review, inspection, or testing
activity. It is the quickest way of determining defect removal efficiency. Considered
unpleasant by many
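One common way to turn a seeding experiment into a number is a capture-recapture style estimate; the sketch below is only illustrative and assumes seeded defects are as easy to find as real ones:

    # Hypothetical seeding calculation; assumes seeded and real defects are
    # equally likely to be found by the review, inspection, or testing activity.
    def estimate_remaining_defects(seeded, seeded_found, real_found):
        """Project the number of real defects still present after the activity."""
        if seeded_found == 0:
            raise ValueError("no seeded defects found; cannot estimate")
        detection_rate = seeded_found / seeded      # fraction of seeded defects caught
        estimated_total = real_found / detection_rate
        return estimated_total - real_found         # real defects still in the product

    # 20 seeded, 15 of them found, 45 real defects found => about 15 real defects remain
    print(round(estimate_remaining_defects(20, 15, 45)))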
Defect Severity Levels
Defect Tracking
Defect Prevention
We do not want defects or faults to enter our work products, requirements, design,
code, or other documents
We try to eliminate the error sources in defect prevention
Defect prevention is very difficult to understand, study, and quantify
If human misconceptions are the error sources, education and training can help us remove
these error sources
If imprecise designs and implementations that deviate from product specifications or
design intentions are the causes for faults, formal methods can help us prevent such
deviations
Education and training provide people-based solutions for error source elimination
The people factor is the most important factor that determines the quality and, ultimately,
the success or failure of most software projects
Education and training of software professionals can help them control, manage, and
improve the way they work
There are a number of formal method approaches. The oldest and most influential formal
method is the so-called axiomatic approach
Research in this area is ongoing, and formal methods are being used depending on the real needs of the software applications
The biggest obstacle to formal methods is the high cost associated with the difficult task
of performing these human intensive activities correctly without adequate automated
support
This fact also explains, to a degree, the increasing popularity of limited-scope and semi-formal approaches
                             Req.       Design     Code       Document   Perf.
                             Defects    Defects    Defects    Defects    Defects
JAD                          Excellent  Good       N/A        Fair       Poor
Prototypes                   Excellent  Excellent  Fair       N/A        Excellent
Structured Methods           Fair       Good       Excellent  Fair       Fair
CASE Tools                   Fair       Good       Fair       Fair       Fair
Blueprints / Reusable Code   Excellent  Excellent  Excellent  Excellent  Good
QFD                          Good       Excellent  Fair       Poor       Good
Some of the reasons for having high quality software were discussed in the first lecture of this course, so it should be well understood by now why software products and services should have high quality
There are negative consequences of poor or bad quality software
But we still see that the software industry suffers from problems related to software quality
Now we'll look at six root causes of poor software quality, and discuss them in detail
Another point that must be remembered is that software varies from industry to industry
The focus on software quality naturally is dependent on the industry, as well as the
importance of the software application. More critical applications, naturally, need to have
higher software quality than others
Product-Specific Attributes
Ease of use, Documentation, Defect impact, Packaging, Defect tolerance, Defect frequency, Price versus reliability, Performance
Organization-Specific Attributes
Service and support, Internal processes
Every situation introduces new challenges for development of high quality software
This perception has developed from the fact that major commercial software companies have latent software bugs in their released products
Major commercial software companies have a cumulative defect removal efficiency of 95% (and 99% on their best projects)
This concept is very hazardous for ordinary companies, which usually have a defect removal efficiency level between 80% and 85%
Quality will decrease for these companies
Data Quality
Extremely important to understand issues of data quality
Data results in useful or useless information
Usually, governments are the holders of the largest data banks (are they consistent?)
Companies are increasingly using data to their advantage over competitors
Data warehouses present a unique challenge to keep data consistent
Another problem is the interpretation of data
Existing system
Domain/business area
Examples of Requirements
The system shall maintain records of all payments made to employees on account of salaries, bonuses, travel/daily allowances, medical allowances, etc.
The system shall interface with the central computer to send daily sales and inventory
data from every retail store
The system shall maintain records of all library materials including books, serials,
newspapers and magazines, video and audio tapes, reports, collections of transparencies,
CD-ROMs, DVDs, etc.
Kinds of Software Requirements
Functional requirements, Non-functional requirements, Domain requirements, Inverse
requirements, Design and implementation constraints
Functional Requirements
Statements describing what the system does
Functionality of the system
Statements of services the system should provide
o Reaction to particular inputs
o Behavior in particular situations
Sequencing and parallelism are also captured by functional requirements
Abnormal behavior is also documented as functional requirements in the form of
exception handling
Non-Functional Requirements
Most non-functional requirements relate to the system as a whole. They include
constraints on timing, performance, reliability, security, maintainability, accuracy, the
development process, standards, etc.
They are often more critical than individual functional requirements
Capture the emergent behavior of the system, that is, they relate to the system as a whole
Many non-functional requirements describe the quality attributes of the software
product
For example, if an aircraft system does not meet reliability requirements, it will not be
certified as safe
If a real-time control system fails to meet its performance requirements, the control
functions will not operate correctly
Product Requirements
The system shall allow one hundred thousand hits per minute on the website
The system shall not have downtime of more than one second for continuous execution of one thousand hours
Organizational Requirements
External Requirements
The system shall not disclose any personal information about members of the library
system to other members except system administrators
The system shall comply with the local and national laws regarding the use of
software tools
Domain Requirements
Requirements that come from the application domain and reflect fundamental
characteristics of that application domain
Can be functional or non-functional
These requirements, sometimes, are not explicitly mentioned, as domain experts find
it difficult to convey them. However, their absence can cause significant
dissatisfaction
Domain requirements can impose strict constraints on solutions. This is particularly
true for scientific and engineering domains
Domain-specific terminology can also cause confusion
Inverse Requirements
They explain what the system shall not do. Many people find it convenient to describe
their needs in this manner
These requirements indicate the indecisive nature of customers about certain aspects of a
new software product
Design and Implementation Constraints
They are development guidelines within which the designer must work, which can seriously limit design and implementation options
Requirements Elicitation
Determining the system requirements through consultation with stakeholders, from
system documents, domain knowledge, and market studies
Requirements acquisition or requirements discovery
Requirements Analysis and Negotiation
Understanding the relationships among various customer requirements and shaping
those relationships to achieve a successful result
Negotiations among different stakeholders and requirements engineers
Incomplete and inconsistent information needs to be tackled here
Some analysis and negotiation needs to be done on account of budgetary constraints
Requirements Specification
Requirements Document
Software designers, developers and testers are the primary users of the document
Requirements Validation
Reviewing the requirements model for consistency and completeness
This process is intended to detect problems in the requirements document before they are used as a basis for the system development
Requirements need to be validated for
o consistency
o testability
o performance issues
o conformance to local/national laws
o conformance to ethical issues
o conformance to company policies
o availability of technology
Requirements Management
Identify, control and track requirements and the changes that will be made to them
It is important to trace requirements both ways
o origin of a requirement
o how it is implemented
This is a continuous process
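A minimal sketch of what two-way tracing can look like in practice; the record layout and identifiers below are invented for illustration:

    # Hypothetical traceability records: each requirement records its origin
    # (backward tracing) and the design/code items that implement it (forward tracing).
    trace = {
        "REQ-017": {
            "origin": "stakeholder interview with payroll staff",
            "implemented_by": ["design: PaymentModule", "code: payroll/payments.py",
                               "test: TC-083"],
        },
    }

    def impact_of_change(req_id):
        """Forward trace: what must be re-examined if this requirement changes."""
        return trace[req_id]["implemented_by"]

    def origin_of(req_id):
        """Backward trace: where this requirement came from."""
        return trace[req_id]["origin"]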
Changing/Creeping Requirements
Requirements will change, no matter what
A major issue in requirements engineering is the rate at which requirements change
once the requirements phase has officially ended
This rate is on average 3% per month in the subsequent design phase, and will go
down after that
This rate should come down to 1% per month during coding
Ideally, this should come down to no changes in testing
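To see what these rates mean in practice, a short illustrative calculation (the 500-requirement baseline and six-month design phase are assumed, not taken from the handout):

    # Illustrative only: growth of an assumed 500-requirement baseline
    # at the average 3% creep per month during a six-month design phase.
    baseline = 500
    monthly_creep = 0.03
    design_months = 6
    grown = baseline * (1 + monthly_creep) ** design_months
    print(round(grown))        # about 597 requirements, roughly 19% growth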
Defects and Creeping Requirements
Studies have shown that a very significant percentage of delivered defects can be traced back to creeping user requirements
This realization can only be made if defect tracking, requirements traceability, defect removal efficiency, and defect rates are all monitored for software projects
Damage Control of Creeping Requirements
The following quality assurance mechanisms can limit the damage done by creeping requirements
o Formal change management procedures
o State-of-the-art configuration control tools
o Formal design and code inspections
Problems with Natural Languages
Lack of clarity
Requirements confusion
Requirements amalgamation
Natural language understanding relies on the specification readers and writers using the same words for the same concepts
A natural language requirements specification is over-flexible. You can say the same
thing in completely different ways
It is not possible to modularize natural language requirements. It may be difficult to
find all related requirements
o To discover the impact of a change, every requirement has to be examined
Writing Requirements
Requirements specification should establish an understanding between customers and
suppliers about what a system is supposed to do, and provide a basis for validation
and verification
Typically, requirements documents are written in natural languages (like English, Japanese, French, etc.)
Natural languages are ambiguous
Structured languages can be used with the natural languages to specify requirements
o These languages cannot completely define requirements
Design Independent: An SRS is design independent if it does not imply a specific software
architecture or algorithm
Annotated
The purpose of annotating requirements contained in an SRS is to provide guidance to the
development organization
Relative necessity (E/D/O)
Relative stability
Concise: An SRS that is shorter is better, provided that it meets all other characteristics
Organized: An SRS is organized if the requirements contained therein are easy to locate. This implies that requirements are arranged so that related requirements appear together
Phrases to Look for in an SRS
Always, Every, All, None, Never
Certainly, Therefore, Clearly, Obviously, Evidently
Some, Sometimes, Often, Usually, Ordinarily, Customarily, Most, Mostly
Etc., And So Forth, And So On, Such As
Good, Fast, Cheap, Efficient, Small, Stable
Handled, Processed, Rejected, Skipped, Eliminated
If...Then (but missing Else)
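A reviewer can mechanically flag such phrases before reading the SRS in detail; the short sketch below scans a requirement statement for a subset of the words listed above (a helper idea, not a substitute for careful review):

    import re

    # A subset of the phrases listed above that often signal vague requirements
    SUSPECT_PHRASES = ["always", "every", "all", "none", "never", "clearly",
                       "obviously", "etc", "such as", "good", "fast", "cheap",
                       "efficient", "handled", "processed"]

    def flag_weak_phrases(requirement_text):
        """Return the suspect phrases that appear in one requirement statement."""
        found = []
        for phrase in SUSPECT_PHRASES:
            if re.search(r"\b" + re.escape(phrase) + r"\b", requirement_text, re.IGNORECASE):
                found.append(phrase)
        return found

    print(flag_weak_phrases("The system shall always respond fast to all queries."))
    # ['always', 'all', 'fast']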
The Balancing Act
Achieving all the preceding attributes in an SRS is impossible
Once you become involved in writing an SRS, you will gain the insight and experience necessary to do the balancing act
There is no such thing as a perfect SRS
Software Design
Software design is an artifact that represents a solution, showing its main features and
behavior, and is the basis for implementing a program or collection of programs
Design is a meaningful representation of something that is to be built. It can be traced to a customer's requirements and, at the same time, assessed for quality against a set of predefined criteria for good design
Software design is different from other forms of design because it is not constrained by
physical objects, structures, or laws
As a result, software design tends to be much more complex than other forms of design
because it is conceptually unbounded, whereas the capabilities of the human mind are
bounded
Tasks are generally ill-defined and suffer from incomplete and inaccurate specifications
There is seldom a predefined solution, although many solutions tend to satisfy the defined
task
Viable solutions usually require broad and interdisciplinary knowledge and skill, some of
which is based on rapidly changing technology
Solutions often involve many components that have numerous interactions
Expert designers use a mostly breadth-first approach because it allows them to mentally
simulate the execution of an evolving system to detect unwanted interaction,
inconsistencies, weaknesses, and incompleteness of their designs
Thus, design is driven by known solutions, which increases performance by allowing a
user to dynamically shift goals and activities
Good designers structure problem formulations by discovering missing information, such
as problem goals and evaluation criteria, and resolving many open-ended constraints
Hence the challenge of design necessitates the use of a methodical approach based on key
principles and practices to effectively and efficiently produce high quality software
designs
However, designers must occasionally deviate from a defined method in response to newly acquired information or insights
Software design principles identify strategic approaches to the production of quality
software designs
Software design practices identify tactical methods for producing quality software designs
Software design procedures provide an organizational framework for designing software
An Important Point
Design Process
It is a sequence of steps that enables a designer to describe all aspects of the software to
be built
During the design process, the quality of the evolving design is assessed with a series of
formal technical reviews or design walkthroughs
Needs creative skills, past experience, sense of what makes good software, and an
overall commitment to quality
Design Model
Design Defects
Defects introduced during preliminary design phase are usually not discovered until
integration testing, which is too late in most cases
Defects introduced during detailed design phase are usually discovered during unit testing
All four categories of defects (Errors of commission, Errors of omission, Errors of clarity
and ambiguity, Errors of speed and capacity) are found in design models
Most common defects are errors of omission, followed by errors of commission
Errors of clarity and ambiguity are also common, and many performance related
problems originate in design process also
Overall design ranks next to requirements as a source of very troublesome and expensive
errors
A combination of defect prevention and defect removal is needed for dealing with design
defects
Formal design inspections are one of the most powerful and successful software quality
approaches of all times
Software professionals should incorporate inspections in their software development
process
Defects in information on how to start up a feature, control its behavior, and safely turn off a feature when finished are common in commercial and in-house software applications
Data Elements
Errors in describing the data used by the application are a major source of problems
downstream during coding and testing
A minor example of errors due to inadequate design of data elements can be seen in many
programs that record addresses and telephone numbers
Often insufficient space is reserved for names, etc.
Data Relationships: Errors in describing data relationships are very common and a source of
much trouble later
Structure of the Application
Complex software structures with convoluted control flow tend to have higher error rates
Poor structural design is fairly common, and is often due to haste or poor training and
preparation
Tools can measure cyclomatic and essential complexity
Prevention is often better than attempting to simplify an already complex software
structure
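As a rough illustration of what such tools compute, cyclomatic complexity is approximately the number of decision points plus one. The counter below is a deliberately simplified sketch, not a production metric tool:

    import re

    # Very rough sketch: cyclomatic complexity ~= number of decision points + 1.
    # Real tools parse the code; this keyword count is only an approximation.
    DECISION_PATTERNS = [r"\bif\b", r"\belif\b", r"\bfor\b", r"\bwhile\b",
                         r"\band\b", r"\bor\b", r"\bexcept\b"]

    def approx_cyclomatic_complexity(source_code):
        count = sum(len(re.findall(p, source_code)) for p in DECISION_PATTERNS)
        return count + 1

    snippet = """
    if total > limit:
        reject()
    elif total == limit:
        warn()
    for item in items:
        process(item)
    """
    print(approx_cyclomatic_complexity(snippet))   # 4: three decision points plus one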
Many errors of speed and capacity have their origin in failing to design for optimum
performance
Performance errors are a result of complex control flow, excessive branching, or too much sequential processing (where parallel processing could be used)
Interfaces
The design must implement all of the explicit requirements contained in the analysis
model, and it must accommodate all of the implicit requirements desired by the customer
The design must be a readable and understandable guide for those who generate code, write test cases, and test the software
The design should provide a complete picture of the software, addressing the data,
functional, and behavioral domains from an implementation perspective
Separation of concerns
Modeling real-world objects
Minimizing the interactions among cohesive design components
The design should be traceable to the analysis model
The design should be structured to accommodate change
The design should be structured to degrade gently, even when aberrant data, events, or
operating conditions are encountered
Design is not coding, coding is not design
Let's look at some design model guidelines before we discuss the design quality attributes
Guidelines for Good Design Model
The guidelines we have just discussed help designers enormously in developing high quality
software designs. Designers apply these guidelines with the help of fundamental design
concepts, which improve the internal and external quality of software design
Questions Answered by Design Concepts
Levels of Abstraction
At the highest level of abstraction, a solution is stated in broad terms using the language
of the problem environment
At lower levels of abstraction, a more procedural orientation is taken. Problem-oriented
terminology is coupled with implementation-oriented terminology in an effort to state a
solution
At the lowest level of abstraction, the solution is stated in a manner that can be directly
implemented
Types of Abstraction
Procedural abstraction
o Named sequence of instructions that has a specific and limited function
o Example: Open door
Data abstraction
o Named collection of data that describes a data object
o Example: any object (ADT)
Control abstraction
o Implies a program control mechanism without specifying internal details
o Example: synchronization semaphore
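A hedged sketch of the three kinds of abstraction, reusing the door example from above; the class and lock details are invented purely for illustration:

    import threading

    # Data abstraction: a named collection of data describing one object (an ADT).
    class Door:
        def __init__(self):
            self.locked = True
        def unlock(self):
            self.locked = False
        def push(self):
            print("door opened")

    # Procedural abstraction: a named operation that hides a sequence of steps.
    def open_door(door):
        door.unlock()
        door.push()

    # Control abstraction: a synchronization mechanism (here a lock/semaphore)
    # that implies mutual exclusion without exposing how it is implemented.
    door_lock = threading.Lock()
    with door_lock:
        open_door(Door())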
Refinement
There is a tendency to move immediately to full detail, skipping the refinement steps.
This leads to errors and omissions and makes the design much more difficult to
review. Perform stepwise refinement
Abstraction and refinement are complementary concepts
Modularity
Modules should be specified and designed so that information (procedures and data)
contained within a module is inaccessible to other modules that have no need for such
information
IH (information hiding) means that effective modularity can be achieved by defining a set of independent modules that exchange with one another only that information necessary to achieve a software function
Abstraction helps to define the procedural (or informational) entities that make up the
software
IH defines and enforces access constraints to both procedural detail within a module and
any local data structure used by the module
The greatest benefits of IH are achieved when modifications are required during testing
and later, during software maintenance
Because both data and procedures are hidden from other parts of the software, inadvertent errors introduced during modifications are less likely to propagate to other locations within the software
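A small sketch of information hiding: callers see only the public operations, while the internal data structure stays private to the module (the inventory example is invented for illustration):

    class Inventory:
        """Only the public methods below form the module's interface."""
        def __init__(self):
            self._items = {}                     # internal structure, hidden from callers

        def add(self, sku, quantity):
            self._items[sku] = self._items.get(sku, 0) + quantity

        def quantity_of(self, sku):
            return self._items.get(sku, 0)

    # Callers never touch _items directly, so the dictionary could later be replaced
    # (say, by a database table) without the change propagating to other modules.
    store = Inventory()
    store.add("BOOK-001", 3)
    print(store.quantity_of("BOOK-001"))         # 3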
Cohesion
Cohesion is the qualitative indication of the degree to which a module focuses on just one
thing
In other words, cohesion is a measure of the relative functional strength of a module
A cohesive module performs one single task or is focused on one thing
Highly cohesive modules are better; however, mid-range cohesion is acceptable
Low-end cohesion is very bad
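To make the contrast concrete, a hedged sketch: the first function mixes unrelated responsibilities (low cohesion), while the two that follow each focus on one thing (high cohesion). The names and the smtp object are invented for illustration:

    # Low cohesion: one function formats a report, writes a file, and sends e-mail.
    def end_of_day(sales, path, smtp):
        report = "\n".join(f"{s['item']}: {s['amount']}" for s in sales)
        with open(path, "w") as f:
            f.write(report)
        smtp.send("manager@example.com", report)

    # Higher cohesion: each function performs one single, focused task.
    def format_report(sales):
        return "\n".join(f"{s['item']}: {s['amount']}" for s in sales)

    def save_report(report, path):
        with open(path, "w") as f:
            f.write(report)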
Types of Cohesion
Type of Coupling
Indirect coupling occurs when one object interacts with another object through a third
component, usually a controller. Another form of indirect coupling occurs when using a
data-driven style of computation
Data coupling occurs when simple data items are passed among modules through parameter lists. When using data coupling, shorter parameter lists are preferred to longer ones
Stamp coupling occurs when a portion of data structure is passed among modules. When
using stamp coupling, a smaller number of actual arguments are preferable to a larger
number because the only data that should be passed to a module is what it requires
Control coupling occurs when information is passed to a module that affects its internal
control. This is undesirable because it requires the calling module to know the internal
operation of the module being called
External coupling occurs when a module depends on the external environment
Common coupling occurs when modules access common areas of global or shared data.
This form of coupling can cause one module to unintentionally interfere with the
operation of another module
Content coupling occurs when one module uses information contained in another module
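A brief sketch contrasting data coupling with control coupling; the payroll functions are invented for illustration:

    # Data coupling: the caller passes only the simple data items the routine needs.
    def gross_pay(hours_worked, hourly_rate):
        return hours_worked * hourly_rate

    # Control coupling: a flag steers the routine's internal logic, so the caller
    # must know about the internal operation of the module being called.
    def pay(hours_worked, hourly_rate, mode_flag):
        if mode_flag == "OVERTIME":
            return hours_worked * hourly_rate * 1.5
        return hours_worked * hourly_rate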
Use a design method which is most suitable for the problem at hand. Don't just use the latest or the most popular design method
There are many structured design and object-oriented design methods to choose from
Follow the design method's representation scheme. It helps in understanding the design
The act of programming, also known as coding, produces the primary products (the executables) of a software development effort
All prior activities culminate in their development
Programming is done in a programming language
Coding Defects
All four categories of defects (Errors of commission, Errors of omission, Errors of ambiguity and clarity, Errors of speed and capacity) are found in source code
Errors of commission are the most common when the code is under development
The most surprising aspect of coding defects is that more than fifty (50) percent of the
serious bugs or errors found in the source code did not truly originate in the source code
A majority of the so-called programming errors are really due to the programmer not
understanding the design or a design not correctly interpreting a requirement
Software is one of the most difficult products in human history to visualize prior to
having to build it, although complex electronic circuits have the same characteristic
Built-in syntax checkers and editors that come with modern programming languages have the capacity to find many true programming errors, such as missed parentheses or looping problems
They also have the capacity to measure and correct poor structure and excessive
branching
The kinds of errors that are not easily found are deeper problems in algorithms or those
associated with misinterpretation of design
At least five hundred (500) programming languages are in use, and the characteristics of
the languages themselves interact with factors such as human attention spans and
capacities of temporary memory
This means that each language, or family of languages, tends to have common patterns of defects, but the patterns are not the same from language to language
There is no solid empirical data that strongly-typed languages have lower defect rates
than weakly-typed languages, although there is no counter evidence either
Of course for all programming languages, branching errors are endemic. That is,
branching to the wrong location for execution of the next code segment
Many high-level languages, such as Ada and Modula, were designed to minimize certain common kinds of errors, such as mixing data types or looping an incorrect number of times
Of course, typographical errors and syntactical errors can still occur, but the more
troublesome errors have to do with logic problems or incorrect algorithms
A common form of error with both non-procedural and procedural languages has to do
with retrieving, storing, and validating data
It may sometimes happen that the wrong data is requested
Programming in any language is a complex intellectual challenge with a high probability
of making mistakes from time to time
Since low-level languages often manipulate registers and require that programmers set up their own loop controls, common errors involve failing to initialize registers, going through loops the wrong number of times, or not allocating space for data and subroutines
For weakly-typed languages, mismatched data types are common errors
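The classic looping error mentioned above can be sketched in a high-level language as well; the defective version below executes the loop one time too many:

    # Defective version: the index range goes one past the end of the list,
    # so the loop runs one time too many and raises IndexError when called.
    def sum_values_buggy(values):
        total = 0
        for i in range(len(values) + 1):
            total += values[i]
        return total

    # Corrected version: iterate directly over the elements.
    def sum_values(values):
        total = 0
        for v in values:
            total += v
        return total

    print(sum_values([3, 5, 7, 9]))   # 24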
Also known as IDEs, these suites include an editor, a compiler, a make utility, a profiler,
and a debugger. Other tools may also be included
Recent IDEs include tools to model software designs and implement graphical user
interfaces
Coding standards are controversial because the choice among many candidate standards
is subjective and somewhat arbitrary
Standards are most useful when they support fundamental programming principles
So, it is easier to adopt a standard for handling exceptions than for identifying the amount of white-space to use for indentation
An organization should always ask itself whether a coding standard improves program
comprehension characteristics
Specify the amount of white-space that should be used and where it should appear
o Before and after loop statements and function definitions
o At each indentation level (two or four spaces have been reported as improving
comprehensibility of programs)
Physically offset code comments from code when contained on the same line
Use comments to explain each class, function, and variable contained in source code.
(Comments can account for 10% of the source text or more)
o Key interactions that a function has with other functions and global variables
o Complex algorithms used by every function
o Exception handling
o Behavior and effect of iterative control flow statements and interior block
statements
Provide working examples in the user documentation or tutorial materials
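A small illustration of the white-space and commenting conventions above; the four-space indentation and the bonus example are arbitrary choices, not rules mandated by the handout:

    # Four-space indentation, blank lines around the definition, and comments
    # physically offset from the code they describe on the same line.

    def apply_bonus(salary, rating):
        """Return the salary plus the annual bonus for the given performance rating."""
        bonus_rates = {"A": 0.10, "B": 0.05}     # interaction: shared company rating scale

        rate = bonus_rates.get(rating, 0.0)      # unknown ratings earn no bonus
        return salary * (1 + rate)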
Process all exceptions so that personnel can more easily detect their cause
Log important system events, including exceptions
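A minimal sketch of both guidelines using Python's standard logging module; the event and logger names are illustrative:

    import logging

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("payments")

    def record_payment(employee_id, amount):
        try:
            if amount <= 0:
                raise ValueError(f"invalid amount {amount} for employee {employee_id}")
            log.info("payment recorded: employee=%s amount=%s", employee_id, amount)
        except ValueError:
            # Process the exception and log it so personnel can detect its cause later.
            log.exception("payment rejected")
            raise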
User interface prototyping helps identify necessary features that software engineers might
otherwise overlook
Prototyping can reduce the development effort significantly
Prototyping reduces development risk because it allows programmers to explore methods for achieving performance and other high-risk requirements
Colleagues as Critics
There is no particular reason why your friend and colleague cannot also be your sternest
critic. (Jerry Weinberg)
Benefits of Review
Kinds of Reviews
Business reviews, Technical reviews, Management reviews, Walk-throughs, Inspections
Objectives of Business Reviews
Achieve technical work of more uniform, or at least more predictable, quality than can be achieved without reviews, in order to make technical work more manageable
Software reviews are a filter for the software engineering process
Reviews are applied at several points during software development and serve to uncover
errors and defects that can then be removed
Software reviews purify the software engineering activities
Technical work needs reviewing for the same reason that pencils need erasers: To err is
human
Another reason we need technical reviews is that although people are good at catching
some of their own errors, large classes of errors escape the originator more easily than
they escape anyone else
They also ensure that any changes to the software are implemented according to predefined procedures and standards
Validate from a management perspective that the project is making progress according to
the project plan
Ensure a deliverable is ready for management approval
Resolve issues that require management's attention
Identify if the project needs a change of direction
Control the project through adequate allocation of resources
Review Roles
Facilitator, Author, Recorder, Reviewer, Observer
Responsibilities of Facilitator
Responsible for providing the background of the work and assigning roles to
attendees
Encourages all attendees to participate
Keeps the meeting focused and moving
Responsible for gaining consensus on problems
Responsibilities of Author
Responsibilities of Recorder
Collects and records each defect uncovered during the review meeting
Develops an issues list and identifies whose responsibility it is to resolve each issue
Records meeting decisions on issues; prepares the minutes; and publishes the minutes,
and continually tracks the action items
Responsibilities of Reviewer
Responsibilities of Observer
A new member to the project team, who learns the product and observes the review
techniques
Review Guidelines
Preparation, Discussions, Respect, Agenda, Review Records, Resources, Attendees
Review Frequency
requirements phase
design phase
code phase
test phase
Review Planning
The facilitator begins the meeting with an introduction of the agenda and the people, and a description of their roles
Author of the document proceeds to explain the materials, while reviewers raise issues
based on advance preparation
When valid problems, issues, or defects are discovered, they are classified according to
their origin or severity and then recorded
These are accompanied by the names of the individuals who are responsible for resolution and the time frame during which the item will be resolved
Related recommendations are also recorded
Review Report
Published by the recorder, with approval from all attendees, within a week of the review meeting
Review report consists of
o Elements reviewed
o Names of individuals who participated in the review
o Specific inputs to the review
o List of unresolved items
o List of issues that need to be escalated to management
o Action items/ownership/status
o Suggested recommendations
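The report contents listed above map naturally onto a simple structured record; the field names in this sketch are invented:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class ReviewReport:
        elements_reviewed: List[str]
        participants: List[str]
        review_inputs: List[str]
        unresolved_items: List[str] = field(default_factory=list)
        escalations_to_management: List[str] = field(default_factory=list)
        action_items: List[str] = field(default_factory=list)   # item / owner / status
        recommendations: List[str] = field(default_factory=list)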
Rework
During the follow-up, it is verified that all identified discrepancies have been resolved and that the exit criteria for the review have been met
Lessons learned should also be documented in the final report
An inspection is a rigorous team review of a work product by peers of the producer of the
work product
The size of the team will vary with the characteristics of the work product being
inspected; e.g., size, type
The primary purpose is to find defects, record them as a basis for analysis on the current project, for historical reference, and for improvement of future projects, analyze them, and initiate rework to correct the defects
Direct fault detection and removal
Inspections are most effective when performed immediately after the work product is
complete, but they can be held any time the work product is deemed ready for inspection
Inspections are critical reading and analysis of software code or other software artifacts,
such as designs, product specifications, test plans, etc
Inspections are typically conducted by multiple human inspectors, through some
coordination process. Multiple inspection phases or sessions may be used
Faults are detected directly in inspection by human inspectors, either during their
individual inspections or various types of group sessions
Identified faults need to be removed as a result of the inspection process, and their
removal also needs to be verified
The inspection processes vary, but typically include some planning and follow-up
activities in addition to the core inspection activity
Inspections were developed by Michael Fagan at IBM and were first reported in the public domain in 1976
Inspections remove software defects at reduced cost
Inspections enable us to remove defects early in the software life cycle, and it is always cheaper to remove defects earlier rather than later in the software life cycle
We know that defects are injected in every software life cycle activity. We remove some
of these defects in testing activities after code is completed. We also know that all defects
are not removed at shipment time, and these are known as latent defects. We want to
eliminate or at least minimize latent defects in the shipped software product. It is
expensive to find and remove defects in the testing phase, and even more expensive after
shipment of the software. We can use inspections to reduce these costs and improve the
timelines also.
How Defect Removal is Cheaper for Inspections as Compared to Software Testing
During testing, defects are found and the programmers are notified of their presence; they must then recreate the defects under similar circumstances, fix them, re-test the software, and re-integrate the software modules that were affected
With inspections, in contrast, defects are found and removed in the same life cycle activity, and a substantial amount of rework is avoided
This results in the reduction of costs
If and when defects are detected after the shipment of the software, then these costs are
even higher
Many times, the original development team is disbanded after the completion of the project and new staff looks after the maintenance activity
These people are usually not fully aware of the project
This can result in unplanned expenses for the software development company
On the other hand, if an effective software inspections process is in place, fewer defects
enter the testing activity and the productivity of tests improve
The costs of tests are lower and the time to complete tests is reduced
Several studies have confirmed the reduction in project costs when defects were removed
earlier
Defect Cost Relationship
It is interesting to note that this relationship has remained consistent over the last three decades, since the earliest studies in which inspections were first reported
In addition to the costs on project, there are additional costs to the customer for
downtime, lost opportunity, etc., when defects are detected in maintenance
Let's look at the published data from different studies of companies in which comparisons of inspection costs and testing costs have been made
These were independent studies, and so they use different units to report their results
However, the pattern repeats that the cost of inspections is much lower than that of
software testing
Company          Cost in Inspections     Cost in Test        Cost in the Field
IBM              $48/defect              $61-$1030/defect    $1770/defect
AT&T             1 unit                  20 units            --
ICL              1.2-1.6 hours/defect    8.47 hours/defect   --
AT&T             1.4 hours               8.5 hours           --
JPL              $105/defect             $1700/defect        --
IBM              1 unit                  9 times more        --
Shell            1 unit                  30 units            --
Thorn EMI        1 unit                  6.8-26 units        96 units
Applicon, Inc.   1 hour                  --                  30 hours
Infosys          1 unit                  3-6 units           --
These studies from different companies clearly show that it is cheaper to detect and remove defects using software inspections as compared to software testing
There is evidence in the literature that inspections offer a significant return on investment even in their initial use
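Using the JPL figures from the table above, the saving from catching a defect in inspection rather than in test is straightforward to compute; the 100-defect volume is assumed purely for illustration:

    # JPL figures from the table above: $105 per defect in inspection, $1700 in test.
    cost_per_defect_inspection = 105
    cost_per_defect_test = 1700
    defects_caught_in_inspection = 100          # assumed volume, for illustration only
    saving = defects_caught_in_inspection * (cost_per_defect_test - cost_per_defect_inspection)
    print(saving)                               # 159500, i.e. $159,500 saved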
Let us now look at inspections from another point of view
Relating defect origin points and defect discovery
In a project with no software inspections, defects are typically injected in the earlier
activities and detected in later stages
As a result, we get a chaos zone
This situation is a mess. If only we were able to detect defects in the same life cycle activity,
we can eliminate the chaos zone, and bring some sanity back to the project team and project
management. If we introduce software inspections, we can do that.
Defect Origins and Discovery Points With Usage of Formal Inspections
Here you can see that the chaos zone has been eliminated. This is achieved by performing
inspections on work products before leaving that life cycle activity, and as a large number of
requirements defects will be detected and removed during the requirements activity, design
and coding defects will be detected and removed during those activities, and so on.
Why Isn't Everyone Using Inspections?
Now that we are convinced that inspections have a clear value independent of any model or standard for software development, why isn't everyone using them?
Reasons for Not Using Inspections
There is resistance to inspections because people view them as difficult to do well
Management often views Inspections as an added cost, when in fact Inspections will
reduce cost during a project
Development of new tools and environments
Inspections are not the most enjoyable engineering task compared to designing and
coding
Inspections are labor intensive and low-tech
Programmers/designers are possessive about the artifacts they create
Inspection Preconditions
o All team members treat it as a cooperative effort to find and remove defects as
early as possible during development activities
o Inspectors with good domain knowledge of the material are available for the
inspection
o Inspectors are trained in the inspection process
Inspections succeed to varying degrees even when these three conditions are not met
Effectiveness may not be as good
It must always be remembered that inspection data must not be used to evaluate the
performance or capability of the work product producer
All levels of management must accept this value, practice it, and communicate the
commitment to all personnel
The inspection team uses a checklist to prepare for the inspection and to verify the work product against historical, repetitive, and critical defect types within the domain of the project
The inspection is used to determine the suitability of the work product to continue into the
next life cycle stage by either passing or failing
If the work product fails, it should be repaired and potentially re-inspected
Work Products
Requirements specifications, Design specifications, Code, User documentation, Plans, Test
cases, All other documents
Inspection Steps
The ETVX technique indicates the relationship and flow among the four aspects of an
activity and between activities
The notion of formal entry and exit criteria goes back to the evolution of the waterfall development process
The idea is that every process step, such as an inspection, a function test, or a software design activity, has precise entry and exit criteria
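A hedged sketch of how an ETVX-style step can be written down; the field names follow the Entry-Task-Validation-eXit pattern, while the example contents are abbreviations of the criteria discussed later in this handout:

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class EtvxStep:
        name: str
        entry_criteria: List[str]       # conditions that must hold before the step starts
        tasks: List[str]                # work performed during the step
        validation: List[str]           # checks that the tasks were done properly
        exit_criteria: List[str]        # conditions that must hold before leaving the step

    inspection_meeting = EtvxStep(
        name="Inspection meeting",
        entry_criteria=["inspectors have prepared", "materials were distributed in time"],
        tasks=["read the material", "record identified defects and open issues"],
        validation=["moderator confirms the meeting followed the procedure"],
        exit_criteria=["defect list recorded", "follow-up responsibility assigned"],
    )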
Characteristics of ETVX
Let's now apply the ETVX model to the inspection process
Practices in the Inspection Process
Ancillary Purposes
Improvement in productivity
Education and increased knowledge sharing
Developing backup/replacement capability
Process improvement
Early product quality visibility
Product re-development
Building team spirit
Productivity Improvement
Fagan calculated a 23% productivity improvement during the VTAM study, one of the first projects in which inspections were used
o An improvement in productivity is the most immediate effect of purging errors
from the product (Michael Fagan)
o Inspection reduces the development cost during test by 50% based on IBM
studies. (Norris)
Education and Increased Knowledge Sharing
Overview activity
Preparation activity
Inspection meeting
Analysis meeting (causes of defect)
Prevention meeting
Back-Up/Replacement Capability
Many organizations have high turnover rates, and in many cases only a few people (or even one person) have the required knowledge of a product or key parts of a product
Where turnover is high, knowledge can literally walk out of the door
To mitigate this risk, some organizations have elected to inspect 100% of all work
products
Basically, they are providing backup, and this is dynamic backup
In these situations, inspections are used to spread the knowledge as fast and as far as possible
This education also provides the flexibility to react more quickly to customer needs
In some situations, maintenance of the work product may be transferred to a new
organization or a subcontractor
So new people need to be educated and trained on the work products as fast as possible
So, inspections are used to create backups or replacement owners of work products
The choice for when to consider these types of inspections is determined by
o Defect backlogs
o Change request backlogs
o Possibilities for re-engineering
o Risk mitigation for volatile product sections
o Turnover rates
o Recruitment rates
To be successful in these inspections, the author of the work product has to be present
o Inspections broaden the knowledge base of the project in the group, create
potential backup programmers for each module, and better inform the testers of
the functions they are to test. (Norris)
Process Improvement: Data is gathered during the inspection and later analyzed to
understand the process of doing the inspection and later to improve it
Early Product Quality Visibility: Quality of the work product and that of the software
product starts to become clear in the early stages of the software development life cycle
Product Re-Development
Products with multiple releases can have high volumes of changes in some areas
And, some work products with high defect rates may have to be re-engineered
Inspections are a very good mechanism for highlighting and prioritizing candidate areas
for enhancement
Building Team Spirit: The review process also promotes team building. It becomes one of the first steps toward establishing a good development team, by replacing an environment where programmers work alone throughout their careers with a programming team environment in which each individual feels free to discuss and critique everyone else's program. Implicit in the concept of a team is the notion of working closely together, reading each other's work, sharing responsibilities, learning each other's idiosyncrasies on both technical and personal levels, and accepting, as a group, shared responsibility for the product, where each member can expect similar rewards if the project is a success and similar penalties if the project fails
Where Do Inspections Fit in Software Development Process?
There is a cost for every activity during project life cycle. The cost is determined by
many factors:
o Capability of programmers performing the activity
o The defined process for the activity
o Stability of the input
Inspections have a cost also
Time spent on inspections is an investment. You inspect now, you invest now, and then
you reap the benefits down the line
Cost should only be a concern when inspections are performed for the first time
Once the cost question is removed from management's thinking, the time needed up front in a project is no longer a concern
HOURS/DEFECT (inspections)
AT&T             0.67-1.4
ICL              1.2-1.6
JPL              1.5-2.1
Bell-Northern    Less than 0.4
HP               0.2
Bull             1.43
ITT Industries   ~1
It's not the products but the processes that create products that bring companies long-term success. (Michael Hammer and James Champy)
Inspection Process Flow
[Flow diagram: Planning and Scheduling; Overview (if required); Preparation; Inspection Meeting; Analysis Meeting (if defect analysis is performed); Rework; Follow-up; Prevention Meeting]
Inspection Process
Planning and scheduling, Overview, Preparation, Inspection meeting, Analysis meeting,
Rework, Follow-up, Prevention meeting, Data recording and reports, Inspection process
monitoring.
We'll be using the ETVX model to describe the steps in the inspection process
Planning and Scheduling
To ensure adequate time and resources are allocated for inspections and to establish
schedules in the project for work products to be inspected, to designate the inspection
team, and to ensure the entry criteria are satisfied
All project plans exist at three levels of knowledge as the project progresses
o Those things that are unknown
o Those things that are becoming known
o Those things that are known
Plan details reveal themselves to the planner as a rolling wave
The project lead must plan which inspections are to be performed at the initial stages of
the project
Unknowns become knowns
Has two sections
o Inspection planning
o Inspection scheduling
The SQA group in the organization should assure that the project plan has been documented and includes planned inspections as required by the organization's policy and procedures
Data to be gathered during this activity
o Which work products are planned for inspection
o The estimated size of work products to be inspected
o Risks
There is a project plan showing the inspections to be held, including resources and
milestones that may be known in the early stages of the project
Where milestones may not be known, a boundary of probable dates should be noted in the plan for the inspections
Adequate resources are allocated in the project plan for inspections
The moderator remains actively involved during the inspection scheduling period and is
responsible for assuring that all tasks up to completion of the inspection meeting are
performed
Overview: Responsibility
The producer's primary responsibility for the success of the overview meeting is to deliver the presentation
If overview material is provided, it is the producer's responsibility to make sufficient copies for the meeting, either directly or via the moderator
The moderator determines with the project lead whether an overview is necessary,
schedules the overview meeting, obtains the meeting room, and records the time of the
meeting, the number of participants, and the results of the meeting
Inspectors participate during the overview meeting and must concur that the overview
met the exit criteria
o Customer requirements
The producer is ready to present the overview
Open issues and any potential problem areas are highlighted
Overview: Tasks
Producer prepares for the overview using a format and style that will best convey the
information to the participants
Moderator invites the participants to the overview meeting
Producer presents the overview
Inspection team members concur that the overview satisfies the needs for preparation and
inspection meeting
Any open issues are documented in the inspection report
If the overview is used to familiarize the participants with their roles, the inspection
process, or some other aspect key to this inspection, the moderator will provide this
briefing
Defects, if any, are documented
Overview: Validation/Verification
The moderator uses the work product overview meeting entry criteria and procedure to
determine if a meeting is necessary
The inspection team is in concurrence with the decision taken to have an overview or not
The inspectors have the responsibility to state that the overview, when held, is
satisfactory for their preparation and subsequent inspection meeting
The SQA group ensures that the moderator has used the overview meeting criteria and
ensures an appropriate decision was made to have an overview or not. This can be done
via audits of the process records or sampling of inspections
Data gathered during this activity
o How much participant time was spent in the overview
o The clock time for the overview
o Time between notification and the overview meeting
o How many overviews required rescheduling
o How many defects were identified at the overview
The overview meeting was determined to be satisfactory by the inspectors and SQA
Open issues are documented
Potential problems areas are noted to the participants for preparation and for the reader
for the inspection meeting
Defects, if any, are documented
Allows time for the inspection participants to sufficiently prepare for the inspection
meeting and list potential defects
During preparation the inspectors should:
o Increase their understanding of the material
o Inspect the work product using the checklist appropriate to the work product
o Identify possible defects that will be discussed at the inspection meeting
o Create a list of minor defects and provide them to the producer
o Note the amount of time spent in this activity
Preparation: Responsibility
Primary responsibility is with the inspectors to ensure they have properly prepared for the
inspection meeting
If an inspector cannot prepare sufficiently, the moderator must be notified immediately
and a backup inspector selected
The inspection meeting may have to be cancelled in those situations if a backup inspector is not available
The decision should be recorded so that lessons can be learned during analysis
The moderator should first estimate the preparation time needed for the inspection based
on the material to be inspected. These estimates should be verified with the inspection
team participants
The moderator needs to get a commitment from each participant that enough time is
allocated and that it will be sufficient for him/her to prepare
All necessary ancillary materials have been made available well in advance
The work product includes all base-lined function and approved changes for this planned
work product completion date
The amount of time needed for preparation has been confirmed with the inspectors and is
available to them
Predecessor and dependent work products are available, have been inspected, and meet
exit criteria
The moderator and producer have defined the coverage of material to be inspected
The work products allow easy identification of defects by location in the material
The moderator agrees that the work product is inspectable
Preparation: Tasks
Each inspector uses the scheduled time to complete the preparation in a style and format
they are comfortable with
The material to be inspected is marked with questions, concerns, and possible defects,
both major and minor, found during inspection
The minor defects are either recorded on a separate sheet that will be delivered to the
moderator at the start of the inspection meeting or they are clearly noted in the marked
material that will be delivered to the moderator at the end of the inspection meeting. Each
minor defect should be noted by location in the work product when using a minor list
Preparation: Validation/Verification
Each inspector has completed sufficient preparation based on organization and project
preparation time criteria
Minor defect inputs are complete
Preparation notes are recorded on the work product materials or defect lists
Inspection Meeting
Identifies defects before work product is passed into the next project stage
The producer is responsible for the inspected work product, answering questions, and concurring on identified defects, or adequately explaining, to the inspection team's agreement, why an identified possible defect is not a defect
The reader is responsible for focusing and pacing the inspection meeting by leading the
team through the material
The recorder is responsible for correctly recording all identified defects and open issues
All inspectors, including the producer, are responsible for sharing their questions and
identified defects found during preparation, and work together to find more defects in
meeting
The inspection team members are sufficiently present in number and role assignments
Inspection materials were available for preparation with sufficient time for study and
review before the inspection meeting, including necessary reference material
Inspectors have adequately prepared
Inspectors have submitted their minor defects list at the start of the meeting or have
marked the work products that will be provided at the end of the meeting
Scope of the inspection meeting has been defined
Recorder and a data recording system are available
Other roles; e.g., reader have been assigned
The producer has identified any new potential problem areas
The moderator, using the inspection meeting entry criteria and procedure, determines if
the team has properly performed the inspection
The inspectors participated in an effective meeting
The SQA group ensures that the inspection meeting followed the procedure and that the inspectors performed sufficient preparation. This can be done via audits of the process records or sampling of inspections
Data gathered during this activity
o How much time was spent in the inspection meeting
o How long a period between the preparation and the inspection meeting
o How many inspection meetings required rescheduling due to insufficient
preparation
o How many inspections required re-inspection
o How many defects were found
o How long the meeting took
o How many inspectors were in attendance
The inspection materials have been inspected and coverage of the work product is
completed as planned
The inspection results fall within expected tolerance of performance for
o Time spent during preparation
o Time spent at the inspection meeting
o Defect density
The defects and the conduct of the inspection have been recorded and the team concurs
with the contents
Open issues have been recorded for follow-up during rework
The moderator or a designee has been appointed to perform follow-up with the producer
Data is available to update the process data base
Any associated deviations or risks are noted
Decisions to re-inspect or not have been reviewed against criteria
Decision on re-engineering has been addressed
Process defects have been recorded, as appropriate, as well as product defects
The locations of the defects of the inspected work product are clearly noted to facilitate
repair
A decision is taken on the timeframe by which defect repairs and open issues will be
resolved
The inspection satisfies the criteria to be indicated as performed
Analysis Meeting
The analysis meeting is held after the inspection meeting, to begin defect prevention activities
This activity was not part of the original inspections
The project lead and moderator have chosen this activity to be performed
The inspection team has been trained in causal analysis techniques. Training in team
dynamics and group behavior can be helpful
Major defects have been found during the inspection
A defect taxonomy or set of cause categories has been defined
The moderator uses the analysis meeting entry criteria and procedure to determine if all
inspectors have properly participated and the meeting was effective
The inspectors have participated
o If they cannot participate, they must notify the moderator at the start of the
inspection meeting
The SQA group ensures that the moderator has used the analysis meeting checklist and
reviews the recorder's report for sufficiency. This can be done via audits
Data gathered during this activity
o How much time was spent in the analysis meeting
o How many defects were discussed
o How many defects were assigned causes
Rework
Fixes identified defects and resolves any open issues noted during the inspection
In some cases, the repair may require a Change request to be written because of the nature
or impact of the defect
Rework: Responsibility
The producer is responsible for all open issues, fixing all defects, and writing any change
requests
Rework: Other Roles
The moderator or designee is assigned to discuss open issues with the producer during rework
and to come to concurrence with the producer
Rework: Entry Criteria
The list of defects and open issues is provided to the producer for resolution
The moderator or someone assigned meets with the producer to review rework and open
issues
The inspection report is completed, is on file, and available
Rework: Tasks
The producer repairs accepted defects identified during the inspection meeting
The producer resolves any open issues
The moderator meets with the producer to discuss resolutions of open issues
Change requests are written for any open issues or defects not resolved during the rework
activity
Either the minor defect list or marked work products with minor defects noted are used to
repair the minor defects
Rework: Validation/Verification
The follow-up activity is scheduled, where the rework will be verified by the moderator
or assigned designee
SQA has reviewed sample results of this activity in the project
Follow-Up
Verifies that all defects and open issues have been adequately fixed, resolved, and closed out
Follow-up: Responsibility
The moderator is the individual primarily responsible for reviewing repairs. The moderator
will also review the producer's decisions on repairs and change requests. The moderator may
delegate some or all of this responsibility
Follow-up: Other Roles
The producer is to provide an explanation of the repairs and closures made
Follow-up: Entry Criteria
Rework of defects has been completed; i.e., fixed or identified with a decision to not fix
The producer has completed the rework for defects and open issues resolved as defects
Change requests are written for any defects or open issues not resolved
The moderator concurs with the producer's decisions on defects, open issues, and change
requests
Follow-up: Tasks
The moderator and producer discuss and agree on compliance with respect to defects and
open issues
In case of disagreement, the issue would be resolved by the project lead
The producer updates the work product to reflect the fixes to defects found and open
issues accepted as defects
The producer writes any change requests that may be required
The moderator completes the inspection report and marks the inspection as closed
Follow-up: Validation/Verification
The moderator concurs with the defect repairs and open issue closures
The producer reviews the final inspection report
SQA group reviews the final inspection report
Data gathered during this activity
o How much time was spent in follow-up
o How many open issues were disputed
Any change requests resulting from unresolved open issues have been submitted to the
change approval process for handling
The inspection report is completed and the producer agrees
If necessary, a re-inspection is scheduled
If necessary, issues are escalated to the project lead for resolution
The inspection is noted as closed
Prevention Meeting
The prevention meeting is held periodically after sets of inspections have been performed,
to determine probable causes for selected defect types, instances, or patterns
Required data about defects
The prevention team has met based on the defined cycles for the meetings
SQA reviews sampled reports
The SEPG reviews proposed actions and resultant actions taken
Data Recording
The purpose is to record the data about the defects and conduct of the inspection
This activity is held concurrently with other activities, including at the end of the
inspection process
The recorder during the overview, inspection meeting, and optional analysis meeting
records data about the defects and the conduct of the inspection
Alternatively the moderator can enter the data
The inspection team verifies the data at the end of the inspection meeting and optional
analysis meeting
SQA reviews sampled reports
The producer reviews the report completed by the moderator
Data should be considered for this activity; e.g., how much effort is used for recording
and reporting
The data are complete and agreed to by the inspection meeting and analysis meeting
participants
The data gathered during the follow-up activity are complete and agreed to by the
producer and moderator
This activity is held concurrently with other activities and after inspections
The purpose is to evaluate the results of the inspection process as performed in the
organization and to propose suggestions for improvement
Management ensures that inspection process monitoring is integrated into the inspection
process
The inspection process improvement team proposes actions for inspection process
improvements based on the monitoring and analysis performed by the inspection coordinator
Reports and results from inspections over a period of performance are available
A coordinator is assigned
Resources are allocated for inspection process improvement team
Gather the inspection process data provided since the last monitoring report
Review inspection reports and related data for trends and results against objectives
Interview inspection participants to ensure understanding of results and to gather other
inputs
Perform analysis using data from the inspection reports, interviews, and surveys
Provide the analysis to the inspection process improvement team for review and proposal
to management for inspection process management
The inspection coordinator performs monitoring actions per agreed periods for analysis
The inspection process action improvement team meets per agreed periods for
recommendations
SQA reviews monitoring activity on a random basis to ensure it is being performed
Data gathered during this activity
o How much effort is expended
The moderator should not be part of the team that worked on the work product under
inspection
Sometimes, this cannot be avoided
Leader
Moderators serve best when they have management and leadership abilities
They will manage the inspection once it has been scheduled
Some organizations have viewed how well a moderator leads as an indication of
management ability on future projects
Coach
The moderator does not have to be an expert in the domain of the work product, but the
moderator should be able to understand the technical aspects
When the moderator is not technically knowledgeable, the team may discount them and
they are less able to control the technical discussions
Communication Skills
The moderator must listen and hear; the moderator must give directions and explain so the
participants understand the value of inspections
Trained
A moderator should have a sense of humor, because that helps when the situation gets tense
during inspection meetings
Problems with Moderators
The moderator is aggressive, cannot control the meeting, is treated as a secretary, or is
biased
Activities to be performed by the Moderator
Inspection scheduling, Overview, Preparation, Inspection meeting, Data recording, Analysis
meeting, Rework, Follow-up
Inspection Scheduling
Standards
Code versus other documentation
Efficiency
User interfaces
Maintainability
Operating convenience
Defining the inspection activities schedule
o Overview, when required
o Preparation effort
o Inspection meeting duration
o Analysis meeting
o Logistics
Overview
Preparation
The moderator as inspector prepares for the inspection meeting just as any other inspection
participant
Inspection Meeting
Data Recording
The moderator must review the defect report created by the recorder and then complete this
report during the follow-up activity for the required contents of the inspection report
Analysis Meeting
Complete the defect report as provided by the recorder to show that the inspection is
closed
Verify all rework (defects and open issues)
Schedule a re-inspection, if warranted
Other Roles
Producer
The reader is the inspector who will lead the inspection team through the material during
the inspection meeting. The purpose of reading is to focus on the inspection material and
to ensure an orderly flow for the inspectors
Reader participates in preparation and inspection meeting during the inspection process
Ways to Read
Verbatim, Paraphrase, Mixed styles, section-by-section enumeration, Perspective-based, Not
read
Possible Problems with Readers
Recorder
The recorder is the inspector who will record the data for defects found and data about the
conduct of the inspection
Recorder participates in the preparation and inspection meeting activities during the
inspection process
Inspector
Is not prepared
Does not actively participate
Comes late to meetings
Not focused
Planned
o Includes any time during the project's life cycle where the schedule has been
defined for a required inspection; it concludes with the inspection meeting start
Performed
o Includes all times from the start of the inspection meeting through rework
Closed
o Is only after follow-up when closure has been achieved and signed-off
Requirements specifications
Design
Code
User documentation
Yes, but the decision requires data that demonstrates minimal risk and good data requires
time in practice
For safety-critical or life-critical software, you should not take the risk lightly, if at all
A domain expert will be far more effective in finding defects than a novice. Experts,
however, are not always available when we want them
Less capable inspectors may only be able to find a certain class of defects, while experts
can find deeper defects, but they all can contribute to finding defects
The decision should be based on risk and criticality of the work product. Here criticality
is not just safety-critical or life-critical situations, but work products critical to the success
of the project
Re-inspection may have to be done when experts are available, in case they were not
available for the inspection meeting
If we allow the programmers to learn in a safe environment, they generally will learn
Not all programmers are equal in capability, but all can contribute to the project's success
We may have to provide additional training to some
As programmers learn, they will become more effective, and this will show in their work
products
They will take pride in their work and work environment
Small Teams
In the beginning, data may not be collected while the team gets familiar with inspections
However, in order to have highly effective inspections, you'll need the data and analysis
of that data
Inspecting Changes
Some argue that when a change is made to a previously tested work product, it is
cheaper and less labor-intensive to retest rather than inspect or re-inspect
It sometimes happens that the checkmark of having done an inspection is tracked with a
higher level of importance than performing an effective inspection
For example, inspections are indicated as having been completed regardless of when they
might have occurred
This results in ineffective inspections and an organization attitude that inspections are a
bureaucratic quality approach
Situations where this checkmark misuse is seen
Unit test is performed before the code inspection, and in some cases the inspection
becomes a non-event
Resources are not available for a scheduled inspection, so the work continues without an
inspection and then sometime later a minimal inspection in name only is held, e.g., a
requirement inspection is held after or while the design is being completed
Some people believe an inspection will go faster when bugs have already been taken out
via test or some other approach
One wonders why inspections are held in the first place: to satisfy management's decision
without buying into it. People suffer
Inspections are intended to remove defects as early as possible. They are a quality control
mechanism
Testing is intended to
o Find the defects that leaked through the inspection process
o Revalidate that the delivered solution satisfies the needs of the customer
Tests are both a quality control and a quality assurance mechanism
With analysis of data from the inspection and test results, both processes can be tuned to
maximize efficiency and minimize redundancy, while maintaining effectiveness
If an organization has not defined or agreed to an accepted style for specifications, design,
code, or for other documents, then it is possible that inspection participants may have
different viewpoints. This can lead to useless discussions
Agreement on these issues is a prerequisite for effective and efficient inspections
Some people believe that debugging within a test execution environment is faster and
cheaper. The literature consistently has suggested otherwise
Try a controlled study
There may be languages where execution of the code during the inspection would be
more beneficial. For example, the visual type of languages may be best inspected in a
permutation of the traditional inspection, using both the code and the observed behavior
of the code on the screens during execution
Absolutely, but they do not have to have as much domain knowledge as the producer
Know code and design sufficiently
Design and specification documents can be helpful in understanding
Can be done with forethought that this is a good business decision from an effectiveness
perspective (i.e., very low defects)
The informal approach is noted as such and tracked as a potential risk
Review is made after test and customer use to learn if the decision was good
The purpose of software testing is to ensure that the software systems would work as
expected when they are used by their target customers and users
For software products, software testing provides a mechanism to demonstrate their
operation through controlled execution
Software Testing
The dynamic execution of software and the comparison of results of that execution
against a set of known, pre-determined criteria (Capers Jones)
This demonstration of proper behavior is a primary purpose of software testing, which
can also be interpreted as providing evidence of quality in the context of software quality
assurance, or as meeting certain quality goals
In comparison with manufacturing systems, problems in software products can be
corrected much more easily within the development process, to remove as many defects
from the product as possible
Therefore, software testing has two primary purposes
Test Execution
Due to the increasing size and complexity of today's software products, informal testing
without much planning and preparation becomes inadequate
Important functions, features, and related software components and implementation
details could easily be overlooked in such informal testing
Therefore, there is a strong need for planned, monitored, managed and optimized testing
strategies based on systematic considerations for quality, formal models, and related
techniques
Test planning and preparation has these sub-activities
Goal setting
o Reliability and coverage goals
Test case preparation
o Constructing new test cases or generating them automatically, selecting from
existing ones for legacy products, and organizing them in some systematic manner
Test procedure preparation
o It is defined and followed to ensure effective test execution, problem handling and
resolution, and overall test process management
Testing and inspections often find different kinds of problems, and may be more effective
under different circumstances
Therefore, inspections and testing should be viewed more as complementary QA
alternatives instead of competing ones
Testing is an integral part of different software development processes
Most forms of testing are less than 30% efficient in finding bugs
Many forms of testing should be used in combination with pre-test design and code
inspections to achieve high defect removal efficiencies
Let us now discuss the basic principles that guide software testing
Testing Principles
This change in thinking can help test planners and test designers to create effective test
cases
This concept also helps software designers and programmers to design and code software
systems, which are testable. So, what is this testability?
Testability
Observability
Controllability
The better we can control the software, the more testing can be automated and
optimized
All possible outputs can be generated through some combination of input
All code is executable through some combination of input
Software and hardware states and variables can be controlled directly by the test engineer
Input and output formats are consistent and structured
Tests can be conveniently specified, automated, and reproduced
By controlling the scope of testing, we can more quickly isolate problems and perform
smarter retesting
The software system is built from independent modules
Software modules can be tested independently
Simplicity
Stability
Understandability
Discussion of Testability
The characteristics we have just discussed are fundamental for software testers to
efficiently and effectively perform software testing
They all belong to different developmental stages of the software engineering process
These characteristics clearly indicate that in order to have good software testing, best
practices should be used in all activities of the software engineering life cycle to develop
the software product; otherwise, we will end up with a product which is difficult to test
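To make controllability and observability concrete, here is a minimal sketch (a hypothetical Python example, not from the handout) of a routine written so a test can drive its inputs and observe its output directly:

import datetime

def is_backup_due(last_backup, now=None):
    # Controllability: a test can supply 'now' instead of relying on the real clock
    now = now or datetime.datetime.now()
    # Observability: the decision is returned as a distinct output, not hidden in I/O
    return (now - last_backup).days >= 7

# Both inputs are controlled and the output is observed directly:
assert is_backup_due(datetime.datetime(2024, 1, 1), now=datetime.datetime(2024, 1, 9))
assert not is_backup_due(datetime.datetime(2024, 1, 1), now=datetime.datetime(2024, 1, 3))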
The tester must understand the software and attempt to develop a mental picture of how
the software might fail
Ideally, the classes of failure are probed and a set of tests should be designed to show that
software fails in a particular situation
Testing time and resources are limited and there is no point in conducting a test that has
the same purpose as another test
Every test should have a different purpose, even if it is subtly different
Ideally, each test should be executed separately, instead of being combined with many
tests
If tests are combined into one test, this can result in masking of errors
The design of tests for software and other engineered products can be as challenging as
the initial design of the product itself
However, many software engineers treat testing as an afterthought, developing test cases
that may feel right but have little assurance of being complete
A rich variety of test case design methods have evolved for software, which provide the
developer with a systematic approach to testing
More important, methods provide a mechanism that can help to ensure the completeness
of tests and provide the highest likelihood for uncovering errors in software
Any engineered product can be tested in one of two ways
o Knowing the specified function that a product has been designed to perform
o Knowing the internal workings of a product
In the first case, tests can be conducted that demonstrate each function is fully operational
while at the same time searching for errors in each function
In the second case, tests can be conducted to ensure that internal operations are performed
according to the specifications and all internal components have been adequately
exercised
Categories of Testing
General testing
Specialized testing
Testing that involves users or clients
In the first case, testing is focused on the external behavior of a software system or its
various components, and we cannot see inside the components
While in the second case, testing is focused on the internal implementation, and we must
see inside the component
Testing Techniques
Black-box testing (BBT)
aka functional/behavioral testing
White-box testing (WBT)
aka structural/glass-box testing
Black-Box Testing
Black-box testing alludes to tests that are conducted at the software interface
Although they are designed to uncover errors, they are also used to demonstrate that
software functions are operational, that input is properly accepted and output is correctly
produced, and that the integrity of external information is maintained
A black-box test examines some fundamental aspect of a system with little regard for the
internal logical structure of the software
The inner structure or control flow of the application is not known or viewed as irrelevant
for constructing test cases. The application is tested against external specifications and/or
requirements in order to ensure that a specific set of input parameters will in fact yield
the correct set of output values
It is useful for ensuring that the software more or less is in concordance with the written
specifications and written requirements
The simplest form of BBT is to start running the software and make observations in the
hope that it is easy to distinguish between expected and unexpected behavior
This is ad-hoc testing and it is easy to identify some unexpected behavior, like system
crash
With repeated executions, we can determine the cause to be related to software and then
pass that information to the people responsible for repairs
White-Box Testing
It is useful for ensuring that all or at least most paths through the application have
been executed in the course of testing
Using white-box testing methods, software engineers can derive test cases that
Guarantee that all independent paths within a module have been exercised at
least once
Exercise all logical decisions on their true and false sides
Execute all loops at their boundaries and within their operational bounds
Exercise internal data structures to ensure their validity
The simplest form of WBT is statement coverage testing through the use of various
debugging tools, or debuggers, which help us in tracing through program executions
By doing so, the tester can see if a specific statement has been executed, and if the
result or behavior is expected
One of the advantages is that once a problem is detected, it is also located
However, problems of omission or design problems cannot be easily detected through
white-box testing, because only what is present in the code is tested
Another important point is that the tester needs to be very familiar with the code
under testing to trace through its execution
Therefore, typically white-box testing is performed by the programmers themselves
We'll have some discussion on this topic, that is, who is most productive for which
kind of testing, at a later stage in this course
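As a small illustration of statement-coverage checking, the sketch below uses Python's standard trace module (a hypothetical example; any coverage tool would serve) to report which statements of a unit remain unexecuted after a set of test inputs:

import trace

def absolute_value(x):
    if x < 0:
        return -x
    return x

tracer = trace.Trace(count=True, trace=False)
tracer.runfunc(absolute_value, 5)    # exercises only the 'return x' statement
tracer.runfunc(absolute_value, -5)   # now the 'return -x' statement runs as well
tracer.results().write_results(show_missing=True, coverdir=".")
# The generated coverage listing flags statements that were never executed,
# telling the tester which statements still need a driving test input.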
Defects detected through WBT are easier to fix than those through BBT because of the
direct connection between the observed failures and program units and implementation
details in WBT. However, WBT may miss certain types of defects, such as omission and
design problems, which could be detected by BBT
In general, BBT is effective in detecting and fixing problems of interfaces and
interactions, while WBT is effective for problems localized within a small unit
Why spend time and energy worrying about (and testing) logical details when we might
better expend effort ensuring requirements have been met? or
Why don't we spend all of our energy on black-box tests?
Logical errors and incorrect assumptions are inversely proportional to the probability that a
program path will be executed
We often believe that a logical path is not likely to be executed when, in fact, it may be
executed on a regular basis
Typographical errors are random; it's likely that untested paths will contain some
White-box testing uses the control structure of the procedural design to derive test cases
WBT Methods Derive Test Cases
Guarantee that all independent paths within a module have been exercised at least once
Exercise all logical decisions on their true and false sides
Execute all loops at their boundaries and within their operational bounds
Exercise internal data structures to ensure their validity
Basis path testing is a white-box testing technique first proposed by Tom McCabe
It enables the test case designer to derive a logical complexity measure of a procedural
design and use this measure as a guide for defining a basis set of execution paths
Test cases derived to exercise basis set are guaranteed to execute every statement in the
program at least once
Basis path testing uses cyclomatic complexity, which is a software metric that provides a
quantitative measure of the logical complexity of a program
When cyclomatic complexity is used in the context of the basis path testing method, the
value computed for cyclomatic complexity defines the number of independent paths in
the basis set of a program and provides us with an upper bound for the number of tests
that must be conducted to ensure that all statements have been executed at least once
An independent path is any path through the program that introduces at least one new set
of processing statements or a new condition
The cyclomatic complexity can be calculated in a number of ways
Cyclomatic complexity can also be calculated by developing a flow graph (or program
graph), which depicts the logical control flow
A flow graph primarily consists of edges and nodes
A flow graph depicts logical control flow of a program, and contains notation for
o Sequences
o If conditions
o While conditions
o Until statements
o Case statements
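As a worked illustration (a hypothetical example, not from the handout), the small routine below contains three simple conditions; counting each simple condition as a decision gives V(G) = 3 + 1 = 4, so a basis set of four paths is enough to execute every statement at least once:

def classify(score):
    if score < 0 or score > 100:   # decision 1 (two simple conditions)
        return "invalid"
    if score >= 50:                # decision 2
        return "pass"
    return "fail"

# V(G) = number of simple decisions + 1 = 3 + 1 = 4, so at most four test
# cases are needed to execute every statement at least once. One basis set:
basis_tests = [
    (-1, "invalid"),   # score < 0 true
    (101, "invalid"),  # score > 100 true
    (75, "pass"),      # score >= 50 true
    (30, "fail"),      # both decisions false
]
for score, expected in basis_tests:
    assert classify(score) == expected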
Condition Testing
Condition testing is a test case design method that exercises the logical conditions
contained in a program module
Conditions can be
o Simple conditions
o Relational expressions
o Compound conditions
Errors are much more common in the neighborhood of logical conditions than they
are in the locus of sequential processing statements
The condition testing method focuses on testing each condition in the program
If a condition is incorrect, then at least one component of the condition is incorrect,
therefore, types of errors in a condition include the following
o Boolean operator error
o Boolean variable error
o Relational operator error
o Arithmetic expression error
The purpose is to detect not only errors in the conditions of a program but other errors as well
There are two advantages of condition testing
o Measurement of test coverage of a condition is simple
o Test coverage of conditions in a program provides guidance for the generation
of additional tests for the program
Branch testing
o For a compound condition, C, the true and false branches of C and every simple
condition in C need to be executed at least once
Domain testing
o Testing related to relational operators
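A minimal sketch of branch testing for a compound condition (a hypothetical example; the function and values are assumptions chosen for illustration):

# Branch testing of the compound condition C = (a > 10) and (b == 0):
# the tests below make C itself and each simple condition in C evaluate
# to both true and false at least once.

def eligible(a, b):
    return a > 10 and b == 0    # compound condition C

tests = [
    (20, 0, True),    # a > 10 true,  b == 0 true  -> C true
    (5,  0, False),   # a > 10 false                -> C false
    (20, 3, False),   # a > 10 true,  b == 0 false -> C false
]
for a, b, expected in tests:
    assert eligible(a, b) == expected
# A relational-operator error (e.g., 'a >= 10' instead of 'a > 10') would be
# probed further by domain testing with values just around the boundary a = 10.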
The data flow testing method selects test paths of a program according to the locations of
definitions and uses of variables in the program
Data flow testing strategies are useful for selecting test paths of a program containing
nested if and loop statements
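A minimal sketch of the idea (a hypothetical example): data flow testing selects paths that cover the definition-use pairs of a variable, such as 'total' below:

def checkout(prices, member):
    total = sum(prices)          # definition d1 of 'total'
    if member:
        total = total * 0.9      # use of d1, then definition d2
    return round(total, 2)       # uses d1 (non-member path) or d2 (member path)

# Two test paths cover both def-use pairs that reach the return statement:
assert checkout([10.0, 20.0], member=False) == 30.0   # d1 -> return
assert checkout([10.0, 20.0], member=True) == 27.0    # d2 -> return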
Loop Testing
Loops are the cornerstone for the vast majority of all algorithms implemented in
software, and yet, we often pay them little heed while conducting software tests
Complex loop structures are another hiding place for bugs. It's well worth spending
time designing tests that fully exercise loop structures
It is a white-box testing technique that focuses exclusively on the validity of loop
constructs
Four different classes of loops can be defined
o Simple loop
o Concatenated loops
o Nested loops
o Unstructured loops
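For a simple loop that can make at most n passes, typical loop tests skip the loop entirely, make one pass, two passes, a typical number of passes, and n-1, n, and n+1 passes. A minimal sketch (hypothetical example):

def count_failures(readings, limit, n=5):
    failures = 0
    for value in readings[:n]:     # at most n loop passes
        if value > limit:
            failures += 1
    return failures

n = 5
for passes in (0, 1, 2, 3, n - 1, n, n + 1):
    readings = [10] * passes       # every reading exceeds the limit of 5
    expected = min(passes, n)      # the loop body never runs more than n times
    assert count_failures(readings, limit=5, n=n) == expected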
BBT Techniques
Equivalence partitioning
Boundary value analysis
Equivalence Partitioning
Equivalence partitioning is a black-box testing method that divides the input domain
of a program into classes of data from which test cases can be derived
Equivalence partitioning strives to define a test case that uncovers classes of errors,
thereby reducing the total number of test cases that must be developed
An equivalence class represents a set of valid or invalid states for input conditions
Typically, an input condition is
A specific numeric value
A range of values
A set of related values
A boolean condition
If an input condition specifies a range, one valid and two invalid equivalence classes are
defined
If an input condition requires a specific value, one valid and two invalid equivalence
classes are defined
If an input condition specifies a member of a set, one valid and one invalid equivalence
class are defined
If an input condition is boolean, one valid and one invalid class are defined
Equivalence Partitioning
Valid data
Invalid data
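A minimal sketch (hypothetical example) applying these rules to a month field whose valid input range is 1..12; one representative value from each valid and invalid equivalence class is usually enough:

def is_valid_month(month):
    return isinstance(month, int) and 1 <= month <= 12

equivalence_classes = {
    "valid: 1 <= month <= 12": (6, True),
    "invalid: month < 1":      (0, False),
    "invalid: month > 12":     (13, False),
}
for description, (representative, expected) in equivalence_classes.items():
    assert is_valid_month(representative) == expected, description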
Boundary value analysis (BVA) is a testing technique, which leads to a selection of test
cases that exercise bounding values
This is because, for reasons that are not completely clear, a greater number of errors
tends to occur at the boundaries of the input domain rather than in the center
BVA complements the equivalence partitioning testing technique
Boundary Value Analysis
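Continuing the month example above (a hypothetical sketch), boundary value analysis adds tests just below, at, and just above each boundary of the 1..12 range, where errors such as writing < instead of <= tend to hide:

def is_valid_month(month):
    return isinstance(month, int) and 1 <= month <= 12

boundary_cases = {0: False, 1: True, 2: True, 11: True, 12: True, 13: False}
for month, expected in boundary_cases.items():
    assert is_valid_month(month) == expected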
State Testing
Testing begins at the component level and works outward toward the integration of the
entire computer-based system
Different testing techniques are appropriate at different points in time
Testing is conducted by the developer of the software and (for large projects) an
independent test group
Testing and debugging are different activities, but debugging must be accommodated in
any testing strategy
A strategy for software testing must accommodate low-level tests that are necessary to
verify that a small source code segment has been correctly implemented as well as
high-level tests that validate major system functions against customer requirements
A strategy must provide guidance for the practitioner and a set of milestones for the
manager
Because the steps of the test strategy occur at a time when deadline pressure begins to
rise, progress must be measurable and problems must surface as early as possible
Testing Strategy
Unit testing, Integration testing, Validation testing, System testing
Finally, we have system testing, where the software and other system elements are
tested as a whole
After software has been integrated, a set of high-order tests are conducted
Validation criteria (established during requirements analysis) must be tested
Validation testing provides final assurance that software meets all functional, behavioral,
and performance requirements
BBT techniques are used exclusively here
System testing is the last high-order testing and it falls outside the boundaries of software
engineering and into the broader context of computer system engineering
System testing verifies that all elements (hardware, people, databases) mesh properly and
that overall system function/performance is achieved
Strategic Issues
Actors/users of use-cases
Develop a testing plan that emphasizes rapid cycle testing
Build robust software that is designed to test itself
Software should be capable of diagnosing certain classes of errors
Use effective formal technical reviews as a filter prior to testing
Conduct formal technical reviews to assess the test strategy and test cases themselves
Develop a continuous improvement approach for the testing process
Let's have some detailed discussions on each of the testing strategies introduced
Unit Testing
Unit testing focuses verification effort on the smallest unit of software design: the
software component or module
The relative complexity of tests and uncovered errors is limited by the constrained scope
established for unit testing
Different units can be tested in parallel
The module interface is tested to ensure that information properly flows into and out of
the program unit under test
The local data structure is examined to ensure that data stored temporarily maintains its
integrity during all steps in an algorithm's execution
Boundary conditions are tested to ensure that the module operates properly at boundaries
established to limit or restrict processing
All independent paths through the control structure are exercised to ensure that all
statements in a module have been executed
All error handling paths are tested
Tests of data flow across a module interface are required before any other test is initiated
Among the potential errors that should be tested when error handling is evaluated are
Error description is unintelligible
Error noted does not correspond to error encountered
Error condition causes system intervention prior to error handling
Exception-condition processing is incorrect
Error description does not provide enough information to assist in the location of
the cause of the error
Boundary testing is the last and probably the most important task of unit testing step
Software often fails at its boundaries
Test cases that exercise data structure, control flow, and data values just below, at, and
just above maxima and minima are very likely to uncover errors
Unit testing is simplified when a component with high cohesion is designed
When only one function is addressed by a component, the number of test cases is reduced
and errors can be more easily predicted and uncovered
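A minimal sketch of a unit test (a hypothetical example using Python's unittest) that exercises typical behavior, a boundary condition, and an error-handling path of a single small function:

import unittest

def average(values):
    if not values:
        raise ValueError("average() requires at least one value")
    return sum(values) / len(values)

class AverageTests(unittest.TestCase):
    def test_typical_input(self):
        self.assertEqual(average([2, 4, 6]), 4)

    def test_boundary_single_value(self):
        self.assertEqual(average([7]), 7)

    def test_error_handling_empty_input(self):
        with self.assertRaises(ValueError):
            average([])

if __name__ == "__main__":
    unittest.main()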
Let's see a unit test environment
Integration Testing
If they all work individually, why do you doubt that they'll work when we put them
together?
The problem, of course, is putting them together: interfacing
Data can be lost across an interface
One module can have an inadvertent, adverse effect on another
Sub-functions, when combined, may not produce the desired major function
Individually acceptable imprecision may be magnified to unacceptable levels
Global data structures can present problems
Integration testing is a systematic technique for constructing the program structure while
at the same time conducting tests to uncover errors associated with interfacing
Individual components have already been unit tested and the structure is dictated by
design
There is a tendency, sometimes, to attempt non-incremental integration, a sort of
"big bang" approach
Bottom-Up Integration
Validation succeeds when software functions in a manner that can be reasonably expected
by the customer
Software validation is achieved through a series of black-box tests that demonstrate
conformity with requirements
A test plan outlines the classes of tests to be conducted and a test procedure defines
specific test cases that will be used to demonstrate conformity to requirements
Both test plan and procedure are designed to ensure that
All requirements are satisfied
All behavioral characteristics are achieved
All performance requirements are attained
Documentation is correct
Human engineered and other requirements are met
After each validation test case has been conducted, one of two possible conditions exist
The function or performance characteristics conform to specification and are accepted
A deviation from specification is uncovered and a deficiency list is created
A deviation or error discovered at this stage in a project can rarely be corrected prior to
scheduled delivery
It is often necessary to negotiate with customer to establish a method for resolving
deficiencies
Other High-Order Testing
Specialized Testing
Recovery testing
Security testing
Stress testing
Sensitivity testing
Performance testing
Testing Stage           Reliable Software    Software Involved in Litigation for Poor Quality
Subroutine testing      Used                 Used
Unit testing            Used                 Used
New function testing    Used                 Rushed or omitted
Regression testing      Used                 Rushed or omitted
Integration testing     Used                 Used
System testing          Used                 Rushed or omitted
Performance testing     Used                 Rushed or omitted
Capacity testing        Used                 Rushed or omitted
In the second case, the person performing debugging may suspect a cause, design a test
case to help validate that suspicion, and work toward error correction in an iterative
fashion
Debugging
Debugging is one of the more frustrating parts of programming. It has elements of problem
solving or brain teasers, coupled with the annoying recognition that you have made a mistake.
Heightened anxiety and unwillingness to accept the possibility of errors increases the task
difficulty. Fortunately, there is a great sigh of relief and a lessening of tension when the bug
is ultimately corrected
Characteristics of Bugs
(1) The symptom and cause may be geographically remote. That is, the symptom may appear
in one part of the program, while the cause may actually be located at a site that is far
removed. Highly coupled program structures exacerbate this situation
(2) The symptom may disappear (temporarily) when another error is created
(3) The symptom may actually be caused by non-errors (e.g., round-off inaccuracies)
(4) The symptom may be caused by human error that is not easily traced
(5) The symptom may be a result of timing problems, rather than processing problems
(6) It may be difficult to accurately reproduce input conditions (e.g., a real-time application
in which input ordering is indeterminate)
(7) The symptom may be intermittent. This is particularly common in embedded systems that
couple hardware and software inextricably
(8) The symptom may be due to causes that are distributed across a number of tasks running
on different processors
Debugging Approaches
Regardless of the approach that is taken, debugging has one overriding objective: to find
and correct the cause of a software error
The objective is realized by a combination of systematic evaluation, intuition, and luck
Brute force
Backtracking
Cause elimination
Brute force debugging is probably the most common and least efficient method for
isolating the cause of a software error
We apply brute force debugging methods when all else fails, using a "let the computer
find the error" philosophy: memory dumps are taken, run-time traces are invoked, and
the program is loaded with WRITE statements
We hope that somewhere in the morass of information that is produced we will find a clue
that can lead us to the cause of an error
Although the mass of information produced may ultimately lead to success, it more
frequently leads to wasted effort and time
Thought must be expended first
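A minimal sketch (a hypothetical example) of the brute force style: the code is loaded with trace output in the hope that the dump reveals the cause of a wrong answer:

import logging
logging.basicConfig(level=logging.DEBUG)

def sum_of_squares(n):
    # Intended: 1^2 + 2^2 + ... + n^2, but the loop bound is off by one
    total = 0
    for i in range(1, n):          # bug: should be range(1, n + 1)
        total += i * i
        # Brute force instrumentation: dump the state at every step
        logging.debug("i=%d total=%d", i, total)
    return total

print(sum_of_squares(3))   # prints 5 instead of the expected 14; the trace shows i never reaches 3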
Backtracking Debugging
Fixing a Bug
As you can see from the picture just shown, test planning and preparation is the
most important activity in the generic testing process for systematic testing
Most of the key decisions about testing are made during this stage
Setting of goals for testing
Selecting an overall testing strategy
Preparing specific test cases
Preparing general test procedure
Test Planning
It is difficult to manage the testing of a large system, considering numerous test cases,
problem reporting, and problem fixes
Just as we plan and manage a project by estimating resources, schedule, cost, and so
forth, we must plan the testing of a software product
The planning activities enable us to do strategic thinking on what types of tests should
be executed within the constraints of time and budget
During the preparation of the test plan, as each test scenario is planned, we must
estimate the time it would take to test the system
The mistake most testers make is not planning the test schedule upfront
They fail to allocate adequate time to test the product and conduct regression testing
If one of the test dependencies identified upfront in the planning stage is receiving the
code on time for testing, the management will realize that if the code does not arrive
in testing on time, the entire project schedule will slip
It's too late to begin test planning when testing actually begins. All that you can do
then is hope that baseline design and unit testing have been done so well that things fall
into place easily
This sort of luck is seldom achieved in real projects
Planning is the only reliable way to mitigate such problems
Beizer claims that test planning can uncover at least as many problems as the actual
tests themselves
This is a very powerful statement based on the observations of a very seasoned and
senior researcher in the software engineering community
The test plan acts as a service level agreement between the test department and other
significantly impacted departments
The testing efficiency can be monitored and improved by developing a thorough test
plan
A test plan identifies the test deliverables, functions to be tested, and risks
A test plan communicates to the audience
o The methodology which will be used to test the product
o The resources
Hardware
Software
Manpower
o The schedule during which tests will be accomplished
o The process used to manage the testing project
Before writing a test plan, the following information should be obtained
o Names of all test analysts
o Automated test tools to be used for this project
o Environment required for conducting tests
When developing a test plan, ensure that it is kept
o Simple and complete
o Current
o Accessible
The test plan should be frequently routed to the appropriate people for feedback and
sign-offs
A test plan can either be written by the test leader, test coordinator, or test manager
Depending on the features of the system, determine which tests you need to perform
Once you have identified the types of tests needed to test the system, plan how each of
these tests will be performed. Develop test cases
Use a test plan template (we'll discuss one today)
When the contents of the test plan are finalized, conduct a review for completeness of the
test plan
Incorporate the feedback from the review of the test plan
Get approvals on the contents of the test plan from managers of all the departments that
will be impacted by this new product, for example, the managers of Development,
Customer Support, Marketing, Operations, Integrated Technology, Product Assurance,
Quality and User Management
Follow the test plan to ensure that everyone on the test team uses the process outlined in
the test plan and if there are exceptions to this, the deviations are logged as addenda to
the test plan. For consistency, these addenda should be approved by the same group of
individuals who approved the original test plan
A well-thought test plan will save hours of frustration at the end of test cycle, reduce
risks, and provide a tool for management reporting
As many members of the project team as possible should contribute to the test plan
document through reviews and discussion
All the participants will feel committed to the plan since they helped to create it and this
fosters a high degree of dedication among participants to follow what is being said in the
test plan
If participants accept responsibilities and later do not fulfill them, the test plan provides
evidence of the initial agreement
A test plan provides criteria for measurement and deadlines for test deliverables, and
describes procedures for documenting and resolving problems
Introduction
Scope
Test plan strategy
Test environment
Schedule
Control procedures
Control activities
Functions to be tested
Functions not to be tested
Risks and assumptions
Deliverable
Test tools
Approvals
Exit criteria
Test plan strategy outlines the objectives for developing the test plan and establishes the
procedures for conducting the tests
The strategy should also identify all the tests that will be performed to check the
robustness of the product
A brief description of each test should be given
The test environment section lists the hardware and software configurations required to
perform tests
It determines the type of help required from other departments to support the environment
for the tests
The overall test schedule is identified in the project plan indicating the start date and end
date of test activities. However, detailed schedule of testing should be developed during
test planning process, showing start and end dates for
Development of the test plan
Designing test cases
Executing test cases
Problem reporting
Developing the test summary report
This section lists assumptions taken into consideration when developing the test
schedule, such as
Unit testing will be done prior to receiving code for system testing
All testing personnel will have the knowledge of the product and will be
experienced testers
The computers will be available during all work hours
The deliverable section lists what will be delivered at the end of test phase, such as,
Test plan
Test cases
Test incident report
This section identifies all test tools which will be used during testing, whether they
are in-house or need to be purchased
Test coverage analyzers might be used to provide information on the thoroughness of
one or more test runs of the program
At the end of the analysis process, the analyzer produces a visual listing of the
program with all unexecuted code flagged
This section identifies the exit criteria for the test phase. Some examples of the exit
criteria:
No priority-one open problems
All functions identified in the requirements document are present and working
No more than three priority-two problems open
Studies have shown that companies are missing shipping deadlines for their software
products
Overall 90% of developers have missed ship dates, and missing deadlines is a routine
occurrence for 67% of developers
These numbers are very alarming for our industry
Todays software managers and developers are being asked to turn around their products
within ever-shrinking schedules and with minimal resources
Getting a product to market as early as possible may mean the difference between product
survival and product death and therefore company survival and death
In an attempt to do more with less, organizations want to test their software adequately,
but within a minimal schedule
To accomplish this, organizations are turning to automated testing
So, what is automated testing?
Automated Testing
The management and performance of test activities, to include the development and
execution of test scripts so as to verify test requirements, using an automated test tool
Automated software testing addresses the challenge for today's software professionals
who are faced with real schedule deadlines
The automation of test activities provides its greatest value in instances where test scripts
are repeated or where test script sub-routines are created and then invoked repeatedly by a
number of test scripts
The performance of integration test using an automated test tool for subsequent
incremental software builds provides great value
Each new build brings a considerable number of new tests, but also reuses previously
developed test scripts
Regression testing at the system test level represents another example of the efficient use
of automated testing
Regression tests seek to verify that the functions provided by the modified system or
software product perform as specified and that no unintended change has occurred in the
operation of the system or product
Automated testing can provide several benefits when implemented correctly and when
a rigorous process is followed
The test engineer must evaluate whether the potential benefits fit the required
improvement criteria and whether the pursuit of automated testing on a project is still a
logical fit, given the organizational needs
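A minimal sketch (a hypothetical example) of the kind of table-driven regression script that pays for itself when it is re-run unchanged against every new build:

def discount(amount, code):        # stand-in for the system under test
    return round(amount * 0.9, 2) if code == "SAVE10" else amount

regression_suite = [
    # (test id,        amount, code,      expected)
    ("baseline-001",   100.0,  "SAVE10",  90.0),
    ("baseline-002",   100.0,  "NONE",    100.0),
    ("baseline-003",   0.0,    "SAVE10",  0.0),
]

def run_regression(suite):
    failures = []
    for test_id, amount, code, expected in suite:
        actual = discount(amount, code)
        if actual != expected:
            failures.append((test_id, expected, actual))
    return failures

print(run_regression(regression_suite))   # [] means no unintended change in behavior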
Manual Testing (Hrs)    Automated Testing (Hrs)    Percentage Improvement
32                      40                         -25%
262                     117                        55%
466                     23                         95%
117                     58                         50%
117                     23                         80%
96                      16                         83%
1090 (total)            277 (total)                75% (overall)
As rapid application development has become popular in the last few years, automated
testing has also become an integral part
Particularly, with the increase in the number of projects following the Agile Manifesto,
automated software testing is being used extensively
Many agile software development processes include automated software testing as an
integral part of the way they develop software, a sort of modus operandi
This is an area that is open for more research and certainly productivity and efficiency of
software testing can increase substantially with automated software testing
Does the tool support the platforms and environments of the system under test?
Client-Level Compatibility
Communications-Level Compatibility
Host-Level Compatibility
Portability
Usability
Does it provide the necessary functionality, and are the testers able to make effective use
of it without a prohibitive learning curve?
Functionality
Extensibility
Learning Curve
Maintainability
Does the tool provide sufficient and meaningful information to enable management of the
test library and measurement of the test results?
Test Library Management
Test Results
Test results should be meaningful to inform management of the status of the system under
test and its quality relative to expectations
Test tools should be reviewed to determine the amount and quality of test result
information, and whether analysis and reporting capabilities are included or can be made
available
Vendor Selection
Experience
What level of experience does the vendor offer in the implementation of
automated test tools?
Service
What services does the vendor offer for training as well as implementation to
assure that the tool is properly deployed?
Support
How readily available and responsive is the vendor to technical questions and
issues?
Commitment
Is the vendor committed to the automated test tools market or is it only one of
several offerings?
A test case describes how each test is to be conducted and also describes input/output
details
Development of test cases assists in keeping track of what is tested, when it is tested, and
the outcome of the test
If a defect is found in testing, a documented test case makes it easy for the developer to
re-create the problem so that proper analysis can be done to fix it
Selecting test cases is the single most important task that software testers do. Improper
selection can result in testing too much, testing too little, or testing the wrong things
Intelligently weighing the risks and reducing the infinite possibilities to a manageable
effective set is where the magic is
When designing and running your test cases, always run the test-to-pass cases first. It is
important to see if the software fundamentally works before you throw the kitchen sink at
it. You may be surprised how many bugs you find just by using the software normally
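A minimal sketch (a hypothetical example) of a documented test case record; the specific fields are assumptions chosen for illustration, but they capture the input/output details that make a failure easy to re-create:

from dataclasses import dataclass

@dataclass
class TestCase:
    case_id: str
    objective: str
    preconditions: str
    steps: list
    input_data: dict
    expected_result: str
    actual_result: str = ""
    status: str = "not run"        # e.g., "passed", "failed", "blocked"
    executed_on: str = ""          # date of the run

login_case = TestCase(
    case_id="TC-LOGIN-001",
    objective="Valid user can log in (test-to-pass)",
    preconditions="Account 'demo' exists and is active",
    steps=["Open login page", "Enter credentials", "Submit"],
    input_data={"user": "demo", "password": "correct-password"},
    expected_result="User is taken to the dashboard",
)
print(login_case.status)   # "not run" until the tester records an outcome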
If you see words such as these that denote something as certain or absolute, make
sure that it is, indeed, certain. Think of cases that violate them
Certainly, therefore, clearly, obviously, evidently
These words tend to persuade you into accepting something as given. Don't fall
into the trap
Some, sometimes, often, usually, ordinarily, customarily, most, mostly
These words are too vague. It's impossible to test a feature that operates
"sometimes"
Etc., and so forth, and so on, such as
Lists that finish with words such as these aren't testable. More explanation is
needed
Good, fast, cheap, efficient, small, stable
These are unquantifiable terms. They are not testable
Handled, processed, rejected, skipped, eliminated
These terms can hide large amounts of functionality that needs to be specified
If...then... (but missing else)
Look for statements that have if...then clauses but don't have a matching
else. Ask yourself what will happen if the "if" doesn't happen
Consistent
Destructive
Detects Discrepancy
Manageable
Results Can Be Traced
The error density in software test cases is often higher than the error density in the
software product
Sometimes more than 15% of the total number of test cases created can have errors
themselves
50% of repair effort is applied towards test cases themselves
Running test cases that are flawed or in error is of no value in terms of quality
There is an implied need for greater rigor in building and validating test cases and test
libraries
Testing limits and ranges where those included in the test-case are themselves incorrect
Testing for numeric values where the test data contains wrong data
Test cases derived from specifications or user requirements which contained undetected
errors that were accidentally passed on to the test cases
Accidental redundancy of test cases or test libraries (30%)
Gaps, or portions of the application for which no test cases exist (70%)
Result of gaps in requirements
Formal inspections on test cases are very effective in eliminating bad test cases
Interesting Observations
Test cases created by the programmers themselves have the lowest probability of being
duplicated, but the highest probability of containing errors or bugs
Test cases created later by test or quality assurance personnel have the greatest
probability of redundancy but lower probability of containing errors
The defect removal efficiency of black-box testing is higher when performed by test
personnel or by quality assurance personnel rather than developers themselves
Black-box testing performed by clients (Beta and acceptance testing) varies widely, but
efficiency rises with the numbers of clients involved
For usability problems, testing by clients themselves outranks all other forms of testing
The defect removal efficiency of white-box subroutine and unit testing stages is highest
when performed by developers themselves
The defect removal efficiency of specialized kinds of white-box testing such as Year
2000 testing or viral protection testing is highest when performed by professional test
personnel rather than by the developers themselves
On average
Developers do 31% of testing
Professional test personnel do 31% of testing
Quality assurance personnel do 18% of testing
Clients do 20% of testing
You're never done testing; the burden simply shifts from you to your customer
You're done testing when you run out of time or you run out of money
There is no definitive answer
On a small or local scale, we can ask: When to stop testing for a specific test
activity?
On a global scale, we can ask: When to stop all the major test activities? Because
the testing phase is usually the last major development phase before product release,
this question is equivalent to: When to stop testing and release the product?
Resource-based criteria, where decision is made based on resource consumptions. The
most commonly used such stopping criteria are
Stop when you run out of time
Stop when you run out of money
Such criteria are irresponsible, as far as product quality is concerned, although they
may be employed if product schedule or cost are the dominant concerns for the
product in question
Activity-based criteria, commonly in the form:
Stop when you complete planned test activities
This criterion implicitly assumes the effectiveness of the test activities in ensuring the
quality of the software product. However, this assumption could be questionable
without strong historical evidence based on actual data from the project concerned
No, we cannot be absolutely certain that the software will never fail, but relative to a
theoretically sound and experimentally validated statistical model, we have done
sufficient testing to say with 95% confidence that the probability of 1000 CPU hours
of failure-free operation in a probabilistically defined environment is at least 0.995.
(Musa and Ackerman)
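As an illustration of the arithmetic behind such a usage-based stopping criterion, here is a sketch that assumes, purely for illustration and not from the quotation, a constant failure intensity so that R(t) = exp(-lambda * t):

import math

target_reliability = 0.995      # required probability of failure-free operation
mission_time = 1000.0           # CPU hours

# Largest failure intensity that still meets the target:
max_failure_intensity = -math.log(target_reliability) / mission_time
print(f"failures per CPU hour must not exceed {max_failure_intensity:.2e}")
print(f"i.e., mean time to failure of at least {1 / max_failure_intensity:.0f} CPU hours")
# Roughly 5.0e-06 failures per CPU hour, or an MTTF of about 200,000 CPU hours;
# statistical testing continues until the observed data support such a claim
# at the chosen confidence level.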
Confusion
Configuration Management
Identify change
Control change
Ensure that the change is being properly implemented
Report changes to others who may be interested
If we don't control change, it will control us
It's very easy for a stream of uncontrolled changes to turn a well-run software project into
chaos
For that reason, SCM is an essential part of good project management and is a solid
software engineering practice
The items that comprise all information produced as part of the software process are
collectively called a software configuration
o Computer programs (source and executable)
o Documents that describe the computer programs
o Data
Software configuration items will grow
Change - A Constant
The number of configuration items in a software project will grow continuously and the
changes within each configuration item will also occur on a frequent basis
So, we can say that there is nothing permanent except change. [Heraclitus (500 B.C.)]
No matter where you are in the system life cycle, the system will change, and the desire
to change it will persist throughout the life cycle
Software is like a sponge due to its susceptibility to change
We emphasize this point, because this is the first step in planning and implementing a
good software configuration management process
Sources of Change
A specification or product that has been formally reviewed and agreed upon, that
thereafter serves as the basis for further development, and that can be changed only
through formal change control procedures
o IEEE Std. No. 610.12-1990
Before a software configuration item becomes a baseline, change may be made quickly
and informally
However, once a baseline is established, changes can be made, but a specific, formal
procedure must be applied to evaluate and verify each change request
In the context of software engineering, a baseline is a milestone in the development of
software that is marked by the delivery of one or more software configuration items, where
approval of these software configuration items is obtained through a formal technical
review or inspection
Typical work products that are base-lined are
o System specification
o Software requirements
o Design specification
o Source code
o Test plans/procedures/data
o Operational system
SCM is an important element of an SQA program
Its primary responsibility is the control of change
Any discussion of SCM introduces the following set of complex questions. Listen carefully
to these questions; each one of them is related to one or more aspects of software quality
SCM Questions
How does an organization identify and manage the many existing versions of a program
(and its documentation) in a manner that will enable change to be accommodated
efficiently?
How does an organization control changes before and after software is released to a
customer?
Who has the responsibility for approving and ranking changes?
How can we ensure that changes have been made properly?
What mechanism is used to apprise others of changes that are made?
To control and manage software configuration items, each item must be separately named
or numbered
The identification scheme is documented in the software configuration management plan
The unique identification of each SCI helps in organization and retrieval of configuration
items
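As an illustration only (the lecture does not prescribe any particular scheme), an identification scheme might encode the project, the type of configuration item, its name, and a version number; the format and helper functions below are hypothetical:

# Hypothetical SCI naming scheme, for illustration only:
#   <project>-<item type>-<item name>-v<major>.<minor>
# e.g. "PAYROLL-SRS-BillingRequirements-v2.1"

def make_sci_id(project, item_type, name, major, minor):
    """Build a unique identifier for a software configuration item."""
    return f"{project}-{item_type}-{name}-v{major}.{minor}"

def bump_minor(sci_id):
    """Return the identifier of the next minor revision of an SCI."""
    base, version = sci_id.rsplit("-v", 1)
    major, minor = (int(x) for x in version.split("."))
    return f"{base}-v{major}.{minor + 1}"

print(make_sci_id("PAYROLL", "SRS", "BillingRequirements", 2, 1))
print(bump_minor("PAYROLL-SRS-BillingRequirements-v2.1"))  # ...-v2.2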
How can we ensure that the approved changes have been implemented?
o Formal technical reviews/inspections
Focuses on the technical correctness of the modified item. The
reviewers/inspectors assess the SCI to determine consistency with other
SCIs, omissions, or potential side effects
o Software configuration audit
A software configuration audit complements the formal technical reviews/inspections by
assessing a configuration item for characteristics that are generally not considered during
review
The SCM audit is conducted by the quality assurance group
The status accounting function provides a corporate memory of project events that
supports accomplishment of other configuration management items
o What happened?
o Who did it?
o When did it happen?
o What else will be affected?
Change control is vital. Too much change control and we create problems. Too little, and
we create other problems
For large projects, uncontrolled change rapidly leads to chaos
For medium to large projects, change control combines human procedures and automated
tools to provide a mechanism for the control of change
We'll talk about the change control process after a few minutes
How can we ensure that the approved changes have been implemented?
o Formal technical reviews/inspections
o Software configuration audit
A software configuration audit complements the formal technical
reviews/inspections by assessing a configuration item for
characteristics that are generally not considered during review
The SCM audit is conducted by the quality assurance group
Has the change specified in the ECO been made? Have any additional modifications
been incorporated?
Has a formal technical review/inspection been conducted to assess technical correctness?
Has the software process been followed and have software engineering standards been
properly applied?
Has the change been highlighted in the SCI? Have the change date and change author
been specified? Do the attributes of the configuration item reflect the change?
Have SCM procedures for noting the change, recording it, and reporting it been followed?
Have all related SCIs been properly updated?
The status accounting function provides a corporate memory of project events that
supports accomplishment of other configuration management items
o What happened?
o Who did it?
o When did it happen?
o What else will be affected?
Each time an SCI is assigned a new or updated identification, a configuration status
reporting (or CSR) entry is made
Each time a change is approved, a CSR entry is made
Each time a configuration audit is conducted, the results are reported as part of CSR task
Output from CSR may be placed in an on-line database for easy access, and CSR reports
are issued on a regular basis to keep management and practitioners informed about
important changes
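As an illustration, a CSR entry could be a simple dated record whose fields answer the four questions above; the field names below are assumptions for the sketch, not a prescribed format:

from dataclasses import dataclass, field
from datetime import date

@dataclass
class CSREntry:
    """One configuration status reporting entry (hypothetical field names)."""
    sci_id: str            # what happened, and to which configuration item
    event: str             # e.g. "identified", "change approved", "audit completed"
    author: str            # who did it
    when: date             # when it happened
    affected_scis: list = field(default_factory=list)  # what else will be affected

log = [
    CSREntry("PAYROLL-SRS-v2.1", "change approved", "j.khan", date(2024, 3, 5),
             affected_scis=["PAYROLL-DESIGN-v1.4", "PAYROLL-TESTPLAN-v1.2"]),
]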
Configuration status reporting plays a vital role in the success of a large software
development project
When many people are involved, it is likely that the "left hand not knowing what the right
hand is doing" syndrome will occur
Two developers may attempt to modify the same SCI with different and conflicting
intents
A software engineering team may spend months of effort building software to an obsolete
hardware specification
A person who would recognize serious side effects for a proposed change is not aware
that the change is being made
Configuration status reporting helps to eliminate these problems by improving
communication among all people involved
Without proper safeguards, change control can retard progress and create unnecessary red
tape
It is relatively easy to incorporate changes before a work-product has been base-lined;
the author has the authority to incorporate changes based on the organization's policy,
project management's guidelines, and the technical needs of the project. We
only need informal change control
However, when a work-product has been base-lined after conducting a formal technical
review or inspection, we need a more formal process to incorporate changes
Before the release of software, project level change control is implemented
However, when the software product is released to the customer, strict formal change
control is instituted
Proposed changes to software work-products are reviewed, then subjected to the
agreement of project participants, and finally incorporated into the currently approved
software configuration
This requires that a separate authority, which reviews and approves all change requests,
be established for every project from among the project participants
This authority is known as change control authority (or CCA) or change control board (or
CCB)
We'll use these two terms interchangeably during this course
A CCA/CCB plays an active role in the project level change control and formal change
control activities
Typically, a CCA consists of representatives from software, hardware, database
engineering, support, and marketing, etc., depending on the size and nature of the project
For every change request, the change control authority/board assesses the
o Technical merit
o Potential side effects
o Overall impact on other configuration items and system functions
o Projected cost of the change
In addition, change control authority/board may assess the impact of change beyond the
SCI in question
o How will the change affect hardware?
o How will the change affect performance?
o How will the change modify the customer's perception of the product?
o How will the change affect product quality and reliability?
The CCA/CCB has the authority to approve or reject a change request
It can delay the request for consideration and inclusion in the next iteration also
For every approved change request, an engineering change order (or ECO) is generated,
which describes
o The change to be made
o The constraints that must be respected
o The criteria of review and audit
At this time, we have to implement the approved changes
First, we need to check out the configuration item that has to be changed from the
project database
Now we have access and authorization to make modifications to that specific SCI
Then the approved changes are made and the necessary SQA and testing activities are applied
That SCI is then checked in to the project database
The changes are now part of the new version of the base-lined work-product
As you can see, the formal change control process is a very elaborate and
comprehensive process
The check-in and check-out activities implement two important elements of change
control: access control and synchronization control
Access control governs which software engineers have the authority to access and modify
a particular configuration item
Synchronization control helps to ensure that parallel changes, performed by two different
people, don't overwrite one another
We need to implement both
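Here is a minimal sketch of how a tool might enforce both controls, assuming a simple exclusive-lock check-out/check-in policy; the class and names are illustrative only, and real SCM tools offer richer models (for example, optimistic merging):

class Repository:
    """Toy project database illustrating access control and synchronization control."""

    def __init__(self, authorized):
        self.authorized = authorized   # access control: engineer -> SCIs they may modify
        self.locks = {}                # synchronization control: SCI -> engineer holding it

    def check_out(self, engineer, sci_id):
        if sci_id not in self.authorized.get(engineer, set()):
            raise PermissionError(f"{engineer} may not modify {sci_id}")
        if sci_id in self.locks:
            raise RuntimeError(f"{sci_id} is already checked out by {self.locks[sci_id]}")
        self.locks[sci_id] = engineer

    def check_in(self, engineer, sci_id):
        if self.locks.get(sci_id) != engineer:
            raise RuntimeError(f"{engineer} does not hold the lock on {sci_id}")
        del self.locks[sci_id]          # a new baselined version would be recorded here

repo = Repository({"asma": {"PAYROLL-SRC-billing"}, "bilal": {"PAYROLL-SRC-billing"}})
repo.check_out("asma", "PAYROLL-SRC-billing")
# repo.check_out("bilal", "PAYROLL-SRC-billing")  # would raise: parallel change blocked
repo.check_in("asma", "PAYROLL-SRC-billing")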
SCM Standards
MIL-STD-483, DOD-STD-480A, MIL-STD-1521A, ANSI/IEEE Std. No. 828-1983,
ANSI/IEEE Std. No. 1042-1987, ANSI/IEEE Std. No. 1028-1988
Visibility: Control
Visibility: Auditing
Visibility: Accounting/Reporting
Traceability: Control
Makes baselines and changes to them manifest, thus providing the links in a traceability
chain
Provides the forum for avoiding unwanted excursions and maintaining convergence with
requirements
Traceability: Auditing
Checks that parts in one software product are carried through to the subsequent software
product
Checks that parts in a software product have antecedents/roots in requirements
documentation
Real-World Considerations
Management Commitment
SCM Staffing
Establishment of a CCB
SCM During the Acceptance Testing Cycle
Justification and Practicality of Auditing
Avoiding the Paperwork Nightmare
Allocating Resources among SCM Activities
Management Commitment
Management commitment to the establishment of checks and balances is essential to
achieving benefits from SCM
Management Commitment for SCM
SCM Staffing
Initial staffing by a few experienced people quickly gains the confidence and respect of
other project team members
It is important to build the image that the SCM team's objective is to help the other
project team members achieve the overall team goals, and that they are not a group of
obstructionists and critics
An SCM organization requires
o Auditors
o Configuration control specialists
o Status accountants
These positions require hard work and dedication
Important qualifications of these people are the ability to see congruence (similarity)
between software products and the ability to perceive what is missing from a software
product
With these abilities, the SCM team member can observe how change to the software
system is visibly and traceably being controlled
Should all SCM personnel be skilled programmers and analysts?
No, these particular skills are not a necessity by any means, although personnel
performing software configuration auditing should be technically skilled
Establishment of a CCB
As a starting point in instituting SCM, periodic CCB meetings provide change control,
visibility, and traceability
The CCB meeting is a mechanism for controlling change during the development and
maintenance of software
CCB membership should be drawn from all organizations on the project
Decision mechanism
CCB chairperson
CCB minutes
When changes are necessary, a CCB meets to evaluate and manage the impact of change
on the software development process
The impact of change can be countered in only three ways
o Add more people to the project to reduce the impact of the change
o Extend the time to completion
o Eliminate other nonessential or less essential functionality
If a small amount of code is changed, it is reworked into the old code; if a large amount
is changed, a complete subsystem is redesigned as though it were a new product
From a maintenance point of view, IBM followed this rule of thumb: if 20% of the code
must be modified, then the module should be redesigned and rewritten
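Expressed as a small decision sketch (only the 20% threshold comes from the rule of thumb quoted above; the function and numbers are illustrative):

def maintenance_strategy(changed_loc, total_loc, threshold=0.20):
    """Apply the quoted rule of thumb: rewrite the module if the change is large."""
    fraction = changed_loc / total_loc
    if fraction >= threshold:
        return "redesign and rewrite the module"
    return "rework the change into the existing code"

print(maintenance_strategy(changed_loc=180, total_loc=1000))   # 18% -> rework
print(maintenance_strategy(changed_loc=300, total_loc=1000))   # 30% -> redesign and rewrite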
Testing: Control
CCB meetings
o Establishment of development baseline
o Assignment of testing and incident resolution priorities
o Establishment of turnover dates
o Approval of audit and test reports
o Approval of incident report resolutions
o Establishment of operational baseline
Testing: Auditing
Testing: Accounting/Reporting
Acquire highly reliable and redundant physical storage and processing elements for the
software repository
Identify a configuration management administrator, who is responsible for numerous tasks.
The key tasks include
o The creation of configuration management accounts and the assignment of
capabilities to them
o The enforcement of defined configuration management policies and procedures
o The building of internal and external deliveries
Define a backup procedure to regularly back up configuration management repositories to
nonvolatile storage and periodically purge them of redundant or useless data. This
procedure should identify when incremental and full backups are done
Define a procedure that verifies that the backup process functions correctly
Determine whether work must be authorized. If work must be authorized, then:
o Establish a change control board
o Assign people to the change control board
o Define rules for approving changes to artifacts
Identify the number of development lines. Typically, one is sufficient. If more than one
development line is needed, then:
o Specify the frequency of the integration of each development line
Identify the number of new tasks that an individual can work on simultaneously
Determine whether parallel development is permitted
Determine if branches can be created for tasks other than parallel development or the
development of releases. If branches can be created then:
o Identify who can create the branches
o Specify under what conditions the branches can be created
o Establish the criteria for determining when merges are performed
Determine who can create workspaces
Specify standard workspaces
Identify what information should be specified for each new development task, change
request, and anomaly report. Consider performing the following as required actions for
each change request and anomaly report
o Estimate the size of the change
o Identify any alternative solutions
o Identify the complexity of the change and the impact on other systems
o Identify when the need exists
o Identify the effect the change will have on subsequent work
o Estimate the cost of the change
o Identify the criticality of the change request
o Identify if another change request will solve this problem
o Identify the effort to verify the change
o Identify who will verify the change
o Identify whether the right people are available to work on the request
o Identify the impact on critical system resources, if this is an issue
o Identify the length of time that the change request has been pending
Select metrics to gather (a small computation sketch for a few of these follows at the end of this checklist)
o Number of change requests submitted
o Number of change requests reviewed and approved for resolution
o Number of change requests resolved and length of resolution
o Number of anomaly reports submitted
o Number of anomaly reports reviewed and approved for correction
o Number of anomaly reports corrected and length of correction
o Number of artifacts changed
o Number of artifacts changed more than once (these should be characterized by the
number of changes and frequency of the changes)
Acquire a configuration management tool that is able to manage software configurations,
document identified software defects, and produce software releases
Automate the policies, practices, and procedures as much as possible
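As flagged in the "Select metrics to gather" item above, here is a sketch of how a few of those metrics might be computed from a simple change-request log; the log format and field meanings are assumptions made for illustration:

from collections import Counter

# Hypothetical change-request log: (id, status, days_open, artifacts_touched)
change_requests = [
    ("CR-101", "resolved",  12, ["billing.c", "billing.h"]),
    ("CR-102", "approved",   5, ["report.c"]),
    ("CR-103", "submitted",  2, []),
    ("CR-104", "resolved",  30, ["billing.c"]),
]

submitted = len(change_requests)
approved  = sum(1 for _, s, _, _ in change_requests if s in ("approved", "resolved"))
resolved  = [days for _, s, days, _ in change_requests if s == "resolved"]
touched   = Counter(a for _, _, _, arts in change_requests for a in arts)

print(f"Change requests submitted: {submitted}")
print(f"Reviewed and approved:     {approved}")
print(f"Resolved: {len(resolved)}, average resolution time: {sum(resolved)/len(resolved):.1f} days")
print(f"Artifacts changed more than once: {[a for a, n in touched.items() if n > 1]}")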
Larger outsource vendors are often quite expert in implementing change control
mechanisms
Once an application has been deployed, new features and modifications average
approximately 7% per year for several years in a row (i.e., new and changed features will
approximate 7% of function points)
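As a rough worked example (assuming, for simplicity, that the 7 percent applies to the originally delivered size each year rather than compounding):

initial_size_fp = 1000          # delivered application size in function points
annual_change_rate = 0.07       # roughly 7% of function points added or changed per year

for year in range(1, 6):
    changed_fp = initial_size_fp * annual_change_rate
    print(f"Year {year}: about {changed_fp:.0f} function points of new or changed features")
# Over five years this is roughly 350 FP of change against a 1,000 FP application,
# which is why change control and change costing clauses matter in outsource contracts.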
Change Estimation and Costing in Contracts
o Specific clauses for change control are included in the outsource agreements
o The forms of the clauses vary with the specific needs of the client, but are often based on
the predicted volume of the changes
o The initial set of requirements has a fixed price, but new requirements will be
included at a higher price
Function Point Metrics for Changes
o Estimate and measure the function point totals of all changes to software projects
larger than 15 function points
Change Control Boards
o For all projects larger than 5,000 function points
o CCB usually has three to seven people
Automated Change Control
o For all deliverables, which include requirements, specifications, design
documents, source code, test plans, user documentation, and training material
o Package should flag recommended changes
o Automated change control tools that support only source code are not adequate for
projects larger than 100 function points
For these kinds of projects, changes during development can occur for a much wider
variety of reasons than those found with internal information systems
Systems software controls physical devices
Change Control Boards
o For all projects larger than 10,000 function points
o CCB usually has three to seven people
Primary client
Project office
Development team
Hardware portion of the application
Automated Change Control
o For all deliverables, which include requirements, specifications, design
documents, source code, test plans, user documentation, and training material
o Package should flag recommended changes
o Automated change control tools that support only source code are not adequate for
projects larger than 100 function points
Commercial software vendors may market the same application on different hardware
platforms
They may offer the same application in different national languages
When major changes occur, they affect dozens of versions at the same time
Change control is a key technology for commercial software vendors
Automated Change Control
The military software community was an early adopter of change control packages
Change control starts during initial development and continues until an application is
retired
Change Control Boards
o For all projects larger than 10,000 function points
o CCB usually has three to seven people
Primary client
Project office
Development team
Hardware portion of the application for hybrid projects
Automated Change Control
o For all deliverables, which include requirements, specifications, design
documents, source code, test plans, user documentation, and training material
o Package should flag recommended changes
o This is one of the 16 best practices identified by the Airlie Council
Function Point Metrics for Changes
o Estimate and measure the function point totals of all changes to military projects
o This data can be used to ascertain the monthly rate of requirements creep
Cost Estimates for Changes
o Cost-estimating changes and cost measurement of changes are both difficult
o Use automated estimation tools and function point metrics
Requirements Tracing and Changes
Introduction
Reference Documents, Definitions, Acronyms
Management
Activities
Resources
Plan Maintenance
Plan Approval
Introduction
This section provides a complete list of documents referenced elsewhere in the text of the
SCM Plan. By definition, these documents originate outside the project. Also included in
this section is a glossary of project specific terms and their definitions and a list of
project-specific abbreviations and acronyms and their meaning
Reference Documents
Glossary of Terms
Abbreviations and Acronyms
Management
o This section depicts the organizational context, both technical and managerial,
within which the prescribed software configuration management activities are to
be implemented
Responsibilities
o Describes the allocation of software configuration management activities to
organizational units
Policies, Directives and Procedures
o Any external constraints, or requirements, placed on the SCM Plan by other
policies, directives, or procedures must be identified here. A detailed impact
analysis should accompany the identification of external constraints
Activities
Identifies all functions and tasks required to manage the configuration of the software
system as specified in the scope of the SCM Plan. Both technical and managerial
activities must be identified
Configuration Identification
o Identify, name, and describe the documented physical and functional
characteristics of the code, specification, design, and data elements to be
controlled for the project. The Plan must identify the items to be maintained in
configuration management control
o Identifying configuration items
o Naming configuration items
o Acquiring configuration items
Configuration Control
o Configuration control activities request, evaluate, approve or disapprove, and
implement changes to the software configuration items. Changes include both
error correction and enhancement. This section shall identify the records to be
used for tracking and documenting the sequence of steps for each change
o Requesting changes
o Evaluating changes
o Approving or disapproving changes
o Implementing changes
Configuration Status Accounting
o Record and report the status of configuration items. The following minimum data
elements should be tracked and reported for each configuration management item:
Approved version
Status of requested changes
Implementation status of approved changes
Configuration Audits and Reviews
o Configuration audits determine the extent to which the actual configuration
management items reflect the required physical and functional characteristics.
Configuration reviews may also be used as a management tool to ensure
that a software configuration management baseline is established
Interface Control
o Coordinates changes to the project's configuration management items with
changes to interfacing items outside the scope of the SCM Plan
Subcontractor/Vendor Control
o For acquired software, the Software Configuration Management Plan shall
describe how the vendor software will be received, tested, and placed under
software configuration management control
o For both subcontracted and acquired software, the SCM Plan must define the
activities to be performed to incorporate the externally developed items into
project configuration management and to coordinate changes to these items
Resources
Establishes the sequence and coordination for all the software configuration management
activities and all the events affecting the Plan's implementation
Schedules
o Schedule information shall be expressed as absolute dates, as dates relative to
other project activities, as project milestones, or as a simple sequence of events.
Graphic representation can be particularly appropriate for conveying this
information
Resources
o Identifies the software tools, techniques, equipment, personnel, and training
necessary for the implementation of software configuration management activities
Plan Maintenance
Identifies and describes the activities and responsibilities necessary to ensure continued
software configuration management planning during the life cycle of the project. This
section of the SCM Plan should state the following:
o Who is responsible for monitoring the SCM Plan
o How frequently updates are to be applied
o How changes to the SCM Plan are to be evaluated and approved
o How changes to the SCM Plan are to be made and communicated
Plan Approval
References for SCM Plan
SCM Plan based on IEEE Standard for Software Configuration Management Plans (Std
828-1990) and the IEEE Guide to Software Configuration Management (Std 1042-1987)
Managing the Software Process by Watts S. Humphrey, Addison-Wesley, 1989 (Chapter 12.1) [pp. 228-232]
SQA Plan
Every development and maintenance project should have a software quality assurance
plan (SQAP) that specifies:
o Its goals
o The SQA tasks to be performed
o The standards against which the development work is to be measured
o The procedures and organizational structure
Purpose
Reference documents
Management
Documentation
Standards, practices, and conventions
Reviews and audits
Software configuration management
Test
Problem reporting and corrective action
Tools, techniques, and methodologies
Media control
Supplier control
Records collection, maintenance, and retention
Purpose
This section documents the purpose of this Software Quality Assurance (SQA) Plan
It documents the goals, processes, and responsibilities required to implement effective
quality assurance functions for the current project
Scope
o Defines the scope of the SQAP in different activities of the software life cycle and
even through maintenance
Reference Documents
Lists the documents referenced in this software quality assurance plan
Management
This section describes the management organizational structure, its roles and
responsibilities, and the software quality tasks to be performed
Management organization
Tasks
Roles and responsibilities
Software assurance estimated resources
Management Organization
o Describes the support of entities, organizations and personnel
o Relevant entities/roles that are of interest and applicable to this SQA Plan and the
software assurance effort specifically include
Project office
Assurance management office
Tasks
o This section summarizes the tasks (product and process assessments) to be
performed during the development, operations, and maintenance of software
o These tasks are selected based on the developer's Project Schedule, Software
Management Plan (SMP) (and/or Software Maintenance Plan) and planned
deliverables, contractual deliverables, and identified reviews
o Product assessments
Peer Review packages
Document Reviews
Software Development Folders
Software Configuration Management
Test results
o Process assessments
Project Planning
Project Monitoring and Control
Measurement and Analysis
System/Subsystem Reviews
Peer Reviews
Requirements Management
Software Configuration Management and Configuration Audits
Test Management (Verification & Validation)
Software Problem Reporting and Corrective Action
Risk Management
Supplier Agreement Management
Roles and Responsibilities
o This section describes the roles and responsibilities for each assurance person
assigned to a project
Software Assurance Manager
Software Quality Personnel
Software Assurance Estimated Resources
o Staffing to support software assurance (i.e., quality, safety, and reliability)
activities must be balanced against various project characteristics and constraints,
including cost, schedule, maturity level of the providers, criticality of the software
being developed, return on investment, perceived risk, etc.
Documentation
This section highlights the standards, practices, quality requirements, and metrics to be
applied to ensure a successful software quality program
This section specifies a minimum content of:
o Documentation standards
o Logic structure standards
o Coding standards
o Commentary standards
Software Quality Program
Standard Metrics
This section discusses major project reviews conducted by SQA staff and software team
members
Generic review guidelines
o A set of guidelines for all formal technical reviews or inspections is presented in
this section
o Conducting a review
General guidelines for conducting a review
o Roles and responsibilities
The roles people play during a FTR or inspection and the responsibilities
of each player
o Review work products
Documents, forms, lists produced as a consequence of the FTR/inspection
Formal technical reviews/inspections
o A description of the specific character and the intent of each major
FTR/inspection conducted during the software process
SQA audits
o A description of audits performed by the SQA group with the intent of assessing
how well SQA and software engineering activities are being conducted on a
project
A brief overview of the content of the software configuration management (SCM) plan is
presented here
Alternatively, the SCM plan is referenced
Test
SQA personnel will assure that the test management processes and products are being
implemented per the Software Management Plan and/or Test Plans
This includes all types of testing of software system components as described in the test
plan, specifically during integration testing (verification) and acceptance testing
(validation).
This section describes problem reporting mechanisms that occur as a consequence of the
formal technical reviews or inspections that are conducted and the means for corrective
action and follow-up
Reporting mechanisms
o Describes how and to whom problems are reported
Responsibilities
o Describes who has responsibility for corrective actions and follow-up
Data collection and evaluation
o Describes the manner in which error/defect data are collected and stored for future
or real-time evaluation
Statistical SQA
o Describes the quantitative techniques that will be applied to error/defect data in an
effort to discern trends and improvement
SQA personnel will conduct off-site surveillance activities at supplier sites on software
development activities
SQA personnel will conduct a baseline assessment of the suppliers' Quality Management
Systems (QMS) to ensure that the suppliers have quality processes in place. This initial
assessment will help to scope the level of effort and follow-on activities in the area of
software quality assurance
SQA personnel will maintain records that document assessments performed on the
project. Maintaining these records will provide objective evidence and traceability of
assessments performed throughout the project's life cycle
Example records include the process and product assessments reports, completed
checklists, the SQA Activity Schedule, metrics, weekly status reports, etc.
Managing the Software Process by Watts S. Humphrey, Addison-Wesley, 1989 (Chapter 8.4) [pp. 147-150]
SQA Plan Template by R.S. Pressman & Associates, Inc.,
http://www.rspa.com/docs/sqaplan.html
IEEE template for SQA Plan
Process assurance makes certain that the process for building and delivering software is
robust and allows for the delivery and maintenance of the products
Process assurance consists of the collective activities carried out while developing a
product to ensure that the methods and techniques used are integrated, consistent, and
correctly applied
Emphasis is given to cost, time, technical requirements, testing measurements, and
prototyping
Process assurance involves the interrelationships of several different components.
Depending on how these are managed, they can have a major positive impact on the
products
Once an effective process assurance program is put in place and shown to be beneficial,
then emphasis can be placed in making verification and validation strategies effective and
in improving the quality of the products
Successful process assurance is based on planning and organization
There are several important aspects of planning and organization that must be considered
before starting the project
I'll show you a picture that captures the components of planning and organization
Project Team
The project team is the project manager's only means of reaching the project goals
Formation of the project team is vital to success
Size of the team depends on the size and complexity of the project
Right mix of technical knowledge and experience
Project Standards
Before the project is started, standards should be established for activities like
requirements gathering, design development, and unit testing
Standards should also be developed for quality control activities, like walkthroughs,
reviews, and inspections
Many companies follow IEEE software engineering standards or they have their
internally developed standards
Standards should be flexible enough to be applied to large or small projects
Any deviations from the standards should be approved by the project team and the reason
for such deviation should be noted in the minutes of the project meetings
Schedule Monitoring
Stringent deadlines for the project are frequently established by management, end users, a
project sponsor, or a client with no regard to the reality of achievement
The project manager is then designated to meet unrealistic expectations of the project
completion date
For this reason, the project start date, milestones, and completion date should be
negotiated upfront
If the unrealistic date is accepted and the project activities are then made to fit within this
time frame, the quality of the project certainly will suffer
The key to an on-time project lies in the ability to identify the critical path before
starting the project
The critical path of a project is where problems that may affect the overall schedule are
faced
Develop systematic work breakdown structures which identify task groupings (tasks that
can be combined together), task sequences, and entrance/exit criteria for each task
To define tasks, follow the guidelines of the system development methodology used by
your organization
In the absence of a development methodology, obtain copies of task lists and task
dependencies from other projects and customize them to suit your needs of the current
project
Clearly defined work breakdown structures will assist in selecting the correct skilled
resources
At the same time, using the breakdown structures also ensures that no activity is forgotten
The technique of breaking down activities into smaller tasks takes an impossibly complex
project and reorganizes it into manageable pieces under the direction of the project
manager
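A minimal sketch of the critical-path idea follows; the task durations and dependencies are a hypothetical work breakdown structure fragment, and the point is simply that the longest dependency chain determines the earliest possible completion date:

from functools import lru_cache

# Hypothetical WBS fragment: task -> (duration in days, list of prerequisite tasks)
tasks = {
    "requirements": (10, []),
    "design":       (15, ["requirements"]),
    "coding":       (20, ["design"]),
    "test plan":    ( 5, ["requirements"]),
    "testing":      (10, ["coding", "test plan"]),
}

@lru_cache(maxsize=None)
def earliest_finish(task):
    """Earliest finish of a task = its duration plus the latest prerequisite finish."""
    duration, prereqs = tasks[task]
    return duration + max((earliest_finish(p) for p in prereqs), default=0)

project_length = max(earliest_finish(t) for t in tasks)
critical_end = max(tasks, key=earliest_finish)
print(f"Earliest completion: {project_length} days (critical chain ends at '{critical_end}')")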
Once you have defined the critical path, review the tasks and schedule with the project
team members and other significantly impacted individuals
These people are the stakeholders and are affected by the project in one or more of
the following ways:
o Their budget is charged for all or part of the project
o The departments resources are used by the project
o The department has either existing projects or ongoing projects that are affected
by the new project
Avoid the most common mistake of adding another resource to shorten or meet the
schedule
Project Tracking
Estimation
Realistic estimates allow you to discuss alternative approaches at the start of the project
Estimates are not foolproof
Allow time for resource management and unforeseen events, like the illness of a team
member
Revise estimates, and update plans
Effective Communication
Steering Committee
A committee responsible for defining project policy, reviewing the project milestones,
and evaluating risk factors must be established
Members of the committee should represent all the impacted areas of the business. They
should be knowledgeable enough to make informed technological decisions and be able
to change the course if needed
It is responsible for
o Estimating the time that will be required to maintain the system
o Deciding on the type of support required from the operations for the running of
the system
o Deciding when the data will be available and how it will be managed, reported,
and used
o Forming a configuration control board (CCB) that manages the impact of changes
Project Risks
Every project has risks associated with it, some more than others
There is a need to identify and address the risk factors upfront
All risk factors should be discussed with the project team, management, and users
A risk mitigation policy needs to be developed
Measurement
Establishing measurement criteria, against which each phase of the project will be
evaluated, is vital
When exit criteria are well defined, it is sufficient to evaluate the outcome of each phase
against the exit criteria and move forward
If the outcome of each phase does not meet the performance criteria, the project manager
should be able to control the project by evaluating the problems, identifying the
deviations, and implementing new processes to address the deviations
The pre-established quality goals for the project can also serve as criteria against which
the project can be measured
Processes should be established to
o Enable the organization to address customer complaints
o Give the organization statistics regarding the types of customer calls
o Incorporate reporting and handling of customer problems
o Enable management to make staffing decisions based on the number of customer
calls
Integrated Technology
This will empower the management to react to the operational needs of the business and,
at the same time, take an inventory of the current status of various systems, projects, and
the ability of technical staff to support any future projects
The IT trends, competitors, and demands of the customers should be visible to the
management
Parts of the new system that will be interfacing with existing system should be identified
so that the impact can be evaluated
If technology is new and not well understood, allowances to incorporate experiments
should be made in the overall project plan and schedule
Lack of Management Support
Lack of User Involvement
Lack of Project Leadership
Lack of Measures of Success
Common Misconceptions
Everyone realizes the importance of having a motivated, quality work force but even our
finest people cannot perform at their best when the process is not understood or operating
at its best
Process, people, and technology are the major determinants of product cost, schedule, and
quality
The quality of a system is highly influenced by the quality of the process used to acquire,
develop, and maintain it
While process is often described as a node of the process-people-technology triad, it can
also be considered the glue that ties the triad together
Design processes that can meet or support business and technical objectives
Identify and define the issues, models, and measures that relate to the performance of the
processes
Provide infrastructures (the set of methods, people, and practices) that are needed to
support software activities
Ensure that the software organization has the ability to execute and sustain the processes
o Skills
o Training
o Tools
o Facilities
o Funds
Controlling a process means keeping it within its normal (inherent) performance boundaries,
that is, making the process behave consistently
Measurement
o Obtaining information about process performance
Detection
o Analyzing the information to identify variations in the process that are due to
assignable causes
Correction
o Taking steps to remove variation due to assignable causes from the process and to
remove the results of process drift from the product
Determine whether or not the process is under control (is stable with respect to the
inherent variability of measured performance)
Identify performance variations that are caused by process anomalies (assignable causes)
Estimate the sources of assignable causes so as to stabilize the process
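A control-chart style sketch of the detection step is shown below, assuming the conventional three-sigma limits computed from a period when the process was believed stable; the numbers are made up for illustration:

from statistics import mean, stdev

# Hypothetical baseline measurements from a period when the process was believed stable,
# e.g. defects found per inspection
baseline = [4, 5, 3, 6, 4, 5, 4, 5, 4, 6]
centre = mean(baseline)
sigma = stdev(baseline)
upper, lower = centre + 3 * sigma, max(0.0, centre - 3 * sigma)

# New observations checked against the inherent (common-cause) variation
new_samples = [5, 4, 12, 5]
flagged = [(i, x) for i, x in enumerate(new_samples, start=1) if not lower <= x <= upper]

print(f"Centre line {centre:.1f}, three-sigma limits [{lower:.1f}, {upper:.1f}]")
print("Observations suggesting an assignable cause:", flagged)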
Once a process is under control, sustaining activities must be undertaken to forestall the
effects of entropy. Without sustaining activities, processes can easily fall victim to the
forces of ad hoc change or disuse and deteriorate to out-of-control states
This requires reinforcing the use of defined processes through continuing management
oversight, measurement, benchmarking, and process assessments
Understand the characteristics of existing processes and the factors that affect process
capability
Plan, justify, and implement actions that will modify the processes so as to better meet
business needs
Assess the impacts and benefits gained, and compare these to the costs of changes made
to the processes
Process improvement should be done to help the business not for its own sake
Explain the problem, discuss why the change is necessary, and spell out the reasons in
terms that are meaningful
Create a comfortable environment where people will feel free to openly voice their
concerns and their opinions
Explain the details of the change, elaborate on the return on investment, how it will affect
the staff, and when the change will take place
Explain how the change will be implemented and measured
Identify the individuals who are open-minded to accept the change more easily
Train employees to help them acquire needed skills
Encourage team work at all times and at all levels
Address each concern with care so there is no fear left and value each opinion
Make decisions based on factual data rather than opinions or gut feelings
Enforce decisions to reinforce the change
More than half of the large software systems were late in excess of 12 months
The average cost of large software systems was more than twice the initial budget
The cancellation rate of large software systems exceeded 35%
The quality and reliability levels of delivered software of all sizes were poor
Software personnel were increasing by more than 10% per year
Software was the largest known business expense which could not be managed
SEI developed a Capability Maturity Model (CMM) for software systems and an
assessment mechanism
CMM has five maturity levels
o Initial
o Repeatable
o Defined
o Managed
o Optimizing
Organizations have introduced at least some rigor into project management and technical
development tasks
Approaches such as formal cost estimating are noted for project management, and formal
requirements gathering is often noted during development
Compared to initial level, a higher frequency of success and a lower incidence of
overruns and cancelled projects can be observed
In terms of People CMM, level 2 organizations have begun to provide adequate training
for managers and technical staff
They become aware of professional growth and the need for selecting and keeping capable
personnel
Key process areas
o Requirements management
o Software project planning
o Software project tracking and oversight
o Software subcontract management
o Software quality assurance
o Software configuration management
Organizations have mastered a development process that can often lead to successful
large systems
Over and above the project management and technical approaches found in Level 2
organizations, the Level 3 groups have a well-defined development process that can
handle all sizes and kinds of projects
In terms of People CMM, the organizations have developed skills inventories
Capable of selecting appropriate specialists who may be needed for critical topics such as
testing, quality assurance, web mastery, and the like
Key process areas
o Organization process focus
o Organization process definition
o Training
o Software product engineering
o Peer reviews
o Integrated software management
o Inter-group coordination
Key process areas for People CMM
o Career development
o Competency-based practices
o Work force planning
o Analysis of the knowledge and the skills needed by the organization
Organizations have established a firm quantitative basis for project management and
utilize both effective measurements and also effective cost and quality estimates
In terms of People CMM, organizations are able to not only monitor their need for
specialized personnel, but are actually able to explore the productivity and quality results
associated with the presence of specialists in a quantitative way
Able to do long-range predictions of needs
Mentoring
Key process areas
o Software quality management
o Quantitative software management
Key process areas for People CMM
o Mentoring
o Team building
o Organizational competency
Key process areas of the CMM by maturity level (grouped into Management, Organizational, and Engineering categories):
5 Optimizing: Defect Prevention, Technology Change Management, Process Change Management
4 Managed: Quantitative Software Management, Software Quality Management
3 Defined: Organization Process Focus, Organization Process Definition, Training Program, Integrated Software Management, Software Product Engineering, Intergroup Coordination, Peer Reviews
2 Repeatable: Requirements Management, Software Project Planning, Software Project Tracking and Oversight, Software Subcontract Management, Software Quality Assurance, Software Configuration Management
1 Initial: Ad Hoc Processes
Level 1 Quality
Software defect potentials run from 3 to more than 15 defects per function point, but
average is 5 defects per function point
Defect removal efficiency runs from less than 70% to more than 95%, but average is 85%
Average number of delivered defects is 0.75 defects per function point
Several hundred projects surveyed
Level 2 Quality
Software defect potentials run from 3 to more than 12 defects per function point, but
average is 4.8 defects per function point
Defect removal efficiency runs from less than 70% to more than 96%, but average is 87%
Average number of delivered defects is 0.6 defects per function point
Fifty (50) projects surveyed
Level 3 Quality
Software defect potentials run from 2.5 to more than 9 defects per function point, but
average is 4.3 defects per function point
Defect removal efficiency runs from less than 75% to more than 97%, but average is 89%
Average number of delivered defects is 0.47 defects per function point
Thirty (30) projects surveyed
Level 4 Quality
Software defect potentials run from 2.3 to more than 6 defects per function point, but
average is 3.8 defects per function point
Defect removal efficiency runs from less than 80% to more than 99%, but average is 94%
Average number of delivered defects is 0.2 defects per function point
Nine (9) projects surveyed
Level 5 Quality
Software defect potentials run from 2 to 5 defects per function point, but average is 3.5
defects per function point
Defect removal efficiency runs from less than 90% to more than 99%, but average is 97%
Average number of delivered defects is 0.1 defects per function point
Four (4) projects surveyed
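The three figures quoted for each level are roughly consistent with the simple relationship delivered defects = defect potential * (1 - removal efficiency); a quick check using only the averages quoted above:

# (average defect potential per FP, average removal efficiency) as quoted for CMM levels 1-5
levels = {1: (5.0, 0.85), 2: (4.8, 0.87), 3: (4.3, 0.89), 4: (3.8, 0.94), 5: (3.5, 0.97)}

for level, (potential, efficiency) in levels.items():
    delivered = potential * (1 - efficiency)
    print(f"CMM level {level}: about {delivered:.2f} delivered defects per function point")
# Matches the quoted averages of roughly 0.75, 0.6, 0.47, 0.2, and 0.1 defects per FP.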
Benefits of CMMI
Background of CMMI
Staged Representation
The staged representation is the approach used in the Software CMM. It is an approach
that uses predefined sets of process areas to define an improvement path for an
organization. This improvement path is described by a model component called a
maturity level
A maturity level is a well-defined evolutionary plateau toward achieving improved
organizational processes
Continuous Representation
The continuous representation is the approach used in the SECM and the IPD-CMM. This
approach allows an organization to select a specific process area and improve relative to it
The continuous representation uses capability levels to characterize improvement relative
to an individual process area
STAGED REPRESENTATION
CMMI Model Components in the Staged Representation
Maturity Level
Maturity level signifies the level of performance that can be expected from an
organization
There are five maturity levels, i.e., Adhoc, Managed, Defined, Quantitatively Managed, and
Optimizing.
Process Areas
Goals
Each PA has several goals that need to be satisfied in order to satisfy the objectives of the
PA. There are two types of goals:
o Specific goals (SG): goals that relate only to the specific PA under study
o Generic goals (GG): goals that are common to multiple process areas throughout
the model. These goals help determine whether the PA has been institutionalized
Practices
Practices are activities that must be performed to satisfy the goals for each PA. Each
practice relates to only one goal. There are two types of practices:
Specific practices (SP): practices that relate to specific goals
Generic practices (GP): practices associated with the generic goals for institutionalization
Level 1: Adhoc
Level 2: Managed
Level 3: Defined
The organization has achieved all of the goals of Level 2. There is an organizational way
of doing business, with tailoring of this organizational method allowed under predefined
conditions. The organization has an organizations set of standard processes (OSSP)
The following characteristics of the process are clearly stated
o Purpose, Inputs, Entry criteria, Activities, Roles, Measures, Verification steps,
Outputs, Exit criteria
Level 3 continues with defining a strong, meaningful, organization-wide approach to
developing products. An important distinction between Level 2 and Level 3 is that at
Level 3, processes are described in more detail and more rigorously than at Level 2.
Processes are managed more proactively, based on a more sophisticated understanding of
the interrelationships and measurements of the processes and parts of the processes. Level
3 is more sophisticated, more organized, and establishes an organizational identitya
way of doing business particular to this organization
Level 5 is nirvana
Everyone is a productive member of the team, defects are reduced, and your product is
delivered on time and within the estimated budget
CONTINUOUS REPRESENTATION
CMMI Model Components in the Continuous Representation
The continuous representation uses the same basic structure as the staged representation.
However, each PA belongs to a Process Area Category. A Process Area Category is just a
simple way of arranging PAs by their related, primary functions
Capability levels are used to measure the improvement path through each process area
from an unperformed process to an optimizing process. For example, an organization may
wish to strive for reaching capability level 2 in one process area and capability level 4 in
another. As the organization's process reaches a capability level, it sets its sights on the
next capability level for that same process area or decides to widen its scope and create
the same level of capability across a larger number of process areas
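For illustration only, a target capability profile of the kind described above might be recorded as a simple mapping from process areas to capability levels; the particular targets below are hypothetical:

# Hypothetical target profile under the continuous representation:
# different process areas can aim for different capability levels (0-5).
target_profile = {
    "Requirements Management":         2,
    "Configuration Management":        3,
    "Measurement and Analysis":        2,
    "Quantitative Project Management": 4,
}

current_profile = {
    "Requirements Management":         2,
    "Configuration Management":        1,
    "Measurement and Analysis":        2,
    "Quantitative Project Management": 1,
}

gaps = {pa: (current_profile[pa], target) for pa, target in target_profile.items()
        if current_profile[pa] < target}
print("Process areas still below their target capability level:", gaps)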
Process Management
Organizational Process Focus
Organizational Process Definition (with Integrated Product and Process Development IPPD)
Organizational Training
Organizational Process Performance
Organizational Innovation and Deployment
Project Management
Project Planning
Project Monitoring and Control
Supplier Agreement Management
Integrated Project Management (with Integrated Product and Process Development
IPPD)
Risk Management
Quantitative Project Management
Engineering
Requirements Development
Requirements Management
Technical Solution
Product Integration
Verification
Validation
(listed in increasing order of complexity)
Support
Configuration Management
Process and Product Quality Assurance
Measurement and Analysis
Decision Analysis and Resolution
Causal Analysis and Resolution
Specific goals and practices relate to specific process areas and relate to tasks that make
sense for that process area only. For example, Project Planning requires a project plan.
Quantitative Project Management requires a process performance baseline
Generic goals and practices relate to multiple process areas.
CMMI focuses on institutionalization. Goals cannot be achieved without proving
institutionalization of the process. Generic goals and generic practices support
institutionalization and increasing sophistication of the process. Specific goals and
specific practices support implementation of the process area. Process maturity and
capability evolve. Process improvement and increased capability are built in stages
because some processes are ineffective when others are not stable
The continuous representation has the same basic information as the staged
representation, just arranged differently; that is, in capability levels not maturity levels,
and process area categories. The continuous representation focuses process improvement
on actions to be completed within process areas, yet the processes and their actions may
span different levels. More sophistication in implementing the practices is expected at the
different levels. These levels are called capability levels
There are six capability levels
Level 0: Incomplete
Level 1: Performed
Level 2: Managed
Level 3: Defined
Level 4: Quantitatively Managed
Level 5: Optimizing
What's a capability level? Capability levels focus on maturing the organization's ability
to perform, control, and improve its performance in a process area. This ability allows the
organization to focus on specific areas to improve performance of that area
Level 0: Incomplete
An incomplete process does not implement all of the Capability Level 1 specific practices in
the process area that has been selected. This is tantamount to Maturity Level 1 in the staged
representation
Level 1: Performed
A Capability Level 1 process is a process that is expected to perform all of the Capability
Level 1 specific practices. Performance may not be stable and may not meet specific
objectives such as quality, cost, and schedule, but useful work can be done
This is only a start, or baby step, in process improvement. It means you are doing
something, but you cannot prove that it is really working for you
Level 2: Managed
A managed process is planned, performed, monitored, and controlled for individual projects,
groups, or stand-alone processes to achieve a given purpose. Managing the process achieves
both the model objectives for the process as well as other objectives, such as cost, schedule,
and quality. As the title of this level states, you are actively managing the way things are
done in your organization. You have some metrics that are consistently collected and applied
to your management approach
Level 3: Defined
A defined process is a managed process that is tailored from the organization's set of
standard processes. Deviations beyond those allowed by the tailoring guidelines are
documented, justified, reviewed, and approved. The organization's set of standard processes
is just a fancy way of saying that your organization has an identity. That is, there is an
organizational way of doing work that differs from the way another organization within your
company may do it
Measurements in SE
We fail to set measurable targets for our software products. For example, we promise that
the product will be user-friendly, reliable and maintainable without specifying clearly and
objectively what these terms mean
Projects without clear goals will not achieve their goals clearly (Gilb)
We fail to understand and quantify the component costs of software projects. For
example, most projects cannot differentiate the cost of design from the cost of coding and
testing
We do not quantify or predict the quality of the products we produce
We allow anecdotal evidence to convince us to try yet another revolutionary new
development technology, without doing a carefully controlled study to determine if the
technology is efficient and effective
Engineers Perspectives
Benefits of Metrics
Determine the skill level and the number of resources required to support a given
application
Identify programs that require special attention or additional maintenance time
Identify complex programs that may cause unpredictable results
Provide constructive means of making decisions about product quality
Cost of Metrics
Process metrics are those that can be used for improving the software development and
maintenance process
Examples include the effectiveness of defect removal during development, the pattern of
testing defect arrival, and the response time of the fix process
Project Metrics
Project metrics are those that describe the project characteristics and execution
Examples include the number of software developers, the staffing pattern over the life
cycle of the software, cost, schedule, and productivity
Software quality metrics are a subset of software metrics that focus on quality aspects of
the product, process, and project
In general, software quality metrics are more closely associated with process and product
metrics than with project metrics
Nonetheless, the project parameters such as number of developers and their skill levels,
the schedule, the size, and the organization structure certainly affect the quality of the
product
Requirements
o Size of the document (# of words, pages, functions)
o Number of changes to the original requirements, which were developed later in
the life cycle but not specified in the original requirements document. This
measure indicates how complete the original requirements document was
o Consistency measures to ensure that the requirements are consistent with
interfaces from other systems
o Testability measures to evaluate if the requirements are written in such a way that
the test cases can be developed and traced to the requirements
Often, problems detected in the requirements are related to unclear requirements, which are
difficult to measure and test, such as
o The system must be user friendly. (What does user friendly mean?)
o The system must give speedy response time. (What is speedy response time? 10
seconds, 13 seconds?)
o The system must have state-of-the-art technology. (What is considered state-of-the-art?)
o The system must have clear management reports. (What should these reports look
like? What is the definition of clear?)
Code/Design
o Number of external data items from which a module reads
o Number of external data items to which a module writes
o Number of modules specified at a later phase and not in the original design
o Number of modules which the given module calls
o Number of lines of code
o Data usage, measured in terms of the number of primitive data items
o Entries/exits per module which predict the completion time of the system
Testing
o Number of planned test cases in the test plan that ran successfully
o Success/effectiveness of test cases against the original test plan
o Number of new unplanned test cases which are developed at a later time
Though there are several items in an organization that can be tracked, measured, and
improved, there are seven measures that are commonly tracked
Defect
Work Effort
Work effort constitutes the number of hours spent on development of a new system,
system enhancement, or the support and maintenance of an existing system
The hours are collected throughout the project life cycle, across all the development
phases to track commitments, expectations made to the clients, and to provide historical
data to improve estimating for future work efforts
Can provide early warnings regarding budget over-runs and project delays
Schedule
The purpose of schedule measurements is to track the performance of the project team
toward meeting the committed schedule
Planned start date versus actual date
Planned completion date versus actual date
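As a small illustration only (not part of the handout), the schedule slip can be computed from planned versus actual dates; the dates below are assumed purely for the example:

    # Sketch: schedule variance from planned vs. actual dates (illustrative dates)
    from datetime import date

    planned_start, actual_start = date(2006, 1, 9), date(2006, 1, 16)
    planned_end,   actual_end   = date(2006, 6, 30), date(2006, 7, 21)

    start_slip = (actual_start - planned_start).days
    end_slip   = (actual_end - planned_end).days
    print(f"Start slipped by {start_slip} days, completion slipped by {end_slip} days")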
Size
The size measures are important because the amount of effort required to perform most
tasks is directly related to the size of the program involved
The size is usually measured in
o Lines of code
o Function Points
When the size grows larger than expected, the cost of the project as well as the estimated
time for completion also grow
The size of the software is also used for estimating the number of resources required
The measure used to estimate program size should be easy to use early in the project life
cycle
Lines of Code (Empty lines, Comments/statements, Source lines, Reused lines, Lines
used from other programs)
Function Points
o It is a method of quantifying the size and complexity of a software system based
on a weighted user view of the number of external inputs to the application;
number of outputs from the application; inquiries end users can make; interface
files; and internal logical files updated by an application
o These five items are counted and multiplied by weight factors that adjust them for
complexity
o Function points are independent of languages, can be used to measure productivity
and number of defects, are available early during functional design, and the entire
product can be counted in a short time
o They can be used for a number of productivity and quality metrics, including
defects, schedules, and resource utilization
Documentation Defects
Intrinsic product quality is usually measured by the number of bugs (functional defects)
in the software or by how long the software can run before encountering a crash
In operational definitions, the two metrics are defect density (rate) and mean time to
failure (MTTF)
The mean time to failure metric measures the time between failures
It is often used with safety-critical systems such as the airline traffic control systems,
avionics, and weapons
For example, the air traffic control system cannot be unavailable for more than three
seconds a year
A failure occurs when a functional unit of a software-related system can no longer
perform its required function or cannot perform it within specified limits
It requires that the operational profile of the system be available in order to know the tolerance
Mean time between failures (MTBF)
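As a rough illustration only, the mean time between failures can be estimated from a log of observed failure times; the values below are assumed, not taken from the handout:

    # Sketch: mean time between failures from observed failure times (assumed hours)
    failure_times_hours = [120.0, 410.5, 790.0, 1510.25]   # cumulative clock at each failure

    gaps = [b - a for a, b in zip(failure_times_hours, failure_times_hours[1:])]
    mtbf = sum(gaps) / len(gaps)
    print(f"MTBF = {mtbf:.1f} hours over {len(gaps)} inter-failure intervals")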
Defect Density
The defect density measures the number of defects discovered per some unit of software
size (lines of code, function points)
The defect density metric is used in many commercial software systems
The defect rate of a product or the expected number of defects over a certain time period
is important for cost and resource estimates of the maintenance phase of the software life
cycle
The defect density metric is used to measure the number of defects discovered per some
unit of product size (KLOC or function points)
For example, if your product has 10 KLOC and 127 defects were discovered during a test
cycle, the defect density would be 12.7 defects per KLOC (0.0127 defects per line of code)
The defect density metric can be applied during any test period; however, only the value
calculated during test phases that follow system integration can be used to make
predictions about the rate at which defects will be discovered by customers
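As an illustration only, the defect density calculation from the example above can be sketched in a few lines of Python; the function name is ours, not from the handout:

    # Sketch: defect density per KLOC (illustrative only)
    def defect_density_per_kloc(defects_found, lines_of_code):
        """Return defects per thousand lines of code."""
        kloc = lines_of_code / 1000.0
        return defects_found / kloc

    # Example from the text: 127 defects found in a 10 KLOC product
    print(defect_density_per_kloc(127, 10_000))   # 12.7 defects per KLOC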
Defects by Severity
The defects by severity metric is a simple count of the number of unresolved defects
listed by severity
Typically, this metric is measured at some regular interval and plotted to determine
whether or not a trend exists
Ideally, a trend exists, showing progress toward the acceptable values for each severity
Movement away from those values should raise a flag that the project is at risk of failing
to satisfy the conditions of the metric
Customer Problems
This metric is a simple count of the number of new (non-duplicate) problems reported by
customers over some time interval
When measured at regular intervals and plotted, the data can be used to identify a trend.
Although a trend may be apparent, it is more useful to determine the reasons behind the
trend
If, for example, the number of customer-reported problems increases over time, is it
because more end users are using the product?
If you measure the number of customers who use the product at the same intervals that
you measure customer-reported problems, you might identify a cause-effect or correlation
between the metric and number of end users
For example, if you determine that as the number of end users of the system increases the
number of customer-reported problems increases, a relationship may exist between the
two that suggests that you may have a serious scalability flaw in your product
On the other hand, is the increase related to greater demands placed on the system by end
users as their experience with the product matures? With the help of profiling features,
you can determine the load on the product
Customer Satisfaction
Process metrics are those that can be used for improving the software development and
maintenance process
Examples include the effectiveness of defect removal during development, the pattern of
testing defect arrival, and the response time of the fix process
Compared to end-product quality metrics, process quality metrics are less formally
defined, and their practices vary greatly among software developers
Some organizations pay little attention to process quality metrics, while others have well-established software metrics programs that cover various parameters in each phase of the
development cycle
Process Metrics
Defect arrival rate, Test effectiveness, Defects by phase, Defect removal effectiveness
Defect Arrival Rate
It is the number of defects found during testing measured at regular intervals over some
period of time
Rather than a single value, a set of values is associated with this metric
When plotted on a graph, the data may rise, indicating a positive defect arrival rate; it
may stay flat, indicating a constant defect arrival rate; or decrease, indicating a negative
defect arrival rate
Interpretation of the results of this metric can be very difficult
Intuitively, one might interpret a negative defect arrival rate to indicate that the product is
improving since the number of new defects found is declining over time
To validate this interpretation, you must eliminate certain possible causes for the decline
For example, it could be that test effectiveness is declining over time. In other words, the
tests may only be effective at uncovering certain types of problems. Once those problems
have been found, the tests are no longer effective
Another possibility is that the test organization is understaffed and consequently is unable
to adequately test the product between measurement intervals. They focus their efforts
during the first interval on performing stress tests that expose many problems, followed
by executing system tests during the next interval where fewer problems are uncovered
Test Effectiveness
To measure test effectiveness, take the number of defects found by formal tests and
divide by the total number of formal tests i.e. TE = Dn / Tn
When calculated at regular intervals and plotted, test effectiveness can be observed over
some period of time
If the graph rises over time, test effectiveness may be improving. On the other hand, if the
graph is falling over time, test effectiveness may be waning
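A minimal sketch of the TE = Dn / Tn calculation over several measurement intervals; the interval data below are made up for illustration:

    # Sketch: test effectiveness (defects found / formal tests run) per interval
    intervals = [
        {"defects_found": 40, "tests_run": 200},   # hypothetical interval 1
        {"defects_found": 25, "tests_run": 220},   # hypothetical interval 2
        {"defects_found": 10, "tests_run": 210},   # hypothetical interval 3
    ]

    for i, data in enumerate(intervals, start=1):
        te = data["defects_found"] / data["tests_run"]
        print(f"Interval {i}: TE = {te:.3f}")
    # A falling TE over successive intervals may indicate waning test effectiveness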
Defects by Phase
It is much less expensive, in terms of resources and reputation, to eliminate defects early
than to fix them late
The defects by phase metric is a variation of the defect arrival rate metric
At the conclusion of each discrete phase of the development process, a count of the new
defects is taken and plotted to observe a trend
If the graph appears to be rising, you might infer that the methods used for defect
detection and removal during the earlier phases are not effective since the rate at which
new defects are being discovered is increasing
On the other hand, if the graph appears to be falling, you might conclude that early defect
detection and removal is effective
Explain the snowball effect
When a software product has completed its development and is released to the market, it
enters into the maintenance phase of its life cycle
During this phase, the defect arrivals by time interval and the customer problem calls (which
may or may not be defects) by time interval are the de facto metrics
However, the number of problem arrivals is largely determined by the development
process before the maintenance phase
Not much can be done to alter the quality of the product during this phase. Therefore, the
de facto metrics, while important, do not reflect the quality of software maintenance
What can be done during maintenance phase is to fix the defects as soon as possible and
with excellent fix quality
Such actions, although still not able to improve the defect rate of the product, can
improve customer satisfaction to a large extent
The defect backlog metric is a count of the number of defects in the product following its
release that require a repair
It is usually measured at regular intervals of time and plotted for trend analysis
By itself, this metric provides very little useful information
For example, what does a defect backlog count of 128 tell you? Can you predict the
impact of those defects on customers? Can you estimate the time it would take to repair
those defects? Can you recommend changes to improve the development process?
As the backlog is worked, new problems arrive that impact the net result of your team's
efforts to reduce the backlog
If the number of new defects exceeds the number of defects closed over some period of
time, your team is losing ground to the backlog. If, on the other hand, your team closes
problems faster than new ones are opened, they are gaining ground
The backlog management index (BMI) is calculated by dividing the number of defects
closed during some period of time by the number of new defects that arrived during that
same period of time i.e. BMI = Dc / Dn
If the result is greater than 1, your team is gaining ground; otherwise, it is losing ground
When measurements are taken at regular intervals and plotted, a trend can be observed
indicating the rate at which the backlog is growing or shrinking
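A minimal sketch of the BMI = Dc / Dn calculation over a few measurement periods; the closed/arrived counts are assumed for illustration:

    # Sketch: backlog management index per measurement period (illustrative data)
    def bmi(defects_closed, defects_arrived):
        """BMI = defects closed / new defects arrived in the same period."""
        return defects_closed / defects_arrived

    periods = [(30, 25), (28, 30), (40, 20)]   # (closed, arrived) per period, assumed values
    for closed, arrived in periods:
        value = bmi(closed, arrived)
        status = "gaining ground" if value > 1 else "losing ground"
        print(f"BMI = {value:.2f} -> {status}")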
The fix response time metric is determined by calculating the average time it takes your
team to fix a defect
It can be measured several different ways
In some cases, it is the elapsed time between the discovery of the defect and the
development of an unverified fix
In other cases, it is the elapsed time between the discovery and the development of
verified fix
A better alternative to this metric is fix response time by severity
A fix is delinquent if it exceeds your fix response time criteria. In other words, if you have
established a maximum fix response time of 48 hours, then fix response times that exceed
48 hours are considered delinquent
To calculate the percent delinquent fixes, divide the number of delinquent fixes by the
total number of fixes delivered in the same period and multiply by 100, i.e. PDF = (Fd / Fn) * 100
This metric is also better measured by severity, since the consequences of a high percentage
of delinquent fixes for severe defects are typically much greater than for less severe or minor
defects
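A minimal sketch of this calculation against the 48-hour criterion mentioned above; the individual fix response times are assumed values, not from the handout:

    # Sketch: percent delinquent fixes against a 48-hour response-time criterion
    MAX_RESPONSE_HOURS = 48                              # criterion taken from the text

    fix_response_times = [12, 36, 50, 72, 24, 48, 96]    # hypothetical hours per fix
    delinquent = [t for t in fix_response_times if t > MAX_RESPONSE_HOURS]

    percent_delinquent = (len(delinquent) / len(fix_response_times)) * 100
    print(f"Delinquent fixes: {len(delinquent)} of {len(fix_response_times)} "
          f"({percent_delinquent:.1f}%)")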
Defective Fixes
A fix that later turns out to be defective or that, worse, creates one or more additional
problems is called a defective fix
The defective fixes metric is a count of the number of such fixes
To accurately measure the number of defective fixes, your organization must not only
keep track of defects that have been closed and then reopened but must also keep track of
new defects that were caused by a defect fix
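A minimal sketch of counting defective fixes, assuming a hypothetical record format in which each fix notes whether it was reopened or caused new defects:

    # Sketch: counting defective fixes from hypothetical defect-tracking records
    fix_records = [
        {"id": 101, "reopened": False, "caused_new_defects": 0},
        {"id": 102, "reopened": True,  "caused_new_defects": 0},   # fix did not hold
        {"id": 103, "reopened": False, "caused_new_defects": 2},   # fix broke something else
        {"id": 104, "reopened": False, "caused_new_defects": 0},
    ]

    defective_fixes = [r for r in fix_records
                       if r["reopened"] or r["caused_new_defects"] > 0]
    print(f"Defective fixes: {len(defective_fixes)} of {len(fix_records)}")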
Historically, lines of code counts have been used to measure software size
The LOC count is only one of the operational definitions of size; due to the lack of
standardization in LOC counting and the resulting variations in actual practices,
alternative measures were investigated
One such measure is function point
Increasingly function points are gaining in popularity, based on the availability of
industry data
The value of using function points is in the consistency of the metric
If we use lines of code to compare ourselves to other organizations, we face the problems
associated with the inconsistencies in counting lines of code
Differences in language complexity levels and inconsistency in counting rules quickly
lead us to conclude that lines of code counting, even within an organization, can be
problematic and ineffective
This is not the case with function points
Function points also serve many different measurement types
Again, lines of code measure only the rate of delivery during the coding phase
Function points measure the value of the entire deliverable from a productivity
perspective, a quality perspective, and a cost perspective
Measurement types supported by function points:
o Productivity
o Responsiveness
o Quality
o Business
External Inputs
Those items provided by the user that describe distinct application-oriented data (such as
file names and menu selections). These items do not include inquiries, which are counted
separately
For example, document filename, personal dictionary-name
External Outputs
Those items provided to the user that generate distinct application-oriented data (such as
reports and messages, rather than the individual components of these)
For example, misspelled word report, number-of-words-processed message, number-of-errors-so-far message
External Inquiries
External Files
Internal Files
Function Points
FP Complexity Weights
Item                   Weighting Factor
                       Simple    Average    Complex
External inputs           3         4          6
External outputs          4         5          7
External inquiries        3         4          6
External files            5         7         10
Internal files            7        10         15
Function Points
The complexity classification of each component is based on a set of standards that define
complexity in terms of objective guidelines
For instance, for the external output component, if the number of data element types is 20
or more and the number of file types referenced is two or more, then complexity is high
If the number of data element types is 5 or fewer and the number of file types referenced
is two to three, then complexity is low
With the weighting factors, the first step is to calculate the unadjusted function counts
(UFC): the count of each component at each complexity level is multiplied by the
corresponding weighting factor, and the results are added up
There are fifteen (15) different varieties of items/components in theory (five components,
each at three complexity levels)
Function Points
The general system characteristics that are scored for the adjustment include:
o Data communications
o Distributed functions
o Performance
o Operational ease
o Complex interface
o Online update
o Complex processing
o Reusability
o Installation ease
o Multiple sites
o Facilitate change
Function Points
The scores (ranging from 0 to 5) for these characteristics are then summed, based on the
following formula, to form the technical complexity factor (TCF)
Technical Complexity Factor: TCF = 0.65 + 0.01 * (sum of the characteristic scores)
Function Points
Finally, the number of function points is obtained by multiplying unadjusted function counts
and the value adjustment factor:
FP = UFC * TCF
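A minimal sketch of the whole calculation, assuming the standard complexity weights from the table above; the component counts and the characteristic scores below are made up purely for illustration:

    # Sketch: function point calculation (illustrative counts and scores)
    WEIGHTS = {                                   # (simple, average, complex)
        "external inputs":    (3, 4, 6),
        "external outputs":   (4, 5, 7),
        "external inquiries": (3, 4, 6),
        "external files":     (5, 7, 10),
        "internal files":     (7, 10, 15),
    }

    counts = {                                    # assumed counts per complexity level
        "external inputs":    (5, 4, 2),
        "external outputs":   (3, 6, 1),
        "external inquiries": (4, 2, 0),
        "external files":     (1, 2, 1),
        "internal files":     (2, 3, 0),
    }

    # Unadjusted function counts: multiply each count by its weight and add up
    ufc = sum(c * w for item in WEIGHTS for c, w in zip(counts[item], WEIGHTS[item]))

    # Technical complexity factor from the characteristic scores (each 0-5, assumed here)
    scores = [3, 4, 2, 5, 1, 0, 3, 4, 2, 3, 1]
    tcf = 0.65 + 0.01 * sum(scores)

    fp = ufc * tcf
    print(f"UFC = {ufc}, TCF = {tcf:.2f}, FP = {fp:.1f}")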
Over the years, the function point metric has gained acceptance as a key productivity
measure in the application world
In 1986, the International Function Point Users Group (IFPUG) was established. The IFPUG
counting practices committee is the de facto standards organization for function point counting methods
Complexity
Ideally, we would like the complexity of the solution to be no greater than the complexity
of the problem, but that is not always the case
The complexity of a problem is the amount of resources required for an optimal solution
to the problem
The complexity of a solution can be regarded in terms of the resources needed to
implement a particular solution
Time complexity
o Where the resource is computer time
Space complexity
o Where the resource is computer memory
Problem complexity
Algorithmic complexity
Structural complexity
Cognitive complexity
Problem Complexity
Algorithmic Complexity
Reflects the complexity of the algorithm implemented to solve the problem; in some
sense, this type of complexity measures the efficiency of the software
We use Big-O notation to determine the complexity of algorithms
Structural Complexity
For example, we look at control flow structure, hierarchical structure, and modular
structure to extract this type of measure
Structural Measures
All other things being equal, we would like to assume that a large module takes longer to
specify, design, code, and test than a small one. But experience shows us that such an
assumption is not valid; the structure of the product plays a part, not only in requiring
development effort but also in how the product is maintained
We must investigate characteristics of product structure, and determine how they affect
the outcomes we seek
Structure has three parts
o Control-flow structure
o Data-flow structure
o Data structure
Control-Flow Structure
The control-flow addresses the sequence in which instructions are executed in a program.
This aspect of structure reflects the iterative and looping nature of programs. Whereas size
counts an instruction just once, control flow makes more visible the fact that an instruction
may be executed many times as the program is actually run
Data-Flow Structure
Data flow follows the trail of a data item as it is created or handled by a program. Many
times, the transactions applied to data are more complex than the instructions that implement
them; data-flow measures depict the behavior of the data as it interacts with the program
Data Structure
Data structure is the organization of the data itself, independent of the program. When
data elements are arranged as lists, queues, stacks, or other well-defined structure, the
algorithms for creating, modifying, or deleting them are more likely to be well-defined,
too. So the structure of the data tells us a great deal about the difficulty involved in
writing programs to handle the data, and in defining test cases for verifying that the
programs are correct
Sometimes a program is complex due to a complex data structure rather than complex
control or data flow
Control-Flow Structure
The control flow measures are usually modeled with directed graphs, where each node (or
point) corresponds to a program statement, and each arc (or directed edge) indicates the
flow of control from one statement to another
These directed graphs are called control-flow graphs or flowgraphs
In-degree (Fan-in)
o A count of the number of modules that call a given module
Out-degree (Fan-out)
o A count of the number of modules that are called by a given module
Path
o A sequence of consecutive (directed) edges, some of which may be traversed
more than once during the sequence
Simple path
Modules that have large fan-in or large fan-out may indicate a poor design. Such modules
have probably not been decomposed correctly and are candidates for redesign
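As a small sketch, fan-in and fan-out can be computed directly from a module call graph; the modules and calls below are hypothetical:

    # Sketch: fan-in and fan-out computed from a hypothetical module call graph
    calls = {                       # module -> modules it calls (assumed example)
        "main":   ["parse", "report", "util"],
        "parse":  ["util"],
        "report": ["util", "format"],
        "format": [],
        "util":   [],
    }

    fan_out = {m: len(callees) for m, callees in calls.items()}
    fan_in = {m: 0 for m in calls}
    for callees in calls.values():
        for callee in callees:
            fan_in[callee] += 1

    for module in calls:
        print(f"{module}: fan-in = {fan_in[module]}, fan-out = {fan_out[module]}")
    # Modules with unusually large fan-in or fan-out are candidates for redesign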
What we have seen is a control-flow graph of a simple program that might contain two if
statements. If we count the edges, nodes, and unconnected parts of the graph, we see that
o e = 8, n = 7, p = 1
o and that M = e - n + 2p = 8 - 7 + 2 = 3
Note that M is equal to the number of binary decisions plus one (1)
To have good testability and maintainability, McCabe recommended that no program
should exceed a cyclomatic complexity of 10. Because the complexity metric is based on
decisions and branches, which is consistent with the logic pattern of design and
programming, it appeals to software professionals
Many experts in software testing recommend use of the cyclomatic representation to
ensure adequate test coverage; the use of McCabe's complexity measure has been gaining
acceptance among practitioners
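A minimal sketch of the cyclomatic complexity computation, using the e = 8, n = 7 worked example above; the function name is ours, not part of the handout:

    # Sketch: cyclomatic complexity from a flowgraph, M = e - n + 2p
    def cyclomatic_complexity(edges, nodes, components=1):
        """McCabe's measure for a flowgraph with the given edge/node counts."""
        return edges - nodes + 2 * components

    # Worked example from the text: e = 8, n = 7, one connected part
    m = cyclomatic_complexity(edges=8, nodes=7)
    print(m)                         # 3, i.e. two binary decisions plus one

    # McCabe's recommended threshold for testability and maintainability
    if m > 10:
        print("Module exceeds the recommended cyclomatic complexity of 10")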
Essential Complexity
All managers and personnel who have worked in software quality assurance have a
process that represents the way an SQA organization operates
Some mature software organizations have an SQA process that is defined by a set of
standards for documentation and operations. Other, smaller firms may have a less well-defined and more ad hoc process; the primary repository for information regarding this
process is in the minds of their SQA personnel
In both cases, a process does exist
Lacking any better definition, the process is the way SQA is done here
What would an organization do with a process model for software quality assurance?
o A tool to train new SQA engineers
o A model to use in estimating
o A model to use in the process improvement
o A model to use to select or develop a set of standards and templates for SQA
documentation
o A tool to show the scope of SQA activities to new developers
o A graphic for showing managers and developers the tasks SQA engineers perform
The top level of the SQA process is shown graphically, using the input-process-output (IPO)
technique
The SQA process is based on relationships, interfaces, and tasks that are defined in other
project plans (for example, project management plan, configuration management plan)
A significant task to be addressed by SQA is coordination of risk management to assist
project management. This involves scheduling and convening regular meetings to address
risk management, maintaining lists of active risks, and tracking status of risk mitigation
plans for project management
SQA must be actively involved in review meetings (including, but not limited to, peer
reviews and formal design reviews with the customer)
The SQA organization must be actively involved in monitoring the test program on the
project
A number of significant, formally controlled documents (for example, the SQA plan, risk
management plan, and metrics plan) result from the performance of the SQA process
Such a model is vital for effectively identifying, estimating, and performing SQA tasks
The key to effective SQA is the development of an SQA plan to serve as a contract
between the project team and the SQA organization
The SQA plan must be based on the process model that identifies the specific tasks SQA
needs to perform to effectively support the project team
This SQA process model has eight sub-processes
o Review program/project-level plans
o Develop quality assurance plan
o Coordinate metrics
o Coordinate risk program
o Perform audits
o Coordinate review meetings
o Facilitate process improvement
o Monitor test program
This review also identifies top-level tasks that SQA is tasked to perform (specified in
these other plans), and identifies the interfaces between SQA and other groups in the
development organization
The plans to be reviewed are configuration management plan, project management plan,
software development plan, configuration system, and standards/templates
The test plan is not reviewed at this point in the project; it is not normally written until
after the requirements are generated. Thus, the test plan is developed later than the
document review that is taking place in this process
There are three sub-processes that are components of the review project-plans process
o Review the project management plan
o Review the configuration management plan
o Review the software development plan
Members of the SQA organization should review each of these plans and report on the
results of the review
The review should determine that the project plans are complete, consistent, and correct
Outputs from these processes are simple
o Documented list of issues found while reviewing the various plans
o SQA-approved project plans
2. Develop QA Plan
Sub-Processes
Identify standards
Specify reviews and audits
Review CM interface
Review defect reporting
Develop metrics strategy
Identify tools and techniques
Define supplier control
Define records approach
Document SQA plan
Review and approve SQA plan
3. Coordinate Metrics
This sub-process defines the activities and tasks that SQA performs while coordinating
and collecting metrics
Inputs are
o Project management plan
o Defect tracking system
o Metrics database
o Product metrics data
o Process metrics data
o Standards
There are nine (9) sub-processes associated with coordinate metrics
Both process and products metrics are addressed
Metrics reports must be published in a timely manner so that any course corrections
indicated by the metrics can be made quickly
Sub-Processes
Sub-Processes
5. Perform Audits
The perform audits process is one of the few activities remaining in modern quality
assurance that can be described as a classic policeman activity
Inputs include
o Audit materials (including requirements, criteria, checklists)
o CM system
o Defect tracking system
o Process templates
o Project plans, including the test plan(s)
o Risks
o Standards and templates
A fully engaged SQA organization will also perform a number of less formal audits.
These should be performed according to a documented and approved audit plan, and will
include audits of the CM database
Product audits can include review of operator manuals
There are nine (9) sub-processes
Sub-Processes
Sub-Processes
Sub-Processes
Sub-Processes
Summary
The SQA process model provides a tool for constructing a work breakdown structure in
estimating the cost and schedule for SQA activities
The model establishes the framework and list of SQA tasks. Then completing the estimate
becomes an exercise in using existing metrics data or engineering judgment to determine
the elapsed time and staffing levels to complete each task
Finally, each task can be entered into a schedule tracking package so activities can be
scheduled and tracked on an ongoing basis
The SQA process model can be used as a guide to effective implementation of process
improvement
The process model as presented is ideal; a process model can also be developed that
represents the current process in use by the SQA organization