Unit 2 Software Quality Assurance
BACKGROUND ISSUES:
Quality control and assurance are essential activities for any business that produces products
to be used by others. Prior to the twentieth century, quality control was the sole responsibility
of the craftsperson who built a product. As time passed and mass production techniques
became commonplace, quality control became an activity performed by people other than the
ones who built the product.
The first formal quality assurance and control function was introduced at Bell Labs in 1916
and spread rapidly throughout the manufacturing world. During the 1940s, more formal
approaches to quality control were suggested. These relied on measurement and continuous
process improvement as key elements of quality management.
The history of quality assurance in software development parallels the history of quality in
hardware manufacturing. During the early days of computing (1950s and 1960s), quality was
the sole responsibility of the programmer. Standards for quality assurance for software were
introduced in military contract software development during the 1970s and have spread
rapidly into software development in the commercial world. Extending the definition
presented earlier, software quality assurance is a “planned and systematic pattern of actions”
that are required to ensure high quality in software. The scope of quality assurance
responsibility might best be characterized by paraphrasing a once-popular automobile
commercial: “Quality Is Job #1.” The implication for software is that many different
constituencies have software quality assurance responsibility — software engineers, project
managers, customers, sales people, and the individuals who serve within an SQA group.
It is important to note that SQA procedures and approaches that
work in one software environment may not work as well in another. Even within a company
that adopts a consistent approach to software engineering, different software products may
exhibit different levels of quality.
The solution to this dilemma is to understand the specific quality requirements for a software
product and then select the process and specific SQA actions and tasks that will be used to
meet those requirements. The Software Engineering Institute’s CMMI and ISO 9000
standards are the most commonly used process frameworks. Each proposes “a syntax and
semantics” that will lead to the implementation of software engineering practices that
improve product quality. Rather than instantiating either framework in its entirety, a software
organization can “harmonize” the two models by selecting elements of both frameworks and
matching them to the quality requirements of an individual product.
SQA Tasks:
The charter of the SQA group is to assist the software team in achieving a high-quality end
product. The Software Engineering Institute recommends a set of SQA activities that address
quality assurance planning, oversight, record keeping, analysis, and reporting. These
activities are performed (or facilitated) by an independent SQA group that:
• Prepares an SQA plan for a project. The plan is developed as part of project planning and is reviewed by all stakeholders. Quality assurance activities performed by the software engineering team and the SQA group are governed by the plan. The plan identifies evaluations to be performed, audits and reviews to be conducted, standards that are applicable to the project, procedures for error reporting and tracking, work products that are produced by the SQA group, and feedback that will be provided to the software team.
• Participates in the development of the project’s software process description. The software team selects a process for the work to be performed. The SQA group reviews the process description for compliance with organizational policy, internal software standards, externally imposed standards (e.g., ISO 9001), and other parts of the software project plan.
• Reviews software engineering activities to verify compliance with the defined software process. The SQA group identifies, documents, and tracks deviations from the process and verifies that corrections have been made.
• Audits designated software work products to verify compliance with those defined as part of the software process. The SQA group reviews selected work products; identifies, documents, and tracks deviations; verifies that corrections have been made; and periodically reports the results of its work to the project manager.
• Ensures that deviations in software work and work products are documented and handled according to a documented procedure. Deviations may be encountered in the project plan, process description, applicable standards, or software engineering work products.
• Records any noncompliance and reports to senior management. Noncompliance items are tracked until they are resolved; a simple tracking-record sketch follows this list.
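As a rough illustration of how deviation and noncompliance items might be recorded and tracked until resolution, the Python sketch below uses hypothetical field names; nothing in it is prescribed by the SEI or by any particular SQA plan.

    # Minimal sketch of a noncompliance record tracked until it is resolved.
    # Field names and values are hypothetical illustrations, not SEI requirements.
    from dataclasses import dataclass, field
    from datetime import date
    from typing import Optional

    @dataclass
    class NoncomplianceItem:
        identifier: str
        work_product: str                # e.g., "design model", "source module"
        description: str                 # how the work deviated from the defined process
        reported_to: str                 # escalation target, e.g., senior management
        opened: date = field(default_factory=date.today)
        resolved: Optional[date] = None  # stays None until the item is closed

        def close(self, resolution_date: date) -> None:
            """Mark the item resolved; open items keep appearing in status reports."""
            self.resolved = resolution_date

    # Usage: record a standards violation and close it once the correction is verified.
    item = NoncomplianceItem("NC-001", "source module", "coding standard violation",
                             reported_to="senior management")
    item.close(date.today())
    print(item.resolved is not None)  # True once the item has been resolved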
Statistical Software Quality Assurance:
Statistical quality assurance reflects a growing trend throughout the industry to become more
quantitative about quality. For software, statistical quality assurance implies the following
steps:
1. Information about software errors and defects is collected and categorized.
2. An attempt is made to trace each error and defect to its underlying cause (e.g., nonconformance to specifications, design error, violation of standards, poor communication with
the customer).
3. Using the Pareto principle (80 percent of the defects can be traced to 20 percent of all possible causes), isolate the 20 percent (the vital few).
4. Once the vital few causes have been identified, move to correct the problems that have
caused the errors and defects.
This relatively simple concept represents an important step toward the creation of an adaptive
software process in which changes are made to improve those elements of the process that
introduce error.
A Generic Example:
To illustrate the use of statistical methods for software engineering work, assume that a
software engineering organization collects information on errors and defects for a period of
one year. Some of the errors are uncovered as software is being developed. Other defects are
encountered after the software has been released to its end users. Although hundreds of
different problems are uncovered, all can be tracked to one (or more) of the following causes:
• Incomplete or erroneous specifications (IES).
• Misinterpretation of customer communication (MCC).
• Intentional deviation from specifications (IDS).
• Violation of programming standards (VPS).
• Error in data representation (EDR).
• Inconsistent component interface (ICI).
• Error in design logic (EDL).
• Incomplete or erroneous testing (IET).
• Inaccurate or incomplete documentation (IID).
• Error in programming language translation of design (PLT).
• Ambiguous or inconsistent human/computer interface (HCI).
• Miscellaneous (MIS).
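To make the Pareto step concrete, the short Python sketch below isolates the vital few causes from categorized defect counts; the counts themselves are invented purely for illustration, and only the category codes come from the list above.

    # Minimal sketch of a Pareto analysis over categorized defect causes.
    # The counts are hypothetical; only the category codes come from the list above.
    from collections import Counter

    defects = Counter({
        "IES": 350, "MCC": 260, "EDR": 180, "IET": 40, "PLT": 35, "ICI": 30,
        "EDL": 28, "IID": 25, "VPS": 22, "IDS": 20, "HCI": 18, "MIS": 15,
    })

    total = sum(defects.values())
    cumulative = 0
    vital_few = []
    # Walk causes from most to least frequent until roughly 80 percent are covered.
    for cause, count in defects.most_common():
        cumulative += count
        vital_few.append(cause)
        if cumulative / total >= 0.80:
            break

    print(f"Vital few causes ({cumulative / total:.0%} of defects): {vital_few}")

With these made-up numbers, four of the twelve causes account for roughly 80 percent of the defects, which is exactly the kind of result that directs corrective effort toward the vital few.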
Six Sigma for Software Engineering:
Six Sigma is the most widely used strategy for statistical quality assurance in industry today.
Originally popularized by Motorola in the 1980s, the Six Sigma strategy “is a rigorous and
disciplined methodology that uses data and statistical analysis to measure and improve a
company’s operational performance by identifying and eliminating defects in manufacturing
and service-related processes”. The term Six Sigma is derived from six standard deviations—
3.4 instances (defects) per million occurrences—implying an extremely high-quality standard.
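The 3.4-per-million figure corresponds to the conventional 1.5-sigma long-term process shift used in Six Sigma practice (an assumption not stated above); a minimal check of the arithmetic, assuming SciPy is available:

    # Minimal sketch: defects per million opportunities (DPMO) for a sigma level,
    # assuming the conventional 1.5-sigma long-term shift used in Six Sigma practice.
    from scipy.stats import norm

    def dpmo(sigma_level: float, shift: float = 1.5) -> float:
        """One-sided normal tail probability scaled to a million opportunities."""
        return norm.sf(sigma_level - shift) * 1_000_000

    print(f"6 sigma: {dpmo(6.0):.1f} DPMO")  # about 3.4 defects per million
    print(f"3 sigma: {dpmo(3.0):.0f} DPMO")  # about 66,807 defects per million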
The Six Sigma methodology defines three core steps:
• Define customer requirements and deliverables and project goals via well-defined methods
of customer communication.
• Measure the existing process and its output to determine current quality performance
(collect defect metrics).
• Analyze defect metrics and determine the vital few causes.
If an existing software process is in place but improvement is required, Six Sigma suggests two additional steps:
• Improve the process by eliminating the root causes of defects.
• Control the process to ensure that future work does not reintroduce the causes of defects.
These core and additional steps are sometimes referred to as the DMAIC (define, measure,
analyze, improve, and control) method.
If an organization is developing a software process (rather than improving an existing
process), the core steps are augmented as follows:
• Design the process to (1) avoid the root causes of defects and (2) meet customer requirements.
• Verify that the process model will, in fact, avoid defects and meet customer requirements.
This variation is sometimes called the DMADV (define, measure, analyze, design, and
verify) method.
A comprehensive discussion of Six Sigma is best left to resources dedicated to the subject.
SOFTWARE RELIABILITY:
There is no doubt that the reliability of a computer program is an important element of its
overall quality. If a program repeatedly and frequently fails to perform, it matters little
whether other software quality factors are acceptable. Software reliability, unlike many other
quality factors, can be measured directly and estimated using historical and developmental
data. Software reliability is defined in statistical terms as “the probability of failure-free
operation of a computer program in a specified environment for a specified time.”
To illustrate, program X is estimated to have a reliability of 0.999 over eight elapsed
processing hours. In other words, if program X were to be executed 1000 times and require a
total of eight hours of elapsed processing time (execution time), it is likely to operate
correctly (without failure) 999 times.
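A minimal sketch of how such a figure could be estimated from execution records; the only numbers used are those of the program X example above.

    # Minimal sketch: reliability estimated as the fraction of failure-free executions.
    def estimate_reliability(failure_free_runs: int, total_runs: int) -> float:
        """Probability of failure-free operation estimated from observed executions."""
        return failure_free_runs / total_runs

    # Program X: 1000 executions over eight hours of elapsed processing time,
    # 999 of which completed without failure.
    print(estimate_reliability(999, 1000))  # 0.999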
Whenever software reliability is discussed, a pivotal question arises: What is meant by the
term failure? In the context of any discussion of software quality and reliability, failure is
nonconformance to software requirements. Yet, even within this definition, there are
gradations. One failure can be merely annoying, while another is catastrophic. One failure can be corrected within
seconds, while another requires weeks or even months to correct. Complicating the issue
even further, the correction of one failure may in fact result in the introduction of other errors
that ultimately result in other failures.
Software Safety:
Software safety is a software quality assurance activity that focuses on the identification and
assessment of potential hazards that may affect software negatively and cause an entire
system to fail. If hazards can be identified early in the software process, software design
features can be specified that will either eliminate or control potential hazards.
A modeling and analysis process is conducted as part of software safety. Initially, hazards are
identified and categorized by criticality and risk. For example, some of the hazards associated
with a computer-based cruise control for an automobile might be: (1) causes uncontrolled
acceleration that cannot be stopped, (2) does not respond to depression of brake pedal (by
turning off), (3) does not engage when switch is activated, and (4) slowly loses or gains
speed. Once these system-level hazards are identified, analysis techniques are used to assign
severity and probability of occurrence. To be effective, software must be analyzed in the
context of the entire system. For example, a subtle user input error (people are system
components) may be magnified by a software fault to produce control data that improperly
positions a mechanical device. If and only if a set of external environmental conditions is
met, the improper position of the mechanical device will cause a disastrous failure. Analysis
techniques such as fault tree analysis, real-time logic, and Petri net models can be used to
predict the chain of events that can cause hazards and the probability that each of the events
will occur to create the chain.
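As a rough illustration of the kind of calculation a fault tree supports, the sketch below combines basic-event probabilities through AND/OR gates under an independence assumption; the cruise-control event names and probabilities are invented for illustration.

    # Minimal sketch of a fault tree calculation assuming independent basic events.
    # Event names and probabilities are hypothetical, loosely based on the
    # cruise-control hazards discussed above.
    from math import prod

    def and_gate(*probabilities: float) -> float:
        """All input events must occur (independent events)."""
        return prod(probabilities)

    def or_gate(*probabilities: float) -> float:
        """At least one input event occurs (independent events)."""
        return 1 - prod(1 - p for p in probabilities)

    # Basic events (hypothetical probabilities per demand).
    p_sensor_fault = 1e-4       # speed sensor reports an incorrect value
    p_software_fault = 5e-5     # control software mishandles the value
    p_brake_signal_lost = 1e-5  # brake-pedal interrupt is not delivered

    # Top event: uncontrolled acceleration that cannot be stopped. A wrong command
    # can arise from either fault, but the hazard becomes unstoppable only if the
    # brake-pedal interrupt is also lost.
    p_bad_command = or_gate(p_sensor_fault, p_software_fault)
    p_top_event = and_gate(p_bad_command, p_brake_signal_lost)
    print(f"Estimated top-event probability: {p_top_event:.1e}")  # about 1.5e-09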
Once hazards are identified and analyzed, safety-related requirements can be specified for the
software. That is, the specification can contain a list of undesirable events and the desired
system responses to these events. The role of software in managing undesirable events is then
indicated.
Although software reliability and software safety are closely related to one another, it is
important to understand the subtle difference between them. Software reliability uses
statistical analysis to determine the likelihood that a software failure will occur.
However, the occurrence of a failure does not necessarily result in a hazard or mishap.
Software safety examines the ways in which failures result in conditions that can lead to
a mishap. That is, failures are not considered in a vacuum, but are evaluated in the
context of an entire computer-based system and its environment.
The SQA Plan:
The SQA Plan provides a road map for instituting software quality assurance. Developed by
the SQA group (or by the software team if an SQA group does not exist), the plan serves as a
template for SQA activities that are instituted for each software project.
A standard for SQA plans has been published by the IEEE. The standard recommends a structure that identifies:
(1) the purpose and scope of the plan,
(2) a description of all software engineering work products (e.g., models, documents, source code) that fall within the purview of SQA,
(3) all applicable standards and practices that are applied during the software process,
(4) SQA actions and tasks (including reviews and audits) and their placement throughout the software process,
(5) the tools and methods that support SQA actions and tasks,
(6) software configuration management procedures,
(7) methods for assembling, safeguarding, and maintaining all SQA-related records, and
(8) organizational roles and responsibilities relative to product quality.
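As a loose illustration only, the Python sketch below mirrors the eight elements listed above as a simple outline structure; the key names and placeholder values are illustrative and are not taken from the IEEE standard itself.

    # Minimal sketch: an SQA plan outline mirroring the eight elements listed above.
    # Key names and placeholder values are illustrative, not from the IEEE standard.
    sqa_plan_outline = {
        "purpose_and_scope": "why the plan exists and what it covers",
        "work_products_under_sqa": ["models", "documents", "source code"],
        "standards_and_practices": ["organizational coding standard", "ISO 9001"],
        "sqa_actions_and_tasks": ["reviews", "audits"],
        "tools_and_methods": [],
        "software_configuration_management": "reference to SCM procedures",
        "record_keeping": "assembling, safeguarding, and maintaining SQA records",
        "roles_and_responsibilities": {},
    }

    # A trivial completeness check: flag sections that are still empty.
    missing = [section for section, content in sqa_plan_outline.items() if not content]
    print("Sections still to be filled in:", missing)
    # ['tools_and_methods', 'roles_and_responsibilities']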