Unit 06
It is difficult, and in some cases impossible, to develop direct measures of these quality
factors. In fact, many of the metrics defined by McCall et al. can be measured only indirectly.
However, assessing the quality of an application using these factors will provide you with a
solid indication of software quality.
Six Sigma for Software Engineering
❑ Six Sigma is the most widely used strategy for statistical quality assurance in industry today.
Originally popularized by Motorola in the 1980s, the Six Sigma strategy “is a rigorous and
disciplined methodology that uses data and statistical analysis to measure and improve a
company’s operational performance by identifying and eliminating defects in
manufacturing and service-related processes” [ISI08].
❑ The term Six Sigma is derived from six standard deviations—3.4 instances (defects) per
million occurrences—implying an extremely high quality standard.
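The 3.4-defects-per-million figure can be reproduced from the normal distribution. A minimal sketch, assuming the conventional 1.5-sigma long-term process shift (which is why the tail is taken beyond 6 - 1.5 = 4.5 standard deviations):

```python
import math

def defects_per_million(sigma_level, shift=1.5):
    # One-sided standard-normal tail probability beyond (sigma_level - shift),
    # scaled to a million opportunities. The conventional 1.5-sigma shift is
    # what makes "six sigma" come out to roughly 3.4 defects per million.
    z = sigma_level - shift
    return 1_000_000 * 0.5 * math.erfc(z / math.sqrt(2))

print(round(defects_per_million(6.0), 1))  # roughly 3.4
```

By the same formula, a 3-sigma process yields roughly 66,800 defects per million, which illustrates how demanding the six-sigma standard is.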
❑ The Six Sigma methodology defines three core steps:
❑ Define customer requirements and deliverables and project goals via well-defined methods of customer
communication.
❑ Measure the existing process and its output to determine current quality performance (collect defect
metrics).
❑ Analyze defect metrics and determine the vital few causes.
❑ If an existing software process is in place, but improvement is required, Six Sigma suggests
two additional steps:
❑ Improve the process by eliminating the root causes of defects.
❑ Control the process to ensure that future work does not reintroduce the causes of defects.
❑ These core and additional steps are sometimes referred to as the DMAIC (define, measure,
analyze, improve, and control) method.
❑ If an organization is developing a software process (rather than improving an existing
process), the core steps are augmented as follows:
❑ Design the process to
(1) avoid the root causes of defects and
(2) to meet customer requirements.
❑ Verify that the process model will, in fact, avoid defects and meet customer requirements.
❑ This variation is sometimes called the DMADV (define, measure, analyze, design, and
verify) method.
❑ A comprehensive discussion of Six Sigma is best left to resources dedicated to the subject.
Software Quality Dilemma
It’s fine to state that software engineers should strive to produce high-quality systems. It’s even
better to apply good practices in your attempt to do so. But the situation discussed by Meyer is
real life and represents a dilemma for even the best software engineering organizations.
1. “Good Enough” Software
2. The Cost of Quality - The cost of quality can be divided into costs associated with
prevention, appraisal, and failure.
• Prevention costs include (1) the cost of management activities required to plan and coordinate all quality
control and quality assurance activities, (2) the cost of added technical activities to develop complete
requirements and design models, (3) test planning costs, and (4) the cost of all training associated with
these activities
• Appraisal costs include activities to gain insight into product condition the “first time through” each
process.
• Failure costs are those that would disappear if no errors appeared before or after shipping a product to
customers. Failure costs may be subdivided into internal failure costs and external failure costs. Internal
failure costs are incurred when you detect an error in a product prior to shipment.
• External failure costs are associated with defects found after the product has been shipped to the customer.
Examples of external failure costs are complaint resolution, product return and replacement, help line
support, and labor costs associated with warranty work.
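As an illustration of the categories above, here is a hypothetical cost-of-quality breakdown. The category names follow the text; every figure is invented for illustration:

```python
# Hypothetical cost-of-quality breakdown; the categories follow the text,
# the figures are invented for illustration only.
costs = {
    "prevention": {"quality planning": 10_000, "training": 5_000, "test planning": 4_000},
    "appraisal": {"reviews": 8_000, "inspections": 6_000},
    "internal failure": {"rework before shipment": 12_000},
    "external failure": {"complaint resolution": 9_000, "warranty work": 7_000},
}

# Sum each category, then the total cost of quality across all four.
for category, items in costs.items():
    print(f"{category:17s} {sum(items.values()):>8,}")

total = sum(sum(items.values()) for items in costs.values())
print(f"{'total':17s} {total:>8,}")
```

Tracking the split this way makes the usual argument visible: money spent on prevention and appraisal reduces the (typically much larger) internal and external failure costs.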
3. Risks - Poor quality leads to risks, some of them very serious.
4. Negligence and Liability - The quality of the delivered system comes into question.
5. Quality and Security - To build a secure system, you must focus on quality, and that focus
must begin during design.
6. The Impact of Management Actions - Software quality is often influenced as much by
management decisions as it is by technology decisions. Even the best software engineering
practices can be subverted by poor business decisions and questionable project management
actions. As each project task is initiated, a project leader will make decisions that can have a
significant impact on product quality.
• Estimation decisions
• Scheduling decisions
• Risk-oriented decisions
Achieving Software Quality
❑ Management and practice are applied within the context of four broad activities that help a
software team achieve high software quality:
• software engineering methods - If you expect to build high-quality software, you must understand the
problem to be solved. You must also be capable of creating a design that conforms to the problem while at the
same time exhibiting characteristics that lead to software that exhibits the quality dimensions and factors.
• project management techniques - The implications are clear: if (1) a project manager uses estimation to
verify that delivery dates are achievable, (2) schedule dependencies are understood and the team resists the
temptation to use short cuts, (3) risk planning is conducted so problems do not breed chaos, software quality
will be affected in a positive way. In addition, the project plan should include explicit techniques for quality
and change management.
• quality control actions - Quality control encompasses a set of software engineering actions that help to
ensure that each work product meets its quality goals. Models are reviewed to ensure that they are complete
and consistent. Code may be inspected in order to uncover and correct errors before testing commences. A
series of testing steps is applied to uncover errors in processing logic, data manipulation, and interface
communication.
• software quality assurance - The goal of quality assurance is to provide management and technical staff
with the data necessary to be informed about product quality, thereby gaining insight and confidence that
actions to achieve product quality are working. Of course, if the data provided through quality assurance
identifies problems, it is management’s responsibility to address the problems and apply the necessary resources to resolve them.
Introduction to Software review techniques
❑ Software reviews are a “filter” for the software process.
❑ That is, reviews are applied at various points during software engineering and serve to
uncover errors and defects that can then be removed.
❑ Software reviews “purify” software engineering work products, including requirements and
design models, code, and testing data.
❑ A review—any review—is a way of using the diversity of a group of people to:
1. Point out needed improvements in the product of a single person or team;
2. Confirm those parts of a product in which improvement is either not desired or not
needed;
3. Achieve technical work of more uniform, or at least more predictable, quality than can
be achieved without reviews, in order to make technical work more manageable.
Different types of reviews can be conducted as part of software engineering –
• An informal meeting around the coffee machine is a form of review, if technical problems
are discussed.
• A formal presentation of software architecture to an audience of customers, management,
and technical staff is also a form of review
Introduction to software quality assurance
Quality – the developed product meets its specification.
▪ Software quality can be defined as “the conformance to explicitly stated functional
requirements, explicitly documented development standards, and implicit characteristics that
are expected of all professionally developed software”
▪ There are two kinds of quality:
1. Quality of design refers to the characteristics that designers specify for an item.
2. Quality of conformance is the degree to which the design specifications are followed
during manufacturing.
Thus, in the software development process, quality of design is concerned with the requirements,
specification, and design of the system, while quality of conformance is concerned with the
implementation.
User satisfaction = Compliant product + Good quality + Delivery within budget
Quality Management
Ensuring that the required level of product quality is achieved:
• Defining procedures and standards
• Applying procedures and standards to the product and process
• Checking that procedures are followed
• Collecting and analysing various quality data
• Software Quality Assurance (SQA) is simply a way to assure quality in the software.
• It is the set of activities that ensure the processes, procedures, and standards used are suitable for
the project and are implemented correctly.
• Software quality assurance (also called quality management) is an umbrella activity that
is applied throughout the software process.
• SQA works in parallel with software development. It focuses on improving the process of
developing software so that problems can be prevented before they become major issues.
• It is a planned and systematic pattern of activities necessary to provide a high degree of
confidence in product quality.
▪ Software quality assurance (SQA) encompasses
• An SQA process
• Specific quality assurance and quality control tasks
• Effective software engineering practice
• Control of all software work products
• A procedure to ensure compliance with software development standards
• Measurement and reporting mechanisms
SQA Activities
1. SQA Management Plan:
▪ Make a plan for how you will carry out SQA throughout the project. Decide which set of
software engineering activities best fits the project.
▪ Identify the evaluations to be performed.
▪ Identify the audits and reviews to be performed.
▪ Define procedures for error reporting and tracking.
▪ Identify the documentation to be produced.
▪ Check level of SQA team skills.
2. Set Checkpoints:
▪ The SQA team should set checkpoints and evaluate the performance of the project on the
basis of data collected at the different checkpoints.
Disadvantages of SQA:
▪ Requires more resources
▪ Time-consuming process
▪ Requires employing more workers to help maintain quality
▪ Costly
ISO 9000 quality standards
▪ ISO 9000 describes quality assurance elements in generic terms that can be applied to any
business regardless of the products or services offered. To become registered to one of the
quality assurance system models contained in ISO 9000, a company’s quality system and
operations are scrutinized by third-party auditors for compliance to the standard and for
effective operation. Upon successful registration, a company is issued a certificate from a
registration body represented by the auditors. Semiannual surveillance audits ensure
continued compliance to the standard.
▪ In order to bring quality to products and services, many organizations are adopting a quality
assurance system.
▪ ISO standards are issued by the International Organization for Standardization (ISO) in
Switzerland
▪ ISO is the organization that standardizes things at the international level so that they become
easy to judge. Proper documentation is an important part of an ISO 9001 Quality
Management System.
▪ ISO 9001 is the quality assurance standard that applies to software engineering.
▪ It includes the requirements that must be present for an effective quality assurance system.
▪ The ISO 9001 standard is applicable to all engineering disciplines.
The Guideline steps for ISO 9001:2000 are:
▪ Establish quality management system
▪ Document the quality management system
▪ Support the quality
▪ Satisfy the customers
▪ Establish quality policy
▪ Conduct quality planning
▪ Perform management reviews
▪ Provide quality resources, infrastructure and environment
▪ Control actual planning, customer processes
▪ Control product development, purchasing function
▪ Control monitoring devices (inspection, audits etc.)
▪ Analyze quality information
▪ Make quality improvement
In order for a software organization to become registered to ISO 9001:2000, it must establish
policies and procedures to address each of the requirements above and then demonstrate that
those policies and procedures are being followed.
Different types of software documents can broadly be classified into internal documentation
and external documentation.
Who Tests the Software?
• Developer: understands the system, but will test "gently", and is driven by "delivery".
• Tester: must learn about the system, but will attempt to break it, and is driven by quality.
Testing without a plan is of no use; it wastes time and effort. Testing needs a strategy, and the
development team needs to work with the test team (“Egoless Programming”).
When to Test the Software?
For each component: write the component code, then unit test it.
Levels of testing:
1. Unit Testing
2. Integration Testing
3. Validation Testing
4. System Testing
Unit testing concentrates on each unit of the software as implemented in source code. It
focuses on each component individually, ensuring that it functions properly as a unit.
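A minimal sketch of unit testing a single component in isolation; the function under test (a triangle classifier) is invented for illustration, not taken from the source:

```python
def classify_triangle(a, b, c):
    # Component under test: classify a triangle by its side lengths.
    if a + b <= c or a + c <= b or b + c <= a:
        return "not a triangle"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Unit tests: exercise this one component by itself, before it is
# integrated with any other part of the system.
assert classify_triangle(3, 3, 3) == "equilateral"
assert classify_triangle(3, 3, 4) == "isosceles"
assert classify_triangle(3, 4, 5) == "scalene"
assert classify_triangle(1, 2, 3) == "not a triangle"
```

In practice such checks would live in a test framework, but the idea is the same: each component is verified on its own as soon as its code exists.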
Types of System Testing
1. Recovery Testing
2. Security Testing
3. Stress Testing
4. Performance Testing
5. Deployment Testing
Recovery Testing
It is a system test that forces the software to fail in a variety of ways
and verifies that recovery is properly performed.
If recovery is automatic (performed by the system itself), re-initialization, checkpointing
mechanisms, data recovery, and restart are evaluated for correctness.
If recovery requires human intervention, the mean-time-to-repair (MTTR) is evaluated to
determine whether it is within acceptable limits.
Security Testing
It attempts to verify software’s protection mechanisms, which
protect it from improper penetration (access).
During this test, the tester plays the role of the individual who
desires to penetrate the system.
Types of System Testing Cont.
Stress Testing
It executes a system in a manner that demands resources in
abnormal quantity, frequency or volume.
A variation of stress testing is a technique called sensitivity testing.
Performance Testing
It is designed to test the run-time performance of software.
It occurs throughout all steps in the testing process.
Even at the unit testing level, the performance of an individual
module may be tested.
Types of System Testing Cont.
Deployment Testing
It exercises the software in each environment in which it is to
operate.
In addition, it examines:
• All installation procedures
• Specialized installation software that will be used by customers
• All documentation that will be used to introduce the software to end users
Acceptance Testing
It is a level of the software testing where a system is tested for acceptability.
The purpose of this test is to evaluate the system’s compliance with the business
requirements.
It is a formal testing conducted to determine whether or not a system satisfies the
acceptance criteria with respect to user needs, requirements, and business processes.
It enables the customer to determine, whether or not to accept the system.
It is performed after System Testing and before making the system available for actual use.
Views of Test Objects
• Black Box Testing (closed box testing): testing based only on the specification
• White Box Testing (open box testing): testing based on the actual source code
• Grey Box Testing: testing with partial knowledge of the source code
Black Box Testing
Also known as specification-based testing
Tester has access only to running code and the specification it is supposed to satisfy
Test cases are written with no knowledge of internal workings of the code
No access to source code
So test cases don’t worry about structure
Emphasis is only on ensuring that the contract is met
Advantages
Scalable; not dependent on size of code
Testing needs no knowledge of implementation
Tester and developer can be truly independent of each other
Tests are done with requirements in mind
Helps to expose ambiguities or inconsistencies in the specifications
Test cases can be developed in parallel with code
Black Box Testing Cont.
Disadvantages
• Test size will have to be small
• Specifications must be clear, concise, and correct
• May leave many program paths untested
• Weighting of program paths is not possible
Test Case Design
• Examine the pre-condition and identify equivalence classes
• Choose inputs such that all classes are covered
• Apply the specification to each input to write down the expected output
Specification-based test case design: given a specification for operation op with
pre-condition X and post-condition Y:
• Test Case 1: input x1 (satisfying X), expected output y1
• Test Case 2: input x2 (satisfying X), expected output y2
Black Box Testing Cont.
Exhaustive testing is not always possible when there is a large set of input combinations,
because of budget and time constraints.
Special techniques are needed that select test cases smartly from all combinations of
test cases, in such a way that all scenarios are covered.
Equivalence Partitioning
Input data for a program unit usually falls into a number of
partitions, e.g. all negative integers, zero, all positive numbers
Each partition of input data makes the program behave in a similar
way
Two test cases based on members from the same partition are likely to
reveal the same bugs
Equivalence Partitioning (Black Box Testing)
By identifying and testing one member of each partition, we gain 'good' coverage with a 'small'
number of test cases
Testing one member of a partition should be as good as testing any member of the partition
Example - Equivalence Partitioning
For binary search, the following partitions exist:
• Inputs that conform to the pre-conditions
• Inputs where the pre-condition is false
• Inputs where the key element is a member of the array
• Inputs where the key element is not a member of the array
Pick specific conditions of the array:
• The array has a single value
• Array length is even
• Array length is odd
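The partition-based test cases for binary search can be sketched against a standard iterative implementation (this particular implementation is an assumption, not taken from the source):

```python
def binary_search(arr, key):
    # Precondition: arr is sorted in ascending order.
    # Returns the index of key in arr, or -1 if key is absent.
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == key:
            return mid
        elif arr[mid] < key:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

# One test case per equivalence partition identified above:
assert binary_search([2, 4, 6, 8], 6) == 2   # key is a member (even length)
assert binary_search([1, 3, 5], 4) == -1     # key is not a member (odd length)
assert binary_search([7], 7) == 0            # array has a single value
```

Each assertion stands in for its whole partition: any other in-partition input would exercise the same paths through the search.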
Equivalence Partitioning (Black Box Testing) Cont.
Example - Equivalence Partitioning
Example: Assume that we have to test a field which accepts SPI (Semester Performance
Index) as input (SPI range is 0 to 10).
• Invalid Class 1 (<= -1): pick any one input test data value less than or equal to -1
• Valid Class (0 to 10): pick any one input test data value from 0 to 10
• Invalid Class 2 (>= 11): pick any one input test data value greater than or equal to 11
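The SPI example can be sketched as an input validator plus one representative test value per equivalence class (the validator itself and its name are assumptions for illustration):

```python
def accept_spi(spi):
    # Accept a Semester Performance Index only if it lies in the
    # valid range 0 to 10 inclusive.
    return 0 <= spi <= 10

# One representative value per equivalence class:
assert accept_spi(7)         # valid class: 0 to 10
assert not accept_spi(-1)    # invalid class 1: <= -1
assert not accept_spi(11)    # invalid class 2: >= 11
```

Three test cases cover the three classes; by the partitioning argument, any other value from the same class should behave the same way.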
Boundary Value Analysis (BVA) (Black Box Testing)
It arises from the fact that most programs fail at input boundaries
Boundary testing is the process of testing between extreme ends or boundaries between
partitions of the input values.
In Boundary Testing, Equivalence Class Partitioning plays a good role
Boundary Testing comes after the Equivalence Class Partitioning
The basic idea in boundary value testing is to select input variable values at their
boundary values.
Boundary Value Analysis (BVA) (Black Box Testing)
Suppose the system asks for “a number between 100 and 999 inclusive”.
The boundaries are 100 (lower boundary) and 999 (upper boundary).
We therefore test the values 99, 100, 101 around the lower boundary and 998, 999, 1000
around the upper boundary.
BVA - Advantages
The BVA is easy to use and remember because of the uniformity of identified tests and the
automated nature of this technique.
One can easily control the expenses made on the testing by controlling the number of identified
test cases.
BVA is the best approach in cases where the functionality of a software is based on numerous
variables representing physical quantities.
The technique is good at exposing user-input problems in the software.
The procedure and guidelines are crystal clear and easy when it comes to determining the test
cases through BVA.
The test cases generated through BVA are very small.
Boundary Value Analysis (BVA) (Black Box Testing) Cont.
BVA - Disadvantages
This technique sometimes fails to test all the potential input values, so the results are
uncertain.
BVA does not test dependencies between two inputs.
This technique doesn’t fit well when it comes to Boolean Variables.
It only works well with independent variables that depict quantity.
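The "number between 100 and 999 inclusive" example from earlier can be sketched as follows; the validator is an assumed implementation of that requirement:

```python
def in_range(n):
    # Accept "a number between 100 and 999 inclusive".
    return 100 <= n <= 999

# Boundary value analysis: test just below, on, and just above each boundary.
assert [in_range(n) for n in (99, 100, 101)] == [False, True, True]    # lower
assert [in_range(n) for n in (998, 999, 1000)] == [True, True, False]  # upper
```

Six test values suffice; an off-by-one error in either comparison (for example writing `100 < n`) would be caught by the on-boundary cases.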
White Box Testing
Also known as structural testing
White Box Testing is a software testing method in which the internal
structure/design/implementation of the module being tested is known to the tester
Focus is on ensuring that even abnormal invocations are handled gracefully
Using white-box testing methods, you can derive test cases that
Guarantee that all independent paths within a module have been exercised at least once
Exercise all logical decisions on their true and false sides
Execute all loops at their boundaries
Exercise internal data structures to ensure their validity
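A minimal sketch of white-box test cases derived from the code's structure rather than from a specification; the function is invented for illustration. The three inputs together execute the loop zero times and more than once, and take the decision on both its true and false sides:

```python
def sum_positive(values):
    # Sum only the positive numbers in a list.
    total = 0
    for v in values:     # loop: exercise at its boundaries
        if v > 0:        # decision: exercise on both true and false sides
            total += v
    return total

assert sum_positive([]) == 0         # loop body executes zero times
assert sum_positive([5]) == 5        # loop runs once, decision true
assert sum_positive([-2, 3]) == 3    # decision takes both false and true
```

Note that these cases were chosen by reading the code, which is exactly what distinguishes white-box from black-box test design.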
Advantages
• Testing can be commenced at an earlier stage, as one need not wait for the GUI to be available
• Testing is more thorough, with the possibility of covering most paths
Disadvantages
• Since tests can be very complex, highly skilled resources are required, with thorough
knowledge of programming and implementation
• Test script maintenance can be a burden if the implementation changes too frequently
• Since this method of testing is closely tied to the application being tested, tools to cater to
every kind of implementation/platform may not be readily available
White-box testing strategies
One white-box testing strategy is said to be stronger than another if it detects all the types of
errors detected by the other strategy and, in addition, detects some more types of errors.
White-box testing strategies:
1. Statement coverage
2. Branch coverage
3. Path coverage
Statement coverage
It aims to design test cases so that every statement in a program is executed at least once.
The principal idea is that unless a statement is executed, it is very hard to determine whether
an error exists in that statement.
Unless a statement is executed, it is also very difficult to observe whether it causes a failure
due to some illegal memory access, wrong result computation, etc.
White-box testing strategies Cont.
Consider Euclid’s GCD computation algorithm.
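A sketch of statement coverage for Euclid's algorithm, using the common remainder-based formulation (the source does not show the code, so this implementation is an assumption):

```python
def gcd(a, b):
    # Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)
    # until the remainder is zero.
    while b != 0:
        a, b = b, a % b
    return a

# Statement coverage: together these inputs execute every statement,
# including the case where the loop body is skipped entirely.
assert gcd(5, 0) == 5          # loop body never executes
assert gcd(1071, 462) == 21    # loop body executes several times
```

With this small function, two inputs already reach every statement; statement coverage says nothing yet about covering every branch outcome or every path, which is why the stronger strategies above exist.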