STQA
Testing Methodology
Reference: Software Testing Principles and Practices, Naresh Chauhan, Oxford University
Pooja Malhotra
Evolution of Software Testing
• In the early days of software development, software testing was considered only a debugging process for removing errors after the development of software.
• By 1970, the term software engineering was in common use, but software testing was only just beginning at that time.
• In 1978, G.J. Myers realized the need to discuss the techniques of software testing as a separate subject. He wrote the book "The Art of Software Testing", which is a classic work on software testing.
• Myers discussed the psychology of testing and emphasized that testing should be done with the mindset of finding errors, not of demonstrating that errors are not present.
• By 1980, software professionals and organizations started talking about quality in software. Organizations formed quality assurance teams that take care of all the testing activities for a project right from the beginning.
• In the 1990s, testing tools finally came into their own. There was a flood of various tools, which are absolutely vital to adequate testing of software systems. However, they do not solve all the problems and cannot replace a testing process.
• Gelperin and Hetzel [79] have characterized the growth of software testing over time. Based on this, we can divide the evolution of software testing into the following phases:
Psychology of Software Testing:
• Wrong view: "Testing is the process of demonstrating that there are no errors."
• Right view: "Testing is the process of executing a program with the intent of finding errors."
The Quality Revolution
The Shewhart cycle
Testing produces Reliability and Quality
Quality leads to customer satisfaction
Testing controls risk factors
Software Testing Definitions
Model for Software Testing
Effective Software Testing vs Exhaustive Software Testing
• Valid Inputs
• Invalid Inputs
• Edited Inputs
• Race Conditions
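To see why exhaustive testing is impractical, the sketch below (all values illustrative) counts the candidate inputs for a single 32-bit field and samples the input categories above instead:

```python
# Illustrative sketch: exhaustive testing is infeasible. Even one 32-bit
# integer field admits 2**32 candidate inputs; effective testing samples
# representative valid, invalid, and edited inputs (values made up).
print(2 ** 32)  # 4294967296 possible values for a single field

valid_inputs = [0, 1, 2**31 - 1]      # representative legal values
invalid_inputs = [-1, 2**31]          # out-of-range values
edited_inputs = ["12a", " 7 ", ""]    # malformed edits a user might type
```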
Software Testing as a Process
Software Testing Terminology
• Failure
The inability of a system or component to perform a required
function according to its specification.
Error
Whenever a member of the development team makes a mistake in any phase of the SDLC, errors are produced. It might be a typographical error, a misreading of a specification, a misunderstanding of what a subroutine does, and so on. Thus, error is a very general term used for human mistakes.
Module A()
{
    ---
    while (a > n+1);   /* bug: the extra semicolon gives the loop an empty body, so the block below is not part of the loop */
    {
        ---
        print("The value of x is", x);
    }
    ---
}
• Testware
The documents created during the testing activities are known as testware (test plans, test specifications, test case designs, test reports, etc.).
• Incident
The symptom(s) associated with a failure that alert the user to the occurrence of a failure.
• Test Oracle
The means to judge the success or failure of a test, i.e., the correctness of the system for some test. The simplest oracle is comparing actual results with expected results by hand.
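A test oracle can be sketched as follows; the add function and its expected values are purely illustrative:

```python
# Minimal sketch of a test oracle: judging pass/fail by comparing actual
# results against expected results (function and values are illustrative).
def add(a, b):               # the unit under test
    return a + b

def oracle(actual, expected):
    """Return the verdict for one test case."""
    return "PASS" if actual == expected else "FAIL"

print(oracle(add(2, 3), 5))  # PASS
print(oracle(add(2, 3), 6))  # FAIL
```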
Life Cycle of a Bug
States of a Bug
Bugs affect the Economics of Software Testing
Software Testing Myths
Bug Classification based on Criticality
• Critical Bugs
This type of bug has the worst effect on the functioning of the software, such that it stops or hangs the normal functioning of the software.
• Major Bugs
This type of bug does not stop the functioning of the software, but it causes a functionality to fail to meet its requirements as expected.
• Medium Bugs
Medium bugs are less critical in nature as compared to critical and major bugs (e.g., output not according to standards, redundant or truncated output).
• Minor Bugs
This type of bug does not affect the functioning of the software (e.g., a typographical error or a misaligned printout).
Bug Classification based on SDLC
Testing Principles
Software Testing Life Cycle (STLC): a well-defined series of steps to ensure successful and effective testing.
Test Planning
The major output of test planning is the test plan document. Test
plans are developed for each level of testing. After analysing the
issues, the following activities are performed:
• Develop a test case format.
• Develop test case plans according to every phase of SDLC.
• Identify test cases to be automated.
• Prioritize the test cases according to their importance and
criticality.
• Define areas of stress and performance testing.
• Plan the test cycles required for regression testing.
All the details specified in the test design phase are documented in
the test design specification. This document provides the details of the
input specifications, output specifications, environmental needs, and
other procedural requirements for the test case.
Test Execution: Verification and Validation
In this phase, all test cases are executed including verification and validation.
• Verification test cases are started at the end of each phase of SDLC.
• Validation test cases are started after the completion of a module.
• It is the decision of the test team to opt for automation or manual execution.
• Test results are documented in test incident reports, test logs, testing status reports, test summary reports, etc.
Post-Execution / Test Review
Test Strategy
Development of Test Strategy
The testing strategy should start at the component level and finish at the
integration of the entire system. Thus, a test strategy includes testing the
components being built for the system, and slowly shifts towards testing the
whole system. This gives rise to two basic terms—Verification and
Validation—the basis for any type of testing. It can also be said that the
testing process is a combination of verification and validation.
Validation Activities
• Unit Testing
• Integration Testing
• System Testing
• Acceptance Testing
Testing Tactics
The ways to perform various types of testing under a specific test strategy:
• Manual Testing
• Automated Testing
Testing Tools: a resource for performing a test process.
Considerations in Developing Testing Methodologies
Verification and Validation (V & V) Activities
VERIFICATION
Verification is a set of activities that ensures correct implementation of
specific functions in a software.
Verification is to check whether the software conforms to
specifications.
• If verification is not performed at early stages, there is always a chance of a mismatch between the required product and the delivered product.
• Verification exposes more errors.
• Early verification decreases the cost of fixing bugs.
• Early verification enhances the quality of software.
VERIFICATION ACTIVITIES :All the verification activities are performed
in connection with the different phases of SDLC. The following
verification activities have been identified:
– Verification of Requirements and Objectives
– Verification of High-Level Design
– Verification of Low-Level Design
– Verification of Coding (Unit Verification)
Verification of Requirements
• Correctness
• Unambiguous(Every requirement has only one interpretation. )
• Consistent(No specification should contradict or conflict with another.)
• Completeness
• Updation
• Traceability
– Backward Traceability
– Forward Traceability.
Verification of High Level Design
2. The tester also prepares a Function Test Plan, which is based on the SRS. This plan will be referenced at the time of function testing.
3. The tester also prepares an Integration Test Plan, which will be referred to at the time of integration testing.
4. The tester verifies that all the components and their interfaces are in tune with the requirements of the user. Every requirement in the SRS should map to the design.
Verification of Coding
• Check that every design specification in HLD and LLD has been
coded using traceability matrix.
• Examine the code against a language specification checklist.
• Verify every statement, control structure, loop, and logic
• Misunderstood or incorrect Arithmetic precedence
• Mixed mode operations
• Incorrect initialization
• Precision Inaccuracy
• Incorrect symbolic representation of an expression
• Different data types
• Improper or nonexistent loop termination
• Failure to exit
How to Verify Code
Two kinds of techniques are used to verify the coding:
(a) static testing, and (b) dynamic testing.
UNIT VERIFICATION:
Verification of coding cannot be done for the whole system at once; the system is therefore divided into modules. Verification of coding thus means verifying the code of each module, done by its developers. This is also known as unit verification testing.
Listed below are the points to be considered while performing unit
verification :
• Interfaces are verified to ensure that information properly flows in and
out of the program unit under test.
• The local data structure is verified to maintain data integrity.
• Boundary conditions are checked to verify that the module is working
fine on boundaries also.
• All independent paths through the control structure are exercised to
ensure that all statements in a module have been executed at least
once.
• All error handling paths are tested.
• Tests are developed to determine whether the product satisfies the users' requirements, as stated in the requirement specification.
• The bugs still existing in the software after coding need to be uncovered.
• It is the last chance to discover bugs; otherwise they will move into the final product released to the customer.
Concept of Unit Testing
• A unit may be a:
– Function
– Procedure
– Method
– Module
– Component
• Unit Testing
– Testing a program unit in isolation, i.e., in a stand-alone manner.
– Objective: verify that the unit works as expected.
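A minimal sketch of unit testing in a stand-alone manner (the leap-year unit and its cases are illustrative, not from the text):

```python
# Sketch: testing one unit in isolation, with no other part of the system.
def is_leap_year(year):
    """The unit under test."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Stand-alone test cases, including boundary-style inputs.
assert is_leap_year(2000)        # divisible by 400
assert not is_leap_year(1900)    # divisible by 100 but not 400
assert is_leap_year(2024)
assert not is_leap_year(2023)
print("unit works as expected")
```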
Unit Testing
[Figure: the module to be tested is driven by test cases that exercise its interface, local data structures, boundary conditions, independent paths, and error-handling paths.]
Unit Testing
Drivers
A driver is a dummy module that calls the unit under test, passes test inputs to it, and collects the results.
Unit Validation Testing
Stubs
A stub can be defined as a piece of software that works similarly to a unit referenced by the unit being tested, but is much simpler than the actual unit.
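A driver and a stub can be sketched together; all names and the fixed rate below are hypothetical:

```python
# Sketch of a driver and a stub for unit testing (names/values made up).

def tax_rate_stub(income):
    """Stub: stands in for the real tax-rate unit; far simpler, canned answer."""
    return 0.10

def net_salary(gross, rate_lookup):
    """The unit under test: it references a rate-lookup unit."""
    return gross - gross * rate_lookup(gross)

def driver():
    """Driver: calls the unit under test with test data and checks the result."""
    assert net_salary(1000, tax_rate_stub) == 900.0
    print("net_salary passed with the stubbed rate unit")

driver()
```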
Integration Testing
Decomposition-based Integration Testing
Incremental Integration Testing
Practical Approach for Integration Testing
Call Graph Based Integration
If we refine the functional decomposition tree into a module calling graph, we move towards behavioural testing at the integration level. This can be done with the help of a call graph.
A call graph is a directed graph wherein nodes are modules or units, and a directed edge from one node to another means that the first module has called the second. The call graph can be captured in matrix form, known as the adjacency matrix.
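A minimal sketch of capturing a call graph as an adjacency matrix (the four-module graph below is made up for illustration):

```python
# Sketch: a call graph stored as an adjacency matrix (example graph made up).
modules = ["A", "B", "C", "D"]
calls = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]  # directed call edges

idx = {m: i for i, m in enumerate(modules)}
n = len(modules)
adj = [[0] * n for _ in range(n)]
for caller, callee in calls:
    adj[idx[caller]][idx[callee]] = 1   # row module calls column module

for row in adj:
    print(row)

# Pair-wise integration needs one test session per edge of the call graph:
print("test sessions:", sum(map(sum, adj)))  # 4
```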
Pair-wise Integration
Number of test sessions = number of edges in the call graph = 19
Neighborhood Integration
Path Based Integration
The passing of control from one unit to another is necessary for integration testing. There should also be information within the module regarding instructions that call the module or return to it. This must be tested at the time of integration, and it can be done with the help of path-based integration, defined by Paul C. Jorgensen.
Source Node :It is an instruction in the module at which the execution starts or
resumes. The nodes where the control is being transferred after calling the
module are also source nodes.
Message:When the control from one unit is transferred to another unit, then
the programming language mechanism used to do this is known as a
message.
MM-Path Graph:
It can be defined as an extended flow graph where nodes are module execution paths (MEPs) and edges are messages. It returns from the last called unit to the first unit where the call was made. In this graph, messages are highlighted with thick lines.
MEP Graph
Function Testing
Categories of System Tests
Recovery Testing
Security Testing
Security tests are designed to verify that the system meets its security requirements. Security may include controlling access to data, encrypting data in communication, ensuring secrecy of stored data, auditing security events, etc.
• Confidentiality: the requirement that data and processes be protected from unauthorized disclosure.
• Integrity: the requirement that data and processes be protected from unauthorized modification.
• Availability: the requirement that data and processes be protected from denial of service to authorized users.
• Authentication: a measure designed to establish the validity of a transmission, message, or originator. It allows the receiver to have confidence that the information it receives originates from a specific known source.
• Authorization: the process of determining that a requester is allowed to receive a service or perform an operation. Access control is an example of authorization.
• Non-repudiation: a measure intended to prevent the later denial that an action happened or a communication took place.
Security Testing is the process of attempting to devise
test cases to evaluate the adequacy of protective
procedures and countermeasures.
• Security test scenarios should include negative scenarios
such as misuse and abuse of the software system.
• Security requirements should be associated with each
functional requirement. For example, the log-on requirement
in a client-server system must specify the number of retries
allowed, the action to be taken if the log-on fails, and so on.
• A software project has security issues that are global in nature and are, therefore, related to the application's architecture and overall implementation. For example, a web application may have a global requirement that all private customer data, of any kind, is stored in encrypted form in the database.
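A negative log-on scenario can be sketched as below; the retry limit and the lock-out behaviour are assumed for illustration, not taken from any specific requirement:

```python
# Sketch of a negative security test: verify a log-on retry limit
# (MAX_RETRIES and the lock-out policy are hypothetical assumptions).
MAX_RETRIES = 3

class Account:
    def __init__(self, password):
        self.password = password
        self.failed = 0
        self.locked = False

    def log_on(self, attempt):
        if self.locked:
            return "locked"
        if attempt == self.password:
            self.failed = 0
            return "ok"
        self.failed += 1
        if self.failed >= MAX_RETRIES:
            self.locked = True          # lock after too many failures
        return "denied"

acct = Account("s3cret")
for _ in range(MAX_RETRIES):
    assert acct.log_on("wrong") == "denied"
# Abuse scenario: even the correct password must now be rejected.
assert acct.log_on("s3cret") == "locked"
print("retry-limit test passed")
```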
– Useful types of security tests include the following:
• Verify that only authorized accesses to the system are permitted.
• Verify the correctness of both encryption and decryption algorithms for systems where data/messages are encoded.
• Verify that illegal reading of files, to which the perpetrator is not authorized, is not allowed.
• Ensure that virus checkers prevent or curtail entry of viruses into the system.
• Try to identify any "backdoors" in the system usually left open by the software developers.
Performance Testing
• Tests are designed to determine the performance of the actual system compared to the expected one.
• Tests are designed to verify response time, execution time, throughput, resource utilization, and traffic rate.
• One needs to be clear about the specific data to be captured in order to evaluate performance metrics.
• For example, if the objective is to evaluate response time, then one needs to capture:
– End-to-end response time (as seen by an external user)
– CPU time
– Network connection time
– Database access time
– Waiting time
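Capturing end-to-end response time can be sketched as follows; handle_request is a hypothetical stand-in for the real operation being measured:

```python
# Sketch: measuring end-to-end response time for one request
# (handle_request simulates the real work with a short sleep).
import time

def handle_request():
    time.sleep(0.05)             # stand-in for the operation being timed
    return "ok"

start = time.perf_counter()
result = handle_request()
elapsed = time.perf_counter() - start
print(f"response: {result}, end-to-end: {elapsed * 1000:.1f} ms")
```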
Stress Tests
• The goal of stress testing is to evaluate and determine the behavior
of a software component while the offered load is in excess of its
designed capacity
• The system is deliberately stressed by pushing it to and beyond its
specified limits
• It ensures that the system can perform acceptably under worst-case conditions, such as an expected peak load. If the limit is exceeded and the system does fail, then the recovery mechanism should be invoked.
• Stress tests are targeted to bring out the problems associated with
one or more of the following:
– Memory leak: A failure in a program to release discarded memory
– Buffer allocation: To control the allocation and freeing of buffers
– Memory carving: A useful tool for analyzing physical and virtual
memory dumps when the memory structures are unknown or
have been overwritten.
Load and Stability Tests
• Tests are designed to ensure that the system remains
stable for a long period of time under full load
• When a large number of users are introduced and
applications that run for months without restarting, a
number of problems are likely to occur:
– the system slows down
– the system encounters functionality problems
– the system crashes altogether
• Load and stability testing typically involves exercising
the system with virtual users and measuring the
performance to verify whether the system can support
the anticipated load
• This kind of testing helps one understand how the system will fare in real-life situations.
Usability Testing
• Ease of Use
• Interface steps
• Response Time
• Help System
• Error Messages
Graphical User Interface Tests
– Tests are designed to verify the look and feel of the interface presented to the users of an application system.
– Tests are designed to verify different components such as icons, menu bars, dialog boxes, scroll bars, list boxes, and radio buttons.
– The GUI can be utilized to test the functionality behind the interface, such as accurate response to database queries.
– Tests the usefulness of the on-line help, error messages, tutorials, and user manuals.
– The usability characteristics of the GUI are tested, which include the following:
• Accessibility: Can users enter, navigate, and exit with relative ease?
• Responsiveness: Can users do what they want, when they want, in a way that is clear?
• Efficiency: Can users do what they want with a minimum number of steps and in minimum time?
• Comprehensibility: Do users understand the product structure with a minimum amount of effort?
Compatibility/Conversion/Configuration Testing
• Operating systems: The specifications must state all the targeted end-user operating systems on which the system being developed will run.
• Software/Hardware: The product may need to operate with certain versions of web browsers, with hardware devices such as printers, or with other software, such as virus scanners or word processors.
• Conversion Testing: Compatibility may also extend to upgrades from previous versions of the software. In this case, the system must be upgraded properly, and all the data and information from the previous version should also be considered.
• Rank possible configurations from the most to the least common for the target system.
• Testers must identify appropriate test cases and data for compatibility testing.
Acceptance Testing
Acceptance Testing
• Alpha Testing: conducted at the developer's site by the customer, with developers present to record errors and usage problems.
• Beta Testing: conducted at one or more customer sites by the end users of the software, with no developer present; users report the problems they encounter back to the developer.
References:
Module 2
Testing Techniques
Static Testing
Types of Static Testing
• Software Inspections
• Walkthroughs
• Technical Reviews
Inspections
An inspection process is characterized by:
• Inspection steps
• Roles for participants
• The item being inspected
Inspection Process
Steps in the Inspection
1. Planning: During this phase, the following is executed:
• The product to be inspected is identified.
• A moderator is assigned.
• The objective of the inspection is stated. If the objective is defect detection, then the type of defect to detect, such as a design error, interface error, or code error, must be specified.
During planning, the moderator performs the following activities:
• Assures that the product is ready for inspection
• Selects the inspection team and assigns their roles
• Schedules the meeting venue and time
• Distributes the inspection material, like the item to be inspected, checklists, etc.
Readiness Criteria:
• Completeness
• Minimal functionality
• Readability
• Complexity
• Requirements and design documents
Inspection Team:
• Moderator
• Author
• Presenter
• Record keeper
• Reviewers
• Observer
2. Overview: In this stage, the inspection team is provided with the
background information for inspection. The author presents the rationale for
the product, its relationship to the rest of the products being developed, its
function and intended use, and the approach used to develop it. This
information is necessary for the inspection team to perform a successful
inspection.
The opening meeting may also be called by the moderator. In this meeting, the
objective of inspection is explained to the team members. The idea is that
every member should be familiar with the overall purpose of the inspection.
3. Individual Preparation: After the overview, the reviewers individually prepare themselves for the inspection process by studying the documents provided to them in the overview session.
– List of questions
– Potential Change Request (CR)
– Suggested improvement opportunities
Completed preparation logs are submitted to the moderator prior to the
inspection meeting.
Inspection Meeting/Examination:
– The author makes a presentation
– The presenter reads the code
– The record keeper documents the CR
– Moderator ensures the review is on track
At the end, the moderator concludes the meeting and produces a
summary of the inspection meeting.
4. Re-work: The summary list of the bugs that arise during the inspection meeting needs to be reworked by the author.
– Make a list of all the CRs
– Make a list of improvements
– Record the minutes of the meeting
– The author works on the CRs to fix the issues
Benefits of the Inspection Process:
• Bug Reduction
• Bug Prevention
• Productivity
• Real-time Feedback to Software Engineers
• Reduction in Development Resources
• Quality Improvement
• Project Management
• Checking Coupling and Cohesion
• Learning through Inspection
• Process Improvement
Variants of Inspection process
Active Design Reviews
Formal Technical Asynchronous Review Method (FTArm)
Gilb Inspection
Humphrey’s Inspection Process
N-Fold Inspection
Checklist
Structure:
❏ Does the code completely and correctly implement the design?
❏ Does the code conform to any applicable coding standards?
❏ Is the code well-structured, consistent in style, and consistently
formatted?
❏ Are there any uncalled or unneeded procedures or any
unreachable code?
❏ Are there any leftover stubs or test routines in the code?
❏ Can any code be replaced by calls to external reusable
components or library functions?
❏ Are there any blocks of repeated code that could be condensed
into a single procedure?
❏ Is storage use efficient?
❏ Are any modules excessively complex and should be restructured
or split into multiple routines?
Checklist
Arithmetic Operations:
❏ Does the code avoid comparing floating-point
numbers for equality?
❏ Does the code systematically prevent rounding
errors?
❏ Are divisors tested for zero or noise?
Checklist
Loops and Branches:
❏ Are all loops, branches, and logic constructs complete, correct,
and properly nested?
❏ Are all cases covered in an IF-ELSEIF or CASE block,
including ELSE or DEFAULT clauses?
❏ Does every case statement have a default?
❏ Are loop termination conditions obvious and always achievable?
❏ Are indexes or subscripts properly initialized, just prior to the
loop?
❏ Does the code in the loop avoid manipulating the index variable
or using it upon exit from the loop?
Checklist
Documentation:
❏ Is the code clearly and adequately documented
with an easy-to-maintain commenting style?
❏ Are all comments consistent with the code?
Variables:
❏ Are all variables properly defined with
meaningful, consistent, and clear names?
❏ Do all assigned variables have proper type
consistency or casting?
❏ Are there any redundant or unused variables?
Checklist
Input / Output errors:
• If the file or peripheral is not ready, is that error
condition handled?
• Does the software handle the situation of the
external device being disconnected?
• Have all error messages been checked for
correctness, appropriateness, grammar, and
spelling?
• Are all exceptions handled by some part of the
code?
Scenario based Reading
Structured Walkthroughs
Technical Reviews
Black Box Testing
• To test the modules independently.
Boundary Value Analysis (BVA)
Guidelines for Boundary Value Analysis
• The equivalence class specifies a range
– If an equivalence class specifies a range of values, then construct
test cases by considering the boundary points of the range and
points just beyond the boundaries of the range
BVA: "Single-Fault" Assumption Theory
The basic form of implementation is to hold all but one of the variables
at their nominal (normal or average) values and allow the remaining
variable to take on its extreme values. The values used to test the
extremities are:
• Min: minimum
• Min+: just above minimum
• Nom: average (nominal)
• Max-: just below maximum
• Max: maximum
Boundary Value Checking
For a module with two input variables A and B, boundary value checking
under the single-fault assumption yields 4n + 1 = 9 test cases:
• Anom, Bmin
• Anom, Bmin+
• Anom, Bmax
• Anom, Bmax-
• Amin, Bnom
• Amin+, Bnom
• Amax, Bnom
• Amax-, Bnom
• Anom, Bnom
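These single-fault combinations can be generated mechanically. A minimal Python sketch (the function name and the example ranges are illustrative, not from the text):

```python
def bva_cases(ranges, noms):
    """Single-fault boundary value test cases: hold all variables at their
    nominal values and vary one variable at a time over min, min+, max-, max."""
    cases = [tuple(noms)]  # the all-nominal case
    for i, (lo, hi) in enumerate(ranges):
        for v in (lo, lo + 1, hi - 1, hi):
            case = list(noms)
            case[i] = v
            cases.append(tuple(case))
    return cases

# Two variables A and B, each in [1, 200], nominal value 100:
cases = bva_cases([(1, 200), (1, 200)], [100, 100])
```

For n = 2 variables this yields 4n + 1 = 9 cases, matching the nine combinations listed above.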
Robustness Testing Method
Robustness testing extends boundary value checking with values just
outside the valid range:
• Amax+, Bnom
• Amin-, Bnom
• Anom, Bmax+
• Anom, Bmin-
It can be generalized that for n input variables in a module,
6n + 1 test cases are designed with robustness testing.
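The 6n + 1 robustness cases can be generated the same way by adding the out-of-range values min- and max+. A Python sketch under the same illustrative ranges:

```python
def robustness_cases(ranges, noms):
    """Robustness testing: single-fault BVA extended with values just
    outside the valid range (min-1 and max+1), giving 6n + 1 cases."""
    cases = [tuple(noms)]  # the all-nominal case
    for i, (lo, hi) in enumerate(ranges):
        # min-, min, min+, max-, max, max+ for one variable at a time
        for v in (lo - 1, lo, lo + 1, hi - 1, hi, hi + 1):
            case = list(noms)
            case[i] = v
            cases.append(tuple(case))
    return cases

cases = robustness_cases([(1, 200), (1, 200)], [100, 100])
```

For two variables this gives 6(2) + 1 = 13 cases, including the out-of-range inputs such as Amin-, Bnom.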
Worst-Case Testing Method
• When more than one variable is at an extreme value, i.e., when more
than one variable is on the boundary, it is called the worst-case
testing method. For n input variables, worst-case testing designs
5^n test cases.
BVA: The Triangle Problem
The triangle problem accepts three integers (a, b, and c) as its input,
each of which is taken to be a side of a triangle. The values of these
inputs are used to determine the type of the triangle (Equilateral,
Isosceles, Scalene, or not a triangle).
For the inputs to be declared a triangle, they must satisfy six
conditions:
C1. 1 ≤ a ≤ 200. C2. 1 ≤ b ≤ 200.
C3. 1 ≤ c ≤ 200. C4. a < b + c.
C5. b < a + c. C6. c < a + b.
Otherwise, the input is declared not to be a triangle.
The type of the triangle, provided the conditions are met, is determined
as follows:
1. If all three sides are equal, the output is Equilateral.
2. If exactly one pair of sides is equal, the output is Isosceles.
3. If no pair of sides is equal, the output is Scalene.
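The specification above can be written directly as a small program; this Python sketch mirrors conditions C1–C6 and the three type rules:

```python
def triangle_type(a, b, c):
    """Classify a triangle per conditions C1-C6 of the problem statement."""
    # C1-C3: each side must lie in [1, 200]
    if not all(1 <= s <= 200 for s in (a, b, c)):
        return "Not a Triangle"
    # C4-C6: each side must be less than the sum of the other two
    if not (a < b + c and b < a + c and c < a + b):
        return "Not a Triangle"
    if a == b == c:
        return "Equilateral"
    if a == b or b == c or a == c:
        return "Isosceles"
    return "Scalene"
```

Running it against the boundary value test cases (e.g. (100, 100, 200) fails C6, since 200 < 100 + 100 does not hold) reproduces the expected outputs in the test-case table.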
Test Cases for the Triangle Problem
Boundary value analysis test cases (min = 1, min+ = 2, nom = 100,
max- = 199, max = 200):

Case   a     b     c     Expected Output
1      100   100   1     Isosceles
2      100   100   2     Isosceles
3      100   100   100   Equilateral
4      100   100   199   Isosceles
5      100   100   200   Not a Triangle
6      100   1     100   Isosceles
7      100   2     100   Isosceles
8      100   199   100   Isosceles
9      100   200   100   Not a Triangle
10     1     100   100   Isosceles
11     2     100   100   Isosceles
12     199   100   100   Isosceles
13     200   100   100   Not a Triangle
Equivalence Class Testing
• An input domain may be too large for all its elements to be used as test
input
• The input domain is partitioned into a finite number of subdomains
• Each subdomain is known as an equivalence class, and it serves as a source
of at least one test input
• A valid input to a system is an element of the input domain that is expected
to return a non-error value
• An invalid input is an input that is expected to return an error value.
Figure: (a) too many test inputs; (b) one input is selected from each subdomain
Guidelines for Equivalence Class Partitioning
• An input condition specifies a range [a, b]
– one equivalence class for a < X < b, and
– two other classes for X < a and X > b to test the system with invalid
inputs
• An input condition specifies a set of values
– one equivalence class for each element of the set {M1}, {M2}, ...,
{MN}, and
– one equivalence class for elements outside the set {M1, M2, ..., MN}
• Input condition specifies for each individual value
– If the system handles each valid input differently then create one
equivalence class for each valid input
• An input condition specifies the number of valid values (Say N)
– Create one equivalence class for the correct number of inputs
– two equivalence classes for invalid inputs – one for zero values and one
for more than N values
• An input condition specifies a “must be” value
– Create one equivalence class for a “must be” value, and
– one equivalence class for something that is not a “must be” value
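As a concrete sketch of the first guideline, a range condition such as 1 ≤ X ≤ 50 yields one valid and two invalid classes; the representative values chosen below are arbitrary:

```python
def range_partition(a, b):
    """Equivalence classes for an input condition specifying a range [a, b]:
    one valid class inside the range, two invalid classes outside it.
    Returns one representative test input per class."""
    return {
        "valid (a <= X <= b)": (a + b) // 2,   # any in-range value works
        "invalid (X < a)": a - 1,
        "invalid (X > b)": b + 1,
    }

reps = range_partition(1, 50)
```

Each representative becomes one test input, so the whole range is covered with three cases instead of fifty.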
Identification of Test Cases
Test cases for each equivalence class can be identified by:
• For each equivalence class with valid input that has not
been covered by test cases yet, write a new test case
covering as many uncovered equivalence classes as possible
• For each equivalence class with invalid input that has not
been covered by test cases, write a new test case that covers
one and only one of the uncovered equivalence classes
Example
I1 = {<A,B,C> : 1 ≤ A ≤ 50}
I2 = {<A,B,C> : 1 ≤ B ≤ 50}
I3 = {<A,B,C> : 1 ≤ C ≤ 50}
I4 = {<A,B,C> : A < 1}
I5 = {<A,B,C> : A > 50}
I6 = {<A,B,C> : B < 1}
I7 = {<A,B,C> : B > 50}
I8 = {<A,B,C> : C < 1}
I9 = {<A,B,C> : C > 50}
Advantages of Equivalence Class Partitioning
• One gets a better idea about the input domain being covered
with the selected test cases
Formation of Decision Table
• Condition Stub
• Action Stub
• Condition Entry
• Action Entry
• It comprises a set of conditions (or, causes) and a set of effects (or,
results) arranged in the form of a column on the left of the table
• In the second column, next to each condition, we have its possible
values: Yes (Y), No (N), and Don’t Care (Immaterial) state.
• To the right of the “Values” column, we have a set of rules. For each
combination of the three conditions {C1,C2,C3}, there exists a rule
from the set {R1,R2, ..}
• Each rule comprises a Yes (Y), No (N), or Don’t Care (“-”) response,
and contains an associated list of effects(actions) {E1,E2,E3}
• For each relevant effect, an effect sequence number specifies the order
in which the effect should be carried out, if the associated set of
conditions are satisfied
• Each rule of a decision table represents a test case
Test case design using decision table
• The columns in the decision table are transformed into test cases.
• If there are K rules over n binary conditions, there are at least K test
cases and at the most 2^n test cases.
Decision Table Based Testing
Example
• A program calculates the total salary of an employee under these
conditions: if the working hours are less than or equal to 48, normal
salary is given. Hours over 48 on normal working days are paid at
1.25 times the normal rate. However, on holidays or Sundays, hours
are paid at 2.00 times the normal rate. Design the test cases using
decision table testing.
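The salary rules read as a two-condition decision table (hours ≤ 48? holiday/Sunday?). One plausible implementation is sketched below; the slides do not say whether holiday pay applies to all hours or only to overtime hours, so the all-hours reading is an assumption:

```python
def total_salary(hours, rate, holiday_or_sunday=False):
    """Decision-table rules (assumed reading for holidays):
    R1: holiday/Sunday           -> all hours at 2.00 x rate
    R2: normal day, hours <= 48  -> normal salary
    R3: normal day, hours > 48   -> hours beyond 48 at 1.25 x rate
    """
    if holiday_or_sunday:
        return hours * rate * 2.0
    if hours <= 48:
        return hours * rate
    return 48 * rate + (hours - 48) * rate * 1.25
```

Each rule of the table gives one test case: a normal 40-hour week, a 50-hour week with overtime, and a holiday shift.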
Decision Table Based Testing
The test cases derived from the decision table are given below:
Dynamic Testing: White Box Testing Techniques
White-box testing (also known as clear box testing, glass box testing,
transparent box testing, and structural testing) is a method of testing
software that tests internal structures or workings of an application. White-
box testing can be applied at the unit, integration and system levels of the
software testing process.
• White box testing needs the full understanding of the logic/structure
of the program.
• Test case designing using white box testing techniques
– Control Flow testing method
• Basis Path testing method
• Loop testing
– Data Flow testing method
– Mutation testing method
• Control flow refers to flow of control from one instruction to another
• Data flow refers to propagation of values from one variable or constant to
another variable
Basis Path Testing
Basis path testing is the technique of selecting the paths that provide a basis
set of execution paths through the program.
• Path Testing is based on control structure of the program for which flow
graph is prepared.
Path Testing Terminology
Path: A path through a program is a sequence of instructions or statements that starts
at an entry, junction, or decision and ends at another, or possibly the same, junction,
decision, or exit.
Segment: Paths consist of segments. The smallest segment is a link, that is, a single
process that lies between two nodes (e.g., junction-process-junction, junction process-
decision, decision-process-junction, decision-process-decision).
Length of a Path: The length of a path is measured by the number of links in it and
not by the number of instructions or statements executed along the path. An
alternative way to measure the length of a path is by the number of nodes traversed.
Independent Path: An independent path is any path through the graph that
introduces at least one new set of processing statements or a new
condition. An independent path must traverse at least one edge that has
not been traversed before the path is defined.
Cyclomatic Complexity
Cyclomatic complexity is a software metric used to indicate the complexity
of a program. It is a quantitative measure of the number of linearly
independent paths through a program's source code. It was developed by
Thomas J. McCabe, Sr. in 1976.
• The testing strategy, called basis path testing by McCabe who first
proposed it, is to test each linearly independent path through the program;
in this case, the number of test cases will equal the cyclomatic complexity
of the program.
• Cyclomatic Complexity (logical complexity of program)
Example
Cyclomatic Complexity
V(G) = e - n + 2P
     = 10 - 8 + 2(1)
     = 4
V(G) = number of predicate nodes + 1
     = 3 (nodes B, C, and F) + 1
     = 4
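Both formulas can be checked mechanically; a minimal Python sketch:

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """V(G) = e - n + 2P, where P is the number of connected components."""
    return edges - nodes + 2 * components

def cyclomatic_from_predicates(predicate_nodes):
    """V(G) = number of predicate (decision) nodes + 1."""
    return predicate_nodes + 1
```

For the example graph (10 edges, 8 nodes, predicate nodes B, C, and F), both routes give V(G) = 4, so four independent paths must be tested.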
Example
Independent Paths
• A-B-F-H
• A-B-F-G-H
• A-B-C-E-B-F-G-H
• A-B-C-D-F-H
Loop Testing
Simple Loops
• Check whether you can bypass the loop or not. If the test case for
bypassing the loop is executed and, still you enter inside the loop, it
means there is a bug.
• Check whether the loop control variable is negative.
• Write one test case that executes the statements inside the loop.
• Write test cases for a typical number of iterations through the loop.
• Write test cases for checking the boundary values of maximum and
minimum number of iterations defined (say min and max) in the loop. It
means we should test for the min, min+1, min-1, max-1, max and
max+1 number of iterations through the loop.
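The boundary-iteration idea can be sketched against a toy loop; the MIN_ITER/MAX_ITER limits below are assumed for illustration, not from the slides:

```python
def loop_under_test(n):
    """Toy loop whose body should execute exactly n times."""
    count = 0
    for _ in range(n):
        count += 1
    return count

MIN_ITER, MAX_ITER = 0, 100  # assumed iteration limits for illustration

# Bypass case (min), one pass, a typical count, and the max boundaries:
for n in (MIN_ITER, MIN_ITER + 1, 10, MAX_ITER - 1, MAX_ITER):
    assert loop_under_test(n) == n  # the loop body ran exactly n times
```

The bypass case (n = 0) is the one that catches loops whose body executes at least once by mistake.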
Loop Testing
Nested Loops: When two or more loops are embedded, it is called a
nested loop.
The strategy is to start with the innermost loop while holding the outer
loops at their minimum values, and continue outward in this manner until
all loops have been covered.
Loop Testing
Concatenated Loops: Two loops are concatenated when they occur one after
the other in sequence. If the loops are independent of each other, they
can be tested as simple loops; if the loop counter of one is used as the
initial value of the other, they are treated as nested loops.
Data Flow Testing
Data Flow testing helps us to pinpoint any of
the following issues:
• A variable that is declared but never used within
the program.
• A variable that is used but never declared.
• A variable that is defined multiple times before it
is used.
• Deallocating a variable before it is used.
Data Flow Testing
Data flow testing closely examines the state of the data in the control
flow graph, resulting in a richer test suite than one obtained from
control-flow-based path testing strategies such as branch coverage or
all-statement coverage.
Data Flow Testing
It can be observed that not all data-flow anomalies are harmful, but most of
them are suspicious and indicate that an error can occur.
Data Flow Testing
Static Data Flow Testing : With static analysis, the source code is analyzed
without executing it.
Data Flow Testing
Find the du-paths and dc-paths for the variable payment:

Variable   Defined at         Used at
payment    0, 3, 7, 10, 12    7, 10, 11, 12, 16
Mutation Testing
Mutation testing is a technique that focuses on
measuring the adequacy of test data (or test cases).
The original intention behind mutation testing was to
expose and locate weaknesses in test cases. Thus,
mutation testing is a way to measure the quality of test
cases.
Mutation Testing
• Mutation testing helps a user create test data by interacting with the
user to iteratively strengthen the quality of test data. During mutation
testing, faults are introduced into a program by creating many
versions of the program, each of which contains one fault. Test data
are used to execute these faulty programs with the goal of causing
each faulty program to fail.
Mutation Testing
• Modify a program by introducing a single small change to the code
• A modified program is called a mutant
• A mutant is said to be killed when the execution of a test case causes
it to fail; the mutant is then considered dead
• A mutant is equivalent to the given program if it always produces the
same output as the original program
• A mutant is called killable or stubborn if the existing set of test
cases is insufficient to kill it
Mutation Score
A mutation score for a set of test cases is the percentage
of non-equivalent mutants killed by the test suite
Mutation score = 100 × D / (N − E), where
D = number of dead (killed) mutants
N = total number of mutants
E = number of equivalent mutants
Primary Mutants:
• Let us take an example of a C program fragment shown below:

if (a > b)
    x = x + y;
else
    x = y;
printf("%d", x);

We can consider the following mutants for the above example:
• M1: x = x - y;
• M2: x = x / y;
• M3: x = x + 1;
• M4: printf("%d", y);
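To show how such a mutant is killed, the fragment and mutant M1 can be mirrored in Python (the input values below are arbitrary):

```python
def original(a, b, x, y):
    if a > b:
        x = x + y
    else:
        x = y
    return x

def mutant_m1(a, b, x, y):
    if a > b:
        x = x - y  # M1: '+' mutated to '-'
    else:
        x = y
    return x

# This input takes the a > b branch, so the mutated statement executes
# and the outputs differ: the test input kills M1.
killed = original(5, 1, 2, 3) != mutant_m1(5, 1, 2, 3)
```

An input with a ≤ b (e.g. a = 1, b = 5) would not kill M1, because the mutated statement never executes; good test data must reach and expose the change.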
Mutation Testing
Secondary Mutants:
Multiple levels of mutation are applied to the initial program.
Example program:
if (a < b)
    c = a;
A mutant of this code may be:
if (a == b)
    c = a + 1;
Mutation Testing Process
• Step 1: Begin with a program P and a set of test
cases T known to be correct.
• Step 2: Run each test case in T against the
program P.
– If any test case fails (incorrect output), P must be modified
and the process restarted. Else, go to Step 3.
• Step 3: Create a set of mutants {Pi }, each
differing from P by a simple, syntactically correct
modification of P.
Mutation Testing Process
• Step 4: Execute each test case in T against each mutant Pi.
– If the output differs, the mutant Pi is considered incorrect
and is said to be killed by the test case.
– If Pi produces exactly the same results, then either P and Pi
are equivalent, or Pi is killable (new test cases must be
created to kill it).
Mutation Testing Process
• Step 5: Calculate the mutation score for the set
of test cases T.
• Mutation score = 100×D/(N −E),
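The score from Step 5 is a one-line computation; the example counts below are illustrative:

```python
def mutation_score(dead, total, equivalent):
    """Mutation score = 100 * D / (N - E): the percentage of
    non-equivalent mutants killed by the test suite."""
    return 100.0 * dead / (total - equivalent)

# e.g. 12 mutants generated, 2 found equivalent, 9 killed by T:
score = mutation_score(dead=9, total=12, equivalent=2)
```

A score of 100 means T kills every non-equivalent mutant; a lower score tells the tester which mutants still need new test cases.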
Regression Testing
• This testing is done to make sure that new code changes do not have
side effects on existing functionality. It ensures that the old code
still works after new code changes are made.
• Regression testing is necessary to maintain software whenever it is
updated.
• Regression testing is not another testing activity; rather, it is the
re-execution of some or all of the already developed test cases.
• Regression testing increases the quality of software.
Need / When to do regression testing?
• Software maintenance
Corrective maintenance
Adaptive maintenance
Perfective maintenance
Preventive maintenance
Bug-Fix Regression
Regression Testing Types
Bug-fix regression:
This testing is performed after a bug has been reported and fixed. Its
goal is to repeat the test cases that exposed the problem in the first place.
Usability testing
WHEN TO DO USABILITY TESTING?
Usability design is verified through several means. Some of them are as follows:
• Style sheets : Style sheets are grouping of user interface design elements.
Use of style sheets ensures consistency of design elements across several
screens and testing the style sheet ensures that the basic usability design is
tested. Style sheets also include frames, where each frame is considered as
a separate screen by the user. Style sheets are reviewed to check whether
they force font size, color scheme, and so on, which may affect usability.
• Screen prototypes : Screen prototype is another way to test usability design.
The screens are designed as they will be shipped to the customers, but are
not integrated with other modules of the product. Therefore, this user
interface is tested independently without integrating with the functionality
modules. This prototype will have other user interface functions simulated
such as screen navigation, message display, and so on. The prototype gives
an idea of how exactly the screens will look and function when the product is
released. The test team and some real-life users test this prototype and their
ideas for improvements are incorporated in the user interface. Once this
prototype is completely tested, it is integrated with the other modules of
the product.
QUALITY FACTORS FOR USABILITY
Some quality factors are very important when performing usability testing.
As explained earlier, usability is subjective and not all requirements for
usability can be documented clearly. However, focusing on the quality
factors given below helps improve objectivity in usability testing.
Comprehensibility : The product should have simple and logical structure of
features and documentation. They should be grouped on the basis of user
scenarios and usage. The most frequent operations that are performed early in
a scenario should be presented first, using the user interfaces. When features
and components are grouped in a product, they should be based on user
terminologies, not technology or implementation.
Consistency: A product needs to be consistent with any applicable standards,
platform look-and-feel, base infrastructure, and earlier versions of the same
product. Also, if there are multiple products from the same company, it would
be worthwhile to have some consistency in the look-and-feel of these multiple
products. Following the same standards for usability helps in meeting the
consistency aspect of usability.
Accessibility testing
Module 3 :
Testing Metrics for Monitoring and Controlling the
Testing Process
Software Metrics
Metrics can be defined as “STANDARDS OF MEASUREMENT”.
Software Metrics are used to measure the quality of the project. Simply,
Metric is a unit used for describing an attribute. Metric is a scale for
measurement.
Suppose, in general, "kilogram" is a metric for measuring the attribute
"weight". Similarly, in software, "the number of issues found per thousand
lines of code" is a metric for the attribute "quality".
Test metrics are used for:
• Understanding
• Control
• Improvement
Product Metrics
Measures of the software product at any stage of its development
From requirements to installed system.
– Complexity of S/W design and code
– Size of the final program
– Number of pages of documentation produced
Process Metrics
Measures of the S/W development process
– Overall development time
– Type of methodology used
– Average level of experience of programming staff
Measurement Objectives for Testing
The objectives for assessing a test process should
be well defined.
GQM (Goal Question Metric) Framework:
• List the major goals of the test process.
• Derive from each goal the questions that must be answered to
determine whether the goals are being met.
• Decide what must be measured in order to answer the questions
adequately.
Attributes and Corresponding Metrics in Software Testing
Attributes: Progress
• Scope of testing: overall amount of work involved
• Test progress: schedule, budget, resources
– Major milestones
– NPT
– Test Case Escapes (TCE)
– Planned versus Actual Execution (PAE) rate
– Execution Status of Test (EST) cases (failed, passed, blocked,
invalid, and untested)
• Defect backlog: number of defects that are unresolved/outstanding
• Staff productivity: time spent in test planning and design, and the
number of test cases developed (useful to estimate the cost and
duration of testing activities)
Attributes and Corresponding Metrics in Software Testing: Progress
Attributes: Cost
Effectiveness of Test Cases
• Number of faults found in testing.
• Number of failures observed by the customer, which can be used as a
reflection of the effectiveness of the test cases.
• Defect age: defect age is used in another metric called defect
spoilage to measure the effectiveness of defect-removal activities.
• The Defect Removal Efficiency (DRE) metric is defined as follows:

DRE = (number of defects found in testing) /
      (number of defects found in testing + number of defects not found in testing)
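The DRE ratio can be computed directly; the defect counts below are illustrative:

```python
def dre(found_in_testing, not_found_in_testing):
    """Defect Removal Efficiency: the fraction of total defects
    caught during testing rather than escaping to the field."""
    return found_in_testing / (found_in_testing + not_found_in_testing)

# e.g. 90 defects found in testing, 10 escaped to the customer:
efficiency = dre(90, 10)
```

A DRE close to 1 means testing removed nearly all defects before release.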
Spoilage Metric
• Defects are injected and removed at different phases of a software
development cycle
• The cost of each defect injected in phase X and removed in phase Y
increases with the increase in the distance between X and Y
• An effective testing method finds defects earlier than a less effective testing method.
• A useful measure of test effectiveness is defect age, called PhAge.
Spoilage = Σ (number of defects discovered × discovered PhAge) / total number of defects

Table: Number of defects weighted by defect age on project Boomerang
Spoilage Metric
• The spoilage value for the Boomerang test
project is 2.2
• A spoilage value close to 1 is an indication of a
more effective defect discovery process
• This metric is useful in measuring the long-term trend of test effectiveness in an organization.
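The spoilage computation can be sketched directly from its definition. The defect counts below are hypothetical, not the Boomerang project data:

```python
def spoilage(defects_by_phage):
    """Spoilage = sum(defects x PhAge) / total defects.
    `defects_by_phage` maps a PhAge value to the number of defects
    discovered at that age."""
    total = sum(defects_by_phage.values())
    weighted = sum(age * n for age, n in defects_by_phage.items())
    return weighted / total if total else 0.0

# Hypothetical project: 10 defects at age 1, 5 at age 2, 5 at age 4.
# Weighted sum = 10 + 10 + 20 = 40; total = 20; spoilage = 2.0.
s = spoilage({1: 10, 2: 5, 4: 5})
```

A value close to 1 (most defects caught in the phase after injection) indicates an effective defect discovery process.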
Measuring Test Completeness
Attributes: Size
• Number of test cases reused
• Number of test cases added to the test database
• Number of test cases rerun when changes are made to the software
• Number of planned regression tests executed
• Number of planned regression tests executed and passed
Size Metrics
Program Vocabulary
n = n1 + n2
where n = program vocabulary
n1 = number of unique operators
n2 = number of unique operands
Program Length
N = N1 + N2
where N = program length
N1 = total occurrences of operators in the implementation
N2 = total occurrences of operands in the implementation
Token Count
V = N log2 n
where V = Program volume
N = Program length
n = Program vocabulary
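The token-count metrics above can be computed from a token stream once the operator set is fixed. A minimal sketch (the token stream and operator set here are illustrative assumptions):

```python
import math

def halstead(tokens, operators):
    """Compute Halstead vocabulary n = n1 + n2, length N = N1 + N2,
    and volume V = N * log2(n) from a list of tokens."""
    ops = [t for t in tokens if t in operators]
    opnds = [t for t in tokens if t not in operators]
    n1, n2 = len(set(ops)), len(set(opnds))  # unique operators/operands
    N1, N2 = len(ops), len(opnds)            # total occurrences
    n, N = n1 + n2, N1 + N2
    V = N * math.log2(n) if n > 1 else 0.0
    return {"n": n, "N": N, "V": V}

# Token stream for the statement `a = b + c`:
h = halstead(["a", "=", "b", "+", "c"], operators={"=", "+"})
# n1 = 2, n2 = 3, so n = 5 and N = 5.
```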
Estimation Models for Testing Effort
1) Halstead metrics for estimating testing effort
• PL = 1 / [(n1 / 2) × (N2 / n2)]
• e (effort) = V / PL
• V = program volume: the number of bits of information required to specify a program.
• PL = program level, a measure of software complexity.
• Percentage of testing effort for module k = e(k) / Σe(i)
  e(k): effort required for module k
  Σe(i): sum of Halstead effort across all modules of the system
2) Development Ratio Method (the number of testing personnel required is estimated from the developer-to-tester ratio, which depends on):
– Type and complexity of the software being developed
– Testing level
– Scope of testing
– Test effectiveness
– Error tolerance level for testing
– Available budget
Architectural Design Metrics used for Testing
Structural Complexity
S(m) = fout(m)^2
where S is the structural complexity of a module m and fout(m) is the fan-out of module m.
This metric gives us the number of stubs required for unit testing of module m.
Data Complexity
D(m) = v(m) / [fout(m) + 1]
where v(m) is the number of input and output variables that are passed to and
from module m.
This metric measures the complexity in the internal interface for a module m and
indicates the probability of errors in module m.
System Complexity
SC(m) = S(m) + D(m)
It is defined as the sum of structural and data complexity.
The overall architectural complexity of a system is the sum of the system complexities of all its modules.
The effort required for integration testing increases with the architectural complexity of the system.
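The three complexity formulas compose directly; a minimal sketch with hypothetical module data:

```python
def system_complexity(fan_out, io_vars):
    """SC(m) = S(m) + D(m), where S(m) = fout(m)^2 is structural
    complexity and D(m) = v(m) / (fout(m) + 1) is data complexity,
    with v(m) the number of I/O variables of module m."""
    s = fan_out ** 2
    d = io_vars / (fan_out + 1)
    return s + d

def architectural_complexity(modules):
    """Overall architectural complexity: sum of SC(m) over all modules.
    `modules` maps a module name to (fan_out, number of I/O variables)."""
    return sum(system_complexity(f, v) for f, v in modules.values())

# Hypothetical modules: "a" calls 3 others and passes 8 variables,
# "b" is a leaf (fan-out 0) with 4 variables.
total = architectural_complexity({"a": (3, 8), "b": (0, 4)})
```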
Information Flow Metrics used for Testing
• Global Flow
• Fan-in of a module
• Fan-out of a module
Cyclomatic Complexity Measures for Testing
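The figure for this slide is not reproduced here. As a reminder of the underlying measure, McCabe's cyclomatic complexity V(G) = E − N + 2P counts the linearly independent paths through a control-flow graph and thus bounds the number of test cases needed for basis-path coverage. A minimal sketch:

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """McCabe's V(G) = E - N + 2P for a control-flow graph with E edges,
    N nodes, and P connected components. V(G) is the number of linearly
    independent paths through the graph."""
    return edges - nodes + 2 * components

# A single if-else has CFG nodes {decision, then, else, join} (N = 4)
# and edges decision->then, decision->else, then->join, else->join
# (E = 4), giving V(G) = 2: two basis paths to test.
v = cyclomatic_complexity(edges=4, nodes=4)
```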
Function Point Metrics for Testing
Test Point Analysis
Testing Progress Metrics
• Test Procedure Execution Status:
  Test procedure execution status = number of executed test cases / total number of test cases
• Defect Aging: the turnaround time for a bug to be corrected.
  Defect aging = closing date of the bug - date the bug was opened
• Defect Fix Time to Retest: the time between a bug fix being released in a new build and the fix being retested.
• Defect Trend Analysis: the trend in the number of defects found as testing progresses.
• Tester Productivity
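The first two progress metrics are simple ratios and date differences; a minimal sketch with hypothetical figures:

```python
from datetime import date

def execution_status(executed, total):
    """Test procedure execution status = executed / total test cases."""
    return executed / total if total else 0.0

def defect_aging(opened, closed):
    """Defect aging in days = closing date - date the bug was opened."""
    return (closed - opened).days

# Hypothetical project snapshot: 40 of 50 test cases executed, and a
# bug opened on 1 Jan 2024 and closed on 8 Jan 2024.
status = execution_status(40, 50)
age = defect_aging(date(2024, 1, 1), date(2024, 1, 8))
```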
Testing Progress Metrics
• Budget and Resource Monitoring Measures:
Earned value tracking
For the planned earned values, we need the following measurement
data :
1. Total estimated time or cost for overall testing effort
2. Estimated time or cost for each testing activity
3. Actual time or cost for each testing activity
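The three measurements above are enough for a simple earned-value snapshot. The activity names and hour figures below are hypothetical, and "earned" is credited only for completed activities (a common simplification):

```python
def earned_value(activities):
    """Earned-value tracking for a test effort. `activities` maps an
    activity name to (estimated_hours, actual_hours_so_far, done).
    Earned value = estimates of completed activities / total estimate."""
    total_est = sum(est for est, _, _ in activities.values())
    earned = sum(est for est, _, done in activities.values() if done)
    actual = sum(act for _, act, _ in activities.values())
    return {"earned_fraction": earned / total_est, "actual_hours": actual}

# Hypothetical: planning (10h estimate, 12h spent, done) and test design
# (30h estimate, 20h spent, in progress).
ev = earned_value({"planning": (10, 12, True), "design": (30, 20, False)})
```

Comparing the earned fraction against actual hours spent shows whether the testing effort is ahead of or behind its budget.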
Agile Software Testing
Agile Methodology
Agile methodology is a practice that promotes continuous iteration of development and testing throughout the software development lifecycle of the project. Development and testing activities are concurrent, unlike in the Waterfall model.
• Agile software development emphasizes four core values:
– Individuals and interactions over processes and tools
– Working software over comprehensive documentation
– Customer collaboration over contract negotiation
– Responding to change over following a plan
Agile Methodology: the items on the left are valued more than those on the right.
What is Agile Testing?
Advantages of Agile Testing
Principles of Agile Testing
• Testing is not a phase: the agile team tests continuously, and continuous testing is the only way to ensure continuous progress.
• Testing moves the project forward: in conventional methods testing is treated as a quality gate, whereas agile testing provides feedback on an ongoing basis so that the product meets business demands.
• Everyone tests: in a conventional SDLC only the test team tests, while in agile everyone, including developers and BAs, tests the application.
• Shortening feedback response time: in a conventional SDLC the business team learns about the product only during acceptance testing, while in agile they are involved in each and every iteration; continuous feedback shortens the feedback response time, and the cost of fixing is also lower.
• Clean code: raised defects are fixed within the same iteration, thereby keeping the code clean.
• Reduced test documentation: instead of very lengthy documentation, agile testers use reusable checklists and focus on the essence of the test rather than incidental details.
• Test driven: in conventional methods testing is performed after implementation, while in agile testing it is done alongside implementation.
• More is less: more interaction and more focus lead to fewer doubts and fewer defects.
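The "test driven" principle can be illustrated with a minimal red-green cycle; `apply_discount` and its test are hypothetical examples, not from the source:

```python
# Step 1 (red): write the test first. It fails until the production
# code below exists.
def test_ten_percent_discount():
    assert abs(apply_discount(200.0, 10) - 180.0) < 1e-9

# Step 2 (green): write just enough production code to make it pass.
def apply_discount(price, percent):
    """Return price reduced by the given percentage."""
    return price * (1 - percent / 100)

# Step 3: run the test; it now passes, so the code can be refactored
# safely under its cover.
test_ten_percent_discount()
```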
Scrum
Scrum is a framework for managing software development. It is designed for teams of three to nine developers who break their work into actions that can be completed within fixed-duration cycles (called "sprints"), track progress and re-plan in daily 15-minute stand-up meetings, and collaborate to deliver workable software every sprint.
• Scrum is an agile development method which concentrates specifically on how to manage tasks within a team-based development environment. The name is derived from the scrum formation in a rugby match. Scrum believes in empowering the development team and advocates working in small teams (say, 7 to 9 members).
Scrum
It consists of three roles, and their responsibilities are explained as follows:
• Scrum Master
– Responsible for setting up the team and sprint meetings, and for removing obstacles to progress
– Helping the product owner maintain the product backlog in a way that ensures the needed work is well understood so the team can continually make forward progress
– Helping the team to determine the definition of done for the product, with input from key stakeholders
– Coaching the team, within the Scrum principles, in order to deliver high-quality features for its product
– Promoting self-organization within the team
– Helping the scrum team to avoid or remove impediments to its progress, whether internal or external to the team
– Facilitating team events to ensure regular progress
– Educating key stakeholders in the product on Scrum principles
• Product Owner
– The product owner represents the product's stakeholders and the voice of the customer, and is accountable for ensuring that the team delivers value to the business. The product owner defines the product in customer-centric terms (typically user stories), adds them to the product backlog, and prioritizes them based on importance and dependencies. Scrum teams should have one product owner.
– Responsible for the delivery of the functionality at each iteration
• Scrum Team
– The development team is responsible for delivering potentially shippable product increments every sprint (the sprint
goal).
– The team has from three to nine members who carry out all tasks required to build the product increments (analysis,
design, development, testing, technical writing, etc.)
Product Backlog
– A repository where requirements are tracked, with details on the number of requirements to be completed for each release. It should be maintained and prioritized by the product owner and distributed to the scrum team. The team can also request the addition, modification, or deletion of a requirement.
• The Sprint Review is equivalent to a user acceptance test.
• Sprint Retrospective is equivalent to a project post-mortem.
• The Sprint Review is focused on the "product" and maximizing the business
value of the results of the work of the previous sprint and the Sprint
Retrospective is focused on the process and continuous process
improvement.
Scrum
Process flow of the Scrum methodology:
• Each iteration of a scrum is known as a sprint.
• The product backlog is a list where all requirements are entered to obtain the end product.
• During each sprint, top items of the product backlog are selected and turned into the sprint backlog.
• The team works on the defined sprint backlog.
• The team checks on daily work in the daily stand-up.
• At the end of the sprint, the team delivers working product functionality.
Agile Testing: Test-Driven Development (TDD)

Agile Testing Life Cycle
Testing in Scrum Phases
Module 4
Automation and Testing Tools
Reference: Software Testing Principles and Practices, Naresh Chauhan , Oxford University
Automation and Testing Tools
• The automation software can also enter test data into the system under test, compare expected and actual results, and generate detailed test reports.
• Simulated testing
• Internal Testing
• Test Enablers
Static Testing Tools
Static program analyzers scan the source code and detect possible faults and anomalies.
• Interface Analysis
• Path Analysis
Dynamic Testing Tools
Program Monitors:
• List the number of times a component is called or a line of code is executed. This gives testers information about the statement or path coverage of their test cases.
• Report on whether a decision point has branched in all
directions, thereby providing information about branch coverage.
• Report summary statistics providing a high level view of the
percentage of statements, paths, and branches that have been
covered by the collective set of test cases run. This information
is important when test objectives are stated in terms of coverage.
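The core idea of a program monitor can be sketched with Python's trace hook, which reports a "line" event for every executed line. This is an illustrative sketch, not how commercial monitors are implemented:

```python
import sys
from collections import Counter

def run_with_monitor(func, *args):
    """Run func under a trace hook, counting how many times each of its
    lines executes - the raw data behind statement-coverage reports."""
    counts = Counter()

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            counts[frame.f_lineno] += 1
        return tracer

    sys.settrace(tracer)
    try:
        result = func(*args)
    finally:
        sys.settrace(None)  # always remove the hook
    return result, counts
```

Lines of `func` that never appear in `counts` were not covered by the run, which is exactly the gap a monitor exposes.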
Testing Activity Tools
Test Selection Guidelines for Automation
Costs incurred in Testing Tools
• Training is required
• Configuration Management
Software Testing Myths and Guidelines for Automated Testing
Test Automation Framework
Example of keywords

Keyword          Description
Login            Log in to the demo site
Emails           Send email
Logout           Log out from the demo site
Notifications    Find unread notifications
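A keyword-driven framework maps each keyword in a test table to an action function, so test cases can be written as data. The `DemoSite` class and its behavior below are hypothetical, mirroring the keyword table above:

```python
class DemoSite:
    """Hypothetical application driver for the keyword actions."""
    def __init__(self):
        self.logged_in = False
        self.log = []

    def login(self):
        self.logged_in = True
        self.log.append("login")

    def emails(self):
        self.log.append("email sent")

    def notifications(self):
        self.log.append("checked notifications")

    def logout(self):
        self.logged_in = False
        self.log.append("logout")

# The keyword table: each keyword names an action on the driver.
KEYWORDS = {"Login": DemoSite.login, "Emails": DemoSite.emails,
            "Notifications": DemoSite.notifications,
            "Logout": DemoSite.logout}

def run_test(steps):
    """Execute a test case expressed as a list of keywords."""
    site = DemoSite()
    for step in steps:
        KEYWORDS[step](site)
    return site

site = run_test(["Login", "Emails", "Logout"])
```

Non-programmers can then author or rearrange test cases purely by editing the keyword list.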
Some Commercial Testing Tools
• Apache’s JMeter
Module 4
Testing Web-based Systems
Web Technology Evolution
Challenges in Testing Web-based Software
• Dynamic environment: client-side programs and contents may be generated dynamically.
• Continuous evolution
Quality Aspects
• Reliability
• Performance
• Security :
– Security of Infrastructure hosting the web application
– Vulnerability is a cyber-security term that refers to a flaw in a system
that can leave it open to attack. A vulnerability may also refer to any
type of weakness in a computer system itself, in a set of procedures, or
in anything that leaves information security exposed to a threat.
• Usability
• Scalability
• Availability
• Maintainability
Navigation Testing
Navigation testing checks for the following:
• Links should not be broken for any reason.
• Redirected links should display proper messages to the user.
• Check that all possible navigation paths are active.
• Check that all possible navigation paths are relevant.
• Check the navigation for the Back and Forward buttons, and that it works properly where allowed.
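The first step of broken-link checking is harvesting every link on a page; each collected URL can then be requested and its status code verified. A minimal, offline sketch using only the standard library:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect every href target on a page so each can later be
    requested and checked for broken links or bad redirects."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def extract_links(html):
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links

page = '<a href="/home">Home</a><p>text</p><a href="https://example.com/x">X</a>'
links = extract_links(page)
```

A navigation test would then fetch each collected URL and flag any response outside the 2xx/3xx range.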
Security Testing
Security Test Plan
• Buffer overflows
Cross-site scripting

Buffer overflows
SQL Injection
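SQL injection testing probes whether attacker-controlled input can alter the structure of a query. The sketch below uses an in-memory SQLite database with a hypothetical `users` table to contrast a vulnerable string-built query with a safe parameterized one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

def login_unsafe(name, password):
    # VULNERABLE: user input is concatenated into the SQL text, so
    # input can inject new SQL clauses.
    query = ("SELECT * FROM users WHERE name = '%s' AND password = '%s'"
             % (name, password))
    return conn.execute(query).fetchall()

def login_safe(name, password):
    # SAFE: placeholders keep the input as data, never as SQL.
    return conn.execute(
        "SELECT * FROM users WHERE name = ? AND password = ?",
        (name, password)).fetchall()

# Classic injection payload: makes the WHERE clause always true.
attack = "' OR '1'='1"
assert login_unsafe("alice", attack)      # injection bypasses the password
assert not login_safe("alice", attack)    # parameterized query rejects it
```

A security test suite would run payloads like `attack` against every input field and flag any endpoint that behaves like `login_unsafe`.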
Performance Testing
Performance Parameters
• Resource utilization
• Throughput
• Response time
• Database load
• Scalability
Performance Testing: Load Testing
Mobile Application Testing
Module 4
Mobile application testing
Mobile application testing is a process by
which application software developed for
handheld mobile devices is tested for its
functionality, usability and consistency. Mobile
application testing can be an automated or
manual type of testing. Mobile applications
either come pre-installed or can be installed
from mobile software distribution platforms.
Types of Mobile Testing
There are broadly two kinds of testing that take place on mobile devices:
#1. Hardware testing:
• The device itself is tested, including the internal processors, internal hardware, screen sizes, resolution, space or memory, camera, radio, Bluetooth, Wi-Fi, etc. This is sometimes referred to simply as "mobile testing".
#2. Software or application testing:
• The applications that work on mobile devices and their functionality are tested. This is called "mobile application testing" to differentiate it from the earlier method. Even among mobile applications, there are a few basic differences that are important to understand:
a) Native apps: a native application is created for use on a particular platform, such as mobile phones and tablets.
b) Mobile web apps: server-side apps used to access websites on mobile using browsers like Chrome or Firefox, by connecting to a mobile network or a wireless network such as Wi-Fi.
c) Hybrid apps: combinations of a native app and a web app. They run on the device (including offline) and are written using web technologies like HTML5 and CSS.
Basic differences
• Native apps have single-platform affinity, while mobile web apps have cross-platform affinity.
• Native apps are written with platform SDKs, while mobile web apps are written with web technologies like HTML, CSS, ASP.NET, Java, and PHP.
• A native app requires installation; a mobile web app does not.
• Native apps are updated from the Play Store or App Store, while mobile web apps are updated centrally.
• Many native apps don't require an Internet connection, but for mobile web apps it's a must.
• Native apps work faster than mobile web apps.
• Native apps are installed from app stores like Google Play Store or the App Store, whereas mobile web apps are websites accessible only through the Internet.
Challenges
Testing applications on mobile devices is more challenging than testing web apps on desktop due to:
• A wide variety of mobile devices, such as HTC, Samsung, Apple, and Nokia.
• Different ranges of mobile devices with different screen sizes and hardware configurations, such as hard keypad, virtual keypad (touch screen), and trackball.
• Different mobile operating systems, such as Android, Symbian, Windows, BlackBerry, and iOS.
• Different versions of each operating system, such as iOS 5.x, iOS 6.x, BB5.x, BB6.x.
• Different mobile network technologies, such as GSM (Global System for Mobile Communications) and CDMA (Code Division Multiple Access).
• Frequent updates (e.g., Android 4.2, 4.3, 4.4; iOS 5.x, 6.x); with each update a new testing cycle is recommended to make sure no application functionality is impacted.
Basic Difference between Mobile and Desktop
Application Testing
• On desktop, the application is tested on a central
processing unit. On a mobile device, the
application is tested on handsets like Samsung,
Nokia, Apple and HTC.
• Mobile device screens are smaller than desktop screens.
• Mobile devices have less memory than desktops.
• Mobiles use network connections like 2G, 3G, 4G, or Wi-Fi, whereas desktops use broadband or dial-up connections.
• The automation tool used for desktop application testing might not work for mobile applications.
Types of Mobile App Testing
• Compatibility testing: testing the application on different mobile devices, browsers, screen sizes, and OS versions according to the requirements.
• Interface testing: testing menu options, buttons, bookmarks, history, settings, and the navigation flow of the application.
• Services testing: testing the services of the application online and offline.
• Low-level resource testing: testing memory usage, auto-deletion of temporary files, and local database growth issues.
Types of Mobile App Testing
• Performance testing: testing the performance of the application by changing the connection from 2G/3G to Wi-Fi, sharing documents, checking battery consumption, etc.
• Testing the performance and behaviour of the application under conditions such as low battery, bad network coverage, low available memory, and simultaneous access to the application's server by several users. Performance can be affected from two sides, the application's server side and the client's side, and performance testing checks both.
• Operational testing– Testing of backups and recovery plan if
battery goes down, or data loss while upgrading the
application from store.
• Security Testing– Testing an application to validate if the
information system protects data or not.
Types of Mobile App Testing
The fundamental objective of security testing is to ensure that the application’s data and networking security requirements are
met as per guidelines.
The following are the most crucial areas for checking the security of Mobile applications.
• To validate that the application is able to withstand any brute force attack which is an automated process of trial and error
used to guess a person’s username, password or credit-card number.
• To validate whether an application is not permitting an attacker to access sensitive content or functionality without proper
authentication.
• To validate that the application has a strong password protection system and it does not permit an attacker to obtain,
change or recover another user’s password.
• To validate that the application does not suffer from insufficient session expiration.
• To identify the dynamic dependencies and take measures to prevent any attacker from accessing these vulnerabilities.
• To prevent from SQL injection related attacks.
• To identify and recover from any unmanaged code scenarios.
• To ensure that certificates are validated and that the application implements certificate pinning.
• To protect the application and the network from the denial of service attacks.
• To enable the session management for preventing unauthorized users to access unsolicited information.
• To check if any cryptography code is broken and ensure that it is repaired.
• To validate whether the business logic implementation is secured and not vulnerable to any attack from outside.
• To analyze file system interactions, determine any vulnerability and correct these problems.
• To validate the protocol handlers, for example by trying to reconfigure the default landing page for the application using a malicious iframe.
• To protect against malicious client-side and runtime injections.
• To investigate file caching and prevent any malicious possibilities from the same.
• To prevent from insecure data storage in the keyboard cache of the applications.
• To investigate cookies and prevent any malicious deeds arising from them.
• To investigate custom-created files and prevent any malicious deeds arising from them.
• To prevent buffer overflows and memory corruption cases.
Types of Mobile App Testing
• Functional Testing: functional testing ensures that the application works as per the requirements. Most of the tests conducted for this are driven by the user interface and call flow.
• Laboratory Testing: Laboratory testing, usually carried out by
network carriers, is done by simulating the complete wireless
network. This test is performed to find out any glitches when a
mobile application uses voice and/or data connection to perform
some functions.
• Installation testing: Validation of the application by installing
/uninstalling it on the devices.
– Certain mobile applications come pre-installed on the device whereas
others have to be installed from the store. Installation testing verifies
that the installation process goes smoothly without the user having to
face any difficulty. This testing process covers installation, updating
and uninstalling of an application
Types of Mobile App Testing
• Memory Leakage Testing: Memory leakage happens when a computer program or
application is unable to manage the memory it is allocated resulting in poor
performance of the application and the overall slowdown of the system. As mobile
devices have significant constraints of available memory, memory leakage testing
is crucial for the proper functioning of an application
• Interrupt Testing: An application while functioning may face several interruptions
like incoming calls or network coverage outage and recovery. The different types of
interruptions are:
– Incoming and Outgoing SMS and MMS
– Incoming and Outgoing calls
– Incoming Notifications
– Battery Removal
– Cable Insertion and Removal for data transfer
– Network outage and recovery
– Media Player on/off
– Device Power cycle
– An application should be able to handle these interruptions by going into a suspended state
and resuming afterwards.
Types of Mobile App Testing
Usability testing: To make sure that the mobile app is easy to use
and provides a satisfactory user experience to the customers.
Types of Mobile App Testing
• Certification Testing: To get a certificate of compliance, each mobile device
needs to be tested against the guidelines set by different mobile platforms.
• The Certified Mobile Application Tester (CMAT) certification exam is offered
by the Global Association for Quality Management (GAQM) via Pearson Vue
Testing Center worldwide to benefit the Mobile Application Testing
Community.
• Location Testing: connectivity changes with network and location, but you can't mimic those fluctuating conditions in a lab. Only in-country, non-automated (human) testers can perform comprehensive usability and functionality testing.
• Outdated Software Testing: Not everyone regularly updates their operating
system. Some Android users might not even have access to the newest
version. Professional Testers can test outdated software.
• Load Testing: when many users attempt to download, load, and use your app or game simultaneously, slow load times or crashes can occur, causing many customers to abandon your app, game, or website. In-country human testing done manually is the most effective way to test load.
Types of Mobile App Testing
• Black-box Testing: this type of testing does not consider the internal coding logic of the application. The tester tests the application's functionality without peering into its internal structure. This method can be applied to virtually every level of software testing: unit, integration, system, and acceptance.
• CrowdSourced Testing: In recent years, crowdsourced testing has become
popular as companies can test mobile applications faster and cheaper using a
global community of testers. Due to growing diversity of devices and
operating systems as well as localization needs, it is difficult to
comprehensively test mobile applications with small in-house testing teams.
A global community of testers provides easy access to different devices and
platforms. A globally distributed team can also test it in multiple locations
and under different network conditions. Finally, localization issues can be
tested by hiring testers in required geographies. Since real users using real
devices test the application, it is more likely to find issues faced by users
under real world conditions.
Mobile Application Testing Strategy
The Test strategy should make sure that all the quality and performance guidelines are
met. A few pointers in this area:
1) Selection of the devices – Analyze the market and choose the devices that are widely
used. (This decision mostly relies on the clients. The client or the app builders consider
the popularity factor of a certain devices as well as the marketing needs for the
application to decide what handsets to use for testing.)
2) Emulators: the use of these is extremely useful in the initial stages of development, as they allow quick and efficient checking of the app. An emulator is a system that runs software from one environment in another environment without changing the software itself; it duplicates the features and behaviour of the real system.
Types of Mobile Emulators
•Device Emulator- provided by device manufacturers
•Browser Emulator- simulates mobile browser environments.
•Operating systems Emulator- Apple provides emulators for iPhones, Microsoft for
Windows phones and Google Android phones
List of a few free and easy-to-use mobile device emulators:
– Mobile Phone Emulator: used to test handsets like iPhone, BlackBerry, HTC, Samsung, etc.
– MobiReady: with this, not only can we test the web app, we can also check the code.
– Responsivepx: checks the responses, appearance, and functionality of web pages.
– Screenfly: a customizable tool used to test websites under different categories.
Mobile Application Testing Strategy
3) After a satisfactory level of development is complete for the mobile app, you can move to testing on physical devices for more real-life, scenario-based testing.
4) Consider cloud computing based testing: Cloud
computing is basically running devices on multiple
systems or networks via Internet where applications
can be tested, updated and managed. For testing
purposes, it creates the web based mobile environment
on a simulator to access the mobile app.
Mobile Application Testing Strategy
5)Automation vs. Manual testing
– If the application contains new functionality, test it manually.
– If the application requires testing once or twice, do it manually.
– Automate the scripts for regression test cases. If regression tests are repeated,
automated testing is perfect for that.
– Automate the scripts for complex scenarios which are time consuming if
executed manually.
6) Network configuration is also a necessary part of mobile testing. It's important to validate the application on different networks like 2G, 3G, 4G, or Wi-Fi.
Sample Test Cases for Testing a
Mobile App
In addition to functionality based test cases, Mobile application testing
requires special test cases which should cover following scenarios.
• Battery usage– It’s important to keep a track of battery consumption
while running application on the mobile devices.
• Speed of the application- the response time on different devices,
with different memory parameters, with different network types etc.
• Data requirements: for installation, and to verify whether a user with a limited data plan will be able to download it.
• Memory requirement– again, to download, install and run
• Functionality of the application– make sure application is not
crashing due to network failure or anything else.
SOFTWARE QUALITY MANAGEMENT
Module 5
Quality as Multidimensional Concept
The degree to which a product or service possesses
a desired combination of attributes
WHAT IS QUALITY?
“fitness for use”
“conformance to requirements”
Broadening the Concept of Quality
• Product Quality
• Process Quality
FIVE VIEWS OF SOFTWARE QUALITY
Transcendental view
User view
Manufacturing view
Product view
Value-based view
FIVE VIEWS OF SOFTWARE QUALITY
Transcendental view
Quality is something that can be recognized through experience, but
not defined in some tractable form.
A good quality object stands out, and it is easily recognized.
It is "something toward which we strive as an ideal, but
may never implement completely".
It's mainly feelings about something.
User view
Quality concerns the extent to which a product meets user needs and
expectations.
Is a product fit for use?
This view is highly personalized.
A product is of good quality if it satisfies a large number of users.
It is useful to identify the product attributes which the users consider to
be important.
This view may encompass many subjective elements, such as usability,
reliability, and efficiency.
FIVE VIEWS OF SOFTWARE QUALITY
Manufacturing view
This view has its genesis in the manufacturing industry – auto and
electronics.
Key idea: Does a product satisfy the requirements?
Any deviation from the requirements is seen as reducing the quality of
the product.
The concept of process plays a key role.
Products are manufactured “right the first time” so that the cost is
reduced
Development cost
Maintenance cost
Conformance to requirements leads to uniformity in products.
Product quality can be incrementally improved by improving the
process.
The CMM and ISO 9001 models are based on the manufacturing view.
FIVE VIEWS OF SOFTWARE QUALITY
Product view
Hypothesis: If a product is manufactured with good internal
properties, then it will have good external properties.
One can explore the causal relationship between internal properties
and external qualities.
Example: Modularity enables testability.
Value-based view
This represents the merger of two concepts: excellence and worth.
Quality is a measure of excellence, and value is a measure of worth.
Central idea
How much a customer is willing to pay for a certain level of quality.
Quality is meaningless if a product does not make economic sense.
The value-based view makes a trade-off between cost and quality.
MCCALL’S QUALITY FACTORS AND CRITERIA
Quality Factors
McCall, Richards, and Walters studied the concept of
software quality in terms of two key concepts:
quality factors, and
quality criteria.
MCCALL’S QUALITY FACTORS AND CRITERIA
Table: McCall’s quality factors
MCCALL’S QUALITY FACTORS
MCCALL’S QUALITY FACTORS AND CRITERIA
MCCALL’S QUALITY FACTORS AND CRITERIA
Quality Criteria
A quality criterion is an attribute of a quality factor that is
related to software development.
Example:
Modularity is an attribute of the architecture of a software system.
A highly modular software allows designers to put cohesive
components in one module, thereby increasing the maintainability of
the system.
MCCALL’S QUALITY FACTORS AND CRITERIA
Table 17.3: McCall’s quality criteria [10].
MCCALL’S QUALITY FACTORS AND CRITERIA
Relationship Between Quality Factors and Quality
Criteria
Each quality factor is positively influenced by a set of quality
criteria, and the same quality criterion impacts a number of
quality factors.
Example: Simplicity impacts reliability, usability, and testability.
If an effort is made to improve one quality factor, another
quality factor may be degraded.
Portable code may be less efficient.
Some quality factors positively impact others.
An effort to improve the correctness of a system will increase its
reliability.
MCCALL’S QUALITY FACTORS AND CRITERIA
QUALITY MANAGEMENT
Quality management is to be practiced by everyone.
THE QUALITY REVOLUTION - PDCA CYCLE
The Shewhart cycle
Deming introduced Shewhart’s PDCA cycle to Japanese researchers
It illustrates the activity sequence:
Setting goals
Assigning them to measurable milestones
Assessing the progress against the milestones
Taking action to improve the process in the next cycle
QUALITY COST & BENEFITS OF INVESTMENT ON
QUALITY
3. Failure Costs (analyse and remove failures)
Internal Failure Costs (developer’s site)
External Failure Costs (customer’s site)
QUALITY CONTROL AND QUALITY ASSURANCE
Variation is checked at each step of development. Quality control may
include the following activities: reviews, and testing using manual
techniques or with automated tools (V & V).
Methods of quality management
PROCEDURAL APPROACH TO QM
SQA activities(Product evaluation and process
monitoring)
Products – standards; Processes – procedures
SQA relationships with other assurance activities
Configuration Management monitoring
Verification and validation monitoring
Formal test monitoring
SQA during SDLC
SOFTWARE QUALITY ASSURANCE BEST
PRACTICE
Continuous improvement: All standard processes in SQA
must be improved regularly and made official so that
others can follow them. These processes should be certified
against recognized models such as ISO and CMMI.
Documentation: All the QA policies and methods, which are
defined by the QA team, should be documented for training and reuse
in future projects.
Experience: Choosing members who are seasoned SQA
auditors is a good way to ensure the quality of management
reviews.
Tool Usage: Utilizing tools such as tracking and
management tools for the SQA process reduces SQA effort and
project cost.
Metrics: Developing metrics to track software quality
in its current state, as well as to compare the
improvement with previous versions, will help increase the value
and maturity of the testing process.
Responsibility: The SQA process is not only the SQA member’s task,
but everyone’s task. Everybody in the team is responsible for the
quality of the product, not just the test lead or manager.
QUANTITATIVE APPROACH TO QM
Major Issues
Setting Quality Goal
Estimate for defects (current project P) =
Defects(SP) × Effort estimate(P) / Actual effort(SP)
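A minimal sketch of this estimate in Python, assuming SP denotes a similar past project (the function and variable names are illustrative):

```python
def estimate_defects(defects_sp, effort_estimate_p, actual_effort_sp):
    """Scale a similar past project's (SP) defect count by the ratio of
    the current project's (P) estimated effort to SP's actual effort."""
    return defects_sp * effort_estimate_p / actual_effort_sp

# e.g. SP had 120 defects over 400 person-hours; P is estimated at 500 person-hours
print(estimate_defects(120, 500, 400))  # 150.0
```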
Managing software development process
quantitatively- Intermediate goals
PAUL GOODMAN MODEL FOR SOFTWARE
METRICS PROGRAM
SOFTWARE QUALITY METRICS
MTTF metric is an estimate of the average or mean time
until a product’s first failure occurs.
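As a rough sketch (the data and helper name are illustrative), MTTF can be estimated as the mean of observed times to first failure across a sample of units:

```python
def mean_time_to_failure(hours_to_failure):
    """Estimate MTTF as the mean of observed times (in hours)
    until first failure across a sample of units."""
    return sum(hours_to_failure) / len(hours_to_failure)

print(mean_time_to_failure([100.0, 150.0, 200.0]))  # 150.0
```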
Software Quality Metrics
BMI = (Number of problems closed during the month /
Number of problem arrivals during the month) x 100 %
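The backlog management index (BMI) formula above, sketched in Python with illustrative numbers; a BMI above 100% means the backlog shrank during the month:

```python
def backlog_management_index(problems_closed, problem_arrivals):
    """BMI = (closed / arrivals) x 100%; values above 100
    mean the problem backlog is being reduced."""
    return problems_closed / problem_arrivals * 100

print(backlog_management_index(50, 40))  # 125.0
```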
Capability Maturity Model (CMM)
6-SIGMA
Originated by Motorola in Schaumburg, IL
Based on competitive pressures in the 1980s – “Our
quality stinks”
3σ: Five short or long landings at any major airport
6σ: One short or long landing in 10 years at all airports in the US
Measure
The Six Sigma team is responsible for identifying a set of relevant
metrics.
Analyze
With data in hand, the team can analyze the data for trends, patterns,
or relationships. Statistical analysis allows for testing hypotheses,
modeling, or conducting experiments.
Improve
Based on solid evidence, improvements can be proposed and
implemented. The Measure-Analyze-Improve steps are generally
iterative to achieve target levels of performance.
Control
Once target levels of performance are achieved, control methods and
tools are put into place in order to maintain performance.
6-SIGMA ROLES & RESPONSIBILITIES
Master black belts
People within the organization who have the highest level of
technical and organizational experience and expertise. Master
black belts train black belts.
Black belts
Should be technically competent and held in high esteem by their
peers. They are actively involved in the Six Sigma change process.
Green belts
Are Six Sigma team leaders or project managers. Black belts
generally help green belts choose their projects, attend training
with them, and then assist them with their projects once the
project begins.
Champions
Leaders who are committed to the success of the Six Sigma project
and can ensure that barriers to the Six Sigma project are removed.
Usually a high-level manager who can remove obstacles that may
involve funding, support, bureaucracy, or other issues that black
belts are unable to solve on their own.
SOFTWARE TOTAL QUALITY MANAGEMENT
(STQM)
TQM is defined as a quality-centered, customer-focused,
fact-based, team-driven, senior-management-led process
to achieve an organization’s strategic imperative through
continuous process improvement.
SOFTWARE TOTAL QUALITY MANAGEMENT
(STQM)
Customer focus / customer focus in software development
Process / process, technology, and development quality
ISO 9000 STANDARD
The ISO 9000 family of quality management standards is designed
to help organizations ensure that they meet the needs of customers
and other stakeholders while meeting statutory and
regulatory requirements related to a product or
program. ISO 9000 deals with the fundamentals of
quality management systems, including the seven
quality management principles upon which the
family of standards is based. ISO 9001 deals with
the requirements that organizations wishing to meet
the standard must fulfill.
ISO 9000:2015 SOFTWARE QUALITY
STANDARD AND FUNDAMENTALS
This International Standard provides the fundamental
concepts, principles and vocabulary for quality management
systems (QMS) and provides the foundation for other QMS
standards. This International Standard is intended to help
the user to understand the fundamental concepts, principles
and vocabulary of quality management, in order to be able to
effectively and efficiently implement a QMS and realize value
from other QMS standards.
This International Standard proposes a well-defined QMS,
based on a framework that integrates established
fundamental concepts, principles, processes and resources
related to quality, in order to help organizations realize their
objectives. It is applicable to all organizations, regardless of
size, complexity or business model. Its aim is to increase an
organization’s awareness of its duties and commitment in
fulfilling the needs and expectations of its customers and
interested parties, and in achieving satisfaction with its
products and services.
ISO 9000:2015 SOFTWARE QUALITY
STANDARD AND FUNDAMENTALS
This International Standard describes the fundamental concepts
and principles of quality management which are universally
applicable to the following:
— organizations seeking sustained success through the
implementation of a quality management system;
— customers seeking confidence in an organization’s ability to
consistently provide products and services conforming to their
requirements;
— organizations seeking confidence in their supply chain that
product and service requirements will be met;
— organizations and interested parties seeking to improve
communication through a common understanding of the
vocabulary used in quality management;
— organizations performing conformity assessments against the
requirements of ISO 9001;
— providers of training, assessment or advice in quality
management;
— developers of related standards.
ISO 9000:2015 SOFTWARE QUALITY STANDARD AND
FUNDAMENTALS
is to determine the conformity of the requirements (customers
and organizations), facilitate effective deployment and improve
the quality management system
ISO 9000:2015 SOFTWARE QUALITY STANDARD AND
FUNDAMENTALS
recurrence that is appropriate for the effects of the nonconformity.
To conform to the requirements of this International Standard, an
organization needs to plan and implement actions to address risks and
opportunities. Addressing both risks and opportunities establishes a basis for
increasing the effectiveness of the quality management system, achieving
improved results and preventing negative effects.
Opportunities can arise as a result of a situation favourable to achieving an
intended result, for example, a set of circumstances that allow the
organization to attract customers, develop new products and services, reduce
waste or improve productivity. Actions to address opportunities can also
include consideration of associated risks. Risk is the effect of uncertainty and
any such uncertainty can have positive or negative effects. A positive
deviation arising from a risk can provide an opportunity, but not all positive
effects of risk result in opportunities.
ISO 9000:2015 SOFTWARE QUALITY
STANDARD AND FUNDAMENTALS
The ISO 9000 series are based on seven quality
management principles (QMP)
QMP 1 – Customer focus
QMP 2 – Leadership
QMP 3 – Engagement of people
QMP 4 – Process approach
QMP 5 – Improvement
QMP 6 – Evidence-based decision making
QMP 7 – Relationship management
ISO 9001:2015 REQUIREMENTS
ISO 9001:2015 Quality management systems —
Requirements is a document of approximately 30 pages
which is available from the national standards
organization in each country.
ISO 9001:2015 is the latest revision of the ISO 9001
standard. In it, there are 10 sections (clauses) with
supporting subsections (sub clauses). The requirements
to be applied to your quality management system
(QMS) are covered in sections 4-10. To successfully
implement ISO 9001:2015 within your organization,
you must satisfy the requirements within clauses 4-10.
ISO 9001:2015 REQUIREMENTS
Some of the key changes include:
A High Level Structure of 10 clauses is implemented; all new standards
released by ISO will now follow this high-level structure.
Greater emphasis on building a management system suited to each
organization's particular needs
A requirement that those at the top of an organization be involved and
accountable, aligning quality with wider business strategy
Risk-based thinking throughout the standard makes the whole
management system a preventive tool and encourages continuous
improvement
Less prescriptive requirements for documentation: the organization
can now decide what documented information it needs and what
format it should be in
Alignment with other key management system standards through the
use of a common structure and core text
Inclusion of Knowledge Management principles
A quality manual and a management representative are no longer
mandatory requirements.
ISO 9001:2015 REQUIREMENTS
Contents of ISO 9001:2015 are as follows:
Section 1: Scope
Section 2: Normative references
Section 3: Terms and definitions
Section 4: Context of the organization
Section 5: Leadership
Section 6: Planning
Section 7: Support
Section 8: Operation
Section 9: Performance evaluation
Section 10: Improvement
SOFTWARE QUALITY TOOLS
Ishikawa Diagram
Check List
Control Chart
Flow Chart
Pareto Chart
Histogram
CAUSE & EFFECT DIAGRAMS
Cause-and-effect diagrams (Ishikawa diagrams) are used for
understanding the causes of organizational or business problems.
Organizations face problems every day, and it is necessary to
understand the causes of these problems in order to solve
them effectively. Building a cause-and-effect diagram is
usually a team exercise.
A brainstorming session is required in order to come up
with an effective cause-and-effect diagram.
All the main components of a problem area are listed, and
possible causes from each area are listed.
Then, the most likely causes of the problems are identified for
further analysis.
ISHIKAWA, OR FISHBONE DIAGRAM
BEST DEVELOPED BY BRAINSTORMING OR BY USING A
LEARNING CYCLE APPROACH
CONTROL CHARTS
Walter A. Shewhart (1891 – 1967)
Worked for Western Electric Company (Bell
Telephones)
Introduced the concept of the control chart as a tool to
better understand variation, and to allow management to
shift its focus away from inspection and towards the
prevention of problems and the improvement of processes.
CONTROL CHARTS
A quality control chart is a graphic that depicts
whether sampled products or processes are meeting
their intended specifications and, if not, the degree by
which they vary from those specifications.
The control chart is a graph used to study how a
process changes over time. Data are plotted in time
order. A control chart always has a central line for the
average, an upper line for the upper control limit, and
a lower line for the lower control limit. These lines are
determined from historical data. By comparing
current data to these lines, you can draw conclusions
about whether the process variation is consistent (in
control) or is unpredictable (out of control, affected by
special causes of variation).
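A minimal sketch of this logic, assuming the common mean ± 3-standard-deviation limits derived from historical data (real charts typically use subgroup statistics and control-chart constants; the data here are illustrative):

```python
def control_limits(historical):
    """Derive the centre line and control limits (mean ± 3 sigma)
    from historical process data."""
    mean = sum(historical) / len(historical)
    sd = (sum((x - mean) ** 2 for x in historical) / len(historical)) ** 0.5
    return mean - 3 * sd, mean, mean + 3 * sd

def out_of_control(current, lcl, ucl):
    """Flag points outside the control limits (special-cause signals)."""
    return [x for x in current if x < lcl or x > ucl]

lcl, centre, ucl = control_limits([10.0, 10.2, 9.8, 10.1, 9.9])
print(out_of_control([10.1, 11.0, 9.9], lcl, ucl))  # [11.0]
```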
CONTROL CHARTS
Common and special causes are the two distinct
origins of variation in a process, as defined in the
statistical thinking and methods of Walter A.
Shewhart and W. Edwards Deming. Briefly,
"common causes", also called Natural patterns,
are the usual, historical, quantifiable variation in
a system, while "special causes" are unusual, not
previously observed, non-quantifiable variation.
CONTROL CHARTS
Common causes
Inappropriate procedures
Poor design
Measurement error
Special causes
Faulty controllers
Machine malfunction
Computer crash
CONTROL CHARTS
Purpose:
The primary purpose of a control chart is
to predict expected product outcome.
Benefits:
Predict when a process goes out of control or out of
specification limits
Distinguish special, identifiable causes of variation
from common causes
Can be used for statistical process
control
PARETO CHART
PARETO CHART
Pareto charts are used for identifying a set of
priorities. You can chart any number of
issues/variables related to a specific concern and
record the number of occurrences.
This way you can figure out the parameters that
have the highest impact on the specific concern.
This helps you to work on the priority issues in
order to get the condition under control.
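The Pareto logic above can be sketched as follows (category names and counts are illustrative): sort the issue counts in descending order and track the cumulative share, so the highest-impact issues stand out.

```python
def pareto_analysis(occurrences):
    """Sort issues by occurrence count (descending) and attach
    cumulative percentages of the total."""
    total = sum(occurrences.values())
    result, cumulative = [], 0
    for issue, count in sorted(occurrences.items(),
                               key=lambda kv: kv[1], reverse=True):
        cumulative += count
        result.append((issue, count, round(100 * cumulative / total, 1)))
    return result

print(pareto_analysis({"crash": 50, "slow UI": 30, "typo": 20}))
```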
FLOW CHART FOR PROJECT SCOPE VERIFICATION
CHECK LIST/SHEET
When a check sheet is maintained in software
such as Microsoft Excel, you can derive further
analysis graphs from it and automate processing
through the available macros.
Therefore, it is always a good idea to use a software
check sheet for information gathering and organizing
needs.
A paper-based check sheet can be used when
the information gathered is only used for backup or
storage rather than further processing.
HISTOGRAM
A histogram is a graphical representation that
organizes a group of data points into user-specified
ranges. It is similar in appearance to a bar graph.
The histogram condenses a data series into an easily
interpreted visual by taking many data points and
grouping them into logical ranges or bins.
A histogram is a bar graph-like representation of
data that buckets a range of outcomes into
columns along the x-axis.
The y-axis represents the number count or
percentage of occurrences in the data for each
column and can be used to visualize data
distributions.
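The binning described above can be sketched as a small Python function (the bin edges and data are illustrative):

```python
def histogram_counts(data, edges):
    """Count how many values fall into each half-open bin
    [edges[i], edges[i+1]); values outside the edges are ignored."""
    counts = [0] * (len(edges) - 1)
    for x in data:
        for i in range(len(edges) - 1):
            if edges[i] <= x < edges[i + 1]:
                counts[i] += 1
                break
    return counts

print(histogram_counts([1, 2, 5, 7, 9], [0, 5, 10]))  # [2, 3]
```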