Software Testing and Quality Assurance
Quality Assurance
Terminology
• Failure
• Fault / Defect / Bug
• Error
• Test Case
• Testware
• Test Oracle
Testing Principles
• Effective testing, not exhaustive testing
• Testing is not a single phase of the SDLC
• A destructive approach is taken for constructive testing
• Early testing is the best policy
• The testing strategy should start at the smallest module level and expand towards the whole program
• Testing should also be performed by an independent team
• Everything must be recorded during software testing
Life-Cycle of a Bug
States of the life cycle of a bug
Software Testing Life Cycle
Test Planning
Test Design
Test Execution
Post-Execution / Test Review
Drivers and Stubs
A driver for Module A supplies the input as a parameter, invokes Module A, and collects its output; a stub stands in for a module that Module A calls but that is not yet available.
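As a small, hypothetical illustration of drivers and stubs (the function names below are invented, not taken from the slides), the following Python sketch shows a driver exercising a unit module_a while a stub replaces a lower-level routine that is not yet integrated:

# Hypothetical example: module_a normally calls a lower-level routine
# fetch_record(); during unit testing a stub replaces that routine,
# and a driver supplies the input and checks the output.

def fetch_record_stub(record_id):
    # Stub: returns a canned value instead of calling the real lower-level module.
    return {"id": record_id, "name": "placeholder"}

def module_a(record_id, fetch_record=fetch_record_stub):
    # Unit under test: formats the record returned by the lower-level module.
    record = fetch_record(record_id)
    return f"{record['id']}: {record['name']}"

def driver():
    # Driver: passes the input as a parameter and verifies the output.
    output = module_a(42)
    assert output == "42: placeholder"
    print("module_a output:", output)

if __name__ == "__main__":
    driver()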
Integration Testing
Integration testing exposes inconsistencies between modules, such as improper call or return sequences.
Refine the functional decomposition tree into a module calling graph.
The call graph can be captured in matrix form, known as an adjacency matrix (see the sketch below).
A Module Execution Path (MEP) is a sequence of executable statements within a module, like a path in a flow graph.
Message – the transfer of control from one unit to another.
MM-path graph – an extended flow graph whose nodes are MEPs and whose edges are messages.
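A minimal Python sketch of capturing a module call graph as an adjacency matrix; the module names and calls are invented for illustration:

# Hypothetical call graph: A calls B and C, B calls C.
modules = ["A", "B", "C"]
calls = {("A", "B"), ("A", "C"), ("B", "C")}

# Adjacency matrix: entry [i][j] is 1 if module i calls module j.
index = {m: i for i, m in enumerate(modules)}
adjacency = [[0] * len(modules) for _ in modules]
for caller, callee in calls:
    adjacency[index[caller]][index[callee]] = 1

for m, row in zip(modules, adjacency):
    print(m, row)
# A [0, 1, 1]
# B [0, 0, 1]
# C [0, 0, 0]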
Testing Group Hierarchy
Test Manager → Test Leader → Test Engineers
Examples of software metrics: the size of the program measured in LOC; the level of experience of the programming staff.
Classification of software metrics
• Product vs Process Metrics
• Objective vs Subjective Metrics
• Primitive vs Computed Metrics
• Private vs Public Metrics
Size Metrics
• Lines of Code (LOC)
• It is based on the number of lines of code present in a program.
• It is often expressed in thousands of lines of code (KLOC).
Limitation:
LOC is not a consistent measure, because not all lines of code are at the same level of complexity or effort.
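As a toy illustration only (real LOC counters use more elaborate counting rules), a Python sketch that counts non-blank, non-comment lines of a source file:

def count_loc(path):
    # Count lines that are neither blank nor pure '#' comments.
    with open(path, encoding="utf-8") as f:
        return sum(1 for line in f
                   if line.strip() and not line.strip().startswith("#"))

# Example usage: print(count_loc("example.py"))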
Size Metrics
Halstead Product Metrics (Token Count)
The size of a software product can be measured by counting the number of operators and operands (tokens) in the program.
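Assuming the standard Halstead definitions (vocabulary n = n1 + n2, length N = N1 + N2, volume V = N log2 n), a minimal Python sketch computing them from operator and operand counts; the example counts are made up:

import math

def halstead_metrics(n1, n2, N1, N2):
    # n1, n2: number of distinct operators / distinct operands
    # N1, N2: total occurrences of operators / operands
    vocabulary = n1 + n2                      # n = n1 + n2
    length = N1 + N2                          # N = N1 + N2
    volume = length * math.log2(vocabulary)   # V = N * log2(n)
    return vocabulary, length, volume

# Illustrative counts only (not taken from the slides)
print(halstead_metrics(n1=10, n2=7, N1=25, N2=18))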
Size Metrics
Function Point Analysis (FPA)
• Determine the type of project for which the function point count is to be calculated.
• Identify transactional functions (EI, EO, and EQ) and their complexity.
The five FPA components: Internal Logical Files (ILF), External Interface Files (EIF), External Inputs (EI), External Outputs (EO), and External Inquiries (EQ).
Complexity Matrix for ILF, EIF
RET \ DET    1–19       20–50      >= 51
<2           Low        Low        Average
2–5          Low        Average    High
>5           Average    High       High

Complexity Matrix for EI, EO, EQ
FTR \ DET    1–5        6–19       >= 20
<2           Low        Low        Average
2–3          Low        Average    High
>3           Average    High       High
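A small Python sketch of how a complexity level might be looked up from the ILF/EIF matrix above; the ranges are transcribed from that table:

def ilf_eif_complexity(ret, det):
    # Row: RET count; column: DET count (ranges from the matrix above).
    row = 0 if ret < 2 else 1 if ret <= 5 else 2
    col = 0 if det <= 19 else 1 if det <= 50 else 2
    return [["Low", "Low", "Average"],
            ["Low", "Average", "High"],
            ["Average", "High", "High"]][row][col]

print(ilf_eif_complexity(ret=3, det=25))  # Average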
Weights by component and function level
Component    Low    Average    High
ILF          7      10         15
EIF          5      7          10
EI           3      4          6
EO           4      5          7
EQ           3      4          6
Count all DETs / FTRs for all five components and determine the level of complexity of each.
Compute the VAF:
VAF = (TDI * 0.01) + 0.65
Example:
UFP = 50 * 4 + 40 * 5 + 35 * 4 + 6 * 10 + 4 * 7 = 628
TDI = (4 + 3 + 2 + 4) + 10 = 23
VAF = (23 * 0.01) + 0.65 = 0.88
AFP = 628 * 0.88 = 552.64
Consider a project with the following parameters: EI (low) = 30, EO (average) = 20, EQ (average) = 35, ILF (high) = 08, and EIF (high) = 05. In addition, the system requires critical end-user efficiency, moderate distributed data processing, and critical data communication; the other GSCs are incidental. Compute the function point count using FPA.
UFP = 30 * 3 + 20 * 5 + 35 * 4 + 8 * 15 + 5 * 10 = 500
TDI = 4 (end-user efficiency) + 2 (distributed data processing) + 4 (data communication) + 11 * 1 (remaining incidental GSCs) = 21
VAF = (21 * 0.01) + 0.65 = 0.86
AFP = 500 * 0.86 = 430
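A minimal Python sketch of the UFP/VAF/AFP calculation, using the weights from the table above and the counts of this example; the list of GSC degrees of influence follows the worked solution (two factors rated 4, one rated 2, eleven incidental factors rated 1):

WEIGHTS = {  # (low, average, high) weights per component
    "EI": (3, 4, 6), "EO": (4, 5, 7), "EQ": (3, 4, 6),
    "ILF": (7, 10, 15), "EIF": (5, 7, 10),
}
LEVEL = {"low": 0, "average": 1, "high": 2}

def unadjusted_fp(counts):
    # counts: list of (component, level, number of functions)
    return sum(n * WEIGHTS[c][LEVEL[lvl]] for c, lvl, n in counts)

def adjusted_fp(ufp, gsc_degrees):
    tdi = sum(gsc_degrees)            # Total Degree of Influence
    vaf = tdi * 0.01 + 0.65           # Value Adjustment Factor
    return ufp * vaf

counts = [("EI", "low", 30), ("EO", "average", 20), ("EQ", "average", 35),
          ("ILF", "high", 8), ("EIF", "high", 5)]
gsc = [4, 2, 4] + [1] * 11            # TDI = 21
ufp = unadjusted_fp(counts)
print(ufp, round(adjusted_fp(ufp, gsc), 2))  # 500 430.0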
Test metric categories and attributes

Progress:
• Scope of testing
• Test progress
• Defect backlog
• Staff productivity
• Suspension criteria
• Exit criteria

Cost:
• Testing cost estimation
• Duration of testing
• Resource requirements
• Training needs of testing groups and tool requirements
• Cost-effectiveness of automated tools

Quality:
• Effectiveness of test cases
• Effectiveness of smoke tests
• Quality of the test plan
• Test completeness

Size:
• Estimation of test cases
• Number of regression tests
• Tests to automate
Quality – Effectiveness of test cases
Measures used to judge the effectiveness of test cases include:
• Number of faults found in testing
• Number of faults found by customers
• Defect Removal Efficiency (DRE)
• Defect age
DRE = (faults found in testing / (faults found in testing + faults found by customers)) * 100
These measures take into account potential issues such as the severity of bugs and the time interval between failures, and they are helpful in determining test effectiveness in the long run.
Example: during rigorous testing in a mobile application development project, the QA team identifies 120 defects. However, after the app is launched, users report an additional 30 defects.
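Assuming these counts feed the DRE formula given above, a minimal Python sketch of the calculation:

def defect_removal_efficiency(found_in_testing, found_by_customers):
    # DRE = faults found in testing / total faults found, as a percentage
    return found_in_testing / (found_in_testing + found_by_customers) * 100

print(defect_removal_efficiency(120, 30))  # 80.0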
Defect age is the period between the time the defect is detected and the time it is resolved.
It is suitable for measuring the long-term trend of test effectiveness.
Consider a project with the following data and calculate its defect spoilage.

SDLC Phase           No. of Defects    Defect Age
Requirement Spec.    34                2
HLD                  25                4
LLD                  17                5
Coding               10                6

Spoilage = ?
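A minimal Python sketch of the calculation, assuming the usual definition of spoilage as the defect-age-weighted defect count divided by the total number of defects:

# (phase, number of defects, defect age) from the table above
data = [("Requirement Spec.", 34, 2), ("HLD", 25, 4),
        ("LLD", 17, 5), ("Coding", 10, 6)]

weighted = sum(count * age for _, count, age in data)   # 313
total = sum(count for _, count, _ in data)              # 86
spoilage = weighted / total
print(round(spoilage, 2))  # ~3.64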
Quality – Effectiveness of smoke test
Smoke tests are required to ensure that the application is stable enough for further testing.
They can be time-consuming.
They cover basic operations such as logging in, managing records, and file handling.
Quality – Quality of the test plan
The test plan should be effective at revealing a high number of defects.
Example: a requirements traceability matrix maps test cases TC1–TC4 against requirements R1–R3, where a 1 in a cell indicates that the test case covers that requirement.
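A small Python sketch (the mapping below is illustrative, not the matrix from the slide) of how such a traceability matrix can be used to check test completeness, i.e. which requirements remain uncovered:

# Traceability: which requirements each test case covers (illustrative data).
coverage = {
    "TC1": {"R1"},
    "TC2": {"R1", "R2"},
    "TC3": {"R2"},
    "TC4": set(),            # not yet mapped to any requirement
}
requirements = {"R1", "R2", "R3"}

covered = set().union(*coverage.values())
print("uncovered requirements:", requirements - covered)      # {'R3'}
print("requirement coverage: %.1f%%"
      % (100 * len(covered & requirements) / len(requirements)))  # 66.7%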
Size – Estimation of test cases
It refers to how much of the code and of the requirements is covered by the test set.
It provides the basis for designing new test cases and improving existing ones.
Size – Number of regression tests
e = V / PL
First, the number of person-hours is calculated for the full project.
Then it is estimated for each individual test activity.
Architectural Design Metrics
Card and Glass introduced three software design complexity measures:
• Structural complexity: S(m) = fout(m)^2, where fout(m) is the fan-out of module m, i.e. the number of stubs required for unit testing of module m.
• Data complexity: D(m) = v(m) / (fout(m) + 1), where v(m) is the number of input and output variables passed to and from module m. It measures the complexity of the internal interface of module m and indicates the probability of errors in module m.
• System complexity: C(m) = S(m) + D(m), the sum of the structural and data complexity.
Testing effort increases with an increase in architectural complexity.
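A minimal Python sketch of these three measures, using the formulas above; the module data is invented for illustration:

def structural_complexity(fan_out):
    return fan_out ** 2                      # S(m) = fout(m)^2

def data_complexity(io_vars, fan_out):
    return io_vars / (fan_out + 1)           # D(m) = v(m) / (fout(m) + 1)

def system_complexity(fan_out, io_vars):
    # C(m) = S(m) + D(m)
    return structural_complexity(fan_out) + data_complexity(io_vars, fan_out)

# module name -> (fan-out, number of input/output variables), illustrative only
modules = {"A": (3, 8), "B": (1, 4), "C": (0, 2)}
for name, (fout, v) in modules.items():
    print(name,
          structural_complexity(fout),
          round(data_complexity(v, fout), 2),
          round(system_complexity(fout, v), 2))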