Software Testing and Quality Assurance
Terminology
• Failure
• Fault / Defect / Bug
• Error
• Test Case
• Testware
• Test Oracle
Testing Principles
Effective testing, not exhaustive testing
Testing is not a single phase performed in the SDLC
Destructive approach for constructive testing
Early testing is the best policy
Testing strategy should start at the smallest module level and expand
towards the whole program
Testing should also be performed by an independent team
Everything must be recorded in software testing
Life Cycle of a Bug
(Figure: states in the life cycle of a bug)
Software Testing Life Cycle

Test Planning

Test Design

Test Execution

Post Execution /
Test Review
Drivers and Stubs
(Figure: a module hierarchy, Modules A-F; to unit test Module B, a driver stands in for its calling module, Module A, passing input as parameters, and stubs stand in for the lower-level modules that Module B calls.)
Drivers

• A driver is a software module that is used to invoke a module under test, provide test inputs, and monitor and report the test results.
• It provides the following facilities to the unit under test:
• Initializes the environment for testing
• Provides simulated inputs in the desired formats
Stubs

• A stub is a dummy module that takes the place of a module called by the unit under test, since the real called module may not yet be integrated or implemented.
• It provides the following facilities to the unit under test:
• Accepts the call from the unit under test and returns control to it
• Returns simulated results in the desired formats
Drivers and Stubs
(Figure: the driver passes user input to the unit under test as parameters in a module call; the unit under test calls stubs in place of its subordinate modules and returns its output to the driver.)
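A minimal sketch (module and function names are assumptions for illustration, not taken from the slides) of a driver and a stub isolating a unit with Python's unittest:

```python
import unittest

# Hypothetical unit under test: relies on a subordinate tax module
# that is not yet implemented.
def compute_total(price, quantity, tax_service):
    subtotal = price * quantity
    return subtotal + tax_service.tax_for(subtotal)

class StubTaxService:
    """Stub: stands in for the real tax module and returns simulated results."""
    def tax_for(self, amount):
        return round(amount * 0.10, 2)  # canned 10% tax

class DriverTest(unittest.TestCase):
    """Driver: invokes the unit under test, supplies inputs, checks the result."""
    def test_total_with_stubbed_tax(self):
        total = compute_total(price=20.0, quantity=3, tax_service=StubTaxService())
        self.assertAlmostEqual(total, 66.0)

if __name__ == "__main__":
    unittest.main()
```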


Integration Testing – Decomposition-Based Integration
Approaches: decomposition-based integration, call-graph-based integration, path-based integration.

Decomposition-based integration exposes inconsistencies between modules, such as improper call or return sequences.
Integration Testing – Call-Graph-Based Integration
Approaches: decomposition-based integration, call-graph-based integration, path-based integration.

Call-graph-based integration refines the functional decomposition tree into a module calling graph.
The call graph can be captured in matrix form as an adjacency matrix (sketched below).
Two strategies: pair-wise integration and neighbourhood integration.
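A minimal sketch (module names and call relationships are assumed for illustration) of storing a call graph as an adjacency matrix and enumerating pair-wise integration sessions:

```python
# Modules and their call relationships (A calls B and C, B calls D and E, C calls F)
modules = ["A", "B", "C", "D", "E", "F"]
calls = {("A", "B"), ("A", "C"), ("B", "D"), ("B", "E"), ("C", "F")}

# Adjacency matrix: adj[i][j] == 1 if modules[i] calls modules[j]
index = {m: i for i, m in enumerate(modules)}
adj = [[0] * len(modules) for _ in modules]
for caller, callee in calls:
    adj[index[caller]][index[callee]] = 1

# Pair-wise integration: one integration session per edge of the call graph
pairs = [(modules[i], modules[j])
         for i in range(len(modules))
         for j in range(len(modules))
         if adj[i][j]]
print(pairs)   # [('A', 'B'), ('A', 'C'), ('B', 'D'), ('B', 'E'), ('C', 'F')]
```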


Integration Testing – Path-Based Integration
Approaches: decomposition-based integration, call-graph-based integration, path-based integration.

• Module Execution Path (MEP): a sequence of executable statements within a module, as in a flow graph.
• Message: the point at which control is transferred from one unit to another.
• MM-path graph: an extended flow graph in which nodes are MEPs and edges are messages.
Testing Group Hierarchy

Test
Manager

Test Leader

Test Engineers

Junior Test Engineers


Test Plan Components
• Test Plan Identifier
• Introduction
• Test items to be tested
• Features to be tested & not to be tested
• Approach
• Item Pass/fail criteria
• Suspension criteria and resumption requirements
• Test deliverables
• Testing tasks
• Environmental needs
• Responsibilities
• Staffing and training needs
• Scheduling
• Risks and contingencies
• Testing costs
• Approvals
Test Plan Components – Item Pass/Fail Criteria
This defines the set of criteria based on which a test case is passed or failed.
Failure criteria are based on the severity level of the defect.
Test Plan Components – Suspension Criteria and Resumption Requirements
Suspension: specify the criteria for suspending all or part of the testing activities.
Resumption: specify the criteria for resuming the testing activities.
Test Plan Components – Test Deliverables
Deliverable documents:
• Test plan
• Test design specification
• Test case specification
• Test item transmittal report
• Test logs
• Test incident reports
• Test summary reports
• Test harness reports
• It is important to control the software testing process so that its progress can be monitored against time, budget, and resource constraints.
• Metrics can be used to quantify the development, operation, and
maintenance of software.
• They measure the attributes that are critical to the software project.
• Measurement helps in predicting outcomes and determining risks.
(Figure: software metrics support understanding, control, and improvement of the software engineering process.)
Software metrics
The IEEE Standard Glossary of Software Engineering Terminology defines a software metric as "a quantitative measure of the degree to which a system, component, or process possesses a given attribute".

Entities for software measurement include:
• Processes
• Products
• Resources
Classification of software metrics
• Product vs Process Metrics
• Objective vs Subjective Metrics
• Primitive vs Computed Metrics
• Private vs Public Metrics

Product metrics: the complexity of the software design, the size of the program, the number of pages of documentation.
Process metrics: overall development time, development methodology used, level of experience of the programming staff.
Classification of software metrics
• Product vs Process Metrics
• Objective vs Subjective Metrics
• Primitive vs Computed Metrics
• Private vs Public Metrics

Objective metrics: the size of the program measured in LOC.
Subjective metrics: the level of experience of the programming staff.
Classification of software metrics
• Product vs Process Metrics
• Objective vs Subjective Metrics
• Primitive vs Computed Metrics
• Private vs Public Metrics

Primitive metrics: directly measured or counted, e.g., the size of the program in LOC.
Computed metrics: derived from primitive metrics, e.g., productivity measured as LOC produced per person-month.
Size Metrics
• Lines of Code (LOC)
• It is based on the number of lines of code present in a program.
• It is often expressed in thousands of lines of code (KLOC).

Limitation:
LOC is not a consistent measure, as all lines of code are not at the same level.
Size Metrics
Halstead Product Metrics (Token Count)
The size of a software product can be measured by counting the number of operators and operands (tokens).

Program Vocabulary (n): n = n1 + n2
n1 = number of unique operators, n2 = number of unique operands

Program Length (N): N = N1 + N2
N1 = total occurrences of operators in the implementation
N2 = total occurrences of operands in the implementation

Program Volume (V): V = N * log2(n)
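A minimal sketch (the token counts are illustrative assumptions) of how these quantities combine:

```python
import math

# Illustrative token counts for a small program (assumed values)
n1, n2 = 10, 15          # unique operators, unique operands
N1, N2 = 60, 45          # total operator occurrences, total operand occurrences

n = n1 + n2              # program vocabulary
N = N1 + N2              # program length
V = N * math.log2(n)     # program volume

print(f"vocabulary n = {n}, length N = {N}, volume V = {V:.1f}")
```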


Size Metrics
Function Point Analysis (FPA)

• Determine the type of project for which the function point count is to be calculated.

• Identify the counting scope and the application boundary

• Identify data functions (ILF and EIF) and their complexity

• Identify transactional functions (EI, EO, and EQ) and their complexity

• Determine the unadjusted function point count (UFP)

• Determine the value adjustment factor (VAF)

• Calculate the adjusted function point count (AFP)


(Figure: the five function types counted in FPA relative to the application boundary — External Inputs, External Outputs, External Inquiries, Internal Logical Files, and External Interface Files.)
Complexity Matrix for ILF, EIF
RET \ DET    1-19      20-50     >=51
1            Low       Low       Average
2-5          Low       Average   High
>5           Average   High      High

Complexity Matrix for EI, EO, EQ
FTR \ DET    1-5       6-19      >=20
<2           Low       Low       Average
2-3          Low       Average   High
>3           Average   High      High
Function point weights per component and complexity level:

Component   Low   Average   High
ILF          7      10       15
EIF          5       7       10
EI           3       4        6
EO           4       5        7
EQ           3       4        6

• Count the DETs, RETs, and FTRs for each of the five component types and determine the level of complexity.
• Count the number of components at each level and multiply by the appropriate weight.
• Add all five results to get the UFP.
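A minimal sketch (the RET/DET counts are assumed values) of classifying an ILF from the matrix above and looking up its weight:

```python
def ilf_eif_complexity(ret, det):
    """Complexity level of an ILF/EIF from its RET and DET counts."""
    row = 0 if ret <= 1 else 1 if ret <= 5 else 2
    col = 0 if det <= 19 else 1 if det <= 50 else 2
    matrix = [["Low", "Low", "Average"],
              ["Low", "Average", "High"],
              ["Average", "High", "High"]]
    return matrix[row][col]

# Unadjusted function point weights per component and complexity level
WEIGHTS = {
    "ILF": {"Low": 7, "Average": 10, "High": 15},
    "EIF": {"Low": 5, "Average": 7, "High": 10},
    "EI":  {"Low": 3, "Average": 4, "High": 6},
    "EO":  {"Low": 4, "Average": 5, "High": 7},
    "EQ":  {"Low": 3, "Average": 4, "High": 6},
}

level = ilf_eif_complexity(ret=3, det=25)      # -> "Average"
print(level, WEIGHTS["ILF"][level])            # Average 10
```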


General System Characteristics (GSC)

Factor   Meaning
F1       Data communication
F2       Performance
F3       Transaction rate
F4       End-user efficiency
F5       Complex processing
F6       Installation ease
F7       Multiple sites
F8       Distributed data processing
F9       Heavily used configuration
F10      Online data entry
F11      Online update
F12      Reusability
F13      Operational ease
F14      Facilitate change
F15      Security framework
F16      Multi-language support
F17      Multiple form factors

Degree of Influence   Meaning
0                     Not present or no influence
1                     Incidental influence
2                     Moderate influence
3                     Average influence
4                     Significant (critical) influence
5                     Strong influence
Steps to calculate AFP
• Evaluate each of the 14 GSC on a scale of 0-5 and determine the degree of influence (DI) of each.
• Add the DIs to obtain the total degree of influence (TDI).
• Compute the value adjustment factor (VAF):
VAF = (TDI * 0.01) + 0.65
• Compute the adjusted function point count (AFP):
AFP = UFP * VAF


Consider a project with the following parameters: EI = 50, EO = 40, EQ = 35,
ILF = 06 and EIF = 04. Assume all weighting factors as average. In addition,
the system requires critical performance, average end-user efficiency,
moderate distributed data processing and critical data communication.
Other GSC are incidental. Compute the function point count using FPA.

UFP = 50 * 4 + 40 * 5 + 35 * 4 + 6 * 10 + 4 * 7 = 628
TDI = (4 + 3 + 2 + 4) + 10 = 23
VAF = (23 * 0.01) + 0.65 = 0.88
AFP = 628 * 0.88 = 552.64
Consider a project with the following parameters: EI (low) = 30,
EO (average) = 20, EQ (average) = 35, ILF (high) = 08 and EIF (high) = 05. In
addition, the system requires critical end-user efficiency, moderate
distributed data processing and critical data communication. Other GSC are
incidental. Compute the function point count using FPA.

UFP = 30 * 3 + 20 * 5 + 35 * 4 + 8 * 15 + 5 * 10 = 500
TDI = (4 + 2 + 4) + 11 = 21
VAF = (21 * 0.01) + 0.65 = 0.86
AFP = 500 * 0.86 = 430
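A minimal sketch reproducing the first worked example (all weighting factors average):

```python
# Unadjusted function point weights for average complexity
AVERAGE_WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

counts = {"EI": 50, "EO": 40, "EQ": 35, "ILF": 6, "EIF": 4}
ufp = sum(counts[c] * AVERAGE_WEIGHTS[c] for c in counts)   # 628

# Degrees of influence: the four named GSC plus 10 incidental ones (1 each)
tdi = 4 + 3 + 2 + 4 + 10 * 1                                # 23
vaf = tdi * 0.01 + 0.65                                     # 0.88
afp = ufp * vaf                                             # 552.64

print(ufp, tdi, round(vaf, 2), round(afp, 2))
```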
Testing metrics by category

Category   Attributes
Progress   Scope of testing; test progress; defect backlog; staff productivity; suspension criteria; exit criteria
Cost       Testing cost estimation; duration of testing; resource requirements; training needs of testing groups and tool requirements; cost-effectiveness of automated tools
Quality    Effectiveness of test cases; effectiveness of smoke tests; quality of test plan; test completeness
Size       Estimation of test cases; number of regression tests; tests to automate
Quality – Effectiveness of test cases
Quality attributes: effectiveness of test cases, effectiveness of smoke test, quality of test plan, test completeness.
Measures of test-case effectiveness:
• Number of faults found in testing
• Number of faults found by customers
• Defect Removal Efficiency (DRE)
• Defect age

DRE = (Number of defects found during testing / (Number of defects found during testing + Number of defects found after release)) x 100

It considers potential issues such as the severity of bugs and the time interval of failures, and is helpful in determining test effectiveness in the long run.

Example: During rigorous testing in a mobile application development project, the QA team identifies 120 defects. After the app is launched, users report an additional 30 defects.

DRE = (120 / (120 + 30)) x 100 = 80%
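A minimal sketch of the same calculation:

```python
def dre(found_in_testing, found_by_customers):
    """Defect Removal Efficiency as a percentage."""
    return found_in_testing / (found_in_testing + found_by_customers) * 100

print(dre(120, 30))   # 80.0
```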


Quality – Effectiveness of test cases
Measures of test-case effectiveness:
• Number of faults found in testing
• Number of faults found by customers
• Defect Removal Efficiency (DRE)
• Defect age

Spoilage = Sum of (Number of defects x Defect age) / Total number of defects

Defect age is the period between the time the defect is detected and the time it is resolved.
Spoilage is suitable for measuring the long-term trend of test effectiveness.


Quality – Effectiveness of test cases
Spoilage = Sum of (Number of defects x Defect age) / Total number of defects

Consider a project with the following data and calculate its defect spoilage:

SDLC Phase          No. of Defects   Defect age
Requirement Spec.   34               2
HLD                 25               4
LLD                 17               5
Coding              10               6

Spoilage = ?
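A minimal sketch that works out the exercise:

```python
# (phase, number of defects, defect age) taken from the table above
data = [("Requirement Spec.", 34, 2), ("HLD", 25, 4), ("LLD", 17, 5), ("Coding", 10, 6)]

weighted = sum(n * age for _, n, age in data)   # 34*2 + 25*4 + 17*5 + 10*6 = 313
total = sum(n for _, n, _ in data)              # 86
spoilage = weighted / total
print(round(spoilage, 2))                       # 3.64
```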
Quality – Effectiveness of smoke test
• Effectiveness of test cases
• Effectiveness of smoke test
• Quality of Test Plan
• Test Completeness

Smoke tests are required to ensure that the application is stable enough for testing.
They can be time-consuming.
They cover basic operations such as logging in, managing records, and file handling.
Quality – Quality of test plan
• Effectiveness of test cases
• Effectiveness of smoke test
• Quality of Test Plan
• Test Completeness

The test plan should be effective in directing the testing effort toward finding a high number of defects.

Berger evaluates a test plan using a multi-dimensional qualitative method based on rubrics:
• Theory of objective
• Theory of scope
• Theory of coverage
• Theory of risk
• Theory of data
• Theory of originality
• Theory of communication
• Theory of usefulness
• Theory of completeness
• Theory of insightfulness
Quality – Test completeness
• Effectiveness of test cases
• Effectiveness of smoke test
• Quality of Test Plan
• Test Completeness

(Figure: an example requirements-traceability matrix mapping test cases (TC1, TC2, ...) to requirements (R1, R2, ...); a 1 marks that a test case covers the requirement. See the sketch below.)
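A minimal sketch (the matrix entries are illustrative assumptions, not taken from the slide) of measuring test completeness as the fraction of requirements covered by at least one test case:

```python
# Traceability matrix: rows are test cases, columns are requirements (R1, R2, R3)
requirements = ["R1", "R2", "R3"]
matrix = {
    "TC1": [1, 0, 0],
    "TC2": [1, 1, 0],
    "TC3": [0, 1, 0],
    "TC4": [0, 0, 0],   # not yet mapped to any requirement
}

covered = sum(any(row[j] for row in matrix.values()) for j in range(len(requirements)))
completeness = covered / len(requirements) * 100
print(f"{completeness:.0f}% of requirements covered")   # 67% in this example
```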
Size – Estimation of test cases
• Estimation of test cases
• Number of regression tests
• Tests to automate

Refers to how much of the code and the requirements are covered by the test set.
It helps in designing new test cases and improving existing ones.
Size – Number of regression tests
• Estimation of test cases
• Number of regression tests
• Tests to automate

Number of test cases reused


Number of test cases added to the tool repository
Number of test cases rerun when changes are made to the software
Number of planned regression tests executed
Number of planned regression tests executed and passed
Size – Tests to automate
• Estimation of test cases
• Number of regression tests
• Tests to automate
Regression tests
Smoke Tests
Load tests
Performance tests
Estimating Testing Efforts – Halstead Metrics
• Halstead Metrics
• Development Ratio Method
• Project-Staff Ratio Method
• Test Procedure Method
• Task Planning Method

Using Halstead's definitions, the testing effort for a module can be estimated as
e = V / PL
where V is the program volume and PL is the program level.

The percentage of the overall testing effort to allocate to a module k can then be estimated as
% testing effort(k) = e(k) / (e(1) + e(2) + ... + e(N))
where e(i) is the effort computed for module i and N is the number of modules.
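A minimal sketch (module names, volumes, and program levels are assumed values, not from the slides) of allocating testing effort across modules:

```python
# Assumed Halstead volume V and program level PL for each module
modules = {"mod_a": (1200.0, 0.05), "mod_b": (800.0, 0.08), "mod_c": (400.0, 0.10)}

effort = {name: V / PL for name, (V, PL) in modules.items()}
total = sum(effort.values())

for name, e in effort.items():
    print(f"{name}: {e / total:.1%} of overall testing effort")
```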
Estimating Testing Efforts – Development Ratio Method

The number of testing personnel required is based on the developer-to-tester ratio.
The ratio depends on:
• the type of software
• the complexity of the software
• the testing level
• the scope of testing
• the test effectiveness
Estimating Testing Efforts – Project-Staff Ratio Method

The number of testing personnel required is based on the ratio of the overall project team size to testers.
The ratio depends on:
• the type of software
• the complexity of the software
• the testing level
• the scope of testing
• the test effectiveness
Test Procedure Method
The baseline for estimation is the historical record of the number of person-hours expended to perform testing tasks.

Number of Test Procedures (NTP)   Person-hours consumed for testing (PH)   Hours per Test Procedure (PH/NTP)
840                               6000                                     7.14
1000                              7140                                     7.14

First, the number of person-hours is estimated for the full project; it is then broken down over the individual test activities.
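A minimal sketch (the planned test-procedure count for the new project is an assumed value) of using the historical hours-per-test-procedure figure:

```python
# Historical records: (number of test procedures, person-hours consumed)
history = [(840, 6000), (1000, 7140)]

hours_per_tp = sum(ph for _, ph in history) / sum(ntp for ntp, _ in history)

planned_test_procedures = 1120          # assumed count for the new project
estimated_hours = planned_test_procedures * hours_per_tp
print(round(hours_per_tp, 2), round(estimated_hours))   # 7.14 7998
```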
Architectural Design Metrics
Card and Glass introduced three measures of software design complexity:

Structural complexity: S(m) = fout(m)^2, where fout(m) is the fan-out of module m.
It indicates the number of stubs required for unit testing of module m.

Data complexity: D(m) = V(m) / (fout(m) + 1), where V(m) is the number of input and output variables passed to and from module m.
It measures the complexity in the internal interface of module m and indicates the probability of errors in module m.

System complexity: C(m) = S(m) + D(m), the sum of the structural and data complexity.
Testing effort increases as architectural complexity increases.