Testing Fundamentals

Index:
1. Introduction
2. Testing overview
   2.1. What is testing?
   2.2. Why Testing?
3. Software Test Life Cycle
   3.1. Requirement Analysis
   3.2. Test Strategizing
   3.3. Test case development
   3.4. Test Environment Setup
   3.5. Test execution
   3.6. Test cycle closure
4. SDLC Vs STLC
5. V-Model of testing
6. Test case design/optimization techniques
   6.1. Need for test case optimization
   6.2. Functional Technique
   6.3. Structural Technique
   6.4. Special Technique
7. Testing Techniques
   7.1. Static Testing
   7.2. Dynamic Testing
      7.2.1. White box testing/Structural testing
      7.2.2. Black box testing
8. Types of testing
   8.1. Functional Testing
      8.1.1. Unit Testing
      8.1.2. Integration Testing
      8.1.3. Smoke Testing
      8.1.4. System Testing
      8.1.5. Regression Testing
      8.1.6. User Acceptance Testing
      8.1.7. Globalization Testing
      8.1.8. Localization Testing
   8.2. Non Functional Testing
      8.2.1. Performance Testing
      8.2.2. Compatibility Testing
      8.2.3. Data Migration Testing
      8.2.4. Data Conversion Testing
      8.2.5. Security/Penetration Testing
      8.2.6. Usability Testing
      8.2.7. Install/Un-Install Testing
9. Defect reporting and tracking
   9.1. Defect Lifecycle
   9.2. Defect Management tools
      9.2.1. Mercury's Test Director
      9.2.2. Mozilla's Bugzilla
10. When to Stop Testing?
11. Testing Case Study
12. Test Deliverables
13. Appendix
   13.1. Definition of Quality
   13.2. Quality Assurance Vs Quality Control
   13.3. Measurements and Metrics
      13.3.1. Cost Of Quality
   13.4. Definition of the terms used in Project LC
   13.5. Common terms in Software Testing
14. References
2. Testing overview

2.1. What is testing?
"Testing is an activity in which a system or component is executed under specified conditions; the results are observed and recorded and an evaluation is made of some aspect of the system or component" - IEEE

Software testing is a process used to identify the correctness, completeness and quality of developed computer software. It includes a set of activities conducted with the intent of finding errors in the software so that they can be corrected before the product is released to the end users.

2.2. Why Testing?
- Ensures completeness of the product - Testing a software product ensures that the customer requirements map to the final product that is delivered.
- Reduction in total cost of ownership - By providing software that looks and behaves as shown in the user documentation, customers require fewer hours of training and support from product experts, which reduces the total cost of ownership.
- Accretion of revenues - Bug-free code (which is obtained only after intensive testing) also brings customer satisfaction, which leads to repeat business and more revenues.
3. Software Test Life Cycle

[Figure: stages of the Software Test Life Cycle - Requirement Analysis, Test Strategizing, Test case development, Test Environment setup, Test execution and Test cycle closure - which together specify the system's quality.]
3.1. Requirement Analysis

Entry criteria
- Requirements document available (both functional and non-functional)
- Acceptance criteria defined
- Application architectural document available

Activities
- Analyse the business functionality to know the business modules and module-specific functionalities.
- Identify all transactions in the modules.
- Identify all the user profiles.
- Gather user interface/authentication and geographic spread requirements.
- Identify the types of tests to be performed.
- Gather details about testing priorities and focus.
- Prepare the Requirement Traceability Matrix (RTM). Refer to Test Deliverables (RTM section) for details.
- Identify details of the test environment where testing is to be carried out.
- Perform automation feasibility analysis (if required).

Metrics & Measures
- Effort spent on
  - Requirement analysis to prepare the RTM
  - Review and rework of the RTM
  - Automation feasibility analysis (if done)
- Defects
  - RTM review defects

Deliverables
- RTM
- Automation feasibility report (if applicable)

Exit criteria
- Signed-off RTM
- Test automation feasibility report signed off by the client
3.2. Test Strategizing

Entry criteria
- Requirements documents
- Requirement Traceability Matrix
- Test automation feasibility document

Activities
- Analyze the various testing approaches available
- Finalize the best-suited approach
- Prepare the test plan/strategy document for the various types of testing
- Select test tools
- Estimate the test effort
- Plan resources and determine roles and responsibilities

Metrics & Measures
- Effort spent on
  - Test plan/strategy preparation
  - Test plan/strategy review
  - Test plan/strategy rework
  - Test tool selection
- Defects
  - Test plan/strategy review defects

Deliverables
- Test plan/strategy document
- Effort estimation document

Exit criteria
- Approved test plan/strategy document
- Effort estimation document signed off
3.3. Test case development

Entry criteria
- Requirements documents
- RTM and test plan
- Automation analysis report

Activities
- Create test cases and automation scripts (where applicable)
- Review and baseline test cases and scripts
- Create test data

Measures & Metrics
- Effort spent on
  - Test case/script preparation
  - Test case/script review
- Productivity
  - No. of test cases or scripts generated / effort spent in person hours

Deliverables
- Test cases/scripts
- Test data

Exit criteria
- Reviewed and signed-off test cases/scripts
- Reviewed and signed-off test data
3.4. Test Environment Setup

Entry criteria
- System design and architecture documents are available
- Environment set-up plan is available

Activities
- Understand the required architecture and environment set-up
- Prepare the hardware and software requirement list
- Finalize connectivity requirements
- Prepare an environment setup checklist
- Set up the test environment and test data
- Perform a smoke test on the build (a minimal sketch of such a check follows this phase description)
- Accept/reject the build depending on the smoke test result

Measures & Metrics
- Effort spent on
  - Test environment setup
  - Test data setup
  - Sanity test
- Defects
  - Test environment setup defects
  - Test data setup defects
  - Defects found in the sanity test

Deliverables
- Environment ready with test data set up
- Smoke test results
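As a sketch of the smoke test mentioned above, the script below checks that a freshly deployed build responds on a few critical pages before the build is accepted into the test environment. The base URL, the paths checked and the health-check endpoint are assumptions for illustration only.

```python
# Minimal smoke-test sketch (hypothetical URLs and checks, for illustration only).
# Run it against a new build before accepting it into the test environment.
import urllib.request


def smoke_test(base_url: str) -> bool:
    """Return True if the critical pages of the build respond as expected."""
    checks = {
        "/": 200,        # home page should load
        "/login": 200,   # login page should load
        "/health": 200,  # assumed health-check endpoint
    }
    for path, expected_status in checks.items():
        try:
            with urllib.request.urlopen(base_url + path, timeout=10) as resp:
                if resp.status != expected_status:
                    print(f"FAIL {path}: got {resp.status}, expected {expected_status}")
                    return False
        except Exception as exc:
            print(f"FAIL {path}: {exc}")
            return False
        print(f"PASS {path}")
    return True


if __name__ == "__main__":
    build_ok = smoke_test("http://test-env.example.com")
    print("Accept build" if build_ok else "Reject build")
```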
3.5. Test execution

Entry criteria
- Baselined RTM is available
- Baselined test plan is available
- Test environment is ready
- Test data set-up is done
- Baselined test cases/scripts are available
- Unit/integration test report for the build to be tested is available

Activities
- Execute tests as per the plan
- Document test results and log defects for failed cases
- Update test plans/test cases, if necessary
- Map defects to test cases in the RTM
- Retest the defect fixes
- Perform regression testing of the application
- Track the defects to closure

Measures & Metrics (a small worked example of these metrics follows this phase description)
- Effort spent on
  - Test case creation/update (in case of requirement changes)
  - Test execution
  - Defect detection and logging
- Defects
- Productivity
  - No. of test cases/scripts executed / execution effort in person hours
- Defect Detection Rate
  - No. of valid defects detected / test execution effort
- Test Effectiveness
  - No. of valid defects reported during testing / (No. of valid defects reported during testing + No. of defects reported by the client)

Deliverables
- Completed RTM with execution status
- Test cases updated with results
- Defect reports
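To make the productivity and effectiveness formulas above concrete, here is a small worked sketch; all of the numbers are made up for illustration.

```python
# Worked example of the test-execution metrics defined above (illustrative numbers only).

test_cases_executed = 120
execution_effort_person_hours = 60
valid_defects_in_testing = 45
defects_reported_by_client = 5

# Productivity = test cases executed per person hour of execution effort
productivity = test_cases_executed / execution_effort_person_hours                 # 2.0 cases/hour

# Defect Detection Rate = valid defects detected per hour of execution effort
defect_detection_rate = valid_defects_in_testing / execution_effort_person_hours   # 0.75 defects/hour

# Test Effectiveness = defects found in testing / (defects found in testing + defects found by the client)
test_effectiveness = valid_defects_in_testing / (
    valid_defects_in_testing + defects_reported_by_client
)                                                                                   # 0.9, i.e. 90%

print(f"Productivity:       {productivity:.2f} test cases per person hour")
print(f"Defect detection:   {defect_detection_rate:.2f} defects per person hour")
print(f"Test effectiveness: {test_effectiveness:.0%}")
```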
3.6. Test cycle closure

Entry criteria
- Testing has been completed
- Test results are available
- Defect logs are available

Activities
- Evaluate cycle completion criteria based on
  - Time
  - Test coverage
  - Cost
  - Software quality
  - Critical business objectives
- Prepare test metrics based on the above parameters
- Document the learnings from the project
- Prepare the test closure report
- Report the quality of the work product to the customer, qualitatively and quantitatively
- Analyse the test results to find the defect distribution by type and severity

Measures & Metrics
- Effort spent on
  - Defect analysis
  - Preparing the test closure report

Deliverables
- Test closure report
- Test metrics

Exit criteria
- Test closure report signed off by the client
Summary of STLC stages

The entry criteria, activities, exit criteria and deliverables for each stage are described in detail in sections 3.1 to 3.6 above. In summary:

- Requirement Analysis - Exit criteria: signed-off RTM; test automation feasibility report signed off by the client. Deliverables: RTM; automation feasibility report (if applicable).
- Test Strategizing - Exit criteria: approved test plan/strategy document; effort estimation document signed off. Deliverables: test plan/strategy document; effort estimation document.
- Test case development - Exit criteria: reviewed and signed-off test cases/scripts and test data. Deliverables: test cases/scripts; test data.
- Test Environment Setup - Exit criteria: environment setup working as per the plan and checklist; test data setup complete; smoke test successful. Deliverables: environment ready with test data set up; smoke test results.
- Test Execution - Deliverables: completed RTM with execution status; test cases updated with results; defect reports.
- Test Cycle Closure - Exit criteria: test closure report signed off by the client. Deliverables: test closure report; test metrics.
4. SDLC Vs STLC

The various stages involved in developing a product are collectively called the software development life cycle (SDLC). It begins when a problem has been identified and the solution needs to be implemented in the form of software, and it ends when verification and validation of the developed software are complete and the software is accepted by the end customer. The software test life cycle (STLC) is a part of the software development life cycle.
The following table shows the correspondence between each development phase and its testing phase.

Development Phase                   Testing Phase
Requirements specification          Acceptance test
High level / architectural design   Integration test
Detailed design and coding          Unit test
5. V-Model of testing

The V-Model describes life cycle testing, in which each development phase has a corresponding test phase associated with it. The checks that happen while development activities are in progress take the form of reviews, inspections and walkthroughs. Planning for the unit, integration, system and user acceptance tests is also done alongside the corresponding development phases, as shown in the diagram below. In this way, as the project progresses through the life cycle, the gap between the development and the test team reduces, and discrepancies are identified earlier in the life cycle, where they are easier and cheaper to fix.

The V-Model can be explained with the following diagram (Figure 3).
[Figure 3: V-Model - the development phases on the left arm of the V map to Unit Testing, Integration Testing and System Testing on the right arm, with CODE at the base.]
Both the development team and the test team start working at the beginning of the project with the same information.

The development team works on collating the requirements and building the product as per those requirements. Reviews, inspections and walkthroughs are conducted during this time to check adherence to processes. These checks, which are done early in the life cycle, are called verification activities.
A condition in the code may be a Boolean variable (T or F), a relational expression (a < b), or a compound of several simple conditions ((a = b) and (c > d)).
7. Testing Techniques

Software can be tested either by running the program and verifying each step of its execution against the expected results, or by statically examining the code and related work products against the stated requirements. These two distinct approaches have led to two broad techniques, static testing and dynamic testing, as shown below (Figure 4).
[Figure 4: Testing Techniques - Static (Informal Review, Walkthrough, Inspection) and Dynamic (White Box Testing, Black Box Testing).]
7.2. Dynamic Testing

7.2.1. White box testing / Structural testing
This technique takes into account the internal structure of a system or component. Complete access to the object's source code is needed for white-box testing. It is known as 'white box' testing because the tester gets to see the internal working of the code.
Unit testing and parts of integration testing fall under the white box testing category.
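As a sketch of what white box testing looks like in practice, the tests below are written with full knowledge of the function's internal branches and are chosen so that every branch is exercised at least once. The function and its tests are hypothetical.

```python
# White-box sketch: tests are derived from the code's internal structure
# so that every branch of apply_discount() is executed at least once.
import unittest


def apply_discount(amount: float, is_member: bool) -> float:
    """Hypothetical function under test with several internal branches."""
    if amount < 0:
        raise ValueError("amount must be non-negative")
    if is_member and amount > 100:
        return amount * 0.90  # members get 10% off large orders
    return amount


class ApplyDiscountWhiteBoxTests(unittest.TestCase):
    def test_negative_amount_branch(self):
        with self.assertRaises(ValueError):
            apply_discount(-1, is_member=True)

    def test_member_discount_branch(self):
        self.assertAlmostEqual(apply_discount(200, is_member=True), 180.0)

    def test_no_discount_branch(self):
        self.assertEqual(apply_discount(200, is_member=False), 200)
        self.assertEqual(apply_discount(50, is_member=True), 50)


if __name__ == "__main__":
    unittest.main()
```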
7.2.2. Black box testing
A testing method in which the application under test is viewed as a black box and the internal behaviour of the program is completely ignored. Testing is based on the requirement specifications.
System testing and regression testing fall under the black box testing category.
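By contrast, a black box test is written purely from the stated requirement, without inspecting the code. Assuming a hypothetical requirement that "the password must be 8 to 16 characters long", a spec-driven test might look like the sketch below; the validator shown is only a stand-in so the example runs.

```python
# Black-box sketch: test inputs are chosen only from the stated requirement
# (boundary values around the 8-16 character rule); the implementation is not inspected.
import unittest


def is_valid_password(password: str) -> bool:
    """Stand-in implementation so the example runs; a black-box tester would not look at this."""
    return 8 <= len(password) <= 16


class PasswordRuleBlackBoxTests(unittest.TestCase):
    def test_too_short_rejected(self):
        self.assertFalse(is_valid_password("a" * 7))

    def test_minimum_length_accepted(self):
        self.assertTrue(is_valid_password("a" * 8))

    def test_maximum_length_accepted(self):
        self.assertTrue(is_valid_password("a" * 16))

    def test_too_long_rejected(self):
        self.assertFalse(is_valid_password("a" * 17))


if __name__ == "__main__":
    unittest.main()
```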
Advantages of dynamic testing

White box testing
- The logic of the system is tested
- Parts that could be missed by black box testing also get covered
- Redundant code can be identified and eliminated
- Cost effective when appropriate techniques are used

Black box testing
- Simulates actual system usage
- Makes no assumptions about the system's internal structure

Disadvantages of dynamic testing

White box testing
- Does not ensure that all user requirements are met
- May not simulate real-time situations
- Requires a high skill level

Black box testing
- May miss logical errors
- There is a chance of redundant testing
- Cannot determine which parts of the code are not being executed

Thus, a good combination of black box and white box testing can ensure adequate coverage of code, logic and functionality.
8. Types of testing

The types of testing covered in the following sections fall into two broad groups.

Functional testing
- Unit Testing
- Integration Testing
- Smoke testing / Sanity testing
- System Testing
- Regression Testing
- User Acceptance Testing
  - Alpha Testing
  - Beta Testing
- Globalization Testing
- Localization Testing

Non-functional testing
- Performance Testing
  - Stress Testing
  - Volume Testing
  - Load Testing
  - Endurance Testing
- Scalability Testing
- Compatibility Testing
- Data Conversion Testing
- Security / Penetration Testing
- Usability Testing
The following diagram shows the integration of modules in case of Bottom-Up strategy.
[Figures: module hierarchy of a sample Payment System (Modules A to F) used to illustrate module integration.]
9. Defect reporting and tracking

9.1. Defect Lifecycle

Step 1 - Log the defect - While executing the tests, the tester compares the actual result with the expected result. In case of a discrepancy between the two, the issue is logged into the defect tracking tool with the status 'New'. All details required to reproduce the defect are entered into the tracking tool.

Step 2 - Check the validity of the issue - The development team validates the issue reported; if it is found valid and is reported for the first time, it is assigned to the concerned developer and the status changes to 'Assigned'. If it is found invalid, the development team marks it as 'Rejected/Not a Defect' and submits it back to the tester. The tester closes the defect with proper comments if he/she agrees with the developer's comments; otherwise the tester will 'Reopen' the defect with data to support it. If the issue reported is not in scope, the developer changes the status to 'Postponed', and if it was reported earlier, the status is set to 'Duplicate'.

Step 3 - Defect resolution - The developer works on the defect assigned against his/her name and, once it is corrected, changes the status to 'Ready for test' or 'Retest'. He/she then assigns the defect back to the test team with details of the correction and of the build in which the corrected code will be released. The developer is expected to perform a unit test to make sure the fix works before sending it to the test team.

Step 4 - Retest the defect - The tester retests the defect in the build in which the fix is available and, if it is found to be working, closes the defect with comments. If not, the tester changes the status to 'Reopened' and assigns it back to the developer with retest comments; it then goes back to step 2 and the cycle continues. The tester also performs regression testing to make sure that the defect fix has not introduced any new defects into the application.
Defect lifecycle can be illustrated with the help of the following diagram (Figure 10)
[Figure 10: Defect life cycle flowchart - a newly logged defect (status 'New') is checked for validity and scope and becomes 'Assigned', 'Rejected', 'Postponed' or 'Duplicate'; the developer fixes it ('In Progress', then 'Retest'); the tester retests and the defect is either 'Closed' or 'Reopened'.]
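The status transitions described in steps 1-4 can also be summarised as a simple state machine. The sketch below encodes the allowed transitions; the status names follow the text above, but the transition table is an interpretation of that text, not the behaviour of any particular defect tracking tool.

```python
# Defect life-cycle sketch: allowed status transitions as described in steps 1-4 above.
ALLOWED_TRANSITIONS = {
    "New":         {"Assigned", "Rejected", "Postponed", "Duplicate"},
    "Assigned":    {"In Progress"},
    "In Progress": {"Retest"},           # a.k.a. "Ready for test"
    "Retest":      {"Closed", "Reopened"},
    "Reopened":    {"Assigned"},
    "Rejected":    {"Closed", "Reopened"},
    "Postponed":   set(),
    "Duplicate":   set(),
    "Closed":      set(),
}


def move(defect: dict, new_status: str) -> None:
    """Change a defect's status only if the transition is allowed."""
    current = defect["status"]
    if new_status not in ALLOWED_TRANSITIONS[current]:
        raise ValueError(f"Illegal transition: {current} -> {new_status}")
    defect["status"] = new_status


if __name__ == "__main__":
    defect = {"id": "D-001", "status": "New"}
    for status in ("Assigned", "In Progress", "Retest", "Closed"):
        move(defect, status)
        print(defect)
```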
Properties of a defect
At a minimum, the following information should be captured when reporting a defect.
- Defect_ID - Unique identification for the defect.
- Defect Description - Detailed description of the defect, including the module in which it was found.
- Severity - Critical/Major/Minor/Enhancement, based on the impact of the defect on the application.
- Priority - High/Medium/Low, based on the urgency with which the defect should be fixed.
- Status - Status of the defect (New, Assigned, Closed, etc.)
- Version - Version of the application in which the defect was found.
- Reference - Reference to the documents concerned, i.e. requirements, design, architecture, etc.
- Steps - Detailed steps, along with screenshots, with which the developer can reproduce the defect.
- Date Raised - Date when the defect was raised.
- Date Closed - Date when the defect was closed.
- Detected By - Name/ID of the tester who raised the defect.
- Fixed By - Name/ID of the developer who fixed it.
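The minimum set of defect properties listed above maps naturally onto a simple record type. The sketch below is illustrative; the field names and the sample values are assumptions.

```python
# Sketch of a defect record carrying the minimum properties listed above.
from dataclasses import dataclass
from datetime import date
from typing import List, Optional


@dataclass
class Defect:
    defect_id: str            # unique identification
    description: str          # detailed description, including the module
    severity: str             # Critical / Major / Minor / Enhancement
    priority: str             # High / Medium / Low
    status: str               # New, Assigned, Closed, ...
    version: str              # application version in which it was found
    reference: str            # requirements / design / architecture reference
    steps: List[str]          # steps to reproduce
    date_raised: date
    detected_by: str
    date_closed: Optional[date] = None
    fixed_by: Optional[str] = None


if __name__ == "__main__":
    d = Defect(
        defect_id="D-102",
        description="Login page: error message overlaps the password field (Login module)",
        severity="Minor",
        priority="Medium",
        status="New",
        version="1.4.2",
        reference="REQ-LOGIN-07",
        steps=["Open the login page", "Enter an invalid password", "Observe the error message"],
        date_raised=date(2024, 1, 15),
        detected_by="tester01",
    )
    print(d.defect_id, d.status, d.severity)
```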
Differentiating Severity & Priority
Severity describes the impact of the bug on the application, whereas Priority is related to defect
fixing urgency.
The following table gives an example of a severity ranking.

Severity       Ranking criteria
Critical       Severity 1 errors
High/Major     Severity 2 errors
Minor          Severity 4 errors
Enhancement    Severity 5 errors
A low-severity issue can have a high priority, but the other way round is generally not the case.
9.2. Defect Management tools

A defect management tool benefits all the stakeholders:
- Customer - Easy tracking of defects; helps to know the status of the application.
- Project Manager - Quick access to project statistics.
- Tester - Efficient reporting and tracking of defects.
- Developer - The defect details logged can be used to improve the development process.

Typical features of a defect management tool include:
- User friendliness
- Email notification
- File attachment
- Audit trail
- Configuration management
- Customizable fields
- Metric reports and graphs
- Remote administration
- Report cross-referencing
- Security implementation
- Web-based client
- Workflow support
10. When to Stop Testing?

Some of the criteria commonly used to decide when to stop (or suspend) testing are:
- Test cases completed with a certain predetermined percentage of test cases passed
- Coverage of code/functionality/requirements reaches a specified point
- Bug rate falls below a predetermined level
- The application crashes immediately once testing begins
- Many critical defects are found within a short period of test execution
12. Test Deliverables

Requirement Traceability Matrix (RTM) - The RTM is a deliverable from the test team during the Requirement Analysis phase.
- It provides the mapping from test cases to business scenarios to business functionality.
- It helps to link business criticality and market priority with the test requirements.
- It serves as a single source for tracking purposes.
- It helps in impact analysis when a requirement changes, as the test cases for a particular requirement can be easily identified using the traceability matrix.
- It is used for prioritization of tests during crunch times, as it documents the criticality of the test cases.

A sample RTM template screenshot is given below (Figure 12).
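A minimal sketch of how an RTM can be represented and used for impact analysis is shown below; the requirement and test case identifiers are hypothetical, and a real RTM would typically live in a spreadsheet or a test management tool.

```python
# Requirement Traceability Matrix sketch (hypothetical requirement and test case IDs).
rtm = {
    "REQ-001": {"description": "User can log in with valid credentials",
                "criticality": "High",
                "test_cases": ["TC-001", "TC-002"]},
    "REQ-002": {"description": "User can reset a forgotten password",
                "criticality": "Medium",
                "test_cases": ["TC-010"]},
}

# Impact analysis: when REQ-001 changes, list the test cases that must be revisited.
changed_requirement = "REQ-001"
print(f"{changed_requirement} impacts:", rtm[changed_requirement]["test_cases"])

# Coverage check: flag any requirement that has no test case mapped to it.
uncovered = [req for req, row in rtm.items() if not row["test_cases"]]
print("Requirements without test cases:", uncovered or "none")
```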
Test Plan - A software test plan is a document that describes the objectives, scope, approach and focus of a software testing effort. This document is prepared during the test strategizing phase. The process of preparing a test plan is a useful way to think through the effort needed to validate the acceptability of a software product. The completed document helps people outside the test group understand the why and how of product validation.

Please refer to the attached Test Plan template for details (Figure 13).
Test case - A set of steps used to evaluate whether a particular aspect of a business scenario/condition works correctly. A test case should contain, at a minimum, particulars such as a test case identifier, description, steps, input data requirements and expected results. Note that the process of developing test cases can help find problems in the requirements or design of an application, since it requires thinking through the operation of the application completely. For this reason, it is useful to prepare test cases early in the development cycle if possible.

For example, if we had to test the login functionality, the test case would list each step of the login flow along with its test data and expected result (a hypothetical sketch follows below).

A sample test case template screenshot is given below (Figure 14).
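As a sketch of how the login example could be written down, the structure below follows the minimum particulars listed above; the identifier, user id, steps and expected results are assumptions for illustration.

```python
# Hypothetical test case for the login functionality, following the minimum
# particulars listed above (identifier, description, steps, test data, expected result).
login_test_case = {
    "id": "TC-LOGIN-001",
    "description": "Verify that a registered user can log in with valid credentials",
    "preconditions": ["User 'qa_user01' exists and is active"],
    "steps": [
        ("Launch the application and open the login page", "Login page is displayed"),
        ("Enter user id 'qa_user01' and the valid password", "Credentials are accepted"),
        ("Click the 'Login' button", "User is taken to the home page"),
    ],
    "test_data": {"user_id": "qa_user01", "password": "<valid password>"},
    "expected_result": "User is logged in and the home page is displayed",
}

for number, (action, expected) in enumerate(login_test_case["steps"], start=1):
    print(f"Step {number}: {action} -> Expected: {expected}")
```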
Test data - The data/values used to test an application are called test data. For example, if we are checking the login functionality, the user id and password used for testing this functionality form the test data. Test data can be classified as follows:
- Static data (permanent data)
- Configurable data (parameters driving the application)
- Master data (mostly read-only data used for reference)
- Transaction data (operational data)

Test data generation includes the phases mentioned below.
- Test data identification - Identify the types of test data mentioned above, as per the requirements.
- Test data setup - This can be done manually or with a tool. Commonly used tools are:
  - SQL*Loader - A utility that helps the user load data from a flat file into one or more database tables.
  - Export utility - Helps the user copy data from one database to another.
  - Data factory - A tool that helps populate test databases with syntactically correct test data. It first reads the database schema and displays the database tables/columns; the user can then point, click and populate the database using the available features.
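Tools like the ones listed above generate syntactically correct rows from a schema. The same idea can be sketched by hand, as below; the CUSTOMERS table layout, the value rules and the output file name are assumptions for illustration, and the CSV produced could then be imported with a loader utility or turned into INSERT statements.

```python
# Sketch of simple test data generation: produce syntactically valid rows for a
# hypothetical CUSTOMERS table and write them to a CSV file for later loading.
import csv
import random
import string


def random_name(length: int = 8) -> str:
    return "".join(random.choices(string.ascii_lowercase, k=length)).capitalize()


def generate_customers(count: int):
    for customer_id in range(1, count + 1):
        yield {
            "customer_id": customer_id,
            "name": random_name(),
            "email": f"user{customer_id}@example.com",
            "credit_limit": random.choice([500, 1000, 5000]),
        }


with open("customers_test_data.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["customer_id", "name", "email", "credit_limit"])
    writer.writeheader()
    writer.writerows(generate_customers(25))

print("Wrote 25 rows of test data to customers_test_data.csv")
```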
13. Appendix

Quality can be characterized by attributes such as correctness (the extent to which a program satisfies its specifications and fulfills the user's mission and goals), reliability, integrity, usability, maintainability, testability, flexibility, portability, reusability and interoperability.
[Table: measurements and metrics with their units - hours, days, number, LOC.]
13.3.1. Cost Of Quality

Note: In Infosys, COQ is represented using the term Appraisal & Rework Cost (ARC).
Studies show that the COQ in IT is approximately 50% of the total cost of building a product. Of the 50% COQ, 40% is failure, 7% is appraisal, and 3% is prevention. Other studies have shown that $1 spent on appraisal costs will reduce failure costs threefold, and each dollar spent on prevention costs will reduce failure costs tenfold. Obviously, the right appraisal and prevention methods must be used to get these benefits.
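A small arithmetic sketch of the figures quoted above follows. The $1,000,000 total project cost is an assumption, and the 40/7/3 split is read as shares of the total cost (which is consistent with the three figures summing to the 50% COQ).

```python
# Cost of Quality split from the figures quoted above, for an assumed $1,000,000 project.
total_cost = 1_000_000

coq = total_cost * 0.50              # ~50% of total cost is cost of quality
failure_cost = total_cost * 0.40     # 40% of total cost: failure cost
appraisal_cost = total_cost * 0.07   # 7% of total cost: appraisal cost
prevention_cost = total_cost * 0.03  # 3% of total cost: prevention cost

print(f"Cost of quality: ${coq:,.0f}")
print(f"  Failure:       ${failure_cost:,.0f}")
print(f"  Appraisal:     ${appraisal_cost:,.0f}")
print(f"  Prevention:    ${prevention_cost:,.0f}")

# Per the studies cited, $1 of appraisal saves $3 of failure cost and
# $1 of prevention saves $10 of failure cost.
extra_appraisal = 10_000
extra_prevention = 10_000
print(f"Extra ${extra_appraisal:,} on appraisal could avoid ~${extra_appraisal * 3:,} of failure cost")
print(f"Extra ${extra_prevention:,} on prevention could avoid ~${extra_prevention * 10:,} of failure cost")
```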
13.4. Definition of the terms used in Project LC

- Business/System Requirement (Requirement Specifications) - Collection of the requirement specifications, High Level Design, Detailed Design, H/W and S/W
- Requirement Analysis
- Requirement Traceability Matrix (RTM)
- Test Planning
- Test Strategy
- Test Plan
- Project Schedule
- Review
- Audit
- Test Environment/Bed
- Test Stub
- Test Driver
- Test Data
- Test Automation
- Test Script
- Test Script Parameterization
- Business Criticality - An impact analysis to determine the risk to the business if a particular aspect of the requirement is not tested.
- Market Priority
- Test Execution
- Test Log
- Issue
- Defect/Bug/Ticket
- Test Report
- Test Management
- Test Metrics
- Test Quality
- Test Productivity
- Test Effectiveness
- Test Coverage
- DIR (Defect Injection Rate)
- Cost Of Quality
- Configuration Management
- Change Request
- Closure Analysis - The set of activities done at the end of the project to understand the areas for improvement and to highlight the best practices followed in the project.
- Root Cause Analysis - The process of finding and eliminating the cause, which prevents the problem from recurring.
13.5. Common terms in Software Testing

- Testing - The process of exercising software to verify that it satisfies specified requirements and to detect errors.
- TAT (Turn Around Time) - The time elapsed between an issue being reported to the development team and the development team getting back to the reporter with the resolution. The accepted TAT is 2-3 days.
- Elapse time/Calendar time/Schedule - Elapse time is equal to the calendar time for which the project will be executed, which includes the actual effort, holidays and TAT. This is defined in the schedule.
- Code coverage - An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.
- Emulator - A device, computer program or system that accepts the same inputs and produces the same outputs as a given system.
- Simulator - A device, computer program or system used during software verification, which behaves or operates like a given system when provided with a set of controlled inputs.
- Debugging - The process of finding and removing the causes of failure in software.
- Validation - Determination of the correctness of the products of software development with respect to the user needs and requirements.
- Verification - The process of evaluating a system or component to determine whether the products of the given development phase satisfy the conditions imposed at the start of that phase.
- Virtual user - A program that acts like a real user would when making requests to an application.
14. References

- IVS_Trainingmaterials
- PRIDE @ Infosys
- Knowledge Shop at Infosys
- http://www.softwareqatest.com/qatfaq1.html#FAQ1_3
- IEEE Std 610.12-1990, IEEE Standard Glossary of Software Engineering Terminology
- ITS (Aust) QMS V1.0, Independent Test Services (Australia) Pty Ltd Quality Management System, Version 1.0
- ISBN 0-273-03645-9, A Glossary of Computing Terms, The British Computer Society, 7th edition