Testing Fundamentals

Infosys Technologies Limited

Index:
1. Introduction
2. Testing overview
   2.1. What is testing?
   2.2. Why Testing?
3. Software Test Life Cycle
   3.1. Requirement Analysis
   3.2. Test Strategizing
   3.3. Test case development
   3.4. Test Environment Setup
   3.5. Test execution
   3.6. Test cycle closure
4. SDLC Vs STLC
5. V-Model of testing
6. Test case design/optimization techniques
   6.1. Need for test case optimization
   6.2. Functional Technique
   6.3. Structural Technique
   6.4. Special Technique
7. Testing Techniques
   7.1. Static Testing
   7.2. Dynamic Testing
      7.2.1. White box testing/Structural testing
      7.2.2. Black Box Testing
8. Types of testing
   8.1. Functional Testing
      8.1.1. Unit Testing
      8.1.2. Integration Testing
      8.1.3. Smoke Testing
      8.1.4. System Testing
      8.1.5. Regression Testing
      8.1.6. User Acceptance Testing
      8.1.7. Globalization Testing
      8.1.8. Localization Testing
   8.2. Non Functional Testing
      8.2.1. Performance Testing
      8.2.2. Compatibility Testing
      8.2.3. Data Migration Testing
      8.2.4. Data Conversion Testing
      8.2.5. Security/Penetration Testing
      8.2.6. Usability testing
      8.2.7. Install/Un-Install Testing
9. Defect reporting and tracking
   9.1. Defect Lifecycle
   9.2. Defect Management tools
      9.2.1. Mercury's Test Director
      9.2.2. Mozilla's Bugzilla
10. When to Stop Testing?
11. Testing Case Study
12. Test Deliverables
13. Appendix
   13.1. Definition of Quality
   13.2. Quality Assurance Vs Quality Control
   13.3. Measurements and Metrics
      13.3.1. Cost Of Quality
   13.4. Definition of the terms used in Project LC
   13.5. Common terms in Software Testing
14. References

1. Introduction
Testing is a very important activity in the product development lifecycle, as it measures the quality of the product and helps determine the production readiness of an application. It checks whether all requirements are implemented correctly and detects non-conformances, if any, before deployment. Testing makes software predictable in nature and improves quality and reliability. It also helps marketability and customer retention.
The various factors that make testing a high priority in any software development effort include:
Reduction of software development cost - Testing software in the initial stages of development reduces the cost of developing the program. A problem that goes undetected in the initial stages of the software development lifecycle can be much more expensive to resolve at a later stage. Figure 1 illustrates the cost of correcting defects over the life cycle stages.

Figure 1 Cost of correcting defects over life cycle stages

Ensures completeness of the product - Testing a software product ensures that the customer requirements map to the final product that is delivered.
Reduction in total cost of ownership - By providing software that looks and behaves as shown in the user documentation, customers require fewer hours of training and support from product experts, which reduces the total cost of ownership.
Accretion of revenues - Bug-free code (which is obtained only after intensive testing) also brings customer satisfaction, which leads to repeat business and more revenue.

2. Testing overview
2.1. What is testing?

"Testing is an activity in which a system or component is executed under specified conditions; the results are observed and recorded and an evaluation is made of some aspect of the system or component" - IEEE
Software testing is a process used to identify the correctness, completeness and quality of developed computer software. It includes a set of activities conducted with the intent of finding errors in software so that they can be corrected before the product is released to the end users.

2.2. Why Testing?


It is the primary duty of a software vendor to ensure that the software delivered does not have any defects and that the customer's day-to-day operations are not affected. This can be achieved by rigorously testing the software.

Testing is important to protect an organization from trouble and to address the various risks involved when a change is made to an organization. These risks may affect reputation or resources, or may lead to legal issues.
The following is a list of major computer system failures caused by software bugs. These examples highlight
the kind of catastrophic consequences that software bugs have on business, on life and property:
In April of 1999 a software bug caused the failure of a $1.2 billion military satellite launch, the
costliest unmanned accident in the history of Cape Canaveral launches. The failure was the latest in a
string of launch failures, triggering a complete military and industry review of U.S. space launch
programs, including software integration and testing processes. Congressional oversight hearings
were requested.
On June 4, 1996 the first flight of the European Space Agency's new Ariane 5 rocket failed shortly after launch, resulting in an estimated uninsured loss of half a billion dollars. The failure was reportedly due to the lack of exception handling for a floating-point error in a conversion from a 64-bit integer to a 16-bit signed integer.
The computer system of a major online U.S. stock trading service failed during trading hours several
times over a period of days in February of 1999 according to nationwide news reports. The problem
was reportedly due to bugs in a software upgrade intended to speed online trade confirmations.
In November of 1997 the stock of a major health industry company dropped 60% due to reports of
failures in computer billing systems, problems with a large database conversion, and inadequate
software testing. It was reported that more than $100,000,000 in receivables had to be written off
and that multi-million dollar fines were levied on the company by government agencies.
Software bugs caused the bank accounts of 823 customers of a major U.S. bank to be credited with
$924,844,208.32 each in May of 1996, according to newspaper reports. The American Bankers
Association claimed it was the largest such error in banking history. A bank spokesman said the
programming errors were corrected and all funds were recovered.
All the above incidents reiterate the significance of thorough testing of software applications and products before they are pushed to production. They clearly demonstrate that the cost of rectifying a defect during development is much less than the cost of rectifying it in production.

3. Software Test Life Cycle


The different stages involved in testing and certifying a software product are collectively called the Software Test Life Cycle (STLC). Each of these stages has definite entry criteria, exit criteria, measures/metrics, and a set of activities and deliverables associated with it.
The different phases involved in a testing project are shown below (Figure 2): Requirements Analysis, Test Strategizing, Test Case Development, Test Environment Setup, Test Execution and Test Cycle Closure.

Figure 2 Software Testing Lifecycle

3.1. Requirement Analysis


Overview

During this phase, the test team studies the requirements from a testing point of view and identifies the testable requirements. This also includes interaction with the various stakeholders involved in the project to understand the requirements in detail and to define the scope of work. Automation feasibility analysis (checking the applicability of various test tools to carry out and manage the testing) is also one of the activities done in this phase. Requirements can be classified as:
Functional Requirements: These specify the functions that a system or system component must be able to perform.
Non Functional Requirements: Non-functional requirements specify the system's quality characteristics/attributes, such as performance, security and availability.
The various components of this phase are listed below.

Entry-criteria
Requirements Document available (both functional and non functional)
Acceptance criteria defined.
Application architectural document available.
Activities
Analyse business functionality to know the business modules and module specific functionalities.
Identify all transactions in the modules.
Identify all the user profiles.
Gather user interface/authentication, geographic spread requirements.
Identify types of tests to be performed.
Gather details about testing priorities and focus.
Prepare Requirement Traceability Matrix (RTM). Refer to Test Deliverables (RTM section) for the
details of this.
Identify test environment details where testing is supposed to be carried out.
Automation feasibility analysis (if required).
Metrics & Measures
Effort spent on
o Requirement Analysis to prepare RTM
o Review and rework of RTM
o Automation feasibility analysis (if done)
Defects
o RTM review defects
Deliverables
RTM
Automation feasibility report (if applicable)
Exit-criteria
Signed off RTM
Test automation feasibility report signed off by the client
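
A Requirement Traceability Matrix is essentially a mapping from each requirement to the test cases that cover it. The following is a minimal sketch in Python of how such a matrix might be represented and checked; the requirement and test-case IDs are hypothetical examples, not taken from any specific project.

```python
# Minimal sketch of a Requirement Traceability Matrix (RTM).
# Requirement and test-case IDs below are hypothetical examples.
rtm = {
    "REQ-001": ["TC-001", "TC-002"],   # e.g. login functionality
    "REQ-002": ["TC-003"],             # e.g. password reset
    "REQ-003": [],                     # e.g. report export - not yet covered
}

# Flag requirements that have no test case mapped to them.
uncovered = [req for req, cases in rtm.items() if not cases]
print("Requirements without test coverage:", uncovered)
```

A check like this makes the RTM review step concrete: any requirement with an empty test-case list is an immediate review finding.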

3.2. Test Strategizing


Overview

This phase involves selecting the best-suited approach and arriving at the effort and cost estimates for executing the project. The various components of this phase are listed below.

Entry-criteria
Requirements Documents
Requirement Traceability matrix.
Test automation feasibility document.
Activities
Analyze various testing approaches available
Finalize on the best suited approach
Preparation of test plan/strategy document for various types of testing
Test tool selection
Test effort estimation
Resource planning and determining roles and responsibilities.
Metrics & Measures
Effort spent on
o Test plan/strategy preparation
o Test plan/strategy review
o Test plan/strategy rework
o Test tool selection
Defects
o Test plan/strategy review defects
Deliverables
Test plan/strategy document.
Effort estimation document.
Exit-criteria
Approved test plan/strategy document.
Effort estimation document signed off.

3.3. Test case development


Overview
This phase involves the creation, verification and rework of test cases and test scripts. Identification of test data, and review and rework of the same, is also carried out. The various components of this phase are listed below.

Entry-criteria
Requirements Documents
RTM and test plan
Automation analysis report
Activities
Create test cases, automation scripts (where applicable)
Review and baseline test cases and scripts
Create test data
Measures & Metrics
Effort spent on
o Test case/script preparation
o Test case/script review
o Test case/script rework
o Identification of test data
o Review of test data
o Rework on test data
Defects
o Test case/script review defects
o Test data review defects
Productivity
o No. of test cases or scripts generated / effort spent in person hours
Deliverables
Test cases/scripts
Test data
Exit-criteria
Reviewed and signed test Cases/scripts
Reviewed and signed test data

3.4. Test Environment Setup


Overview
Environment set-up is one of the critical aspects of the testing process. The test team may not be involved in this activity if the customer/development team provides the test environment; in that case the test team is required to do a readiness check of the given environment. The test environment decides the software and hardware conditions under which a work product is tested. The various components of this phase are listed below.

Entry-criteria
System Design and architecture documents are available
Environment set-up plan is available
Activities
Understand the required architecture, environment set-up
Prepare hardware and software requirement list
Finalize connectivity requirements
Prepare environment setup checklist
Setup test Environment and test data
Perform smoke test on the build
Accept/reject the build depending on smoke test result
Measures & Metrics
Effort spent on
o Test environment setup
o Test data setup
o Sanity test
Defects
o Test environment setup defects
o Test data setup defects
o Defects found in sanity test
Deliverables
Environment ready with test data set up
Smoke Test Results.

Exit-criteria
Environment setup is working as per the plan and checklist
Test data setup is complete
Smoke test is successful

3.5. Test execution


Overview
During this phase the test team carries out the testing based on the test plans and the test cases prepared. The various components of this phase are listed below.

Entry-criteria
Baselined RTM is available
Baselined Test plan is available
Test environment is ready
Test data set up is done
Baselined Test cases/scripts are available
Unit/Integration test report for the build to be tested is available
Activities
Execute tests as per plan
Document test results, and log defects for failed cases
Update test plans/test cases, if necessary
Map defects to test cases in RTM
Retest the defect fixes
Regression testing of application
Track the defects to closure
Measures & Metrics
Effort
o Test case creation/update effort (in case of requirements changes)
o Test execution effort
o Defect detection and logging effort
Defects
o Number of test case/script defects
o Number of application defects
Productivity
o No. of test cases/scripts executed / execution effort in person hours
Defect Detection Rate
o No. of valid defects detected / test execution effort
Test Effectiveness
o No. of valid defects reported during testing / (No. of valid defects reported during testing + No. of defects reported by client)
Deliverables
Completed RTM with execution status
Test cases updated with results
Defect reports

Exit-criteria
All tests planned are executed
Defects logged and tracked to closure
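
The productivity, defect detection rate and test effectiveness figures above are simple ratios. A small illustrative calculation, using made-up numbers, is sketched below.

```python
# Illustrative test-execution metrics; all input numbers are made up.
test_cases_executed = 120
execution_effort_hours = 60          # person hours
valid_defects_in_testing = 30
defects_reported_by_client = 5

productivity = test_cases_executed / execution_effort_hours
defect_detection_rate = valid_defects_in_testing / execution_effort_hours
test_effectiveness = valid_defects_in_testing / (
    valid_defects_in_testing + defects_reported_by_client
)

print(f"Productivity: {productivity:.1f} test cases per person hour")
print(f"Defect detection rate: {defect_detection_rate:.2f} defects per hour")
print(f"Test effectiveness: {test_effectiveness:.0%}")
```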

3.6. Test cycle closure


Overview
The project team members analyze the defects logged to give quantitative feedback about the quality of the application and to identify the areas which need improvement. They also document the learnings from the project and the best practices followed. The various components of this phase are listed below.

Entry-criteria
Testing has been completed
Test results are available
Defect logs are available
Activities
Evaluate cycle completion criteria based on
o Time
o Test coverage
o Cost
o Software quality
o Critical business objectives
Prepare test metrics based on the above parameters.
Document the learning out of the project
Prepare Test closure report
Qualitative and quantitative reporting of quality of the work product to the customer.
Test result analysis to find out the defect distribution by type and severity.
Measures & Metrics
Effort spent on
o Defect analysis
o Preparing Test Closure report
Deliverables
Test Closure report
Test metrics
Exit-criteria
Test Closure report signed off by client
Summary of STLC stages

Requirement Analysis
Entry Criteria: Requirements document available (both functional and non-functional); acceptance criteria defined; application architectural document available.
Activities: Analyse business functionality to know the business modules and module-specific functionalities; identify all transactions in the modules; identify all the user profiles; gather user interface/authentication and geographic spread requirements; identify types of tests to be performed; gather details about testing priorities and focus; prepare the Requirement Traceability Matrix (RTM); identify test environment details where testing is to be carried out; automation feasibility analysis (if required).
Exit Criteria: Signed-off RTM; test automation feasibility report signed off by the client.
Deliverables: RTM; automation feasibility report (if applicable).

Test Strategizing
Entry Criteria: Requirements documents; Requirement Traceability Matrix; test automation feasibility document.
Activities: Analyze the various testing approaches available; finalize the best-suited approach; prepare the test plan/strategy document for the various types of testing; test tool selection; test effort estimation; resource planning and determining roles and responsibilities.
Exit Criteria: Approved test plan/strategy document; effort estimation document signed off.
Deliverables: Test plan/strategy document; effort estimation document.

Test Case Development
Entry Criteria: Requirements documents; RTM and test plan; automation analysis report.
Activities: Create test cases and automation scripts (where applicable); review and baseline test cases and scripts; create test data.
Exit Criteria: Reviewed and signed test cases/scripts; reviewed and signed test data.
Deliverables: Test cases/scripts; test data.

Test Environment Setup
Entry Criteria: System design and architecture documents are available; environment set-up plan is available.
Activities: Understand the required architecture and environment set-up; prepare hardware and software requirement list; finalize connectivity requirements; prepare environment setup checklist; set up test environment and test data; perform smoke test on the build; accept/reject the build depending on the smoke test result.
Exit Criteria: Environment setup is working as per the plan and checklist; test data setup is complete; smoke test is successful.
Deliverables: Environment ready with test data set up; smoke test results.

Test Execution
Entry Criteria: Baselined RTM is available; baselined test plan is available; test environment is ready; test data set up is done; baselined test cases/scripts are available; unit/integration test report for the build to be tested is available.
Activities: Execute tests as per plan; document test results and log defects for failed cases; update test plans/test cases if necessary; map defects to test cases in the RTM; retest the defect fixes; regression testing of the application; track the defects to closure.
Exit Criteria: All planned tests are executed; defects logged and tracked to closure.
Deliverables: Completed RTM with execution status; test cases updated with results; defect reports.

Test Cycle Closure
Entry Criteria: Testing has been completed; test results are available; defect logs are available.
Activities: Evaluate cycle completion criteria based on time, test coverage, cost, software quality and critical business objectives; prepare test metrics based on these parameters; document the learnings from the project; prepare the test closure report; qualitative and quantitative reporting of the quality of the work product to the customer; test result analysis to find the defect distribution by type and severity.
Exit Criteria: Test closure report signed off by the client.
Deliverables: Test closure report; test metrics.

Table 1 Summary of STLC

4. SDLC Vs STLC
The various stages involved in developing a product are collectively called the Software Development Life Cycle (SDLC). It begins when a problem has been identified and its solution needs to be implemented in the form of software, and it ends when the verification and validation of the developed software is complete and the software is accepted by the end customer.
The Software Test Life Cycle is a part of the software development lifecycle.
The following table gives the details of the activities carried out in each of the development
phase and the corresponding testing phase.
Development Phase: Requirement capture and analysis
Requirements analysis is the process by which customer requirements are elicited. The objective is to profile the user in order to identify the user groups and get a clear picture of the user requirements, so as to minimize rework due to changes. Requirements for both functional and non-functional features (such as performance, security and usability) should be captured during this phase.
Testing Phase: Acceptance test
Acceptance testing consists of formal testing conducted by the customer according to the acceptance test plan, and analysis of the test results to determine whether or not the system satisfies its acceptance criteria. Planning for the acceptance test should be done during the requirement analysis phase. Refer to Functional Testing (User Acceptance Testing section) for details.

Development Phase: Functional specification
The functional specification is issued as a blueprint for implementing the application. It should describe in detail the functions that the application is supposed to have: what the final product should do, how users are supposed to interact with it, how the user interface should look, etc.
Testing Phase: System test
System testing is carried out to validate the software system against the functional specification. The test team can start the system test planning while the development team is working on the functional specification. Refer to Functional Testing (System Testing section) for details.

Development Phase: High level design
High-level design is the stage of the life cycle when a logical view of the solution to the customer requirements is arrived at, with a high level of abstraction. Physical aspects of the design are touched upon but not detailed. During this phase the architecture, database and operating environment designs are done.
Testing Phase: Integration test
Integration test planning should be done based on the high-level design documents. Integration is a systematic approach to building the complete software structure specified in the high-level design from unit-tested modules. Refer to Functional Testing (Integration Testing section) for details.

Development Phase: Low level design
During detailed design, the logical view (high-level design) of the application is broken down into modules/programs. Logic design is done for every program and documented in the program specifications. There should be an attempt to reuse code and also to create reusable code.
Testing Phase: Unit test
Unit testing is done on a single module to see whether the standalone module performs its functions as per the low-level design document/program specifications. Planning for it happens during the low-level design phase, and the test cases are derived from the program specifications. Refer to Functional Testing (Unit Testing section) for details.

Development Phase: Coding phase
During the build stage or coding phase, the required software is developed as per the detailed design. The build stage produces the source code, executables, test data (if applicable), and drafts of any user documentation and/or training material.

Table 2 Comparison of SDLC activities Vs STLC activities

5. V-Model of testing
The V-Model describes lifecycle testing in which each development phase has a corresponding test associated with it. The checks/tests that happen while development activities are going on take the form of reviews, inspections and walkthroughs. Planning for the unit, integration, system and user acceptance tests is also done as shown in the diagram below. In this way, as the lifecycle progresses, the gap between the development and the test teams reduces. This also helps identify discrepancies earlier in the lifecycle, where it is easier and cheaper to fix the defects.
The V-Model can be explained with the following diagram (Figure 3).

Figure 3 V-Model: on the left arm, Requirement Analysis, Functional Specification, High Level Design and Detailed Design / Program Specification lead down to Code; on the right arm, Unit Testing, Integration Testing, System Testing and User Acceptance Testing lead back up. The User Acceptance Test Plan, System Test Plan, Integrated Test Plan and Unit Test Plan are prepared during the corresponding left-arm phases.

Both the development team and the test team start working at the beginning of the project with the same information.
The development team works on collating the requirements and building the product as per the requirements. Reviews, inspections and walkthroughs are conducted during this time to check adherence to processes. These checks, done early in the lifecycle, are called verification activities.
At the same time, the testing team works on planning and designing the tests based on these requirements. These tests, listed above on the right arm of the V (unit, integration, system, UAT), form the validation activities. The test team conducts most of these validation activities at pre-defined checkpoints to check the product's conformance to the requirements.
In the V-testing concept, a project's development and testing procedures slowly converge from start to finish: as the development team attempts to implement a solution, the test team concurrently develops a process to eliminate or minimize the risk. If the two groups work closely together, the high level of risk at a project's inception decreases to an acceptable level by the project's conclusion.


6. Test case design/optimization techniques


Test case
A test case is a document that describes an input, action, or event and an expected response, to
determine if a feature of an application is working correctly. Refer to Test Deliverables (Test case
section) for the details of this.

6.1. Need for test case optimization


Test case optimization is very important since complete testing of software is next to impossible. The following points explain why complete testing is not always possible.
a. Can't test all inputs
The number of inputs that can be fed to a program is typically infinite. In such cases a small number of test cases that represent the full space of possible tests needs to be identified. Consider a function that accepts any integer value between 1 and 1000. It is not possible to test the function for all possible valid and invalid inputs, as this would take far too long, especially when done manually.
b. Can't test all combinations of inputs
Suppose a program is designed to add two numbers. The program's design allows the first number to be between 1 and 100 and the second number between 1 and 25. The total number of pairs needed to test this completely is 100 x 25 = 2,500 (for valid inputs alone), and it is not practical to check all of these.
c. Can't test all the paths
A path through a program starts at the point where we enter the program and includes all the steps that are run through until we exit. There is a virtually infinite number of paths through any program.
Due to the above, test case optimization is a must, and the following techniques can be used for it.
Test case design techniques can be broadly split into following categories
Functional
Structural
Special

6.2. Functional Technique


Equivalence partitioning - The whole range of input is split into a set of equivalence classes, such that a single value acts as a sample for each equivalence class. Exhaustive testing is not required in this case.
Example 1: A program needs to be tested with an input range between 1000 and 2000. This can be divided into three equivalence classes.

< 1000
Between 1000 and 2000
> 2000
Any value can be picked from each equivalence class and used for testing.
Example 2: A requirement which indicates that a field should accept only alphabets can be tested using the following equivalence classes:
Alphabets
Numbers
Special characters
Any value can be picked from each equivalence class and used for testing.
Boundary value analysis - This technique consists of developing test cases and data that focus on the input and output boundaries of a given function, as these are more prone to errors.
Example: A program needs to be tested with an input range between 1000 and 2000. Boundary value analysis gives the following boundary values to be tested:
Lower boundary: 999 and 1001
Upper boundary: 1999 and 2001
On the boundary: 1000 and 2000
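
The two techniques can be expressed compactly in code. The sketch below, assuming the 1000-2000 range used in the examples above, derives one representative value per equivalence class and the boundary values on either side of each limit.

```python
# Sketch: deriving test inputs for a field that accepts values 1000..2000.
LOWER, UPPER = 1000, 2000

# Equivalence partitioning: one representative value per class.
equivalence_values = {
    "below range (invalid)": LOWER - 500,          # e.g. 500
    "within range (valid)": (LOWER + UPPER) // 2,  # e.g. 1500
    "above range (invalid)": UPPER + 500,          # e.g. 2500
}

# Boundary value analysis: values on and around each boundary.
boundary_values = [LOWER - 1, LOWER, LOWER + 1, UPPER - 1, UPPER, UPPER + 1]

print(equivalence_values)
print(boundary_values)   # -> [999, 1000, 1001, 1999, 2000, 2001]
```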

6.3. Structural Technique


Structural techniques are mostly used for white box testing. Since white box testing deals mainly with program code, testing is focused on coverage of the code. The testing techniques below help achieve more code coverage.
Branch testing - In branch testing, test cases are designed to exercise branches or decision points in a unit. Given a structural specification for a unit, specifying the control flow within the unit, test cases can be designed to exercise the branches.
Example: If there is a function to calculate the perfect square root, a test designer could assume that there would be a branch between the processing of valid and invalid inputs, leading to the following test cases:
Test Case 1: Input '4', Return '2' - exercises the valid input processing branch
Test Case 2: Input '-10', Return '0' - exercises the invalid input processing branch
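
A minimal sketch of this example as Python unit tests is shown below: the two tests exercise the valid-input and invalid-input branches respectively. The perfect_square_root function is a hypothetical implementation written only to illustrate branch coverage.

```python
import math
import unittest

def perfect_square_root(n):
    """Hypothetical function: return the integer square root for perfect
    squares, and 0 for invalid (negative or non-square) inputs."""
    if n < 0:                              # invalid-input branch
        return 0
    root = math.isqrt(n)
    return root if root * root == n else 0

class BranchTests(unittest.TestCase):
    def test_valid_input_branch(self):
        self.assertEqual(perfect_square_root(4), 2)     # valid branch

    def test_invalid_input_branch(self):
        self.assertEqual(perfect_square_root(-10), 0)   # invalid branch

if __name__ == "__main__":
    unittest.main()
```
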
Condition testing - Aims to exercise all logical conditions in a program module and all logical decisions
on their true and false sides
Possible conditions:

Boolean variable (T or F)
Relational expression (a<b)
Composed of several simple conditions ((a=b) and (c>d))


Loop Testing: Execute all loops at their boundaries and within their operational bounds to
validate loop constructs

6.4. Special Technique


Error guessing - As the name indicates, this technique designs test cases for the requirements that are more prone to errors. These test cases are designed based on the potential areas causing errors. The experience and maturity of the tester play an important role in this.
Example: February 29th as input to test the date field for a non-leap year

7. Testing Techniques
Software can be tested either by running the programs and verifying each step of their execution against expected results, or by statically examining the code against the stated requirements. These two distinct methods have led to the popularization of two techniques, viz. static testing and dynamic testing, as given below (Figure 4).

Figure 4 Testing Techniques: testing techniques divide into static testing (informal review, walkthrough, inspection) and dynamic testing (white box testing, black box testing).

7.1. Static Testing:


This is a non-execution-based testing technique. It can be done during any phase of the software development lifecycle, but it largely occurs during the requirement, design and coding phases. The design, code, test plan, test cases or any other document may be inspected and reviewed against stated requirements, standards, guidelines or checklists. Static testing includes:
Informal reviews - These reviews are generally done by a peer and occur on a need basis.
Walkthroughs - Semi-formal reviews facilitated by the author of the product.
Inspections - Formal reviews facilitated by a knowledgeable person who is not the author of the document.
Many studies show that the single most cost-effective defect reduction process is static testing - code inspections, walkthroughs and reviews.
Advantages of static testing
Capture defects early, so saves cost
Checklist-based approach
Focuses on coverage
Highest probability of finding defects
Efficient way to educate people regarding the product
Independent of test environment setups
Disadvantages of static testing
Time consuming
Cannot test data dependencies
High skill levels required

7.2. Dynamic Testing:


This is an execution-based testing technique. Here the program, module or entire system is executed and the output is verified against the expected result.
Dynamic testing can be further classified into White Box Testing and Black Box Testing (Figure 5).

Figure 5 Types of dynamic testing: white box testing and black box testing.

7.2.1 White box testing/Structural testing:

This testing technique takes into account the internal structure of a system or component. Complete access to the object's source code is needed for white-box testing. It is known as 'white box' testing because the tester gets to see the internal working of the code.


White box testing helps to:
Achieve high code coverage
Test program logic
Eliminate redundant code
Traverse complicated loop structures
Cover control structures and sub-routines
Evaluate different execution paths

Unit testing and some parts of integration testing fall under the white box testing category.
7.2.2 Black Box Testing:

A testing method where the application under test is viewed as a black box and the internal
behavior of the program is completely ignored. Testing occurs based upon the requirement
specifications.

Black box testing is conducted more from a user's perspective.


It focuses on the features and not the implementation.
Provides a big picture approach.
Black Box testing technique can be applied once unit and integration testing are
completed.

System testing and regression testing fall under black box testing category.
Advantages of dynamic testing
White box testing
Logic of the system is tested
Parts which could be omitted in black box testing also get covered
Redundant code is eliminated
Cost effective when appropriate techniques are used
Black box testing
Simulates actual system usage
Makes no assumptions about the system structure
Disadvantages of dynamic testing
White box testing
Does not ensure that all user requirements are met
May not simulate real-time situations
Skill level needed is high
Black box testing
May miss logical errors
There is a chance of redundant testing
Cannot tell which parts of the code are not being executed
Thus a good combination of black box and white box testing can ensure adequate code, logic and functionality coverage.



8. Types of testing
Testing can be classified at a high level into functional testing and non-functional testing. The chart below (Figure 6) is a snapshot of the different types of testing.

Functional Testing
o Unit Testing
o Integration Testing
o Smoke testing / Sanity testing
o System Testing
o Regression Testing
o User Acceptance Testing (Alpha Testing, Beta Testing)
o Globalization Testing
o Localization Testing

Non Functional Testing
o Performance Testing (Stress, Volume, Load, Endurance Testing)
o Scalability Testing
o Compatibility Testing
o Data Conversion Testing
o Security / Penetration Testing
o Usability Testing

Figure 6 Types of Testing

8.1. Functional Testing


Functional testing refers to verifying whether the module performs its intended functions in accordance with the specification. The purpose is to ensure that the application behaves the way it is expected to behave, e.g. data entry, navigation, processing, retrieval and display based on the requirements.
The different types of functional testing are explained below.

8.1.1. Unit Testing


The primary test performed on software is 'unit testing', done to see whether a standalone module is working as per the requirements.
Testing performed on a single, standalone module or unit of code to ensure correctness
of the particular module.
Focuses on implementation logic, so the idea is to write test cases for every method in
the module.
The goal of unit testing is to isolate each part of the program and show that the
individual parts are correct.
This isolated testing provides four main benefits:
o Flexibility when changes are required
o Facilitates integration
o Ensures documentation of the code
o Separation of interface from implementation


This type of testing is mostly done by the developers.
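
As a concrete illustration, the following is a minimal sketch of a unit test written with Python's built-in unittest framework. The add function (re-using the 1-100 and 1-25 input ranges from section 6.1) is a hypothetical module under test, not from any real product.

```python
import unittest

def add(first, second):
    """Hypothetical module under test: adds two numbers, where the first
    must be in 1..100 and the second in 1..25."""
    if not (1 <= first <= 100 and 1 <= second <= 25):
        raise ValueError("input out of range")
    return first + second

class AddUnitTests(unittest.TestCase):
    def test_valid_inputs(self):
        # Boundary values from the allowed ranges.
        self.assertEqual(add(100, 25), 125)
        self.assertEqual(add(1, 1), 2)

    def test_out_of_range_input_rejected(self):
        with self.assertRaises(ValueError):
            add(101, 1)

if __name__ == "__main__":
    unittest.main()
```

Each test method targets one behaviour of the standalone module, which is what keeps the parts of the program isolated and individually verifiable.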

8.1.2. Integration Testing


Phase of software testing in which individual software modules are combined and tested
as a group. It follows unit testing and precedes system testing.
It takes the unit tested modules as input, groups them in larger aggregates, applies tests
defined in an Integration test plan and delivers as its output, the integrated system
which is ready for system testing.
Data transfer between the integrated modules is thoroughly tested.
Dummy module interfaces, viz. stubs and drivers, are used in integration testing.
Drivers are simple programs designed specifically for testing the calls to lower layers. They provide emerging low-level modules with simulated inputs and the necessary resources to function.
Stubs are dummy software components used to simulate the behavior of a real component. They do not perform any real computation or data manipulation. A stub can be defined as a small program routine that substitutes for a longer program, possibly to be loaded later or located remotely.
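
The difference between a stub and a driver can be shown in a few lines. In the sketch below (module and function names are hypothetical), a stub stands in for a lower-level payment service that is not yet built, while a driver is a small program written only to call the module under test.

```python
# Stub: a dummy stand-in for a lower-level component that is not ready yet.
class PaymentServiceStub:
    def charge(self, amount):
        # No real computation; always reports success so the caller can be tested.
        return {"status": "approved", "amount": amount}

# Module under test, which depends on the payment service.
def place_order(amount, payment_service):
    result = payment_service.charge(amount)
    return result["status"] == "approved"

# Driver: a simple program designed only to exercise the module under test.
if __name__ == "__main__":
    assert place_order(99.50, PaymentServiceStub()) is True
    print("place_order exercised against PaymentServiceStub")
```
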
Two methods of integration are
Incremental
Big bang
Incremental
It involves adding unit tested modules one by one and checking each resultant
combination. This process repeats till all modules are integrated and tested.
Correction is easy as the source and cause of error could be easily detected.
Big bang
Modules unit tested in isolation are integrated in one go and the integration is tested.
Correction is difficult because isolating the causes is complicated.
Three strategies of integration are
Bottom-Up Strategy (Figure 7)
Process starts with low level modules of the program hierarchy in the application
architecture
Test drivers are used

The following diagram shows the integration of modules in the case of the Bottom-Up strategy, using an example application made up of a Finance App main module, Modules A-F, a Payment System and an external legacy system.

Figure 7 Bottom-Up strategy

Top-Down Strategy (Figure 8)


Starts at the top of the program hierarchy in the application architecture and travels
down its Branches
Stubs are used until the actual program is ready
The following diagram shows the integration of modules in the case of the Top-Down strategy, using the same example module hierarchy.

Figure 8 Top-Down strategy

Sandwich Strategy (Figure 9)


A combination of the Top-Down and Bottom-Up methods
Instead of going completely top-down or bottom-up, a layer is identified in between.
The following diagram shows the integration of modules in the case of the Sandwich strategy, using the same example module hierarchy.

Figure 9 Sandwich strategy

8.1.3. Smoke Testing


This is a quick, non-exhaustive test performed on a new version of the software to see whether it is performing well enough to accept it for major testing. This test is used to validate that the major functions of a piece of software work as intended.
Reasons why a build could be rejected include:
Major functionalities are not working or are missing
Navigation is not appropriate
Look and feel is not according to specification

8.1.4. System Testing


Black-box testing that is based on the overall system requirement specifications. It is carried out on an integrated system, and end-to-end testing is performed. During the system test execution phase, defects that can only be exposed by testing the entire system are found.

8.1.5. Regression Testing


Testing conducted for the purpose of evaluating whether or not a change (defect fix or
enhancement) to the system has introduced a new failure. This refers to continuous testing of an
application for each new build.

8.1.6. User Acceptance Testing


Acceptance testing is one of the last phases of testing and is typically done at the customer's site. Generally the users of the system perform these tests, which ideally are derived from the User Requirements Specification, to which the system should conform. The focus is on a final verification of the required business function in a simulated environment which is very close to the real environment. The idea is that if the software works as intended during the simulation of normal use, it will work the same way in production. These tests are often used to enable the customer to determine whether or not to accept a system. Planning for this should be done during the requirement analysis phase, which helps identify gaps in the requirements and verify the testability of the requirements. Acceptance testing is carried out when the test team has completed system testing.
Types of UAT:
Alpha Testing: Simulated or actual operational testing performed by end users within a company
but outside development group.
Beta Testing: Simulated or actual operational testing performed by a sub-set of actual customers
outside the company.

8.1.7. Globalization Testing


To ensure that internationally localized versions do not have problems unique to language, currency, etc.
To validate whether the developed application provides support for:
o Multi-language: check whether messages are accurate and UI objects in all languages reflect the same meaning
o Multi-currency

8.1.8. Localization Testing


Subset of globalization testing and checks for a particular locale
This test is based on the results of globalization testing, which verifies the functional
support for that particular culture/locale.
Can be executed only on the localized version of a product
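
One simple globalization/localization check is that every user-visible message has a translation in each supported locale. A minimal sketch, with hypothetical message catalogues and locales, is given below.

```python
# Hypothetical message catalogues for three supported locales.
messages = {
    "en": {"welcome": "Welcome", "logout": "Log out"},
    "fr": {"welcome": "Bienvenue", "logout": "Se déconnecter"},
    "de": {"welcome": "Willkommen"},   # "logout" missing - a localization defect
}

# Check that every locale defines every message key used by the application.
expected_keys = set(messages["en"])
for locale, catalogue in messages.items():
    missing = expected_keys - set(catalogue)
    if missing:
        print(f"Locale '{locale}' is missing translations for: {missing}")
```
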

8.2. Non Functional Testing


Non-functional testing verifies whether the application meets the non-functional requirements, which could relate to performance, security, usability, compatibility, etc.

8.2.1. Performance Testing:


This testing is carried out to analyze/measure the behavior of the system in terms of time,
stability and scalability and the parameters generally used are response time, transaction rates
etc. This is done to verify whether the performance requirements have been achieved.
Performance testing is implemented and executed to profile and "tune" an application's
performance behaviors as a function of conditions such as workload or hardware configurations.
Types of Performance Testing
Load Testing
Load testing is a type of performance testing in which the performance of the application is monitored when it is subjected to different loads. Load refers to the number of users using the application. Load testing consists of simulating real-time workload conditions for the application under test.
It generally refers to the practice of modeling the expected usage of a software program by simulating multiple users accessing the program's services concurrently.
It is done to determine at what load the system fails or the system's response time degrades.
Stress testing
Stress testing is conducted to evaluate a system or a component at or beyond the limits of its
specified requirements. Ideally, stress testing emulates the maximum capacity the application
can support before causing a system outage or disruption. It ensures that the application which
is tested for expected load, can withstand spikes in load conditions (like increase in rate of
transactions). Based on the results of stress testing, system can be configured and fine tuned for
optimal performance.
This test determines the failure point of the system under extreme pressure.
Useful when systems are being scaled up to larger environments or being implemented
for the first time.
System is monitored for performance loss and crashing during the load times.
Endurance Testing
Execute the test with the expected user load sustained over a longer period of time, with normal ramp-up and ramp-down.
Volume Testing
Volume testing is the testing where the system is subjected to large volume of data to determine
its point of failure. The main objective of volume testing is to find the limitations of the software
by processing large amount of data.
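
Dedicated performance testing tools are normally used for this, but the underlying idea of a load test, many virtual users exercising the system concurrently while response times are recorded, can be sketched in a few lines. The target URL and user count below are placeholders, not values from any real test plan.

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

TARGET_URL = "http://localhost:8080/"   # placeholder application under test
VIRTUAL_USERS = 20                      # simulated concurrent users

def one_user_request(_):
    start = time.time()
    with urlopen(TARGET_URL) as response:
        response.read()
    return time.time() - start          # response time in seconds

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
        timings = list(pool.map(one_user_request, range(VIRTUAL_USERS)))
    print(f"average response time: {sum(timings) / len(timings):.3f}s, "
          f"worst: {max(timings):.3f}s")
```
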

8.2.2. Compatibility Testing


Test to validate that the application functions the same way across different supported
Hardware and software configurations
Operating systems (OS)
Web browsers
Database types

8.2.3. Data Migration Testing


Data migration testing is done to validate the migration of the source data to a new platform, say from one database to a different database, or from one version of a database to a newer version.

8.2.4. Data Conversion Testing:


Data conversion testing is done to validate the conversion of the source data to the target data. Data conversion testing and implementation are practically inseparable. The data conversion testing plan should be made to confirm the following:
Has the source data type been converted to the target data type?
Is there any loss of data?
Is data integrity maintained?

8.2.5. Security/Penetration Testing:


Security testing evaluates whether an information system maintains the confidentiality, availability and integrity of data. It is performed to assess the sensitivity of the system to unauthorized internal or external access. Testing is done to ensure that unauthorized persons are not given access.
Special skills required for security testing:
Ability to think like a hacker
Awareness of all known vulnerabilities and exploits
Thorough understanding of the runtime environment
Identification of the criticality and sensitivity of data assets

8.2.6. Usability testing


In usability testing, software is evaluated for the ease with which a user can learn and use the application. Essentially it means testing the software to ensure that it is 'user friendly' in nature.

8.2.7. Install/Un-Install Testing


Testing carried out to evaluate the instructions provided in the manual and the accuracy with which the installed application operates. It is also carried out to verify that no residue remains after the application is uninstalled.
Occurs outside the development environment.
Will frequently occur on the computer system in which the software product will
eventually be installed.
Done in case of full or partial upgrades
The installation test for a release is conducted with the objective of demonstrating
production readiness.
Includes the inventory of configuration items.

9. Defect reporting and tracking


Defect - A software defect can be defined as any failure to meet the end-user requirements. Common defects include missed or misunderstood requirements and errors in design, logic, code, exception handling, data relationships, etc. Defects need to be reported for the primary reason of correcting them. The defect log can also be used:
To report the status of the application
To identify areas of improvement
For future reference
It is very important to capture and report defects as early as possible in the lifecycle, as a delay in doing so increases the cost of fixing the defects and also increases their impact on the application.


9.1. Defect Lifecycle
Defect life cycle starts when a tester finds a discrepancy in the application under test. Phases
involved in a defect lifecycle are explained below
Step 1 - Report Defect - Tester executes the test cases and compares the
Actual result' with the

expected result'. In case of discrepancy between the two, the same is logged into the defect
tracking tool with a status
New'. All details required to reproduce the defect are entered into
the tracking tool.
Step 2 - Check the validity of the issue - Development team validates the issue reported and if
found valid and is reported for the first time then assigns it against the concerned developer. The
status changes to
Assigned' in this case. If found invalid then development team will mark it as

Rejected/Not a Defect' and submits back to the tester. The tester closes the defect with proper
comments if he/she is in agreement with the comments given by the developer. Else tester will

Reopen' the defect with data to support it. If the issue reported is not in scope, then developer
will change the status to
Postponed' and if it is reported earlier, then status will be set to

Duplicate'.
Step 3 -Defect resolution - Developer works on the defect assigned against his/her name and
once corrected, will change the status to
Ready for test' or
Retest'. He/she will then assign the
defect back to the test team with details about the correction & the build in which corrected
code will be released. Developer is expected to perform unit test to make sure that the code is
working fine before sending it to the test team to work on it.
Step 4 - Retest the defect - Tester retests the defect in the build in which the fix is available and
if found to be working fine, closes the defect with comments. If not, changes the status to

Reopened' and assigns it back to the developer with retest comments. Now it will go back to
step 2 and the cycle continues. Tester also does regression testing to make sure that the defect
fix has not introduced any new defects in the application.
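The status transitions described in Steps 1-4 can be summarized as a simple state machine. The following Python sketch is illustrative only; the status names are taken from the steps above, and any particular tool may use slightly different ones.

    # Allowed defect status transitions, following Steps 1-4 above (illustrative)
    TRANSITIONS = {
        "New":       {"Assigned", "Rejected", "Postponed", "Duplicate"},
        "Assigned":  {"Retest"},              # developer fixes and marks 'Ready for test'/'Retest'
        "Retest":    {"Closed", "Reopened"},  # tester verifies the fix
        "Reopened":  {"Assigned"},            # cycle continues from Step 2
        "Rejected":  {"Closed", "Reopened"},  # tester agrees, or contests with supporting data
        "Duplicate": {"Closed"},
        "Postponed": set(),
        "Closed":    set(),
    }

    def change_status(current, new):
        # Validate a status change against the lifecycle before applying it
        if new not in TRANSITIONS.get(current, set()):
            raise ValueError(f"Illegal transition: {current} -> {new}")
        return new

    status = "New"
    status = change_status(status, "Assigned")
    status = change_status(status, "Retest")
    status = change_status(status, "Closed")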
Defect lifecycle can be illustrated with the help of the following diagram (Figure 10)

[Figure 10 illustrates the defect lifecycle as a flowchart: the tester executes the tests and, on finding a defect, logs it with Status = New; the Development Project Manager analyses it (not a defect -> Rejected, out of scope -> Postponed, already raised -> Duplicate, otherwise -> Assigned); the developer fixes the code (Status = In Progress, then Retest); the tester retests and either closes the defect (Status = Closed) or reopens it (Status = Reopen), after which the cycle repeats.]

Figure 10 Defect Lifecycle

Properties of a defect
At a minimum, the following information should be captured when reporting a defect (a minimal record-structure sketch follows the list):
- Defect_ID - Unique identification for the defect.
- Defect Description - Detailed description of the defect, including information about the module in which the defect was found.
- Severity - Critical/Major/Minor/Enhancement, based on the impact of the defect on the application.
- Priority - High/Medium/Low, based on the urgency with which the defect should be fixed.
- Status - Status of the defect (can be New, Assigned, Closed etc.).
- Version - Version of the application in which the defect was found.
- Reference - Reference to the documents consulted, i.e. requirements, design, architecture etc.
- Steps - Detailed steps, along with screenshots, with which the developer can reproduce the defect.
- Date Raised - Date when the defect was raised.
- Date Closed - Date when the defect was closed.
- Detected By - Name/ID of the tester who raised the defect.
- Fixed By - Name/ID of the developer who fixed it.
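As a minimal sketch (not a prescribed template), the properties listed above could be captured in a record structure such as the following; all field names and sample values are hypothetical.

    from dataclasses import dataclass, field
    from datetime import date
    from typing import List, Optional

    @dataclass
    class Defect:
        # Minimum information to capture when reporting a defect (illustrative)
        defect_id: str                     # unique identification
        description: str                   # detailed description, including the module
        severity: str                      # Critical / Major / Minor / Enhancement
        priority: str                      # High / Medium / Low
        status: str = "New"                # New, Assigned, Closed, ...
        version: str = ""                  # application version in which the defect was found
        reference: str = ""                # requirements / design / architecture documents
        steps: List[str] = field(default_factory=list)   # steps to reproduce
        date_raised: Optional[date] = None
        date_closed: Optional[date] = None
        detected_by: str = ""
        fixed_by: str = ""

    bug = Defect(
        defect_id="DEF-001",
        description="Login button unresponsive in the user management module",
        severity="Major",
        priority="High",
        steps=["Open the login page", "Enter valid credentials", "Click 'Login'"],
        date_raised=date.today(),
        detected_by="tester01",
    )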
Differentiating Severity & Priority
Severity describes the impact of the bug on the application, whereas Priority is related to defect
fixing urgency.
The following table gives an example of a severity estimate.

Severity    | Ranking criteria   | Nature of Bugs
Critical    | Severity 1 errors  | Crashes, loss of data, severe memory leak, program ceases meaningful operation
High/Major  | Severity 2 errors  | Major loss of function; application can continue but with severe function error
Minor       | Severity 4 errors  | Minor issues like spelling mistakes or suggestions
Enhancement | Severity 5 errors  | Request for enhancement

Table 3 Severity Estimate

Issues having low severity could have a high priority, but the other way round is generally not possible.
For example: if there is a spelling mistake in the company logo on an application's home page, it will get a low severity (the impact on the application is small) but a high priority, as the client would want it fixed immediately.

9.2. Defect Management tools

The primary role of defect management is to prevent defects. This includes defect measurement and defect analysis, which are used to improve processes so as to minimize defects. More mature software development organizations use tools such as defect leakage metrics (for counting the number of defects that pass through development phases prior to detection) and control charts to measure and improve development process capability.
A simple spreadsheet can be used to report defects, but the availability of sophisticated tools in the market has made their use common. The advantages of using a tool differ for different stakeholders, as listed below:
- Customer - Easy tracking of the defects; helps to know the status of the application.
- Project Manager - Quick access to the project statistics.
- Tester - Efficient reporting and tracking of defects.
- Developer - Defect details logged can be used to improve the development process.

Features of a defect management tool

Features to look for in a defect management tool are listed below:
- User friendliness
- Email notification
- File attachment
- Audit trail
- Configuration management
- Customizable fields
- Metric reports & graphs
- Remote administering
- Report cross-referencing
- Security implementation
- Web-based client
- Workflow support

Popular defect tracking tools available in the market are:
- Mercury's Test Director
- Mozilla's Bugzilla

9.2.1. Mercury's Test Director

- A product by Mercury Interactive.
- Initial versions were client/server in nature, while later versions are web enabled.
- Defects can be accessed through the 'Defects' tab.
- Features offered for complete defect management include:
  o Adding defects
  o Managing defects (allocation, change of status etc.)
  o Analysis of defect data
  o Log maintenance to track defect status as and when it undergoes a change
Note: The main menu can be seen in the attached picture.
- Defects - Add, Modify, Delete defect etc. can be managed with the sub-menus provided in this tab (Figure 11)
- Search - Functionality to search for any information related to the defects logged within the system
- View - For customizing the view
- Favorites - Managing the settings
- Analysis - An important feature to generate reports and graphs for defect analysis

Figure 11 - Snapshot of the Defects tab in Test Director.

9.2.2. Mozilla's Bugzilla

- A bug-tracking product from Mozilla.org. It is an enterprise-class piece of software that tracks bugs and issues.
- An account is created via the "Open a new Bugzilla account" link; this is essential in order to use Bugzilla.
- Defects can be accessed through the 'Bug' screen.
- Features offered for complete defect management include:
  o Adding defects
  o Managing defects (allocation, change of status etc.)
  o Analysis of defect data
- Defects - Add, Modify, Delete defect etc. can be managed.
- Searching for Bugs - Functionality to search for any information related to the defects logged within the system
- Bug Lists - For listing similar kinds of bugs.
- Report - A view of the current state of the bug database.
- Charts - A view of the state of the bug database over time.

10. When to Stop Testing?
It is rarely possible to test every aspect of an application, every possible combination of events, every dependency or everything that could go wrong; hence we can never say for sure that testing is 100% complete. But since testing, like any other activity, needs effort, time and money, it has to be stopped at some point. The objective of testing is to minimize the risk of the application malfunctioning; if the risks are within acceptable limits, testing activities can be halted. The following factors could be used to decide this.
The common factors in deciding when to stop testing are listed below; a small sketch of checking such exit criteria follows the list:
- Test cases completed with a certain predetermined percentage of test cases passed
- Coverage of code/functionality/requirements reaches a specified point
- Bug rate falls below a predetermined level
- Application crashes immediately after testing
- Many critical defects found within a short period of test execution
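As a hedged illustration of how such factors might be checked automatically, the Python sketch below evaluates a few common exit criteria; all threshold values are hypothetical examples, not prescribed limits.

    def ready_to_stop(pass_rate, coverage, open_critical_defects, bugs_per_day,
                      min_pass_rate=0.95, min_coverage=0.90, max_bugs_per_day=1.0):
        # Evaluate common exit criteria; thresholds are illustrative only
        criteria = {
            "predetermined pass percentage reached": pass_rate >= min_pass_rate,
            "coverage target reached": coverage >= min_coverage,
            "no open critical defects": open_critical_defects == 0,
            "bug rate below predetermined level": bugs_per_day <= max_bugs_per_day,
        }
        return all(criteria.values()), criteria

    stop, detail = ready_to_stop(pass_rate=0.97, coverage=0.92,
                                 open_critical_defects=0, bugs_per_day=0.5)
    print(stop)    # True when every criterion is met
    print(detail)  # per-criterion breakdown for the test cycle closure report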

11. Testing Case Study

System testing for a leading Retail company
Problem Statement: The client had an application on a legacy system which they wanted to migrate to the web to make it accessible to all their end users. The existing functionality was to be migrated to the web-based application as-is.
Infosys Services offered:
- Requirement analysis to understand the current functionality of the system with the help of the documents available and by scheduling discussions with the client team.
- The types of testing to be conducted were identified as:
  o Functional testing
  o Browser compatibility testing
  o Performance testing
- Prepared the requirement traceability matrix, which covers the functionality currently supported by the legacy application along with the test priority details.
- The browsers to be covered were identified as Internet Explorer & Netscape Navigator after discussions with the client.
- Prepared and shared the estimation details with the client.
- Prepared the Test Plan to manage the project, with strategies for functional, browser compatibility and performance testing.
- Planned to conduct the following, in the order specified:
  o Smoke testing
  o Functional testing
  o Browser compatibility testing
  o Defect verification & regression testing
  o Performance testing
- The strategy adopted for functional testing was to compare the results with the legacy system behavior in case of issues.
- Studied the existing test cases available to identify missing functionalities and prepared test cases for the same.
- Environment setup along with test data was done onsite, and connectivity was provided to the offshore test team.
- A smoke test was conducted, after which the test cases prepared as per the test plan were executed and issues were reported.
- Carried out defect-fix testing and regression testing, and tracked the defects to closure.
- Timely and accurate entry of status, defect information and metrics was done.
- Prepared the closure report and shared the details with the client.
Deliverables:
- RTM - Document providing the mapping between test cases, scenarios and requirements/functionality, along with priorities.
- Estimation - Document providing details of the resources required along with the timelines.
- Test Plan - Provides all the details required to manage the project, such as scope, types of testing to be carried out, entry and exit criteria, defect management process etc.
- Test cases for functionality & scripts for performance testing - Contain the steps used to check a particular requirement/behavior.
- Test execution results for functionality and test reports for performance testing - Details regarding the working of the application w.r.t. functionality and performance.
- Defect/issue logs - Details regarding the issues found during testing.
- Status reports - Reports with information on the status of the project.
- Closure report - Summary of the project giving information about best practices followed, metrics collected etc.
Value add to the client:
- Very high defect detection rate since project inception due to adherence to Infosys processes.
- Optimized test cycle due to the Infosys Global Delivery Model.
- Prioritized test coverage.
- The deliverables to the customer were cost effective and of high quality.
- Successful sign-off with almost zero defects post release.

34

Infosys Technologies Limited

12. Test Deliverables
Requirement Traceability Matrix (RTM) - The RTM is a deliverable from the test team during the requirement analysis phase.
- Provides the mapping between test cases, business scenarios and business functionality.
- Helps to link business criticality and market priority with test requirements.
- Serves as a single source for tracking purposes.
- Helps in doing impact analysis when there is a change in a requirement, as the test cases against a particular requirement can be easily identified using the traceability matrix.
- Used for prioritization of the tests during crunch times, as it documents the criticality of the test cases (see the sketch after the template reference below).
A sample RTM template screenshot is given below (Figure 12)

Figure 12 RTM snapshot

Please refer to the attached RTM template for details.
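To illustrate how an RTM supports impact analysis and crunch-time prioritization, here is a minimal Python sketch; the requirement and test case identifiers are hypothetical and do not come from the attached template.

    # Hypothetical RTM: requirement -> business criticality and mapped test cases
    rtm = {
        "REQ-LOGIN-01":  {"criticality": "High",   "test_cases": ["TC-001", "TC-002"]},
        "REQ-SEARCH-02": {"criticality": "Medium", "test_cases": ["TC-010"]},
        "REQ-REPORT-03": {"criticality": "Low",    "test_cases": ["TC-020", "TC-021"]},
    }

    def impacted_test_cases(changed_requirements):
        # Impact analysis: test cases to re-run when these requirements change
        return sorted({tc for req in changed_requirements
                       for tc in rtm.get(req, {}).get("test_cases", [])})

    def prioritized_requirements(order=("High", "Medium", "Low")):
        # Crunch-time prioritization: requirements ordered by business criticality
        return sorted(rtm, key=lambda req: order.index(rtm[req]["criticality"]))

    print(impacted_test_cases(["REQ-LOGIN-01"]))   # ['TC-001', 'TC-002']
    print(prioritized_requirements())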

Test Plan - A software test plan is a document that describes the objectives, scope, approach, and focus of a software testing effort. This document is prepared during the test strategizing phase. The process of preparing a test plan is a useful way to think through the effort needed to validate the acceptability of a software product. The completed document helps people outside the test group understand the 'why' and 'how' of product validation. The test strategy, which describes the approach towards testing, is generally a part of the test plan document.
TOC of a sample Test Plan doc is given below

Figure 13 Test Plan snapshot

Please refer to the attached Test Plan template for details (Figure 13)

Test case - A set of steps used to evaluate whether a particular aspect of a business scenario/condition is working correctly. A test case should contain, at a minimum, particulars such as a test case identifier, description, steps, input data requirements and expected results. Note that the process of developing test cases can help find problems in the requirements or design of an application, since it requires thinking completely through the operation of the application. For this reason, it is useful to prepare test cases early in the development cycle if possible.
For example: if we had to test the login functionality, the test case could have the steps sketched below.
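A rough illustration of such a login test case follows (the concrete template fields appear in the screenshot referenced next); the identifier, credentials and expected result here are hypothetical.

    # Illustrative login test case (all values are hypothetical)
    login_test_case = {
        "test_case_id": "TC-LOGIN-001",
        "description": "Verify login with valid credentials",
        "precondition": "User 'jdoe' exists and the account is active",
        "steps": [
            "Navigate to the login page",
            "Enter the user id and password",
            "Click the 'Login' button",
        ],
        "input_data": {"user_id": "jdoe", "password": "secret"},
        "expected_result": "User is taken to the home page and a welcome message is shown",
    }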
A sample test case template screenshot is given below (Figure 14)


Figure 14 Test Case snapshot

Please refer to the attached Test Case template for details.

Test data - The data/values used to test an application are called test data. For example, if we are checking the login functionality, the 'user id' and 'password' used for testing this functionality form the test data. Test data can be classified as follows:
- Static data (permanent data)
- Configurable data (parameters driving the application)
- Master data (mostly 'read only' data used for reference)
- Transaction data (operational data)
Test data generation includes the three phases mentioned below.
- Test data identification - Identify the types of test data mentioned above as per the requirements.
- Test data setup - This can be done manually or with a tool. Commonly used tools are:
  o SQL Loader - Utility which helps the user load data from a flat file into one or more database tables.
  o Export utility - Helps the user copy data from one database to another.
  o Data Factory - Tool which helps populate test databases with syntactically correct test data. It first reads the database schema and displays the database tables/columns. The user can then point, click and populate the database using the features available.
- Test data setup review and rework - Ensures completeness, thoroughness and accuracy of the test data which is set up, and initiates rework wherever required.
Test data can be captured in Excel/Word/Notepad format; a minimal programmatic sketch of the setup phase follows.
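Purely as a sketch of the identification and setup phases above, the four classes of test data could also be set up programmatically; the tables and values below are hypothetical, and an in-memory SQLite database stands in for the real test bed.

    import sqlite3

    def setup_test_data(db_path=":memory:"):
        # Illustrative test data setup covering the four classes described above
        conn = sqlite3.connect(db_path)
        # Static data: permanent reference values
        conn.execute("CREATE TABLE countries (code TEXT, name TEXT)")
        conn.executemany("INSERT INTO countries VALUES (?, ?)",
                         [("IN", "India"), ("US", "United States")])
        # Configurable data: parameters driving the application
        conn.execute("CREATE TABLE config (key TEXT, value TEXT)")
        conn.execute("INSERT INTO config VALUES ('session_timeout_min', '30')")
        # Master data: mostly 'read only' data used for reference
        conn.execute("CREATE TABLE products (sku TEXT, price REAL)")
        conn.execute("INSERT INTO products VALUES ('SKU-1', 9.99)")
        # Transaction data: operational data created for the test run
        conn.execute("CREATE TABLE orders (order_id INTEGER, sku TEXT, qty INTEGER)")
        conn.execute("INSERT INTO orders VALUES (1, 'SKU-1', 2)")
        conn.commit()
        return conn

    conn = setup_test_data()
    print(conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0])  # 1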
Defect report - When a defect is reported, the details regarding the defect should be captured in a document; the defect report is used for this. It should have details such as the defect description, the module in which the problem occurs, steps to reproduce, screenshots, etc.
A sample defect report template is given below (Figure 15)

Figure 15 Defect Report snapshot

Please refer to the attached Defect Report template for details.



13. Appendix

13.1. Definition of Quality

Quality has two popular definitions: the "customer's view of quality" and the "producer's view of quality".
As per the customer's view of quality, any product which meets the requirements of the end customer is called a quality product.
As per the producer's view of quality, any product which meets the documented requirement specification is called a quality product.
Most of the time there will be a difference between the two, and then a gap in quality is said to have occurred. There are two quality gaps, namely the producer's quality gap and the customer's quality gap. The producer's quality gap is said to occur when there are gaps between the requirements specified and what has finally been delivered. The customer's gap is said to occur when there is a difference between what has been delivered and what the customer actually needed. These discrepancies can occur due to any of the following:
- The producer may not really understand the true needs of the client.
- Clients may have unrealistic expectations of what can be achieved.
- The producer may understand the demands of the client but may fail to implement them effectively.
- Clients may not understand how to use a product, or may not be trained in its usage, and hence will be dissatisfied with the product even if it has all the features implemented.
These gaps need to be minimized to make sure that the product developed meets both the requirement specifications and the needs of the customer.
Minimizing the producer's quality gap:
One must make sure that there are processes in place to enforce that the product under development meets the specified requirements. This helps the producer deliver consistent products to the customer.
Minimizing the customer's quality gap:
One must capture the true needs of the customer. This can be achieved by involving the customer during the different phases of the product development life cycle, for example through customer surveys, joint application development etc.
Components of Quality
Quality is driven by customer requirements, and the following components are derived to measure quality.

Attribute        | Definition
Correctness      | The extent to which a program satisfies its specifications and fulfills the user's mission and goals.
Reliability      | The capability of the software product to maintain a specified level of performance when used under specified conditions.
Efficiency       | The capability of the software product to provide appropriate performance, relative to the amount of resources used, under stated conditions.
Integrity        | The extent to which access to software or data by unauthorized persons can be controlled.
Usability        | The capability of the software product to be understood, learned and used by the user, when used under specified conditions.
Maintainability  | The capability of the software product to be modified. Modifications may include corrections, improvements or adaptation of the software to changes in the environment, and in requirements and functional specifications.
Testability      | The effort required for testing a program to ensure it performs its intended function.
Flexibility      | The effort required for modifying an operational program.
Portability      | The capability of the software product to be transferred from one environment to another.
Reusability      | The extent to which a program can be used in other applications - related to the packaging and scope of the functions that the programs perform.
Interoperability | The effort required to couple one system with another.

Table 4 Quality Components

13.2. Quality Assurance Vs Quality Control

Both quality control and quality assurance aim at producing a quality product which meets the customer's needs as well as the requirements, but there is a difference between the two.
Quality Assurance is a set of activities designed to ensure that the development and/or maintenance process is adequate to ensure a system will meet its objectives.
Quality Assurance deals with establishing processes to make sure that a quality product is produced every time. A process is a set of defined activities that should be performed to produce a product; when followed religiously, these provide consistency in the products delivered. QA also sets up measurement programs to evaluate processes, to identify their weaknesses and improve them.
Quality Control is a set of activities designed to evaluate a developed work product.
Quality Control deals with making sure that the produced product meets the customer's needs and requirements, and with taking action when any non-conformance is detected. Reviews and testing are different types of quality controls. It helps to find the weaknesses in the system, for the primary purpose of correcting them.
QA is more of a preventive action, whereas QC comprises appraisal and corrective actions. QA is process oriented and QC is product oriented.
It is possible to have QC functioning without QA in place; that is, one can check whether the product produced meets the standards/requirements even though the producer has not followed any processes.


13.3. Measurements and Metrics
A measure is a single quantitative attribute of an entity, whereas a metric is a derived unit of measurement which cannot be obtained directly; it is arrived at by combining two or more measures. Since a metric is a combination of measures, it gives more valuable information for understanding or evaluating the process.
Measures and metrics help us:
- To measure the effectiveness of the testing activity
- To do trend analysis
- To improve quality and productivity
- To identify and improve the weak areas in the testing process
- To measure and track progress
Who does it?
Software metrics are analyzed and assessed by software managers. Measures are often collected by software engineers/testers.
Basic Measures

Measure  | Unit
Effort   | Hours
Schedule | Days
Defect   | Number
Size     | LOC

Table 5 Basic Measures & units

Metrics can be classified as:
- Process metrics
- Product metrics
- Service metrics
Process metrics help in identifying the performance of a process, and include:
- Effort variance
- Schedule variance
- Productivity
- Defect detection rate
- Cost of Quality
- Test effectiveness
A product can be measured for its quality, size etc. A few of the product metrics are given below:
- Function points
- Defect density
- Reliability/failure rate
Service metrics help in identifying the strengths and weaknesses of the services, and include:
- Turnaround time
- Age of open tickets
- On-time delivery
A small sketch showing how a few of these metrics are derived from the basic measures follows.
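As a hedged sketch of how a few of these metrics are typically derived from the basic measures (exact formulas vary between organizations; these are common textbook forms, not an Infosys standard):

    def effort_variance(actual_hours, planned_hours):
        # (Actual - Planned) / Planned, expressed as a percentage
        return 100.0 * (actual_hours - planned_hours) / planned_hours

    def schedule_variance(actual_days, planned_days):
        # Same form as effort variance, applied to the schedule measure
        return 100.0 * (actual_days - planned_days) / planned_days

    def defect_density(defects_found, size_kloc):
        # Product metric: defects per KLOC
        return defects_found / size_kloc

    def test_effectiveness(defects_found_by_test_team, post_production_defects):
        # Ratio of defects caught in testing to all defects (see also Table 6)
        total = defects_found_by_test_team + post_production_defects
        return defects_found_by_test_team / total

    print(effort_variance(actual_hours=220, planned_hours=200))  # 10.0 (%)
    print(test_effectiveness(defects_found_by_test_team=95, post_production_defects=5))  # 0.95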

13.3.1. Cost Of Quality

Note: in Infosys, COQ is represented using the term Appraisal & Rework Cost (ARC).

COQ is one of the most important metrics used in the STLC. It is the extra money that needs to be spent if the product was not built correctly the first time. If the product could be made defect free the first time, the COQ would be zero, but that is not realistic.
COQ has three components:
- Prevention: Money spent to prevent defects in the first place is called prevention cost. The cost of conducting training, establishing processes etc. comes under this category. This money is spent before the product is actually built.
- Appraisal: Money spent to check whether the product produced meets the quality requirements is called appraisal cost. The cost incurred in conducting testing, reviews etc. falls under this category. This money is spent after the product or its modules are built, but before delivery to the client.
- Failure: Money spent to correct a product after it has been released to the end customer is called failure cost. This is the most expensive component of COQ. The cost involved in correcting a faulty product, or the loss incurred from using a faulty product, comes under this category.
Total cost of quality = Prevention cost + Appraisal cost + Failure cost
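A tiny worked example of the formula above, with hypothetical figures:

    prevention_cost = 3_000    # e.g. training, process definition
    appraisal_cost = 7_000     # e.g. reviews, test execution
    failure_cost = 40_000      # e.g. post-release fixes and rework
    total_coq = prevention_cost + appraisal_cost + failure_cost
    print(total_coq)           # 50000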
The following diagram depicts Cost of Quality (Figure 16)

Figure 16 Cost of Quality

Studies show that the COQ in IT is approximately 50% of the total cost of building a product. Of the
50% COQ, 40% is failure, 7% is appraisal, and 3% is prevention. Other studies have shown that $1
spent on appraisal costs will reduce failure costs threefold; and each dollar spent on prevention
costs will reduce failure costs tenfold. Obviously, the right appraisal and prevention methods must
be used to get these benefits.

13.4. Definition of the terms used in Project LC

Requirement Analysis Phase
- Requirement Gathering: Collection of the requirement specifications, High Level Design, Detailed Design, H/W and S/W requirements, System Architecture, flow charts, Entity Relationship Diagrams, stakeholder interviews, use case diagrams etc., which help in understanding the complete business and system requirements.
- Business/System Requirement or Requirement Specifications: The document that describes in detail the characteristics of the product with regard to its intended capability.
- Requirement Analysis: Understanding the requirements and preparing a traceability matrix.
- Requirement Traceability Matrix (RTM): A mechanism for linking business criticality and market priority with the associated test requirements, test scenarios, test cases and test results.
- Test Effort Estimation: A prediction of the approximate cost, effort and resources required to test an application/component.

Test Planning
- Test Strategy: Describes the test objectives and general approach for a specific type of testing, to evaluate specific characteristics of an application.
- Test Plan: A document which provides the 'why' and 'how' of product validation. Details the plan for executing the project right from the planning phase till closure. Describes the objectives, scope, approach and focus of testing efforts.
- Project Schedule: A project agenda that outlines the project duration (start date and end date), number of resources, skills, effort, timelines, tasks/activities to be performed and the task allocation to the resources, considering the activity dependencies.
- Test Scenario (TS): A set of test cases evaluating a business requirement for a specific business scenario/condition.
- Test Case (TC): A set of steps involved to evaluate a particular aspect of a business scenario/condition.
- Test Step: A user action on the component which, when performed, will have a definite output.
- Precondition: Environmental and state conditions of the transactions.
- Input Values: A value or a set of values to be entered into the component to execute a particular test.
- Expected or Predicted Output: The behavior predicted by the specification of an object under specified conditions.
- Walkthrough: A review of requirements, designs, code, RTM or any other test artifact by the author of the object itself, before subjecting it to review by the reviewer.
- Review: A process of evaluating project artifacts for their completeness.
- Audit: A process of ensuring that the project team is conforming to set standards.
- Test Environment/Bed: A description of the H/W and S/W environment in which the tests will run, and any other software with which the software under test interacts (including stubs and test drivers).
- Test Stub: A small program routine that substitutes for a longer program, possibly to be loaded later or that is located remotely.
- Test Driver: Drivers are simple programs designed specifically for testing the calls to lower layers. A driver provides emerging low-level modules with simulated inputs and the necessary resources to function.
- Test Data: A set of values used/necessary to test an application.
- Test Automation: The process of using software to control the execution of tests, compare the actual outputs with expected outputs, set up the test preconditions, and perform other test control and test reporting functions.
- Test Automation tools: The software tools used to control the execution of tests, compare the actual outputs with expected outputs, set up the test preconditions, and perform other test control and test reporting functions.
- Test Script: Commonly used to refer to the automated test case used with a test harness.
- Test Script Parameterization: Enabling the script to handle multiple/variable sets of data in order to evaluate more than one business scenario/test scenario.
- Business Criticality: Determines how critical the requirement is from a business point of view. Some of the parameters that should be considered to analyze business criticality are: business impact of the function, frequency of usage, user of the functionality and visibility, business continuity, and relative complexity. It helps in preparing the test strategy.
- Market Priority: Based on the timelines of the release, it determines how important the release of the requirement is to the market.

Test Execution
- Test Execution: Running the test cases in order to confirm whether or not the specified requirement is met by the application/component under test.
- Test log: A chronological record of all relevant details about the execution of a test.
- Issue: Any ambiguity in a test artifact. It may be information, a defect in a requirement or code, or even a misunderstanding.
- Defect/Bug/Ticket: Any non-conformance of the application with the requirement.
- Enhancement: A defect addressing the scope of improvement in the product/component.
- Ticket supporting document/Defect report: A document authenticating a defect/bug which has occurred. It has all the details of the defect and can be used as proof of the defect or of the defect fix.

Test Result Reporting
- Test Result: The individual pass/fail status for each test case executed, plus any associated test notes and incidents raised during testing.
- Test Report: A report detailing the status of an application and its readiness to roll out into production. It consists of various parameters against which the application is evaluated, and also contains the defect summary, defect distribution details and weaknesses in the application w.r.t. the business.

Test Management
- Test Metrics: A process of measuring various parameters involved in project execution. The parameters can be classified as internal, external and quality. Internal parameters like Test Effectiveness Ratio (TER), Defect Injection Rate (DIR), Cost Of Quality (COQ), etc. are measured for internal improvement. External parameters like productivity, cost of offshore, etc. are measured to showcase capabilities. Quality parameters vary for different types of testing.
- Test Quality: The degree to which the requirement is addressed with respect to all quality parameters.
- Test Productivity: Output per input of time.
- Test Effectiveness: The ratio of the total number of bugs found by the testing team to the total number of bugs found by the testing team plus the number of post-production bugs.
- Test Coverage: The degree to which the application functionality has been evaluated using the test cases written.
- DIR (Defect Injection Rate): Number of defects/flaws caught in an artifact during review, per unit time spent to prepare the artifact.
- Cost Of Quality: Represents the extra money that needs to be spent if the product was not built correctly the first time.
- Configuration Management: Managing the critical and configurable items during and after the project life cycle.
- Change Request: Any requirement/request which is communicated to QA after the requirement analysis phase of the project is considered a change request.
- Change Request Management: Managing the change requirement(s)/request(s).
- Impact Analysis: Assessing the effect of a change to an existing application, usually in maintenance testing, to determine the amount of regression testing to be done.
- Risk Analysis: An impact analysis to determine the risk to the business if a particular aspect of the requirement is not tested.
- Closure Analysis: Includes the set of activities done at the end of the project to understand the areas for improvement and also to highlight the best practices followed in the project.
- Root Cause Analysis: The process of finding and eliminating the cause, which prevents the problem from recurring.

Table 6 Definitions of terms used in project lifecycle



13.5. Common terms in Software Testing

- Testing: The process of exercising software to verify that it satisfies specified requirements and to detect errors.
- Turn Around Time (TAT): The time elapsed between an issue being reported to the development team and the development team getting back to the reporter with the resolution. The accepted TAT is 2-3 days.
- Elapse time/Calendar time/Schedule: Elapse time is equal to the calendar time for which the project will be executed, which includes the actual effort, holidays and TAT. This is defined in the schedule.
- Code coverage: An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.
- Emulator: A device, computer program or system that accepts the same inputs and produces the same outputs as a given system.
- Simulator: A device, computer program or system used during software verification, which behaves or operates like a given system when provided with a set of controlled inputs.
- Debugging: The process of finding and removing the causes of failure in software.
- Validation: Determination of the correctness of the products of software development with respect to the user needs and requirements.
- Verification: The process of evaluating a system or component to determine whether the products of the given development phase satisfy the conditions imposed at the start of that phase.
- Virtual user: A virtual user is a program that acts like a real user would when making requests to an application.

Table 7 Definitions of testing terms


14. References
- IVS_Trainingmaterials
- PRIDE @ Infosys
- Knowledge Shop at Infosys
- http://www.softwareqatest.com/qatfaq1.html#FAQ1_3
- IEEE Std 610.12-1990 - IEEE Standard Glossary of Software Engineering Terminology
- ITS (Aust) QMS V1.0 - Independent Test Services (Australia) Pty Ltd Quality Management System, Version 1.0
- ISBN 0-273-03645-9 - A Glossary of Computing Terms, The British Computer Society, 7th edition
