Software Testing
By: Sunil Sharma, Asst. Prof., IT/MCA
Software Testing
Software testing is a popular risk management strategy. It is used to verify that the functional requirements were met.
(Note: Functional requirements describe the functionalities required by the user from the system. It is useful to consider a system as performing a set of functions {f_i}. Each function can be viewed as a mapping f: I -> O, transforming an element i_i in the input domain I to a value o_i in the output domain O. Non-functional requirements deal with characteristics of a system that cannot be expressed as functions, such as the number of concurrent users, throughput, and interface constraints on the system.)
Testing Principles
Validation
Testing Myths
Myth: Testing is too expensive.
Reality: There is a saying: pay less for testing during software development, or pay more for maintenance and correction later. Early testing saves both time and cost in many respects; however, cutting cost by skipping testing may result in an improper design of the software application, rendering the product useless.
Myth: Testing is time-consuming.
Reality: During the SDLC phases, testing in itself is never a time-consuming process. However, diagnosing and fixing the errors identified by proper testing is a time-consuming but productive activity.
Myth: Testing cannot start until the software is fully developed.
Reality: No doubt, testing depends on the source code, but reviewing requirements and developing test cases is independent of the developed code. Moreover, an iterative or incremental development life cycle model may reduce the dependency of testing on fully developed software.
Testing Myths
Myth: Complete testing is possible.
Reality: It becomes an issue when a client or tester thinks that complete testing is possible. The team may have tested all identified paths, but complete testing is never achieved. There might be scenarios that are never executed by the test team or the client during the software development life cycle and that are executed only once the project has been deployed.
Myth: A tested software product is 100% bug-free.
Reality: This is a very common myth that clients, project managers, and the management team believe in. No one can say with absolute certainty that a software application is 100% bug-free, even if a tester with superb testing skills has tested the application.
Myth: Missed defects are due to the testers.
Reality: It is not a correct approach to blame testers for bugs that remain in the application even after testing has been performed. This myth relates to constraints of time, cost, and changing requirements. However, the test strategy may also result in bugs being missed by the testing team.
Testing Myths
Myth: Testers should be responsible for the quality of a product.
Reality: It is a very common misinterpretation that only testers or the testing team should be responsible for product quality. A tester's responsibility is to identify bugs to the stakeholders; it is then their decision whether to fix the bugs or release the software. Releasing the software under time pressure puts more pressure on the testers, as they will be blamed for any error.
Myth: Test automation should be used wherever possible to reduce time.
Reality: Yes, it is true that test automation reduces testing time, but it is not possible to start test automation at an arbitrary point during software development. Test automation should be started once the software has been manually tested and is stable to some extent. Moreover, test automation can never be used effectively if requirements keep changing.
Myth: Anyone can test a software application.
Reality: People outside the IT industry think, and even believe, that anyone can test software and that testing is not a creative job. Testers know very well that this is a myth. Thinking of alternative scenarios and trying to crash the software with the intent of exploring potential bugs is not possible for the person who developed it.
Myth: A tester's only task is to find bugs.
Reality: Finding bugs in the software is the task of testers, but at the same time they are domain experts in the particular software. Developers are responsible only for the specific component or area assigned to them, but testers understand the overall workings of the software, what the dependencies are, and what the impact of one module on another is.
DEBUGGING:
Debugging is the activity of locating and correcting errors. It involves identifying, isolating, and fixing problems/bugs. Developers who code the software perform debugging upon encountering an error in the code. Debugging can be done during the development phase while conducting unit testing, or in later phases while fixing reported bugs.
Debugging Approaches
The following are some of the approaches that are popularly adopted by the
programmers for debugging:
Brute force method: This is the most common method of debugging, but it is the least efficient. In this approach, print statements are inserted throughout the program to print intermediate values, with the hope that some of the printed values will help identify the statement in error. This approach becomes more systematic with the use of a symbolic debugger, because the values of different variables can be easily checked, and breakpoints and watchpoints can be set to inspect variable values effortlessly.
Backtracking: In this approach, beginning from the statement at which an error symptom has been observed, the source code is traced backwards until the error is discovered. Unfortunately, as the number of source lines to be traced back increases, the number of potential backward paths grows and may become unmanageably large for complex programs.
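The brute force approach can be sketched in a few lines of Python; the `average` function and its DEBUG trail below are purely illustrative, not from the slides:

```python
def average(values):
    total = 0
    for v in values:
        total += v
        # Brute force debugging: print intermediate values and hope one of
        # them points at the statement in error.
        print(f"DEBUG: v={v!r}, running total={total}")
    return total / len(values)

print(average([2, 4, 6]))  # the DEBUG trail shows how total evolves
```

A symbolic debugger makes the same inspection systematic: placing `breakpoint()` at the suspect line drops into Python's interactive debugger, where variables can be examined without editing the program.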
Debugging Guidelines
The following are some general guidelines for effective
debugging:
Many times, debugging requires a thorough understanding of the program design. Trying to debug based on only a partial understanding of the design may require an inordinate amount of effort, even for simple problems.
[Figure: the debugging cycle. Test cases feed test case execution, which leads to bug identification, then cause identification; correction alternatives are explored, a corrective action is selected, corrections are applied, and regression tests are run with the test cases.]
Note:
"Error" is the developer's terminology; "bug" is the tester's terminology.
Testing Types
Manual testing
This type includes testing the software manually, i.e. without using any automated tool or script. The tester takes on the role of an end user and tests the software to identify any unexpected behavior or bugs. There are different stages of manual testing, such as unit testing, integration testing, system testing, and user acceptance testing. Testers use test plans, test cases, or test scenarios to test the software and to ensure the completeness of testing.
When to Automate?
Availability of time.
How to Automate?
White Box Testing
- Static testing
  - Desk checking
  - Code walkthrough
  - Code inspection
- Structural testing (dynamic testing)
  - Unit/code functional testing
  - Code coverage
    - Statement coverage
    - Path coverage
    - Condition coverage
    - Function coverage
  - Code complexity
    - Cyclomatic complexity
Static Testing
Static testing is a type of testing that requires only the source code of the product, not the binaries or executables. It does not involve executing the programs on a computer; instead, select people go through the code to find out whether the code works according to its design and specifications. Static testing can be done:
- by humans
- by specialized tools
Desk Checking
Normally done manually by the author of the code, desk checking is a method to verify portions of the code for correctness. Such verification is done by comparing the code with the design and specifications to make sure that the code does what it is supposed to do, and does it effectively. This is the desk checking that most programmers do before compiling and executing the code. Whenever errors are found, the author applies the corrections on the spot. This method of catching and correcting errors is characterized by:
- no structured method or formalism to ensure completeness; and
- no maintaining of a log or checklist.
Desk Checking
This method relies completely on the author's thoroughness, diligence, and skills. There is no process or structure to verify the effectiveness of desk checking. The method is effective for correcting obvious coding errors, but it will not be effective in detecting errors that arise from an incorrect understanding of the requirements, or from incomplete requirements.
Desk Checking
Advantages
- The programmer, who knows the code and the programming language very well, is well equipped to read and understand his or her own code.
- Since desk checking is done by one individual, there are fewer scheduling and logistics overheads.
- Defects are detected and corrected with minimum time delay.
Disadvantages
- The method's effectiveness depends entirely on the author's thoroughness, diligence, and skill, as noted above.
Walkthrough
Objectives of Walkthrough:
- To uncover errors in function, logic, or implementation.
Types of Walkthrough
Specification Walkthrough: A specification walkthrough covers the system specification, project planning, and requirements analysis. Participants in a specification walkthrough are the user and a senior analyst; the objects used are DFDs, the data dictionary, etc.
Design Walkthrough:
Formal Reviews
Inspection
In an inspection, a work product is selected for review, and a team is gathered for an inspection meeting to review the work product. A moderator is chosen to moderate the meeting. Each inspector prepares for the meeting by reading the work product and noting each defect. The goal of the inspection is to identify defects.
Inspection Stages
The stages in the inspection process are:
Structural Testing
Structural testing takes into account the code, code
structure, internal design, and how they are coded.
The fundamental difference between structural
testing and static testing is that in structural testing
tests are actually run by the computer on the built
product, whereas in static testing, the product is
tested by humans using just the source code and
not the executables or binaries.
Structural testing entails running the actual product
against some predesigned test cases to exercise as
much of the code as possible or necessary.
Statement coverage
Program constructs in most conventional programming languages can be classified as:
1. Sequential control flow
2. Two-way decision statements, like if-then-else
3. Multi-way decision statements, like switch
4. Loops, like while-do, repeat-until, and for
Statement coverage refers to writing test cases that execute each of the program statements. Code coverage can be achieved by providing coverage for each of the above types of statements.
Sequential control flow: For a section of code that consists of sequentially executed statements, test cases can be designed to run through it from top to bottom.
Statement Coverage
A test case that starts at the top would generally have to go through the full section to the bottom. However, this may not always be true:
1. the code may encounter asynchronous exceptions; or
2. a section of code may be entered from multiple points.
Two-way decision construct: In a two-way decision construct like the if statement, covering all the statements means covering both the then and the else parts. For each if-then-else, we should therefore have one test case to test the then part and one test case to test the else part.
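As a minimal sketch (the function below is hypothetical, not from the slides), one test case per branch gives full statement coverage of an if-then-else:

```python
def classify(n):
    # Two-way decision construct: both branches must be exercised.
    if n % 2 == 0:
        return "even"   # the "then" part
    else:
        return "odd"    # the "else" part

# One test case per branch covers every statement of the construct.
assert classify(4) == "even"   # exercises the then part
assert classify(7) == "odd"    # exercises the else part
```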
Statement Coverage
Multi-way decision construct: A multi-way decision construct such as the switch statement can be reduced to multiple two-way if statements. Thus, to cover all possible switch cases, multiple test cases are needed.
Loop constructs: A loop, in its various forms such as for, while, repeat, and so on, is characterized by executing a set of statements repeatedly until or while certain conditions are met. A good percentage of the defects in programs come about because of loops that do not function properly. Most often, loops fail in what are called boundary conditions. One of the common looping errors is that the termination condition of the loop is not properly stated.
Statement Coverage
To achieve better statement coverage for statements within a loop, there should be test cases that:
(1) skip the loop completely, so that the situation of the termination condition being true before starting the loop is tested;
(2) exercise the loop between once and the maximum number of times, to check all possible normal operations of the loop; and
(3) cover the loop around the boundary n, that is, just below n, at n, and just above n.
The statement coverage for a program can be calculated by the formula:
Statement Coverage = (Total statements exercised / Total number of executable statements in program) * 100
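The three kinds of loop test cases can be illustrated with a hypothetical summing function; the boundary value n = 5 below is an assumption for illustration:

```python
def total(values):
    s = 0
    for v in values:  # the loop under test
        s += v
    return s

n = 5  # assumed boundary of interest for the loop

# (1) Skip the loop completely (termination condition true on entry).
assert total([]) == 0
# (2) Exercise the loop a typical number of times.
assert total([1, 2, 3]) == 6
# (3) Around the boundary: just below n, at n, and just above n iterations.
assert total([1] * (n - 1)) == n - 1
assert total([1] * n) == n
assert total([1] * (n + 1)) == n + 1
```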
Limitations
Even if we were to achieve a very high level of statement
coverage, it does not mean that the program is defect free.
1. First, consider a hypothetical case where we achieve 100 percent code coverage. If the program implements the wrong requirements, and this wrongly implemented code is fully tested with 100% code coverage, it is still a wrong program; the 100% code coverage does not mean anything.
2. Consider the following program:

    total = 0;                          /* set total to zero */
    if (code == M) {
        stmt1;
        stmt2;
        ...
        stmt7;
    } else {
        percent = value / total * 100;  /* divide by zero */
    }
Continue
When we test with code = M, we get 80% code coverage. But if the data distribution in the real world is such that 90 percent of the time the value of code is not M, then the program will fail 90 percent of the time, because of the divide by zero in the else part. Thus, even with 80% code coverage, we are left with a defect that hits the users 90 percent of the time.
Path Coverage:
[Figure: flow graph for date validation. At node A, if (mm < 1 || mm > 12) is true, the result is "Invalid Date". Otherwise, at node B, if (mm == 2), node D checks if (leapyear(yyyy)): if true, daysofmonth[2] = 29; else daysofmonth[2] = 28. Then node G checks if (dd < 1 || dd > daysofmonth[mm]): if true, the result is "Invalid Date"; else, at node H, the result is "Valid Date".]
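The date-validation flow graph can be sketched in code, with one test case per distinct path; this is an illustrative reconstruction of the figure's logic, not the original program:

```python
def days_of_month(mm, yyyy):
    days = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    # Leap-year branch from the flow graph: daysofmonth[2] = 29 or 28.
    if mm == 2 and yyyy % 4 == 0 and (yyyy % 100 != 0 or yyyy % 400 == 0):
        return 29
    return days[mm - 1]

def valid_date(dd, mm, yyyy):
    if mm < 1 or mm > 12:                       # invalid-month path
        return "Invalid Date"
    if dd < 1 or dd > days_of_month(mm, yyyy):  # invalid-day path
        return "Invalid Date"
    return "Valid Date"

# One test case per distinct path through the graph:
assert valid_date(15, 13, 2020) == "Invalid Date"  # month out of range
assert valid_date(30, 2, 2021) == "Invalid Date"   # day beyond Feb's 28
assert valid_date(29, 2, 2020) == "Valid Date"     # leap-year path
assert valid_date(10, 7, 2021) == "Valid Date"     # ordinary valid date
```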
Continue
Condition Coverage
Given these limitations, path testing may not be sufficient. It is necessary to have test cases that exercise each Boolean expression, producing both the TRUE and the FALSE outcomes. Obviously, this means more test cases, and the number of test cases rises exponentially with the number of conditions and Boolean expressions. Condition coverage is a much stronger criterion than path coverage, which in turn is a much stronger criterion than statement coverage.
Condition Coverage = (Total decisions exercised / Total number of decisions in program) * 100
Function Coverage
Function coverage identifies how many of the program's functions are covered by test cases. The requirements of a product are mapped into functions during the design phase, and each function forms a logical unit. For example, in a payroll application, "calculate tax" could be a function. To provide function coverage, test cases can be written so as to exercise each of the different functions in the code.
The advantages that function coverage provides over the other types of coverage are as follows:
1. Functions are easier to identify in a program, and hence it is easier to write test cases to provide function coverage.
2. Since functions are at a much higher level of abstraction than code, it is easier to achieve 100 percent function coverage than 100 percent coverage in any of the earlier methods.
Cont
3. Functions have a more logical mapping to requirements and hence can provide a more direct correlation to the test coverage of the product.
4. Since functions are a means of realizing requirements, the importance of functions can be prioritized based on the importance of the requirements they realize. Thus, it is easier to prioritize the functions for testing.
5. Function coverage provides a natural transition to black box testing.
We can also measure how many times a given function is called. This indicates which functions are used most often; these functions then become the target of any performance testing and optimization.
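Counting calls per function can be sketched with a small instrumenting decorator; the payroll functions below are hypothetical stand-ins:

```python
import functools

call_counts = {}  # function name -> number of calls observed

def counted(fn):
    """Instrument a function so that every call is tallied."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        call_counts[fn.__name__] = call_counts.get(fn.__name__, 0) + 1
        return fn(*args, **kwargs)
    return wrapper

@counted
def calculate_tax(gross):      # hypothetical payroll function
    return gross * 0.10

@counted
def calculate_bonus(gross):    # hypothetical payroll function
    return gross * 0.05

# Running the tests below exercises only one of the two functions:
calculate_tax(1000)
calculate_tax(2000)

exercised = sum(1 for c in call_counts.values() if c > 0)
function_coverage = exercised / 2 * 100   # 2 instrumented functions
```

Here `function_coverage` works out to 50%, and the call counts single out `calculate_tax` as the hot function worth performance attention.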
Con..
[Figure: the compound predicate if (A || B) is equivalent to the cascaded predicates if (A) ... else if (B), each with its own TRUE and FALSE edges.]
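A condition-coverage test set for a compound predicate like (A || B) can be sketched as follows (the guarded function is illustrative only):

```python
def guarded(a, b):
    # Statement coverage needs only one TRUE and one FALSE outcome of the
    # whole predicate; condition coverage exercises each of A and B.
    if a or b:
        return "taken"
    return "not taken"

# Condition-coverage test set for (A || B):
assert guarded(True, False) == "taken"       # A alone drives the TRUE path
assert guarded(False, True) == "taken"       # B alone drives the TRUE path
assert guarded(False, False) == "not taken"  # both FALSE: the FALSE path
```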
Con..
4. When a set of sequential statements are
followed by a simple predicate, combine all the
sequential statements and the predicate check
into one node and have two edges emanating
from this one node. Such nodes with two edges
emanating from them are called predicate
nodes.
5. Make sure that all the edges terminate at
some node; add a node to represent all the sets
of sequential statements at the end of the
program.
Con..
Cyclomatic Complexity = Number of Predicate Nodes + 1
Cyclomatic Complexity = E - N + 2
Example (a straight-line graph from Start to End):
- number of independent paths = 1
- number of nodes, N = 2
- number of edges, E = 1
- cyclomatic complexity = E - N + 2 = 1
- number of predicate nodes, P = 0
- cyclomatic complexity = P + 1 = 1
Con..
Example (a graph with one two-way decision):
- number of independent paths = 2
- number of nodes, N = 4
- number of edges, E = 4
- cyclomatic complexity = E - N + 2 = 2
- number of predicate nodes, P = 1
- cyclomatic complexity = P + 1 = 2
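Both worked examples can be checked with a tiny helper that applies V(G) = E - N + 2 to an edge list (the graphs below mirror the two slides):

```python
def cyclomatic_complexity(edges, num_nodes):
    """V(G) = E - N + 2 for a connected flow graph."""
    return len(edges) - num_nodes + 2

# First example: a straight-line graph from Start to End (P = 0).
assert cyclomatic_complexity([("start", "end")], 2) == 1   # matches P + 1

# Second example: one two-way decision (P = 1 predicate node).
branching = [("start", "then"), ("start", "else"),
             ("then", "end"), ("else", "end")]
assert cyclomatic_complexity(branching, 4) == 2            # matches P + 1
```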
Con..
Independent Path: An independent path can be
defined as a path in the flow graph that has at
least one edge that has not been traversed
before in other paths. A set of independent paths
that cover all the edges is a basis set. Once the
basis set is formed, test cases should be written
to execute all the paths in the basis set.
Calculating and using cyclomatic complexity: For small programs, cyclomatic complexity can be calculated manually, but automated tools are essential, as a project may contain several thousands of lines of code. Based on the complexity number that the tool reports, one can decide what actions need to be taken.
Con..
[Table: ranges of cyclomatic complexity (e.g. 20-30, 30-40) and the corresponding meaning for testability.]
Con..
[Figure: flow graph with nodes S0 to S6 and a terminal node R for the code below.]

    if (C > 10) {
        F = F + C;
        Flag = 1;
    } else if (C > 5) {
        F = F - 1;
    } else {
        F = C * 12;
    }
Dynamic Testing
- Testing done by executing the program.
- Dynamic testing performs the validation process.
- Dynamic testing is about finding and fixing the defects.
- Dynamic testing reveals bugs/bottlenecks in the software system.
- Dynamic testing involves executing test cases.
- Dynamic testing is performed after compilation.
- Dynamic testing covers the executable file of the code.
- The cost of finding and fixing defects is high.
- The return on investment is low, as the process comes after the development phase.
- Finding more defects is considered a sign of good-quality testing.
- Comparatively, it requires fewer meetings.
Con..
and making requirements-based testing more effective.
Some organizations follow a variant of this method to bring more detail into the requirements. All explicit requirements (from the SRS) and implied requirements (inferred by the test team) are collected and documented as a Test Requirements Specification (TRS). Requirements-based testing can also be conducted from such a TRS, as it captures the testers' perspective as well.
Requirements are tracked by a Requirements Traceability Matrix (RTM). An RTM traces all the requirements from their genesis through design, development, and testing. This matrix evolves through the life cycle of the project.
[Table: sample RTM, with columns Requirement ID (BR-01 to BR-04), Priority (H, M, L), Test conditions (e.g. "Use key 123456"), Test case IDs (Lock_001 to Lock_007), and Phase of testing (Unit, Component, Integration, System).]
RTM
Each requirement is given a unique ID along with a brief description. The requirement identifier and description can be taken from the Requirements Specification.
Each requirement is assigned a priority, classified as high, medium, or low. Tests for higher-priority requirements get precedence over tests for lower-priority requirements.
The test conditions column lists the different ways of testing the requirement. Test conditions can be arrived at using test-design techniques such as equivalence partitioning and boundary value analysis.
The test case IDs column can be used to complete the mapping between test cases and requirements.
Role of RTM
An RTM plays a valuable role in requirements-based testing.
1. When there is a large number of requirements, it is not possible to manually keep track of the testing status of each requirement. The RTM provides a tool to track the testing status of every requirement without missing any.
2. By prioritizing the requirements, the RTM enables testers to prioritize test case execution so that defects in high-priority areas are caught as early as possible.
3. Test conditions can be grouped to create test cases, or can be represented as unique test cases. The list of test cases that address a particular requirement can be viewed from the RTM.
4. Test conditions/cases can be used as inputs for size/effort/schedule estimation of tests.
BVA
Equivalence Partitioning
Equivalence partitioning is a software testing technique that involves identifying a small set of representative input values that produce as many different output conditions as possible.
The set of input values that generates one single expected output is called a partition. When the behavior of the software is the same for a set of values, that set is termed an equivalence class or partition.
One representative sample from each partition (also called a member of the equivalence class) is picked for testing. One sample from the partition is enough, because picking more values from the same set will produce the same result and will not yield any additional defects. Since all the values produce the same output, they are termed an equivalence partition.
Con..
Testing by this technique involves:
a) identifying all partitions for the complete set of input and output values for a product; and
b) picking one member value from each partition for testing, to maximize coverage.
From the results obtained for a member of an equivalence class or partition, this technique extrapolates the expected results for all the values in that partition.
The advantages of using this technique are:
a) we gain good coverage with a small number of test cases; for example, if there is a defect for one value in a partition, it can be extrapolated to all the values of that partition; and
b) redundancy of tests is minimized, by not repeating the same test for multiple values in the same partition.
Con..
Example: A life insurance company has a base premium of $0.50 for all ages. Based on the age group, an additional monthly premium has to be paid, as listed in the table below. For example, a person aged 34 has to pay:
premium = base premium + additional premium = $0.50 + $1.65 = $2.15

Age group | Additional premium
Under 35  | $1.65
35-59     | $2.87
60+       | $6.00
Con..
Based on the equivalence partitioning technique, the equivalence partitions based on age are given below:
* below 35 years of age (valid input)
* between 35 and 59 years of age (valid input)
* 60 years of age or above (valid input)
* negative age (invalid input)
* age as 0 (invalid input)
* age as any three-digit number (valid input)
We need to pick representative values from each of the above partitions. The equivalence classes should also include samples of invalid inputs.
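A minimal sketch of the premium table, with one representative member tested per partition; the function is hypothetical, and amounts are kept in cents to avoid floating-point error:

```python
BASE_CENTS = 50  # $0.50 base premium

def premium_cents(age):
    # Hypothetical implementation of the slide's premium table.
    if age <= 0:
        raise ValueError("invalid input")
    if age < 35:
        return BASE_CENTS + 165   # under 35: +$1.65
    if age <= 59:
        return BASE_CENTS + 287   # 35-59:    +$2.87
    return BASE_CENTS + 600       # 60+:      +$6.00

# One representative member per partition is enough:
assert premium_cents(34) == 215   # $2.15, as in the worked example
assert premium_cents(37) == 337
assert premium_cents(65) == 650
for invalid in (-23, 0):          # invalid partitions must be rejected
    try:
        premium_cents(invalid)
        raise AssertionError("expected rejection")
    except ValueError:
        pass
```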
Con..
S.No | Equivalence partition | Type of input | Test data | Expected results
1 | Age below 35 | Valid | 26, 12 |
2 | Age 35-59 | Valid | 37 |
3 | Age above 60 | Valid | 65, 90 |
4 | Negative age | Invalid | -23 | Warning message: invalid input
5 | Age as 0 | Invalid | 0 | Warning message: invalid input
Con..
4. If there is a decimal point, then there should be two digits after the decimal point.
5. Any number, whether or not it has a decimal point, should be terminated by a blank.
[Figure: state transition diagram for recognizing such numbers, with states 1 to 6. A '+', '-', or digit leads from the start state toward the digit states; a decimal point leads to the states requiring two further digits; a blank leads to the final state 6.]
Con..
[Table: state transition table with columns Current State, Input, and Next State; inputs include Digit, Decimal point, and Blank.]
Con..
The above state transition table can be used to derive test cases for valid and invalid numbers. Valid test cases can be generated as follows:
1. Start from the start state.
2. Choose a path that leads to the next state.
3. If you encounter an invalid input in a given state, generate an error-condition test case.
4. Repeat the process till you reach the final state.
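The number-recognizer state machine can be sketched as follows; since the full transition table did not come through, the state names and the rule of exactly two digits after the point are assumptions taken from the rules above:

```python
def accepts(s):
    """Recognize numbers: optional sign, digits, an optional decimal point
    followed by two digits, terminated by a blank."""
    state = "START"
    for ch in s:
        if state == "START" and ch in "+-":
            state = "SIGN"
        elif state in ("START", "SIGN", "INT") and ch.isdigit():
            state = "INT"
        elif state == "INT" and ch == ".":
            state = "FRAC1"
        elif state == "FRAC1" and ch.isdigit():
            state = "FRAC2"
        elif state == "FRAC2" and ch.isdigit():
            state = "FRAC_DONE"
        elif state in ("INT", "FRAC_DONE") and ch == " ":
            state = "ACCEPT"
        else:
            return False  # invalid input in this state: error-condition case
    return state == "ACCEPT"

# Valid paths through the diagram:
assert accepts("+123 ")
assert accepts("42.75 ")
# Invalid inputs become error-condition test cases:
assert not accepts("42.7 ")  # only one digit after the decimal point
assert not accepts("42")     # not terminated by a blank
```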
A second situation where graph-based testing is useful is in representing a transaction or workflow. Consider a simple example of a leave application by an employee. The leave application process can be visualized as being made up of the following steps.
Con..
The employee fills in a leave application, giving his or her employee ID and the start and end dates of the leave required.
This information then goes to an automated system, which validates that the employee is eligible for the requisite number of days of leave. If not, the application is rejected; if the eligibility exists, control passes to the next step.
The information goes to the employee's manager, who validates that it is okay for the employee to go on leave during that time.
Having satisfied himself/herself with the feasibility of the leave, the manager gives the final approval or rejection of the leave application.
Con..
HR
verify
eligibili
ty
Employe
e
Desires
leave
Feasible
Manage
r
ensure
Eligible
feasibili
ty
Approv
ed
Not
feasible
Ineligible
Leave application form
Reject
95
[Figure: model checking. A model of the source (derived from requirements, experience, or the program) and a property are fed to a model checker, which reports whether the property is satisfied (Yes/No); if not, the model of the source is updated.]
Con..
One or more desired properties are then coded in a formal specification language. The model and the desired properties are input to a model checker. The model checker attempts to verify whether the given properties are satisfied by the given model.
For each property, the checker can come up with one of three possible answers: the property is satisfied, the property is not satisfied, or it is unable to determine.
In the second case, the model checker provides a counterexample showing why the property is not satisfied. The third case might arise when the model checker is unable to terminate after an upper limit on the number of iterations has been reached.
Integration Testing
Integration testing means testing of interfaces. When we talk about interfaces, there are two types that have to be kept in mind for proper integration testing:
- internal interfaces
- external interfaces
Internal interfaces are those that provide communication between two modules within a project or product; they are internal to the product and not exposed to customers or external developers.
External interfaces are those that are visible outside the product, to third-party developers and solution providers.
One method of providing interfaces is through Application Programming Interfaces (APIs). APIs enable one module to call another; the calling module can be internal or external.
Con..
Not all the interfaces may be available at the same time for testing purposes, as different interfaces are usually developed by different development teams, each with its own schedule.
To test the interfaces when the full functionality of the component being introduced is not available, stubs are provided. A stub procedure is a dummy procedure that has the same I/O parameters as the given procedure. A stub simulates the interface by providing the appropriate values, in the appropriate format, as would be provided by the actual component being integrated.
All the interactions between the modules are known and explained through interfaces. Some of the interfaces are documented and some of them are not.
Con..
Explicit interfaces are documented interfaces and
implicit interfaces are those which are known internally
to the software engineers but are not documented.
[Figure: an example product made up of Components 1 to 8 connected through interfaces.]
Con..
These are as follows:
1. Top-down integration
2. Bottom-up integration
3. Bi-directional integration
4. System integration
Top-Down Testing
Top-down integration testing involves testing the topmost component's interfaces with the other components, in the same order as you navigate from top to bottom, till you cover all the components.
[Figure: top-down integration hierarchy. Component 1 at the top; Components 2, 3, and 4 below it; Component 5 under Component 2; Components 6 and 7 under Component 3; Component 8 under Component 4.]
Con..
Step | Interfaces tested
1 | 1-2
2 | 1-3
3 | 1-4
4 | 1-2-5
5 | 1-3-6
6 | 1-3-6-(3-7)
7 | (1-2-5)-(1-3-6-(3-7))
8 | 1-4-8
9 | (1-2-5)-(1-3-6-(3-7))-(1-4-8)
Con..
If a set of components and their related interfaces can deliver functionality without expecting the presence of other components, or with minimal interface requirements, then that set of components and their related interfaces is called a sub-system. Each sub-system in a product can work independently, with or without other sub-systems; this makes integration testing easier.
The order in which the interfaces are tested may change a bit if different methods of traversal are used. A breadth-first approach gives a component order such as 1-2, 1-3, 1-4, and so on; a depth-first order gives components such as 1-2-5, 1-3-6, and so on.
Bottom-up Integration
Bottom-up integration is just the opposite of top-down integration: the components for a new product development become available in reverse order.
[Figure: bottom-up integration hierarchy. Components 1, 2, 3, and 4 at the lowest level; Component 5 above Component 1, Component 6 above Components 2 and 3, Component 7 above Component 4; Component 8 at the top.]
Con..
Step | Interfaces tested
1 | 1-5
2 | 2-6, 3-6
3 | 2-6-(3-6)
4 | 4-7
5 | 1-5-8
6 | 2-6-(3-6)-8
7 | 4-7-8
8 | (1-5-8)-(2-6-(3-6)-8)-(4-7-8)
Bi-Directional Integration
[Figure: bi-directional integration hierarchy. Component 1 at the top; Components 6, 7, and 8 in the middle layer; Component 2 under Component 6, Components 3 and 4 under Component 7, Component 5 under Component 8.]
Bi-directional integration is a combination of the top-down and bottom-up integration approaches, used together to derive the integration steps.
The individual components 1, 2, 3, 4, and 5 are tested separately, and bi-directional integration is performed initially with the use of stubs and drivers.
Con..
Drivers are used to provide upstream connectivity, while stubs provide downstream connectivity. A driver is a function that redirects requests to some other component, and stubs simulate the behavior of a missing component. After the functionality of the integrated components has been tested, the drivers and stubs are discarded.
[Figure: STUB. Module A calls dummy functions that stand in for the missing Modules B and C. DRIVER: a dummy function calls the real Modules B and C from above.]
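A stub and a driver can be sketched in a few lines; the module names below are hypothetical placeholders for the figure's Modules A and B:

```python
# Module A (under test) depends on a Module B that is not yet available.
def module_a(x, module_b):
    return module_b(x) + 1

# STUB: a dummy procedure with the same I/O parameters as the real Module B,
# returning a canned value in the format the real component would produce.
def stub_b(x):
    return 100

# DRIVER: a dummy caller that exercises a lower-level module from "above"
# before the real caller exists.
def driver_for_b(module_b):
    return [module_b(v) for v in (1, 2, 3)]

# Top-down: test Module A against the stub standing in for B.
assert module_a(5, stub_b) == 101

# Bottom-up: once the real Module B arrives, drive it directly.
def real_b(x):
    return x * 10

assert driver_for_b(real_b) == [10, 20, 30]
```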
Con..
Once components 6, 7, and 8 become available, the integration methodology then focuses only on those components, as these are the components that need focus and are new. This approach is also called sandwich integration.
Step | Interfaces tested
1 | 6-2
2 | 7-3-4
3 | 8-5
4 | (1-6-2)-(1-7-3-4)-(1-8-5)
System Integration
System integration means that all the components of the system are integrated and tested as a single unit. Integration testing, which is testing of interfaces, can be divided into two types:
- components or sub-system integration
- final integration testing or system integration
Instead of integrating component by component and testing, this approach waits till all components arrive, and one round of integration testing is done. This approach is also called big-bang integration.
System integration using the big-bang approach is well suited to a product development scenario where the majority of the components are already available and stable, and very few components get added or modified. In this case, instead of testing component interfaces one by one, it makes sense to integrate all the components at one go and test once, saving the effort and time of multi-step component integration.
Disadvantages
When a failure or defect is encountered during system
integration, it is very difficult to locate the problem.
The ownership for correcting the root cause of the
defect may be a difficult issue to pinpoint.
When integration testing happens in the end, the
pressure from the approaching release date is very
high. This pressure on the engineers may cause them
to compromise on the quality of the product.
A certain component may take an excessive amount
of time to be ready. This precludes testing other
interfaces and wastes time till the end.
[Table: factors in the project mapped to a suggested integration method. Dynamically changing requirements, design, or architecture suggests bottom-up; other rows map factors to top-down, bi-directional, or a combination of the above.]
Scenario Testing
Scenario testing is defined as a set of realistic user
activities that are used for evaluating the product. It is
also defined as the testing involving customer scenarios.
There are two methods to evolve scenarios.
1. System scenarios
2. Use-case scenarios/role-based scenarios
System Scenarios: System scenario is a method whereby
the set of activities used for scenario testing covers
several components in the system. The following
approaches can be used to develop system scenarios.
Story line: Develop a story line that combines various
activities of the product that may be executed by an end
user. A user enters his or her office, logs into the system,
checks mail, responds to some mails, compiles some
programs, performs unit testing and so on.
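The story line above can be expressed as an executable scenario, where each step continues from the previous one. The Workstation class and its methods below are hypothetical stand-ins for a real system, not part of the text:

```python
# A minimal sketch of a story-line system scenario; the Workstation API
# is a toy stand-in invented for illustration.

class Workstation:
    """Toy stand-in for the system under test."""
    def __init__(self):
        self.logged_in = False
        self.inbox = ["status report", "meeting invite"]
        self.sent = []

    def login(self, user, password):
        self.logged_in = (user == "alice" and password == "secret")
        return self.logged_in

    def check_mail(self):
        assert self.logged_in, "must log in first"
        return list(self.inbox)

    def reply(self, subject):
        assert self.logged_in, "must log in first"
        self.sent.append("Re: " + subject)

def test_morning_story_line():
    """Each activity depends on the result of the previous one."""
    ws = Workstation()
    assert ws.login("alice", "secret")      # user enters office, logs in
    mails = ws.check_mail()                 # checks mail
    assert len(mails) == 2
    ws.reply(mails[0])                      # responds to a mail
    assert ws.sent == ["Re: status report"]

test_morning_story_line()
print("scenario passed")
```

The point of the sketch is the chaining: remove the login step and every later step fails, which is exactly the property a scenario test should have.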
Con..
Life cycle/state transition: Consider an object, derive
the different transitions/modifications that happen to
the object, and derive scenarios to cover them. For
example, in a savings bank account, you can start with
opening an account with a certain amount of money,
make a deposit, perform a withdrawal, calculate
interest, and so on. All these activities are applied to
the money object, and the different transformations
applied to it become different scenarios.
Deployment/implementation stories from customer:
Develop a scenario from known customer
deployment/implementation details and create a set of
activities by various users in that implementation.
Con..
Business verticals: Visualize how a product/software will be
applied to different verticals and create a set of activities as
scenarios to address specific vertical businesses. For
example, take the purchasing function. It may be done
differently in different verticals like pharmaceuticals,
software houses, and government organizations. Visualizing
these different types of tests makes the product multipurpose.
Battle ground: Create some scenarios to justify that the
product works, and some scenarios to try and break the
system to show where the product does not work.
The set of scenarios developed will be more effective if the
majority of the approaches mentioned above are used in
combination, not in isolation. Scenarios should not be a set
of disjointed activities which have no relation to each other.
Con..
Any activity in a scenario is always a continuation of
the previous activity, and depends on or is impacted
by the results of previous activities.
End-user activity           Frequency  Priority  Applicable environments  No. of times covered
1. Login to application     High       High      W2000, W2003, XP         10
2. Create an object         High       Medium    W2000, XP                2
3. Modify parameters        Medium     Medium    W2000, XP                -
4. List object parameters   Low        Medium    W2000, XP                -
5. Compose email            Medium     Medium    W2000, XP                -
6. Attach files             Low        Low       W2000, XP                -
Con..
The sequence of transactions between an actor and the
system is termed as system behavior. Users with a specific
role who interact between the actors and the system are
called agents.
[Figure: the customer (actor) gives a cheque to the clerk
(agent); the clerk queries the system, and the system
response determines the cash returned to the customer.]
Con..
A customer fills up a cheque and gives it to an official
in the bank. The official verifies the balance in the
account from the computer and gives the required cash to
the customer. The customer in this example is the actor,
the clerk is the agent, and the response given by the
computer, which gives the balance in the account, is
called the system response.

Actor                                            System Response
User likes to withdraw cash and inserts          Request for password or Personal
the card in the ATM machine                      Identification Number (PIN)
User fills in the password or PIN                -
Defect Bash
Defect bash is an ad hoc testing where people performing
different roles in an organization test the product together
at the same time.
This is very popular among the application development
companies where the product can be used by people who
perform different roles.
The testing by all the participants during a defect bash is not
based on written test cases. What is to be tested is left to
an individual's decision and creativity.
Defect bash brings together plenty of good practices that
are popular in the testing industry. They are as follows:
1. Enabling people to cross boundaries and test beyond
assigned areas.
2. Bringing together people performing different roles
in the organization - testing is not for testers
alone.
Con..
3. Letting everyone in the organization use the
product before delivery.
4. Bringing fresh pairs of eyes to uncover new
defects - fresh eyes have less bias.
5. Bringing in people who have different levels of
product understanding to test the product
together randomly - users of software are not
all the same.
6. Not letting testing wait for documentation -
testing need not wait till all documentation
is done.
Even though defect bash is said to be ad hoc
testing, not all of its activities are unplanned.
All the activities in the defect bash are planned
activities, except for what is to be tested.
Con..
Step1: Choosing the frequency and duration of defect bash
Defect bash is an activity involving a large amount of
effort and huge planning. Frequent defect bashes will incur
a low return on investment, and too few defect bashes may
not meet the objective of finding all defects.
Duration is also an important factor. Even a small
reduction in duration is a big saving, as a large number of
people are involved. On the other hand, if the duration is
too short, the amount of testing done may not meet the
objective.
Step2: Selecting the right product build
Since the defect bash involves a large number of
people, effort, and planning, a good quality build is needed
for defect bash. A regression-tested build would be ideal, as
all new features and defect fixes would already have been
tested in such a build. An intermediate build, where the
code functionality is evolving, or an untested build will
make the purpose and outcome of a defect bash ineffective.
Step3: Communicating the objective of each defect bash to
everyone
Even though defect bash is an ad hoc activity, its purpose
and objective have to be very clear. Since defect bash
involves people performing different roles, the contribution
they make has to be focused towards meeting the purpose
and objective of defect bash. The objective should be to find a
large number of uncovered defects.
Step4: Setting up and monitoring the lab for defect bash
Since defect bashes are planned, short-term, and
resource-intensive activities, it makes sense to set up and
monitor a laboratory for this purpose. During a defect bash,
the product parameters and system resources (CPU, disk, RAM)
need to be monitored for defects and corrected so that
users can continue to use the system for the complete
duration of the defect bash.
Con..
Step5: Taking actions and fixing issues
Instead of fixing each defect individually, it is better
to classify the defects into issues at a higher level, so
that a similar problem can be avoided in future defect
bashes. There could be one defect associated with an
issue, and there could be several defects that together
can be called an issue.
For example, in all components, all inputs for
employee number have to be validated before
using them in business logic. This enables all
defects from different components to be grouped
and classified as one issue.
Step6: Optimizing the effort involved in defect bash
One approach to reduce the defect bash effort is
to conduct micro level defect bashes before
conducting one on a large scale. Some of the more
evident defects will emerge at micro level bashes.
Con..
Since a defect bash is an integration phase activity, it
can be experimented with by the integration test team before
they open it up for others. To prevent component-level
defects from emerging during integration testing, a
micro-level defect bash can also be done to unearth
feature-level defects before the product is taken into
integration. Hence, a defect bash can be further classified
into
1. Feature/component defect bash
2. Integration defect bash
3. Product defect bash
Effort saved by the defect bash classification:
Let us take three product defect bashes conducted in two
hours with 100 people.
The total effort involved is 3 * 2 * 100 = 600 person-hours.
If the feature/component test team and the integration test
team, with 10 people each, participate in two rounds of
micro-level bashes that find one third of the expected
defects, then only two rounds of product bashes are needed
and the effort saving is 20%.
Effort involved in two rounds of product bashes (A) = 2 * 2 * 100 = 400 person-hours
Effort involved in two rounds of feature bash (B) = 2 * 2 * 10 = 40 person-hours
Effort involved in two rounds of integration bash (C) = 2 * 2 * 10 = 40 person-hours
Effort saved = 600 - (A + B + C) = 600 - 480 = 120 person-hours,
or 20%.
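The effort arithmetic above can be checked with a short calculation, using the values taken directly from the example:

```python
# Reproducing the effort arithmetic from the slides: three product-level
# defect bashes versus two product bashes plus micro-level bashes.

people, hours = 100, 2
baseline = 3 * hours * people          # three product bashes: 600 person-hours

product = 2 * hours * people           # A: two rounds of product bashes = 400
feature = 2 * hours * 10               # B: two rounds of feature bash, 10 people = 40
integration = 2 * hours * 10           # C: two rounds of integration bash, 10 people = 40

saved = baseline - (product + feature + integration)
print(saved, round(saved / baseline * 100))   # 120 person-hours, 20 percent
```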
Con..
System testing brings out issues that are fundamental to
the design, architecture, and code of the whole product.
System testing is the only phase of testing which tests
both the functional and non-functional aspects of the
product.
On the functional side, system testing focuses on real-life
customer usage of the product and solutions.
On the non-functional side, system testing brings in
different testing types, some of which are as follows.
1. Performance/Load testing: Evaluating the time taken
or response time of the system to perform its required
functions, in comparison with different versions of the
same product, is called performance testing.
2. Scalability testing: Testing that requires an enormous
amount of resources to find out the maximum capability
of the system parameters is called scalability testing.
Con..
Reliability testing: To evaluate the ability of the
system or an independent component of the system
to perform its required functions repeatedly for a
specified period of time is called reliability testing.
Stress testing: Evaluating the system beyond the
limits of the specified requirements or system
resources (such as disk space, memory, processor
utilization) to ensure the system does not break
down unexpectedly is called stress testing.
Interoperability testing: This testing is done to
ensure that two or more products can exchange
information, use the information and work closely.
Localization testing: Testing conducted to verify that
the localized product works in different languages is
called localization testing.
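The performance-testing idea above can be sketched as a small timing check: measure the response time of an operation and compare it against a target. `query_1000_records` and the 3-second target are stand-ins invented for illustration, not from the text:

```python
# A minimal sketch of a performance check: time an operation and compare
# against a response-time target. The workload is a placeholder.

import time

def query_1000_records():
    # placeholder standing in for a real database query
    return sum(range(1000))

def response_time(op, repeats=5):
    """Return the best-of-N wall-clock time for one call to op."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        op()
        best = min(best, time.perf_counter() - start)
    return best

target_seconds = 3.0   # e.g. "query for 1000 records in under 3 seconds"
elapsed = response_time(query_1000_records)
print("PASS" if elapsed < target_seconds else "FAIL")
```

Taking the best of several repeats reduces noise from the operating system; a real performance test would also report throughput and latency distributions, not a single number.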
[Figure: multiple perspectives of system testing - a component supplier does system testing of its component; in the product organization, testing the integrated components is sub-system testing, followed by system testing of the whole product; solution integrators perform system testing and solution testing of the integrated solution.]
Con..
A supplier of a component of a product can treat
the independent component as a system in its own
right and do system testing of the component.
From the perspective of the product organization,
testing the integration of those components is
referred to as sub-system testing.
When all the components, delivered by different
component developers, are assembled into a
product, they are tested together as a system.
At the next level, there are solution integrators who
combine products from multiple sources to provide
a complete integrated solution for a client.
They put together many products as a system and
perform system testing of this integrated solution.
Testing aspects           Functional testing                   Non-functional testing
Involves                  Product features and functionality   Quality factors
Tests                     Product behavior                     -
Result conclusion         Simple steps written to check        -
                          expected results
Results vary due to       Product implementation               Product implementation,
                                                               resources, and configurations
Testing focus             Defect detection                     Qualification of product
Knowledge required        -                                    -
Failures normally due to  Code                                 -
Con..
Testing aspects          Functional testing                Non-functional testing
Testing phase            Unit, component, integration,     System
                         system
Test case repeatability  -                                 -
Configuration            -                                 -
Con..
A small percentage of duplication across phases is advisable,
as different people from different teams test the features with
different perspectives, yielding new defects.
Grey areas in testing happen due to lack of product knowledge,
lack of knowledge of customer usage, and lack of coordination
across test teams.
Such grey areas in testing make defects seep through and
impact customer usage.
A test team performing a particular phase of testing may
assume that a particular test will be performed by the next
phase.
In such cases, there has to be a clear guideline for team
interaction to plan for the tests at the earliest possible phase.
A test case moved from a later phase to an earlier phase is a
better alternative than delaying a test case from an earlier
phase to a later phase.
Con..
There are multiple ways system functional
testing is performed:
1. Design/architecture verification
2. Business vertical testing
3. Deployment testing
4. Beta testing
5. Certification, standards, and testing for
compliance
Design/Architecture Verification
In this method of functional testing, the test cases are
developed and checked against the design and architecture
to see whether they are actual product-level test cases.
The test cases for integration testing are created by looking
at interfaces whereas system level test cases are created
first and verified with design and architecture to check
whether they are product level or component level test
cases.
The integration test cases focus on interactions between
modules or components whereas the functional system test
focuses on the behavior of the complete product.
This technique helps in validating the product features that
are written based on customer scenarios and verifying them
using product implementation.
If there is a test case that is a customer scenario but
fails validation using this technique, then it is moved
appropriately to the component or integration testing
phases.
Some of the guidelines used to reject test cases
for system functional testing include the
following.
1. Is this focusing on code logic, data structures,
and units of the product?
2. Is this specified in the functional specification
of any component?
3. Is this specified in the design and architecture
specification for integration testing?
Deployment Testing
System testing is the right time to test the product for
those customers who are waiting for it.
The short-term success or failure of a particular product
release is mainly assessed on the basis of how well these
customer requirements are met.
This type of simulated deployment testing, which happens in
a product development company to ensure that customer
deployment requirements are met, is called offsite
deployment.
Deployment testing is also conducted after the release of
the product by utilizing the resources and setup available
at customers' locations. This is a combined effort by the
product development organization and the organization
trying to use the product. This is called onsite deployment.
Onsite deployment testing is considered to be a part of
acceptance testing.
Con..
Onsite deployment testing is done at two
stages:
In the first stage(stage 1), actual data from the
live system is taken and similar machines and
configurations are mirrored, and the operations
from the users are rerun on the mirrored
deployment machine.
In the second stage(stage 2), after a successful
first stage, the mirrored system is made a live
system that runs the new product.
Beta Testing
Developing a product involves a significant amount of
effort and time. Delays in product releases and the
product not meeting the customer requirements are
common.
Con..
3. The requirements are high-level statements with a
high degree of ambiguity. Picking up the ambiguous
areas and not resolving them with the customer results
in rejection of the product.
4. The understanding of the requirement may be correct
but their implementation could be wrong. This may
mean reworking the design and coding to suit the
implementation aspects the customer wants.
5. Lack of usability and documentation makes it difficult
for the customer to use the product and may result in
rejection.
To reduce the risk, periodic feedback is obtained on
the product. One of the mechanisms used is sending
the product that is under test to the customers and
receiving the feedback. This is called beta testing.
Con..
During the entire duration of beta testing, there are various
activities that are planned and executed according to a
specific schedule. This is called a beta program. Some of
the activities are
1. Collecting the list of customers and their beta testing
requirements along with their expectations on the
product.
2. Sending some documents for reading in advance and
training the customer on product usage.
3. Sending the beta product to the customer and enabling
them to carry out their own testing.
4. Collecting the feedback periodically from the
customers and prioritizing the defects for fixing.
5. Responding to customers' feedback with product fixes
or documentation changes and closing the
communication loop with the customers in a timely
fashion.
Non-Functional Testing
Since repeating non-functional test cases involves more
time, effort, and resources, the process for non-functional
testing has to be stronger than for functional testing to
minimize the need for repetition.
This is achieved by having more stringent entry/exit
criteria, better planning, and by setting up the
configuration.
Setting up the configuration: There are two ways the
setup is done - a simulated environment and a real-life
customer environment.
Due to the varied types of customers, resource
availability, the time involved in getting the exact setup,
and so on, setting up a scenario that is exactly real-life
is difficult.
Due to the several complexities involved, a simulated setup
is used for non-functional testing where the actual
configuration is difficult to get.
Con..
1. Given the high diversity of environments and variety
of customers it is very difficult to predict the type of
environment that will be used commonly by the
customers.
2. Testing a product with different permutations and
combinations of configurations may not prove
effective since the same combination of environment
may not be used by the customer.
3. The cost involved in setting up such environments
is quite high.
4. The people may not have the skills to set up the
environment.
5. It is difficult to predict the exact type and nature of
data that the customer may use. Since confidentiality
is involved in the data used by the customer, such
information is not passed on to the testing team.
Testing type      Parameters                      Sample entry criteria             Sample exit criteria
Scalability       Maximum limits                  -                                 -
Performance test  Response time, throughput,      Query for 1000 records should     -
                  latency                         have a response time less than
                                                  3 seconds
Reliability       Failures per iteration,         -                                 -
                  failures per test duration
Stress            System when stressed beyond     Product should be able to         Product should be able to
                  the limits                      withstand 25 clients logging in   withstand 100 clients logging in
                                                  simultaneously for 5 hours in     simultaneously for 5 hours in
                                                  a ... configuration               a ... configuration
Scalability Testing
The objective of scalability testing is to find out the
maximum capability of the product parameters.
The resources needed for this kind of testing are
normally very high. For example, one scalability test
case could be finding out how many client machines can
simultaneously log in to the server to perform some
operations.
Trying to simulate that kind of real-life parameter is
very difficult, but at the same time very important.
At the beginning of a scalability test, there may not be
an obvious clue about the maximum capability of the
system. Hence a high-end configuration is selected and
the scalability parameter is increased step by step to
reach the maximum capability.
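The step-by-step search described above can be sketched as a simple loop that raises the load parameter until the first failure. `system_handles` is a hypothetical stand-in for a real load check, and the numbers are illustrative:

```python
# Sketch of step-wise scalability testing: increase the scalability
# parameter until the system fails, recording the maximum that passed.

def system_handles(clients):
    # placeholder: pretend the server copes with at most 230 clients
    return clients <= 230

def find_max_capability(start=50, step=50):
    """Step the load up from `start` until the first failure."""
    last_ok, load = 0, start
    while system_handles(load):
        last_ok = load      # highest load that still passed
        load += step
    return last_ok, load    # (max that passed, first load that failed)

max_ok, first_fail = find_max_capability()
print(max_ok, first_fail)   # 200 250
```

A real harness would also narrow the gap between the last pass and the first failure (for example by halving the step), since the true limit lies between the two.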
Con..
Failures during scalability testing include the
system not responding, or the system crashing,
and so on.
Whether a failure is acceptable or not is
decided on the basis of business goals and
objectives. For example, a product unable to
respond to 100 concurrent users, while its
objective is to serve 100 users simultaneously,
is considered a failure.
When a product expected to withstand only 100
users fails only when the load is increased to 200,
then it is a passed test case and an acceptable
situation.
Acceptance Testing
Acceptance testing is a phase after system testing that
is normally done by the customers or representatives
of the customer.
The customer defines a set of test cases that will be
executed to qualify and accept the product.
Test cases are executed by the customers themselves to
judge the quality of the product before deciding to buy
the product.
Acceptance test cases are normally small in number
and are not written with the intention of finding defects.
Sometimes, acceptance test cases are developed jointly
by the customers and the product organization. In this
case, the product organization will have a complete
understanding of what will be tested by the customer
for acceptance testing.
Con..
In such cases, the product organization tests those test
cases in advance as part of the system test cycle itself to
avoid any later surprise when those test cases are
executed by the customer.
In cases where the acceptance tests are performed by the
product organization alone, acceptance tests are executed
to verify if the product meets the acceptance criteria.
Con..
Acceptance testing is not meant for executing test cases
that have not been executed before.
The existing test cases are looked at, and certain
categories of test cases can be grouped to form the
acceptance criteria (for example, all performance test
cases should pass, meeting the response time
requirements).
Acceptance Criteria
Acceptance Procedure
Con..
3. A minimum of 20 employees are trained on the product
usage prior to deployment.
Con..
- All major defects are to be fixed within 48 hours of
reporting.
Con..
4. New functionality: When the product undergoes
modifications or changes, the acceptance test
cases focus on verifying the new features.
5. A few non-functional tests: Some non-functional
tests are included and executed as part of
acceptance testing to double-check that the
non-functional aspects of the product meet the
expectations.
6. Tests pertaining to legal obligations and
service level agreements:
7. Acceptance test data: Test cases that make
use of the customer's real-life data are included
for acceptance testing.
Regression Testing
Software undergoes constant changes. Such changes
are necessitated by defects to be fixed,
enhancements to be made to existing functionality,
or new functionality to be added. Any time such
changes are made, it is important to ensure that
1. The changes or additions work as designed
2. The changes or additions do not break
something that is already working and should
continue to work
Regression testing is designed to address the above
two purposes.
Build: A build is an aggregation of all the defect
fixes and features that are present in the product.
Con..
Con..
The final regression test cycle is conducted for a
specific duration, which is mutually agreed upon
between the development and testing teams. This is
called the cook time for regression testing.
The final regression test cycle is more critical
than any other type or phase of testing, as it is
the only testing that ensures that the same build of
the product that was tested reaches the customer.
Con..
[Figure: regression testing timeline - the development team produces builds 1 through 6, adding new features and bug fixes; the test team runs test cycle 1 and test cycle 2 with regression cycles in between, and a final regression on the last build.]
Con..
Regression testing   What?                        Why?                         When?
Regular regression   Selective re-testing to      Defects creep in due to      When a set of defect
                     ensure defect fixes work     changes; defect fixes may    fixes arrives; performed
                     with no side-effects         cause existing               in all test phases
                                                  functionality to fail
Final regression     -                            -                            After formal testing
                                                                               for completed areas
Con..
2. A second approach is to select the test cases
dynamically for each build by making judicious
choices of the test cases. The selection of test
cases for regression testing requires knowledge of
a) The defect fixes and changes made in the
current build
b) The ways to test the current changes
c) The impact that the current changes may
have on other parts of the system
d) The ways of testing the other impacted parts
Some of the criteria used to select test cases for
regression testing are as follows:
1. Include test cases that have produced the
maximum defects in the past
Con..
2. Include test cases for functionality in which a
change has been made
3. Include test cases that test the basic
functionality or the core features of the product,
which are mandatory requirements of the
customer
4. Include test cases that test the end-to-end
behavior of the application or the product
5. Include test cases that test the positive test
conditions
6. Include areas which are highly visible to the users
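The selection criteria above can be sketched as a simple filter over test metadata; the test names, area names, and thresholds below are invented for illustration:

```python
# Sketch of dynamic regression test selection: pick tests that cover
# changed areas, core functionality, or have been defect-prone in the
# past, then run the most defect-prone tests first. Data is illustrative.

tests = [
    {"name": "t_login",   "areas": {"auth"},          "past_defects": 9, "core": True},
    {"name": "t_report",  "areas": {"reports"},       "past_defects": 1, "core": False},
    {"name": "t_export",  "areas": {"reports", "io"}, "past_defects": 4, "core": False},
    {"name": "t_session", "areas": {"auth", "ui"},    "past_defects": 2, "core": True},
]

changed_areas = {"reports"}          # areas touched by the current build

def select_for_regression(tests, changed):
    selected = [t for t in tests
                if t["areas"] & changed      # covers a changed area
                or t["core"]                 # basic/core functionality
                or t["past_defects"] >= 5]   # historically defect-prone
    # prioritize by past defect yield
    return sorted(selected, key=lambda t: -t["past_defects"])

print([t["name"] for t in select_for_regression(tests, changed_areas)])
# ['t_login', 't_export', 't_session', 't_report']
```

In practice the "impact on other parts of the system" criterion needs a dependency or traceability map from code changes to test cases, which this sketch abstracts away as the `areas` sets.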
Con..
Priority-0: These test cases can be called sanity test
cases; they check basic functionality and are run to
accept the build for further testing. They are also
run when a product goes through a major change.
Priority-1: These test cases use the basic and normal
setup and deliver high project value to both the
development team and customers.
Priority-2: These test cases deliver moderate project
value. They are executed as part of the test cycle
and selected for regression testing on a need basis.
Test minimization: Test minimization discards
redundant tests. For example, if both t1 and t2 test
function f in P, then one might decide to reject t2 in
favour of t1. The purpose of minimization is to reduce
the number of tests to execute for regression testing.
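Test minimization as described can be sketched as a greedy set-cover pass over function coverage; the suite data is illustrative:

```python
# Sketch of test minimization: greedily keep only tests that add coverage
# of functions not already covered, discarding redundant ones (as with
# t1 and t2 both covering f).

def minimize(tests):
    """tests: list of (name, set_of_functions_covered). Greedy set cover."""
    covered, kept = set(), []
    # consider broader tests first so narrower duplicates become redundant
    for name, funcs in sorted(tests, key=lambda t: -len(t[1])):
        if not funcs <= covered:      # adds at least one new function
            kept.append(name)
            covered |= funcs
    return kept

suite = [("t1", {"f"}), ("t2", {"f"}), ("t3", {"f", "g"}), ("t4", {"h"})]
print(minimize(suite))   # ['t3', 't4'] - t1 and t2 are redundant given t3
```

Greedy set cover is a heuristic: it does not guarantee the smallest possible suite, and minimizing purely by coverage can drop tests that exercised distinct defect-revealing inputs, which is why minimization is applied with care.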
Dynamic Slicing
Finding all statements in a program that directly or
indirectly affect the value of a variable occurrence is
referred to as Program Slicing. The statements selected
constitute a slice of the program with respect to the
occurrence.
There are two types of program slicing:
Static Slicing: The static slice of a program with respect
to a variable, var, at a node, n, consists of all nodes
whose execution could possibly affect the value of var at
n.
Con..
But this notion of program slicing does not make any
use of the particular inputs that revealed the error.
It is concerned with finding all statements that could
influence the value of the variable occurrence for
any inputs, not all statements that did affect its
value for the current inputs.
Unfortunately, the size of a slice so defined may
approach that of the original program, and the
usefulness of a slice in debugging tends to diminish
as the size of the slice increases.
Therefore, we examine a narrower notion of slice,
consisting only of statements that influence the
value of a variable occurrence for specific inputs. We
refer to this problem as Dynamic Program Slicing.
S1:  read(X);
S2:  if (X < 0)
     then
S3:      Y := f1(X);
S4:      Z := g1(X);
     else
S5:      if (X = 0)
         then
S6:          Y := f2(X);
S7:          Z := g2(X);
         else
S8:          Y := f3(X);
S9:          Z := g3(X);
         end_if;
     end_if;
S10: write(Y);
S11: write(Z);
end
Con..
Example - To find the static slice of the program with
respect to variable Y at statement 10, we first find all
reaching definitions of Y at node 10. These are nodes 3,
6, and 8. Then we find the set of all reachable nodes
from these three nodes in the program dependence
graph. This set, {1, 2, 3, 5, 6, 8}, gives us the desired
slice.
The static slice with respect to Y at statement 10
contains all three assignment statements, namely 3, 6,
and 8, that assign a value to Y, although for any input
value of X only one of these three statements may be
executed.
Now consider the test case where X is -1. In this case
only the assignment at statement 3 is executed. So the
dynamic slice with respect to variable Y at statement
10 will contain only statements 1, 2, and 3, as opposed
to the static slice, which contains statements 1, 2, 3, 5,
6, and 8.
Con..
If the value of Y at statement 10 is observed to be
wrong for the above test case, we know that
either there is an error in f1 at statement 3 or the
if predicate at statement 2 is wrong.
Clearly, the dynamic slice, {1, 2, 3}, helps
localize the bug more quickly than the static slice,
{1, 2, 3, 5, 6, 8}.
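The static-versus-dynamic contrast can be simulated. The sketch below approximates the dynamic slice by intersecting the static slice with the statements actually executed for a given input - a simplification of true dynamic slicing, which also tracks data and control dependences among executed statements. The functions f1 through g3 are placeholders:

```python
# Sketch for the example program: execute it while recording which
# numbered statements ran, then intersect the static slice for Y at
# statement 10 with the executed statements.

f1 = g1 = f2 = g2 = f3 = g3 = lambda x: x * x   # placeholder functions

def run(x):
    executed = {1, 2}            # S1: read(X); S2: if (X < 0)
    if x < 0:
        executed |= {3, 4}       # S3: Y := f1(X); S4: Z := g1(X)
        y, z = f1(x), g1(x)
    else:
        executed.add(5)          # S5: if (X = 0)
        if x == 0:
            executed |= {6, 7}   # S6, S7
            y, z = f2(x), g2(x)
        else:
            executed |= {8, 9}   # S8, S9
            y, z = f3(x), g3(x)
    executed |= {10, 11}         # S10: write(Y); S11: write(Z)
    return executed

static_slice_y_at_10 = {1, 2, 3, 5, 6, 8}
dynamic_slice = static_slice_y_at_10 & run(-1)
print(sorted(dynamic_slice))     # [1, 2, 3]
```

For X = 0 the same intersection yields {1, 2, 5, 6}, again smaller than the static slice, illustrating why dynamic slices localize bugs faster.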
Ad-hoc Testing
Testing done without using any formal testing technique is
called ad-hoc testing.
Planned tests are driven by test engineers and their
understanding of the product at a particular time frame. The
more the test engineers work with the product, the better
their understanding becomes.
By the time the implemented system is completely
understood, it may be too late and there may not be
adequate time for detailed test documentation.
Hence, there is a need for testing based not only on planned
test cases but also on the better understanding gained
with the product.
Some of the issues faced by planned testing are as follows:
1. Lack of clarity in requirements and other
specifications
2. Lack of skills for doing the testing
Con..
3. Lack of time for test design

Planned testing: Requirements analysis -> Test planning -> Test case design -> Test execution -> Test report generation
Ad-hoc testing: Analysis of existing test cases -> Test planning -> Test execution -> Test report generation -> Test case design

One of the most fundamental differences between
planned testing and ad-hoc testing is that test
execution and test report generation take place before
test case design in ad-hoc testing. This testing gets its
name by virtue of the fact that execution precedes
design.
Con..
We refer to the earlier testing activities as planned
testing. This does not mean ad-hoc testing is an
unplanned activity. Ad-hoc testing is a planned activity;
only the test cases are not documented to start with.
Ad-hoc testing can be performed at any time, but the
returns from ad-hoc testing are higher if it is run after
running the planned test cases. Ad-hoc testing can be
planned in one of two ways:
1. After a certain number of planned test cases are
executed. In this case, the product is likely to be in
better shape and thus newer perspectives and
defects can be uncovered.
2. Prior to planned testing. This will enable gaining
better clarity on requirements and assessing the
quality of the product upfront.
Drawback                                 Possible solution
Difficult to ensure that the learnings   Document ad-hoc tests after test
gleaned in ad-hoc testing are used       completion
in future
Large number of defects are found        -
in ad-hoc testing
Con..
Ad-hoc testing can be used to switch the context
of software usage frequently to cover more
functionality in less time. For example, instead of
testing a given functionality end-to-end, ad-hoc
testing may cause a tester to jump across
functionalities. This is called a random
sampling test.
This testing involves using the features of the
software randomly in different components,
without worrying about what features are to be
tested and their coverage in each test.
Because this technique simulates the behavior of
monkeys jumping from one tree to another in search
of better fruit, it is also called monkey testing.
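Random-sampling (monkey) testing as described can be sketched as follows; the Calculator class and its action list are toy stand-ins invented for illustration:

```python
# Sketch of monkey testing: invoke features of the product in random
# order, checking only that nothing crashes along the way.

import random

class Calculator:
    """Toy stand-in for the product under test."""
    def __init__(self):
        self.value = 0
    def add(self):      self.value += 1
    def subtract(self): self.value -= 1
    def reset(self):    self.value = 0

def monkey_test(product, steps=1000, seed=42):
    """Jump randomly across features, like monkeys between trees."""
    rng = random.Random(seed)   # fixed seed makes a failure reproducible
    actions = [product.add, product.subtract, product.reset]
    for _ in range(steps):
        rng.choice(actions)()   # any uncaught exception is a defect
    return True

print(monkey_test(Calculator()))   # True
```

Seeding the random generator is the important design choice: when a random walk does crash, the same seed replays the exact sequence of actions that triggered it.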
Pair Testing
Pair testing is testing done by two testers working
simultaneously on the same machine to find defects.
The objective of this exercise is to maximize the
exchange of ideas between the two testers.
While one person executes the tests, the other
person takes notes, suggests ideas, or provides
additional perspectives.
It is not mandatory for one person to stick to one
role continuously for an entire session. They can swap
the roles of tester and scribe during a session.
One person can pair with multiple persons during a
day at various points of time for testing. Pair testing is
usually a focused session of about an hour or two.
During the session, the pair is given a specific area to
focus on and test. It is up to the pair to decide on the
different ways of testing this functionality.
Con..
Pair testing can be done during any phase of testing. It
encourages idea generation right from the requirements
analysis phase, taking it forward to the design, coding,
and testing phases.
When the product is in a new domain and not many
people have the desired domain knowledge, pair testing is
useful.
Pair testing helps the testers get feedback on their
abilities from each other.
This testing can be used to coach the inexperienced
members in the team by pairing them with experienced
testers.
It may be difficult to provide training to all the members
when the project schedule is very tight. Pair testing resolves
this issue by providing constant, continuous guidance to
new members from the more experienced ones.
184
Con..
Since the team members pair with different
persons during the project life cycle, the entire
project team can have a good understanding of
each other.
185
Unit 5
Test planning
Test planning involves scheduling and estimating the system testing process,
establishing process standards and describing the tests that should be carried out.
As well as helping managers allocate resources and estimate testing schedules, test plans are
intended for software engineers involved in designing and carrying out system tests. They help
technical staff get an overall picture of the system tests and place their own work in this context.
Frewin and Hatton (1986), Humphrey (1989), and Kit (1995) also include discussions on
test planning.
Test planning is particularly important in large software system development. As well as setting
out the testing schedule and procedures, the test plan defines the hardware and software
resources that are required. This is useful for system managers who are responsible for ensuring
that these resources are available to the testing team. Test plans should normally include
significant amounts of contingency so that slippages in design and implementation can be
accommodated and staff redeployed to other activities.
Test plans are not static documents but evolve during the development process. Test plans
change because of delays at other stages in the development process. If part of a system is
incomplete, the system as a whole cannot be tested. You then have to revise the test plan to
redeploy the testers to some other activity and bring them back when the software is once again
available.
For small and medium-sized systems, a less formal test plan may be used, but there is still a
need for a formal document to support the planning of the testing process. For some agile
processes, such as extreme programming, testing is inseparable from development. Like other
planning activities, test planning is also incremental. In XP, the customer is ultimately responsible
for deciding how much effort should be devoted to system testing.
186
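The planning concerns above (schedule, required resources, contingency for slippage) can be made concrete with a small sketch. This is an illustrative, simplified test-plan record in Python; the field names and the two sample entries are invented for this example, not a standard schema.

```python
from dataclasses import dataclass, field

# Hypothetical, simplified test-plan entry; fields are illustrative only.
@dataclass
class TestPlanItem:
    test_id: str
    description: str
    required_hardware: list = field(default_factory=list)
    scheduled_week: int = 1
    contingency_weeks: int = 1  # slack so slippage elsewhere can be absorbed

def total_duration(plan):
    """Latest scheduled week plus its contingency, in weeks."""
    return max(item.scheduled_week + item.contingency_weeks for item in plan)

plan = [
    TestPlanItem("ST-01", "End-to-end order flow", ["staging server"], 1, 1),
    TestPlanItem("ST-02", "Load test, 500 concurrent users", ["load rig"], 2, 2),
]
print(total_duration(plan))  # → 4
```

Keeping the plan in a structured form like this makes it easy to revise when a part of the system is delayed: the schedule fields change, and the total duration is recomputed rather than re-estimated by hand.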
Test management
Test management is the activity of managing the testing process. A test management
tool is software used to manage tests (automated or manual) that have been previously
specified. It is often associated with automation software. Test management tools
often include requirement and/or specification management modules that allow
automatic generation of the requirement test matrix (RTM), which is one of the main
metrics to indicate functional coverage of a system under test (SUT).
Preparing test campaigns
This includes building bundles of test cases and executing them (or scheduling
their execution). Execution can be either manual or automatic.
Manual execution: The user has to perform all the test steps manually and
inform the system of the result. Some test management tools include a framework
to interface the user with the test plan to facilitate this task.
Automatic execution: There are numerous ways of implementing automated tests.
Automatic execution requires the test management tool to be compatible with the
tests themselves. To do so, test management tools may propose proprietary
automation frameworks or APIs to interface with third-party or proprietary
automated tests.
Generating reports and metrics
The ultimate goal of test management tools is to deliver meaningful metrics that
help the QA manager evaluate the quality of the system under test before
release. Metrics are generally presented as graphs and tables indicating success
rates, progression/regression, and other key data.
187
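The metrics step above can be sketched with a few lines of code. This is a minimal example, assuming test results arrive as (test_id, status) records; the record format and status names are invented for illustration, not the output format of any particular tool.

```python
from collections import Counter

# Illustrative only: compute the success-rate summary a test management
# tool might report, from a list of (test_id, status) result records.
def summarize(results):
    counts = Counter(status for _, status in results)
    executed = sum(counts.values())
    passed = counts.get("pass", 0)
    return {
        "executed": executed,
        "passed": passed,
        "failed": counts.get("fail", 0),
        "pass_rate": round(100.0 * passed / executed, 1) if executed else 0.0,
    }

results = [
    ("TC-1", "pass"), ("TC-2", "pass"), ("TC-3", "fail"), ("TC-4", "pass"),
]
print(summarize(results))
# → {'executed': 4, 'passed': 3, 'failed': 1, 'pass_rate': 75.0}
```

Comparing such summaries across successive test campaigns is what lets the tool show progression or regression over time.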
Components of test
automation
190