Chapter Four Part-II
Software Testing Levels
Testing Levels
1. Unit Testing
2. Integration Testing
3. System Testing
4. Acceptance Testing
Unit Testing
Testing of individual software
components
First level of dynamic testing
Typically white-box testing
Usually done by programmers
Cont…
Individual units are tested separately
Units or modules may be single functions,
procedures or programs
Done incrementally, usually by the single
programmer who coded it
Uses stubs and drivers
White box testing most appropriate at this stage
Tests local data structures, boundary conditions,
independent paths, error handling paths
Often informal, i.e. no formal test plan is specified or
written down
Unit Test Considerations
The module interface is tested to ensure that information
properly flows into and out of the program unit under test.
The local data structure is examined to ensure that data
stored temporarily maintains its integrity during all steps
in an algorithm's execution.
Boundary conditions are tested to ensure that the module
operates properly at boundaries established to limit or
restrict processing.
All independent paths through the control structure are
exercised to ensure that all statements in a module have
been executed at least once.
All error handling paths are tested.
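The considerations above can be sketched as a small unit test. The `percent_of` function and its checks are hypothetical, shown only to illustrate how one unit's interface, boundary conditions, and error-handling paths are each exercised:

```python
def percent_of(part, total):
    """Return `part` as a percentage of `total` (a hypothetical unit under test)."""
    if total == 0:
        raise ValueError("total must be non-zero")   # error-handling path 1
    if part < 0 or part > total:
        raise ValueError("part out of range")        # error-handling path 2
    return 100.0 * part / total                      # normal path

# Module interface: arguments flow in correctly, a float flows out.
assert percent_of(25, 50) == 50.0

# Boundary conditions: the extremes of the valid input range.
assert percent_of(0, 50) == 0.0
assert percent_of(50, 50) == 100.0

# Error-handling paths: each raise statement is exercised at least once.
for bad_args in [(1, 0), (-1, 50), (60, 50)]:
    try:
        percent_of(*bad_args)
        raise AssertionError("expected ValueError for %r" % (bad_args,))
    except ValueError:
        pass

print("all unit checks passed")
```

Together the cases execute every independent path through the unit at least once.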
Cont….
Common errors in computation are:
Misunderstood or incorrect arithmetic precedence
Mixed-mode operations
Incorrect initialization
Precision inaccuracy
Incorrect symbolic representation of an expression
Cont….
Test cases should uncover errors such as:
Comparison of different data types
Incorrect logical operators or precedence
Expectation of equality when precision error makes equality unlikely
Incorrect comparison of variables
Improper or non-existent loop termination
Failure to exit when divergent iteration is encountered
Improperly modified loop variables
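Two of these errors can be demonstrated directly in Python, as a minimal sketch of the precision-equality and loop-termination pitfalls:

```python
import math

# Expectation of equality when precision error makes equality unlikely:
total = 0.1 + 0.1 + 0.1
print(total == 0.3)              # False: accumulated rounding error
print(math.isclose(total, 0.3))  # True: compare within a tolerance instead

# Improper loop termination: testing `x != 1.0` could loop forever,
# because x never lands exactly on 1.0; an inequality terminates reliably.
x, steps = 0.0, 0
while x < 1.0:
    x += 0.1
    steps += 1
print(steps)
```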
Component Testing
It is also called module testing.
The basic difference between unit testing and
component testing is that in unit testing the
developers test their own piece of code, whereas in
component testing the whole component is
tested.
There are problems associated with testing
a module in isolation.
How do we run a module without anything to
call it, to be called by it or, possibly, to output
intermediate values obtained during execution?
Stubs and Driver
One approach is to construct an appropriate
driver routine to call it, simple stubs to
be called by it, and to insert output statements
in it.
Stubs serve to replace modules that are
subordinate to (called by) the module to be
tested.
A stub or dummy subprogram uses the
subordinate module’s interface, may do
minimal data manipulation, prints verification
of entry, and returns.
Cont.….
Let’s take an example to understand this better.
Suppose an application consists of three modules:
module A, module B, and module C. The developer has
completed module B and now wants to test it.
However, some of module B’s functionality depends
on module A and some on module C, and modules A
and C have not been developed yet. In that case,
to test module B completely, we can replace
module A and module C with a driver and a stub
as required.
Stubs and drivers
Driver:
A driver calls the component to be tested
A component, that calls the Tested Unit
Stub:
A stub is called from the software
component to be tested
A component, the Tested Unit depends on
Partial implementation
Returns fake values.
Cont.….
Stub – a dummy module that simulates a lower-level
module.
Stubs are always distinguished as "called programs".
Test stubs are programs that simulate the behaviors
of software components that a module undergoing
tests depends on.
Test stubs are mainly used in the top-down approach.
Stubs are computer programs that act as a temporary
replacement for a called module and give the same
output as the actual product or software.
Drivers
Driver – a dummy module that
simulates a higher-level module.
Drivers are also a form of dummy
module, always distinguished as
"calling programs". They are used
in bottom-up integration testing,
and only when the main programs are
still under construction.
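The module A / B / C scenario above can be sketched in a few lines of Python. Module B is the unit under test; a driver stands in for module A (its caller) and a stub stands in for module C (its callee). All names here are hypothetical, for illustration only:

```python
# Module B is the unit under test; module A (its caller) and module C
# (its callee) are not written yet, so a driver and a stub stand in.

def lookup_price(item_id):                  # stub replacing module C
    print("stub: lookup_price(%r) called" % item_id)   # prints verification of entry
    return 10.0                             # fake, fixed return value

def compute_total(item_id, quantity):       # module B: the tested unit
    return lookup_price(item_id) * quantity

def driver():                               # driver replacing module A
    result = compute_total("widget", 3)     # the driver calls the tested unit
    assert result == 30.0
    print("driver: compute_total returned", result)

driver()
```

The stub honours module C's interface but returns a fake value; the driver supplies inputs and checks module B's output, exactly the roles described above.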
What Is Integration Testing?
Integration testing is the phase in software
testing in which individual software modules are
combined and tested as a group.
It occurs after unit testing and before system
testing.
Integration testing takes as its input modules
that have been unit tested, groups them in larger
aggregates, applies tests defined in an
integration test plan to those aggregates, and
delivers as its output the integrated system ready
for system testing.
Integration Testing
Integration testing is a systematic
technique for constructing the software
architecture while at the same time
conducting tests to uncover errors
associated with interfacing.
Testing of two or more units/modules
together
Objective is to detect interface defects
between units/modules
Integration Testing Strategy
The entire system is viewed as a
collection of subsystems.
The integration testing strategy
determines the order in which the
subsystems are selected for testing and
integration:
– Big bang integration (non-incremental)
– Incremental integration
  – Top-down integration
  – Bottom-up integration
  – Sandwich testing
Big Bang Integration
All the components of the system are
integrated & tested as a single unit.
Instead of integrating component by
component and testing, this approach
waits till all components arrive and one
round of integration testing is done.
It reduces testing effort, and
removes duplication in testing.
Incremental Integration
The incremental approach means to first
combine only two components together
and test them.
Remove any errors found, then combine
another component with the pair and test
again, and so on until the whole system
is integrated.
In this, the program is constructed and
tested in small increments, where errors
are easier to isolate and correct.
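The increment-by-increment idea can be sketched with three hypothetical components: combine two, test, then add the next and retest the enlarged aggregate, so any failure is easy to isolate:

```python
def parse(text):                  # component 1: text -> list of ints
    return [int(tok) for tok in text.split(",")]

def total(values):                # component 2: sum the values
    return sum(values)

def fmt(n):                       # component 3, added in increment 2
    return "total=%d" % n

# Increment 1: integrate parse + total and test their interface.
assert total(parse("1,2,3")) == 6

# Increment 2: add fmt and retest through the whole chain.
assert fmt(total(parse("1,2,3"))) == "total=6"

print("both increments pass")
```

If increment 2 failed, the fault would almost certainly lie in `fmt` or its interface, because the `parse`/`total` pair was already verified.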
Top-down Testing Strategy
Test the top layer or the controlling
subsystem first
Then combine all the subsystems that
are called by the tested subsystems
and test the resulting collection of
subsystems
Do this until all subsystems are
incorporated into the test
Stubs are needed to do the testing
Top-down Integration
Interfaces can be tested in various
orders:
Depth-first integration (A-B-E, A-B-
F): this integrates all components on a
major control path of the structure.
Breadth-first integration (B-C-D, E-F-
G): this incorporates all components
directly subordinate at each level,
moving across the structure horizontally.
Bottom-up Testing Strategy
The subsystems in the lowest layer of the
call hierarchy are tested individually
Then the next subsystems are tested that
call the previously tested subsystems
This is repeated until all subsystems are
included
Drivers are needed.
As integration moves upward, the need
for separate test drivers lessens
Sandwich/ Bidirectional Testing
Combines top-down strategy with
bottom-up strategy
The system is viewed as having three
layers
A target layer in the middle
A layer above the target
A layer below the target
Testing converges at the target layer
Sandwich/Bidirectional Testing
It is performed initially with the use of stubs &
drivers.
Drivers are used to provide upstream connectivity
while stubs provide downstream connectivity.
Driver is a function which redirects the requests to
some other component .
Stubs simulate the behavior of a missing
component.
After testing the functionality of the integrated
components, stubs & drivers are discarded
System Testing
Functional Testing:
Goal: Test functionality of system
Test cases are designed from the
requirements analysis document (better:
user manual) and centered around
requirements and key functions (use cases).
The system is treated as black box
Unit test cases can be reused, but new
test cases have to be developed as well.
Cont.…
Performance Testing:
Goal: Try to violate non-functional
requirements
Test how the system behaves when
overloaded.
Try unusual orders of execution
Check the system’s response to large
volumes of data
What is the amount of time spent in different
use cases?
Types of Performance Testing
Stress testing: stress the limits of the system
Volume testing: test what happens if large amounts of data are handled
Configuration testing: test the various software and hardware configurations
Compatibility test: test backward compatibility with existing systems
Timing testing: evaluate response times and time to perform a function
Security testing: try to violate security requirements
Environmental test: test tolerances for heat, humidity, and motion
Quality testing: test reliability, maintainability, and availability
Recovery testing: test the system’s response to the presence of errors or loss of data
Human factors testing: test with end users
Acceptance Testing
Goal: Demonstrate system is ready for
operational use
Choice of tests is made by client
Many tests can be taken from integration testing
Acceptance test is performed by the client, not by the
developer.
Acceptance Testing
Alpha Testing is a type of software testing
performed to identify bugs before releasing
the product to real users or to the public.
Alpha Testing is one of the user acceptance
tests.
Beta Testing is performed by real users of
the software application in a real
environment. Beta testing is one type of
User Acceptance Testing.
Key Difference Between Alpha and Beta Testing
Alpha Testing is performed by the Testers within the
organization whereas Beta Testing is performed by the end
users.
Alpha Testing is performed at Developer’s site whereas Beta
Testing is performed at Client’s location.
Reliability and Security testing are not performed in-depth in
Alpha Testing while Reliability, Security and Robustness are
checked during Beta Testing.
Alpha Testing involves both Whitebox and Blackbox testing
whereas Beta Testing mainly involves Blackbox testing.
Information needed at different Levels of Testing
Regression Testing
It is a type of testing carried out to ensure that
changes or fixes do not impact previously working
functionality.
The main aim of regression testing is to make sure
that a changed component does not impact the
unchanged parts of the system.
It means re-testing an application after its code has
been modified to verify that it still functions correctly.
Regression testing may be conducted manually, by
re-executing a subset of all test cases, or using
automated capture/playback tools.
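A tiny automated regression suite can be sketched as follows; the `discount` function and its recorded cases are hypothetical, standing in for a real application and its historical test results:

```python
def discount(price, rate):
    """Unit under maintenance (hypothetical)."""
    if not 0 <= rate <= 1:
        raise ValueError("rate must be in [0, 1]")
    return round(price * (1 - rate), 2)

REGRESSION_CASES = [              # expected results recorded from earlier releases
    ((100.0, 0.0), 100.0),
    ((100.0, 0.25), 75.0),
    ((19.99, 0.1), 17.99),
]

def run_regression():
    # Re-execute every recorded case after a change; any mismatch means
    # the change broke previously working functionality.
    for args, expected in REGRESSION_CASES:
        actual = discount(*args)
        assert actual == expected, "regression in discount%r: got %r" % (args, actual)
    return len(REGRESSION_CASES)

print("passed", run_regression(), "regression cases")
```

After every fix to `discount`, the whole case list is re-run, so a change that silently alters old behaviour is caught immediately.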
Smoke testing
Smoke testing, also known as build verification testing
or sanity testing, is a preliminary level of software
testing carried out to determine whether the critical
functionalities of a program work without delving into
finer details.
A common approach for creating “daily builds” for
product software.
Smoke testing steps:
Software components that have been translated into
code are integrated into a “build.”
• A build includes all data files, libraries, reusable modules,
and engineered components that are required to
implement one or more product functions.
Cont.….
A series of tests is designed to expose errors
that will keep the build from properly
performing its function.
• The intent should be to uncover “show stopper”
errors that have the highest likelihood of
throwing the software project behind schedule.
The build is integrated with other builds and
the entire product (in its current form) is
smoke tested daily.
• The integration approach may be top down or
bottom up.
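A daily-build smoke test in this spirit is a handful of fast checks that exercise only the critical, "show stopper" paths of the build. The sketch below uses hypothetical placeholders for real build components:

```python
def load_config():                         # stands in for a real config loader
    return {"db": "sqlite://", "debug": False}

def connect(url):                          # stands in for a real DB connection
    return object() if url else None

def smoke_test():
    # Only the critical paths: the build is rejected outright if these fail.
    cfg = load_config()
    assert "db" in cfg, "show stopper: no database configured"
    assert connect(cfg["db"]) is not None, "show stopper: cannot connect"
    return "BUILD OK"

print(smoke_test())
```

Run against each daily build, such a script answers one question quickly: is this build healthy enough to be worth detailed testing?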
Object-Oriented Testing
begins by evaluating the correctness and
consistency of the analysis and design models
testing strategy changes
the concept of the ‘unit’ broadens due to
encapsulation
integration focuses on classes and their execution
across a ‘thread’ or in the context of a usage
scenario
validation uses conventional black box methods
test case design draws on conventional methods,
but also encompasses special features
Broadening the View of “Testing”
It can be argued that the review of OO analysis
and design models is especially useful because
the same semantic constructs (e.g., classes,
attributes, operations, messages) appear at
the analysis, design, and code level.
Therefore, a problem in the definition of class
attributes that is uncovered during analysis will
circumvent side effects that might occur if the
problem were not discovered until design or
code (or even the next iteration of analysis).
OO Testing Strategy
Class testing is the equivalent of unit testing
operations within the class are tested
the state behaviour of the class is examined
Integration testing applies three different strategies
thread-based testing—integrates the set of
classes required to respond to one input or event
use-based testing—integrates the set of classes
required to respond to one use case
cluster testing—integrates the set of classes
required to demonstrate one collaboration
WebApp Testing - I
The content model for the WebApp is
reviewed to uncover errors.
The interface model is reviewed to ensure
that all use cases can be accommodated.
The design model for the WebApp is
reviewed to uncover navigation errors.
The user interface is tested to uncover
errors in presentation and/or navigation
mechanics.
Each functional component is unit tested.
WebApp Testing - II
Navigation throughout the architecture is tested.
The WebApp is implemented in a variety of different
environmental configurations and is tested for
compatibility with each configuration.
Security tests are conducted in an attempt to exploit
vulnerabilities in the WebApp or within its environment.
Performance tests are conducted.
The WebApp is tested by a controlled and monitored
population of end-users. The results of their interaction
with the system are evaluated for content and navigation
errors, usability concerns, compatibility concerns, and
WebApp reliability.
The Debugging Process
Consequences of Bugs
Bug Categories:
function-related bugs, system-related bugs,
data bugs, coding bugs, design bugs,
documentation bugs, standards violations, etc.
Debugging Techniques
brute force / testing
backtracking
induction
deduction
Correcting the Error
Is the cause of the bug reproduced in another part of the
program? In many situations, a program defect is caused by an
erroneous pattern of logic that may be reproduced elsewhere.
What "next bug" might be introduced by the fix I'm about to
make? Before the correction is made, the source code (or,
better, the design) should be evaluated to assess coupling of
logic and data structures.
What could we have done to prevent this bug in the first place?
This question is the first step toward establishing a statistical
software quality assurance approach. If you correct the process
as well as the product, the bug will be removed from the current
program and may be eliminated from all future programs.
Quality Assurance encompasses Testing
Thank you