SOFTWARE TESTING
Software Testing is a method of checking whether the actual software product matches
the expected requirements and of ensuring that the software product is defect free.
It involves execution of software/system
components using manual or automated tools
to evaluate one or more properties of interest.
The purpose of software testing is to identify
errors, gaps, or missing requirements in
contrast to the actual requirements.
SOFTWARE TESTING
Quality means “conformance to requirements”
The best testers can only catch defects that are
contrary to specification.
Testing does not make the software perfect.
If an organization does not have good requirements
engineering practices, it will be very hard to
deliver software that meets the users’ needs, because
the product team does not really know what those
needs are.
WHY IS TESTING NEEDED
A defect in the software has a root cause, while the effect of the defect
is seen as an impact by the different stakeholders.
Testing is part of overall Quality Assurance – It covers the
Quality Control aspect of ensuring Quality.
A fault doesn't necessarily result in a failure, but a failure can
only occur if a fault exists.
To avoid a failure you must find the fault.
Software testing may be required for compliance with contractual
or legal requirements
When is testing complete? Actually never.
However, for practical reasons it is stopped after considering the
risks involved and the time and budget constraints.
Testing should provide sufficient information to the stakeholders
for decision making regarding release of the software/system, the
next development step, or handover to customers.
TESTING?
Cost-Effective: Testing an IT project on time helps
you save money in the long term. If bugs are caught
in the earlier stages of software testing, they
cost less to fix.
Security: People look for trusted products. Testing
helps in removing risks and problems earlier.
Product quality: Quality is an essential requirement of
any software product. Testing ensures that a quality
product is delivered to customers.
Customer Satisfaction: The main aim of any product
is to satisfy its customers. UI/UX
testing ensures the best user experience.
TERMINOLOGIES
Software Fault : A static defect in the software
Software Failure : External, incorrect behavior with
respect to the requirements or other description of the
expected behavior
Software Error : An incorrect internal state that is
the manifestation of some fault
N.B.: Faults in software are equivalent to design
mistakes in hardware.
Software does not degrade.
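A minimal sketch in Python of the three terms (the averaging function and its bug are an assumed example, not from the slides): the wrong divisor in the code is the fault, the incorrect internal value it produces at run time is the error, and the wrong result seen by the caller is the failure.

# Hypothetical example: compute the average of a list of scores.
def average(scores):
    total = 0
    for s in scores:
        total += s
    return total / (len(scores) - 1)   # FAULT: should divide by len(scores)

# Executing the faulty statement puts the computation into an
# erroneous internal state (ERROR): the divisor is off by one.
result = average([10, 20, 30])

# The caller observes 30.0 instead of the expected 20.0, an externally
# visible deviation from the expected behavior (FAILURE).
print(result)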
TERMINOLOGIES
Validation : The process of evaluating
software at the end of software development
to ensure compliance with intended usage
Verification : The process of determining
whether the products of a given phase of the
software development process fulfill the
requirements established during the previous
phase
SOURCES OF THE PROBLEMS
Requirements Definition: Erroneous, incomplete,
inconsistent requirements.
Design: Fundamental design flaws in the software.
Implementation: Mistakes in chip fabrication,
wiring, programming faults, malicious code.
Support Systems: Poor programming languages,
faulty compilers and debuggers, misleading
development tools.
SOURCES OF THE PROBLEMS
Inadequate Testing of Software: Incomplete testing,
poor verification, mistakes in debugging.
Evolution: Sloppy redevelopment or maintenance,
introduction of new flaws in attempts to fix old flaws,
incremental escalation to inordinate complexity
SOME OBSERVATIONS
It is impossible to completely test any nontrivial
module or any system.
Theoretical limitations: the halting problem
Practical limitations: prohibitive in time and cost
Testing can only show the presence of bugs, not their
absence.
Example: what is the total number of execution paths through a
program containing a loop that may execute 200 times?
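As a rough illustration (the loop bound of 200 is taken from the example above; the assumption that the loop body contains a single two-way branch is added here), the number of distinct execution paths is already astronomically large:

# Assumption: a loop that iterates 200 times whose body contains one if/else.
# Each iteration independently takes one of two branches, so the number of
# distinct execution paths is 2**200.
paths = 2 ** 200
print(paths)   # roughly 1.6e60, far too many paths to test exhaustively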
TESTING ACTIVITIES
[Diagram: testing activities and their work products; all of these tests are performed by the developer.]
Subsystem code is exercised in a unit test, producing a tested subsystem.
Tested subsystems are combined in an integration test, producing integrated subsystems.
The integrated subsystems are checked against the requirements analysis document, design document, and user manual in a functional test, yielding a functioning system.
TESTING ACTIVITIES CONTINUED
[Diagram: testing activities continued.]
The functioning system is checked against the global requirements in a performance test (by the developer), producing a validated system.
The validated system is checked against the client’s understanding of the requirements in an acceptance test (by the client), producing an accepted system.
The accepted system is checked in the user environment in an installation test (by the client), producing a usable system, which then becomes the system in use (tests, if any, by the user).
LEVELS OF TESTING IN THE V MODEL
[V-model diagram: development activities on the left are paired with the test levels that validate them on the right; the level of abstraction decreases toward the bottom of the V, and time runs left to right.]
system requirements ↔ system integration
software requirements ↔ acceptance test
preliminary design ↔ software integration
detailed design ↔ component test
code & debug ↔ unit test
N.B.: component test vs. unit test; acceptance test vs. system integration
TEST PLANS
The goal of test planning is to establish the list of tasks
which, if performed, will identify all of the requirements
that have not been met in the software. The main work
product is the test plan.
The test plan documents the overall approach to the test. In
many ways, the test plan serves as a summary of the test
activities that will be performed.
It shows how the tests will be organized, and outlines all of the
testers’ needs which must be met in order to properly carry out
the test.
The test plan should be inspected by members of the
engineering team and senior managers.
TEST PLANNING
A Test Plan:
covers all types and phases of testing
guides the entire testing process
who, why, when, what
developed as the requirements, functional specification, and high-level design are developed
should be done before implementation starts
A test plan includes:
test objectives
schedule and logistics
test strategies
test cases
procedure
data
expected result
procedures for handling problems
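A minimal sketch of how one test-case entry in such a plan might be recorded (the field names and the login scenario are illustrative assumptions, not a prescribed format):

# Illustrative structure for one test-case entry in a test plan.
test_case = {
    "id": "TC-001",
    "objective": "Verify that login rejects an empty password",
    "procedure": "Open login page, enter a valid user name, leave the password blank, submit",
    "data": {"username": "alice", "password": ""},
    "expected_result": "Error message 'Password is required'; user is not logged in",
}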
FAULT HANDLING TECHNIQUES
Fault handling techniques fall into three groups:
Fault avoidance: design methodology, reviews/verification, configuration management
Fault detection: testing (unit testing, integration testing, system testing) and debugging (correctness debugging, performance debugging)
Fault tolerance: atomic transactions, modular redundancy
QUALITY ASSURANCE ENCOMPASSES TESTING
Quality assurance covers more than testing:
Usability testing: scenario testing, prototype testing, product testing
Fault avoidance: configuration management, verification
Fault tolerance: atomic transactions, modular redundancy
Fault detection: reviews (walkthrough, inspection), debugging (correctness debugging, performance debugging), testing (unit testing, integration testing, system testing)
CATEGORIES OF TESTING
Typically, testing is classified into three categories:
Functional testing: unit testing, integration testing, smoke testing, UAT (user acceptance testing), localization, globalization, interoperability, and so on
Non-functional testing (performance testing): performance, endurance, load, volume, scalability, usability, and so on
Maintenance: regression testing and maintenance testing
TYPES OF TESTING
Unit Testing:
Individual subsystem
Carried out by developers
Goal: Confirm that the subsystem is correctly coded and
carries out the intended functionality
Integration Testing:
Groups of subsystems (collection of classes) and eventually
the entire system
Carried out by developers
Goal: Test the interfaces among the subsystems
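As an illustration of the unit-testing level described above, a minimal sketch using Python's built-in unittest framework (the discount function and its rule are assumed examples, not taken from the slides):

import unittest

def discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (100 - percent) / 100

class DiscountTest(unittest.TestCase):
    def test_typical_value(self):
        # Normal behavior: 25% off 200.0 should be 150.0.
        self.assertAlmostEqual(discount(200.0, 25), 150.0)

    def test_invalid_percent_rejected(self):
        # Invalid input: percentages above 100 must be rejected.
        with self.assertRaises(ValueError):
            discount(200.0, 150)

if __name__ == "__main__":
    unittest.main()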
SYSTEM TESTING
System Testing:
The entire system
Carried out by developers
Goal: Determine if the system meets the requirements
(functional and global)
Acceptance Testing:
Evaluates the system delivered by developers
Carried out by the client. May involve executing typical
transactions on site on a trial basis
Goal: Demonstrate that the system meets customer
requirements and is ready to use
Implementation (Coding) and testing go hand in hand
UNIT TESTING
Informal:
Incremental coding: write a little, test a little
Static Analysis:
Hand execution: Reading the source code
Walk-Through (informal presentation to others)
Code Inspection (formal presentation to others)
Automated Tools checking for
syntactic and semantic errors
departure from coding standards
Dynamic Analysis:
Black-box testing (Test the input/output behavior)
White-box testing (Test the internal logic of the subsystem
or object)
Data-structure based testing (Data types determine test
cases)
Question: Which is more effective, static or dynamic analysis? Discuss
BLACK-BOX TESTING
Focus: I/O behavior. If for any given input, we can
predict the output, then the module passes the test.
Almost always impossible to generate all possible inputs ("test cases"). Why?
Goal: Reduce number of test cases by equivalence
partitioning:
Divide input conditions into equivalence classes
Choose test cases for each equivalence class. (Example: if an
object is supposed to accept a negative number, testing
one negative number is enough.)
If x = 3 then …
If x > -5 and x < 5 then …
BLACK-BOX TESTING (CONTINUED)
Selection of equivalence classes (No rules, only guidelines):
Input is valid across a range of values. Select test cases from 3
equivalence classes:
Below the range
Within the range Are these complete?
Above the range
Input is valid if it is from a discrete set. Select test cases from 2
equivalence classes:
Valid discrete value
Invalid discrete value
Another solution to select only a limited amount of test
cases:
Get knowledge about the inner workings of the unit being tested
=> white-box testing
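A sketch of equivalence partitioning (the eligibility function and its 18–65 range are assumptions for illustration): one representative value is chosen from each class, below, within, and above the valid range, instead of testing every possible input.

def is_eligible(age):
    """Assumed specification: valid ages are 18 to 65 inclusive."""
    return 18 <= age <= 65

# One representative test case per equivalence class.
cases = [
    (10, False),   # below the valid range
    (30, True),    # within the valid range
    (70, False),   # above the valid range
]

for age, expected in cases:
    assert is_eligible(age) == expected, f"unexpected result for age {age}"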
WHITE-BOX TESTING
Focus: Thoroughness (Coverage). Every
statement in the component is executed at
least once.
Four types of white-box testing
Statement Testing
Loop Testing
Path Testing
Branch Testing
WHITE-BOX TESTING (CONTINUED)
Statement Testing (Algebraic Testing): Test single
statements
Loop Testing:
Cause execution of the loop to be skipped completely.
(Exception: Repeat loops)
Loop to be executed exactly once
Loop to be executed more than once
Path testing:
Make sure all paths in the program are executed
Branch Testing (Conditional Testing): Make sure
that each possible outcome from a condition is tested at
least once
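A small sketch of branch testing (the clamping function is an assumed example): the single condition has two outcomes, so two test cases give full branch coverage, and here full statement coverage as well.

def clamp_to_zero(x):
    """Assumed example: negative inputs are clamped to zero."""
    if x < 0:        # branch testing: exercise both the True and the False outcome
        return 0
    return x

assert clamp_to_zero(-3) == 0   # takes the True branch
assert clamp_to_zero(7) == 7    # takes the False branch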
WHITE-BOX TESTING: LOOP TESTING
Kinds of loops to test:
Simple loops
Nested loops
Concatenated loops
Unstructured loops
Question: Why is loop testing important? Discuss
CONSTRUCTING THE LOGIC FLOW DIAGRAM
[Flow graph for the example program: Start, decision nodes 2, 3, and 7 with true/false branches leading through nodes 4, 5, 8, and 9 to Exit; the diagram itself is not reproduced here.]
FINDING THE TEST CASES
[Flow graph from the previous slide, annotated with the conditions that cover each edge:
a: covered by any data
b: data set must contain at least one value
c: data set must be empty
d: positive score
e: negative score
h: reached if either e or f is reached
i: total score < 0.0
j: total score > 0.0]
COMPARISON OF WHITE & BLACK-BOX
White-box Testing:
Potentially infinite number of paths have to be tested
White-box testing often tests what is done, instead of what should be done
Cannot detect missing use cases
Black-box Testing:
Potential combinatorial explosion of test cases (valid & invalid data)
Often not clear whether the selected test cases uncover a particular error
Does not discover extraneous use cases ("features")
Both types of testing are needed
White-box testing and black box testing are the extreme ends of a
testing continuum.
Any choice of test case lies in between and depends on the
following:
Number of possible logical paths
Nature of input data
Amount of computation
Complexity of algorithms and data structures
THE 4 TESTING STEPS
1. Select what has to be measured:
Analysis: completeness of requirements
Design: tested for cohesion
Implementation: code tests
2. Decide how the testing is done:
Code inspection
Proofs (design by contract)
Black-box testing, white-box testing
Select an integration testing strategy (big bang, bottom up, top down, sandwich)
3. Develop the test cases:
A test case is a set of test data or situations that will be used to exercise the unit (code, module, system) being tested or about the attribute being measured
4. Create the test oracle:
An oracle contains the predicted results for a set of test cases
The test oracle has to be written down before the actual testing takes place
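A minimal sketch of such an oracle, written down before the tests are run (the square-root example is an assumption): the predicted results are recorded as data and later compared with the actual output.

import math

# Test oracle: each input paired with its predicted result, written before the run.
oracle = [
    (0.0, 0.0),
    (4.0, 2.0),
    (2.25, 1.5),
]

for value, predicted in oracle:
    actual = math.sqrt(value)
    assert math.isclose(actual, predicted), f"sqrt({value}): got {actual}, predicted {predicted}"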
GUIDANCE FOR TEST CASE SELECTION
Use analysis knowledge about functional requirements (black-box testing):
Use cases
Expected input data
Invalid input data
Use design knowledge about system structure, algorithms, data structures (white-box testing):
Control structures: test branches, loops, ...
Data structures: test record fields, arrays, ...
Use implementation knowledge about algorithms. Examples:
Force division by zero
Use a sequence of test cases for an interrupt handler
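For instance, a small sketch of the "force division by zero" idea above (the cost function is an assumed example): knowing that the implementation divides by the item count, a test case is chosen to force the divisor to zero.

def per_item_cost(total_cost, item_count):
    return total_cost / item_count   # implementation knowledge: a division happens here

# Test case chosen deliberately to force division by zero.
try:
    per_item_cost(100.0, 0)
except ZeroDivisionError:
    print("division-by-zero path exercised as expected")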
UNIT-TESTING HEURISTICS
1. Create unit tests as soon as the object design is completed:
Black-box test: test the use cases & functional model
White-box test: test the dynamic model
Data-structure test: test the object model
2. Develop the test cases:
Goal: find the minimal number of test cases to cover as many paths as possible
3. Cross-check the test cases to eliminate duplicates:
Don't waste your time!
4. Desk check your source code:
Reduces testing time
5. Create a test harness:
Test drivers and test stubs are needed for integration testing
6. Describe the test oracle:
Often the result of the first successfully executed test
7. Execute the test cases:
Don't forget regression testing: re-execute test cases every time a change is made
8. Compare the results of the test with the test oracle:
Automate as much as possible
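A sketch of heuristic 5 (the order/payment scenario is an assumption): a test stub stands in for the real payment gateway, and a test driver exercises the unit under test in isolation.

# Unit under test (illustrative): needs a payment gateway collaborator.
def place_order(amount, gateway):
    if amount <= 0:
        return "rejected"
    return "confirmed" if gateway.charge(amount) else "declined"

# Test stub: stands in for the real payment gateway during unit testing.
class GatewayStub:
    def __init__(self, succeed):
        self.succeed = succeed
    def charge(self, amount):
        return self.succeed

# Test driver: calls the unit under test and checks the results.
def run_tests():
    assert place_order(50, GatewayStub(succeed=True)) == "confirmed"
    assert place_order(50, GatewayStub(succeed=False)) == "declined"
    assert place_order(0, GatewayStub(succeed=True)) == "rejected"
    print("all harness tests passed")

if __name__ == "__main__":
    run_tests()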
OOT STRATEGY
class testing is the equivalent of unit testing
operations within the class are tested
the state behavior of the class is examined
integration testing applies three different
strategies/levels of abstraction:
thread-based testing—integrates the set of classes
required to respond to one input or event
use-based testing—integrates the set of classes
required to respond to one use case
cluster testing—integrates the set of classes
required to demonstrate one collaboration
Recall: model-driven software development
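A sketch of class testing that examines state behavior (the Account class and its open/frozen/closed states are assumptions): operations are invoked in sequence and the resulting state transitions are checked.

class Account:
    """Illustrative class with simple state behavior: open -> frozen -> closed."""
    def __init__(self):
        self.state = "open"
    def freeze(self):
        if self.state == "open":
            self.state = "frozen"
    def close(self):
        self.state = "closed"

# Class test: exercise the operations and check each state transition.
acct = Account()
assert acct.state == "open"
acct.freeze()
assert acct.state == "frozen"
acct.close()
assert acct.state == "closed"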
WHO TESTS THE SOFTWARE?
Developer: understands the system, but will test "gently", and is driven by "delivery".
Independent tester: must learn about the system, but will attempt to break it, and is driven by quality.
EXAMPLES OF FAULTS AND ERRORS
Faults in the interface specification:
Mismatch between what the client needs and what the server offers
Mismatch between requirements and implementation
Algorithmic faults:
Missing initialization
Branching errors (too soon, too late)
Missing test for nil
Mechanical faults (very hard to find):
Documentation does not match actual conditions or operating procedures
Errors:
Stress or overload errors
Capacity or boundary errors
Timing errors
Throughput or performance errors
DEALING WITH ERRORS
Verification:
Assumes hypothetical environment that does not match real
environment
Proof might be buggy (omits important constraints; simply
wrong)
Modular redundancy:
Expensive
Declaring a bug to be a "feature"
Bad practice
Patching
Slows down performance
Testing (this lecture)
Testing is never good enough
ANOTHER VIEW ON HOW TO DEAL WITH
ERRORS
Error prevention (before the system is released):
Use good programming methodology to reduce complexity
Use version control to prevent inconsistent system
Apply verification to prevent algorithmic bugs
Error detection (while system is running):
Testing: Create failures in a planned way
Debugging: Start with an unplanned failure
Monitoring: Deliver information about state. Find performance bugs
Error recovery (recover from failure once the system is
released):
Database systems (atomic transactions)
Modular redundancy
Recovery blocks
WHAT IS THIS?
A failure?
An error?
A fault?
Need to specify
the desired behavior first!
ERRONEOUS STATE ("ERROR")
ALGORITHMIC FAULT
MECHANICAL FAULT
HOW DO WE DEAL WITH ERRORS AND
FAULTS?
VERIFICATION?
MODULAR REDUNDANCY?
DECLARING THE BUG
AS A FEATURE?
PATCHING?
TESTING?
TESTING TAKES CREATIVITY
Testing is often viewed as dirty work.
To develop an effective test, one must have:
Detailed understanding of the system
Knowledge of the testing techniques
Skill to apply these techniques in an effective and efficient manner
Testing is done best by independent testers
We often develop a certain mental attitude that the program
should behave in a certain way when in fact it does not.
Programmers often stick to the data set that makes the
program work.
"Don’t mess up my code!"
A program often does not work when tried by somebody
else.
Don't let this be the end-user.
EXAMPLES OF TEST CASES
Test case 1 : ? (To execute loop exactly once)
Test case 2 : ? (To skip loop body)
Test case 3: ?,? (to execute loop more than once)
These 3 test cases cover all control flow paths
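The concrete test values refer to a flow diagram that is not reproduced here. As an illustrative stand-in (assuming a simple loop that sums a list of scores), the three kinds of test cases could look like this:

def total(scores):
    result = 0.0
    for s in scores:     # the loop under test
        result += s
    return result

assert total([5.0]) == 5.0             # Test case 1: loop body executed exactly once
assert total([]) == 0.0                # Test case 2: loop body skipped completely
assert total([1.0, 2.0, 3.0]) == 6.0   # Test case 3: loop executed more than once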
SUMMARY
Testing is still a black art, but many rules and
heuristics are available
Testing consists of component-testing (unit testing,
integration testing) and system testing, and …
OOT and architectural testing are still challenging
User-oriented reliability modeling and evaluation are not yet adequate
Testing has its own lifecycle