Software Testing
Muhammad Azeem Akbar
LUT University, Finland
Software verification and validation
• Verification and validation (V&V) is intended to show that a system conforms to its
specification and meets the requirements of the system customer
• Involves checking and review processes and system testing
• System testing involves executing the system with test cases that are derived from
the specification or from the real data to be processed by the system
Verification vs. Validation
Verification:
"Are we building the product right?"
Tests against the specification.
The software should conform to its specification.
Validation:
"Are we building the right product?"
Tests against a real world situation.
The software should do what the user really requires.
Verification & Validation Goals
• Establish confidence that the software is fit for purpose.
• Two principal objectives
• The discovery of defects in a system
• The assessment of whether or not the system is useful and useable in an operational
situation.
• This does not mean:
• completely free of defects
• correct
• Rather, it must be good enough for its intended use and the type of use
will determine the degree of confidence that is needed.
Verification & Validation confidence
• Depends on system’s purpose, user expectations and marketing
environment
• Software function
• The level of confidence depends on how critical the software is to an
organization.
• User expectations
• Users may have low expectations of certain kinds of software.
• Marketing environment
• Getting a product to market early may be more important than finding
defects in the program.
Static and dynamic verification
• Static verification: concerned with analysis of the static system
representation to discover problems
• Software inspections
• May be supplemented by tool-based document and code analysis
• Static analysis
• Formal verification
• Dynamic V & V: concerned with exercising and observing product
behaviour, using software testing.
• The system is executed with test data and its operational behaviour is
observed
Static verification
Code Inspections
• Formalized approach to document reviews
• Intended explicitly for defect detection (not correction).
• Defects may be: logical errors, anomalies in the code that might indicate
an erroneous condition (e.g. an uninitialized variable) or non-compliance
with standards.
• Inspections can check conformance with a specification but not
conformance with the customer’s real requirements.
• Inspections cannot check non-functional characteristics such as
performance, usability, etc.
• Usually done by a group of programmers
• Comments are directed at the program, not the programmer!
Static verification
Code Inspections
• An Error Checklist for Inspections
• The checklist is largely language independent
• Checklists sometimes (unfortunately) concentrate more on issues of style than on errors (for
example, "Are comments accurate and meaningful?" and "Are if-else, code blocks, and
do-while groups aligned?"), and some error checks are too nebulous to be useful (such as
"Does the code meet the design requirements?")
• Beneficial side effects
• The programmer usually receives feedback concerning programming style, choice of
algorithms, and programming techniques.
• The other participants gain in a similar way by being exposed to another programmer’s
errors and programming style.
Static verification
Automated Static Analysis
• Static analyzers are software tools for source text processing.
• A static analyzer:
• A collection of algorithms and techniques used to analyze source code in order
to find bugs automatically
• Parses the program text, tries to discover potentially erroneous conditions,
and brings these to the attention of the V & V team.
• Very effective as an aid to inspections – they are a supplement to but not a
replacement for inspections.
Static verification
Formal Verification Methods
• Formal methods can be used when a mathematical specification of the system
is available.
• They form the ultimate static verification technique.
• Involve detailed mathematical analysis of the specification
• May develop formal arguments that a program conforms to its mathematical
specification.
• Model checking.
• Predicate abstraction.
• Termination analysis.
[Figure: the model (system design) and a formal specification are input to a model-checking tool, whose output states whether the model satisfies the specification.]
Dynamic V & V
Software Testing
• According to Boehm (1975),
• Programmers in large software projects typically spend their time as follows:
• 45-50% program checkout
• 33% program design
• 20% coding
Software testing
• Yet, in spite of this checkout expense, delivered “verified” and “validated”
code is still unreliable.
• We have attempted to make the design and coding processes more
systematic.
• What is Software Testing?
• The only validation technique for non-functional requirements as the software has
to be executed to see how it behaves.
• Should be used in conjunction with static verification to provide full V&V coverage.
Two types of Testing
• Defect testing
• Tests designed to discover system defects.
• A successful defect test is one which reveals the presence of defects in a
system.
• Validation testing
• Intended to show that the software meets its requirements.
• A successful test is one that shows that a requirement has been
properly implemented.
Testing
When?
Testing along the software development process.
What?
What to test?
How?
How to conduct the test?
Discussion on Integration testing
When?
Testing along the software development process
The Testing Process
Black-box and White-box Testing
• Black-box testing
• Also named: closed-box, data-driven, input/output-driven, behavioral testing
• Provide both valid and invalid inputs.
• The output is matched with expected result (from specifications).
• The tester has no knowledge of internal structure of the program.
• It is impossible to find all errors using this approach.
• Test all possible types of inputs
• (including invalid ones like characters, float, negative integers, etc.).
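A minimal sketch of the black-box idea: a hypothetical `is_leap_year()` function (the name and spec are illustrative, not from the slides) is tested purely through inputs and expected outputs taken from its specification, never by inspecting its internal structure, and both valid and invalid inputs are supplied.

```python
def is_leap_year(year):
    """Assumed spec: divisible by 4, except centuries not divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def run_black_box_tests():
    # Valid inputs: outputs matched against the specification only.
    cases = [
        (2024, True),    # ordinary leap year
        (1900, False),   # century not divisible by 400
        (2000, True),    # century divisible by 400
        (2023, False),   # non-leap year
    ]
    for year, expected in cases:
        assert is_leap_year(year) == expected, year
    # Invalid input class: here a non-integer, which raises a TypeError.
    try:
        is_leap_year("abc")
    except TypeError:
        pass  # expected behaviour for the invalid-input class

run_black_box_tests()
```

The tester would write these cases from the specification alone; the function body is shown only so the sketch runs.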
Black-box and White-box Testing
• White-box Testing
• Also named: open-box, clear-box, glass-box, logic-driven
• Examine the internal structure of the program.
• Test all possible paths of control flow.
• Captures:
• errors of omission – something the specification requires is neglected
• errors of commission – something is done that the specification does not define
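A sketch of the white-box approach: test cases for a hypothetical `classify()` function (illustrative, not from the slides) are chosen by examining its control flow so that every path through the if/elif/else is executed.

```python
def classify(x):
    """Hypothetical function under test, used only for illustration."""
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    else:
        return "positive"

# One test case per path through the control flow:
assert classify(-5) == "negative"   # path 1: x < 0 branch
assert classify(0) == "zero"        # path 2: x == 0 branch
assert classify(7) == "positive"    # path 3: else branch
```

Note that these code-derived tests could never reveal an error of omission, e.g. a category the specification demands but the code never implements.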
Testing Stages
• Unit testing
• Individual components are tested.
• Module testing
• Related collections of dependent components are tested.
• Sub-system testing
• Modules are integrated into sub-systems and tested. The focus here should be
on interface testing.
• System testing
• Testing of the system as a whole. Testing of emergent properties.
• Acceptance testing
• Testing with customer data to check that the system is acceptable.
What?
Test case design
• Involves designing the test cases (inputs and outputs) used to test the system.
• The goal of test case design is to create a set of tests that are effective in
validation and defect testing.
• Design approaches:
• Requirements-based testing (black-box)
• Mainly in release testing;
• Partition testing
• Based on structure of input or output domain;
• In component and release testing.
• Structural testing (based on code, white-box).
• In component testing.
Finding What to Test (1)
• Only exhaustive testing can show a program is free from defects.
• Exhaustive testing is impossible – leads to combinatorial explosion.
• We need to group cases into equivalence classes.
• Two test cases are in the same equivalence class if they expose the same errors.
• Try to find a set of test cases with exactly one element of each equivalence
class.
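A sketch of equivalence-class selection for a hypothetical `discount()` function whose assumed spec is: negative ages are invalid, under-18s get a child rate, 18–64 pay full price, 65 and over get a senior rate. Instead of exhaustively testing every age, one representative is taken from each class.

```python
def discount(age):
    """Hypothetical function under test; spec is assumed for illustration."""
    if age < 0:
        raise ValueError("invalid age")
    if age < 18:
        return 0.5   # child rate
    if age < 65:
        return 0.0   # full price
    return 0.3       # senior rate

# Exactly one element from each equivalence class:
assert discount(10) == 0.5   # class: 0 <= age < 18
assert discount(30) == 0.0   # class: 18 <= age < 65
assert discount(70) == 0.3   # class: age >= 65
try:
    discount(-1)             # class: invalid input
except ValueError:
    pass
```

Any other age in the same class (say, 12 instead of 10) would exercise the program in the same way, so testing both adds nothing.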
Finding What to Test (2)
• Each new test case added to the suite should expose at least one (and as
many as possible) previously undetected errors.
• Each tested thing must appear in at least one, and preferably at most one, test
case.
• A test case may, and should, test more than one thing.
• The aim is to have enough test cases to find all errors – high test coverage.
• A certain amount of skill and deviousness is needed to generate test
cases.
Finding What to Test (3)
• Content of a test case:
• describe what is tested
• give input data
• give expected output data
• Further constraints, if any.
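The test-case content listed above can be captured as a small record; this is a minimal sketch, and the field and class names are illustrative. The "system under test" here is just Python's built-in `sum`.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    description: str    # describes what is tested
    input_data: list    # input data given to the system
    expected: int       # expected output data

cases = [
    TestCase("empty list returns 0", [], 0),
    TestCase("single element is its own sum", [7], 7),
    TestCase("mixed signs cancel", [3, -3, 5], 5),
]

for tc in cases:
    actual = sum(tc.input_data)   # system under test (built-in sum here)
    assert actual == tc.expected, tc.description
```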
Requirements based testing
• A general principle of requirements engineering : requirements should be
testable.
• A validation testing technique where you consider each requirement and
derive a set of tests for that requirement.
• Use cases can be a basis for deriving the tests for a system.
• They help identify operations between an actor and the system, or internal
system operations.
• For each operation there is a contract that specifies the inputs,
preconditions and postconditions of the operation.
• The contracts specify test cases
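A sketch of deriving tests from a contract, using a hypothetical `withdraw()` operation (the class and its contract are assumed for illustration): the precondition is that the amount is positive and does not exceed the balance, and the postcondition is that the new balance equals the old balance minus the amount. Each clause of the contract suggests a test case.

```python
class Account:
    """Hypothetical system under test."""
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        # Precondition: 0 < amount <= balance
        if amount <= 0 or amount > self.balance:
            raise ValueError("precondition violated")
        old = self.balance
        self.balance -= amount
        # Postcondition: new balance = old balance - amount
        assert self.balance == old - amount
        return self.balance

acc = Account(100)
assert acc.withdraw(40) == 60    # normal case: postcondition holds
try:
    acc.withdraw(1000)           # precondition violation: amount > balance
except ValueError:
    pass
try:
    acc.withdraw(0)              # precondition violation: amount <= 0
except ValueError:
    pass
```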
Structural testing – code-based (1)
• Sometimes called white-box testing.
• Used mainly in component testing
• To distinguish from functional (requirements-based, “black-box” testing)
• Testing product functionality against its specification.
• Derivation of test cases according to program structure.
• Knowledge of the program is used to identify additional test cases.
• Objective is to exercise all program statements (not all path combinations).
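A sketch of the statement-coverage objective, tracking executed statements by hand for a hypothetical `absolute()` function (illustrative; real projects would use a coverage tool): a single test with a positive input leaves one statement unexecuted, and a second test is needed to exercise them all.

```python
executed = set()   # records which statements have run

def absolute(x):
    executed.add("entry")
    if x < 0:
        executed.add("negate")   # this statement only runs for negative input
        x = -x
    executed.add("return")
    return x

absolute(5)
assert executed == {"entry", "return"}            # "negate" never executed

absolute(-3)
assert executed == {"entry", "negate", "return"}  # all statements exercised
```

This is exactly why test cases are derived from the program structure: the code itself tells us that an `x < 0` input is needed.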
Integration testing
• Integration testing is testing how a system consisting of more than one
module works together, i.e., testing the interfaces.
• Assume a system structure having:
• a root component (no client components), or components,
• and leaf components (no dependencies on other components).
• Two Major Kinds of Testing
• Big-bang
• Incremental - several orders possible:
• top-down, bottom-up, critical piece first
• first integrating functional subsystems and then integrating the
subsystems in separate phases using any of the basic strategies.
Integration testing
• Big-Bang Testing
• Advantage:
• Every module thoroughly tested in isolation.
• Disadvantages:
• Driver and stub must be written for each module (except no driver for root and
no stubs for leaves).
• No error isolation: if an error occurs, in which module or interface is it?
• Interface testing must wait until all modules are programmed; critical interface
and major design problems are found very late in the project, after all code has
been committed!
Integration testing
• Incremental integration:
• Modules are added one by one, in some specific order, to an
ever-growing system until the whole system is obtained.
• A module is tested, as it is added, in the context of whichever of its
callers and callees are already in the system, with drivers and stubs
standing in for callers and callees that are not yet in the system.
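A sketch of what a driver and a stub look like, for a hypothetical `report()` module (all names are illustrative): `report()` calls a `fetch_total()` component that is not yet integrated, so a stub stands in for that callee, while a driver stands in for the not-yet-integrated caller, feeding inputs to `report()` and checking its outputs.

```python
def fetch_total_stub(account_id):
    # Stub: replaces the real, not-yet-integrated callee with canned
    # answers that are just good enough to test report().
    return 100 if account_id == "A1" else 0

def report(account_id, fetch_total=fetch_total_stub):
    """Module under test; depends on a callee supplied as a parameter."""
    total = fetch_total(account_id)
    return f"{account_id}: {total}"

def driver():
    # Driver: plays the role of the missing caller, exercising the
    # module under test with chosen inputs and checking its outputs.
    assert report("A1") == "A1: 100"
    assert report("XX") == "XX: 0"

driver()
```

When the real `fetch_total` is integrated later, the stub is discarded and the same driver tests can be re-run against the real callee.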
Integration testing – incremental
• Bottom-Up
• Test leaf modules in isolation with drivers.
• Test any non-leaf module m in the context of all of its callees (previously
tested!) with a driver for m.
• Advantages:
• Thorough testing of each leaf module.
• Abstract types tend to be tested early.
• Disadvantages:
• Must make drivers for every module except root.
• Major design flaws and serious interface problems involving high modules
are not caught until late; could cause a major rewrite of everything that
has been tested before.
Integration testing – incremental
• Top-down
• Only the root is tested in isolation with stubs for its callees.
• Test any non-root module m by having it replace its stub in its callers, and
with stubs for its callees.
• Advantages:
• Can test during top-down programming.
• Major flaws and major interface problems are found early.
• No drivers needed, as modules are tested in context of actual callers.
• Disadvantages:
• Stubs can be difficult to write for planned test cases.
• Lower modules are not tested thoroughly; tested only in ways used by
upper modules and not as fully as possible against specs.
Integration testing – incremental
• Sandwich
• Root and leaf modules are tested in isolation as in T-D and B-U methods.
• Work from root to middle T-D, and work from leaves to middle BU.
• Advantages:
• Root and leaf modules tested thoroughly.
• Major design flaws and major interface problems found early.
• Most abstract data types tested early.
• No stubs and no drivers needed for middle modules.
• Disadvantages:
• Stubs needed for upper modules and drivers needed for lower modules.
• Upper-middle modules may not be thoroughly tested.
Integration testing – incremental
• Modified Sandwich
• Do sandwich.
• Also test upper-middle modules in isolation using drivers and actual
callees.
• Advantages:
• Those of sandwich +
• Removal of its second disadvantage.
• Disadvantages:
• First of sandwich +
• Extra drivers to write.
Benefits of test-driven development
• Code coverage
• Every code segment that you write has at least one associated test, so
all code is covered by tests.
• Simplified debugging
• When a test fails, it should be obvious where the problem lies. The
newly written code needs to be checked and modified.
• System documentation
• The tests themselves are a form of documentation that describe what
the code should be doing.
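A sketch of the test-first rhythm: the test for a hypothetical `slugify()` function (name and behaviour assumed for illustration) is written first, acting as both specification and documentation, and the minimal implementation is then written to make it pass.

```python
# Step 1: the test is written first and initially fails, because
# slugify() does not exist yet.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Trim Me  ") == "trim-me"

# Step 2: the minimal implementation written to satisfy the test.
def slugify(title):
    return "-".join(title.strip().lower().split())

test_slugify()   # the test now passes; it stays in the suite forever
```

If `test_slugify` later fails, the problem is almost certainly in the most recently changed code, which is the simplified-debugging benefit above.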
Release testing
• Release testing is the process of testing a particular release of a system that
is intended for use outside of the development team.
• The primary goal of the release testing process is to convince the supplier of
the system that it is good enough for use.
• Release testing, therefore, has to show that the system delivers its specified
functionality, performance and dependability, and that it does not fail during normal
use.
• Release testing is usually a black-box testing process where tests are only
derived from the system specification.
User testing
• User or customer testing is a stage in the testing process in which users or
customers provide input and advice on system testing.
• User testing is essential, even when comprehensive system and release
testing have been carried out.
• The reason for this is that influences from the user’s working environment have
a major effect on the reliability, performance, usability and robustness of a
system. These cannot be replicated in a testing environment.