Unit 4 Part 1
Unit 4 Objectives
- A strategic approach to testing
- Test strategies for conventional software
- Validation testing
- System testing
- The art of debugging
Introduction
• A strategy for software testing integrates the design of
software test cases into a well-planned series of steps that
result in successful development of the software
• The strategy provides a road map that describes the steps to
be taken, when they are to be taken, and how much effort, time,
and resources will be required
• The strategy incorporates test planning, test case design, test
execution, and test result collection and evaluation
• The strategy provides guidance for the practitioner and a set
of milestones for the manager
• Because of time pressures, progress must be measurable and
problems must surface as early as possible
A Strategic Approach to Testing
General Characteristics of
Strategic Testing
• To perform effective testing, a software team should conduct
effective formal technical reviews
• Testing begins at the component level and works outward
toward the integration of the entire computer-based system
• Different testing techniques are appropriate at different
points in time
• Testing is conducted by the developer of the software and
(for large projects) by an independent test group
• Testing and debugging are different activities, but debugging
must be accommodated in any testing strategy
Verification and Validation
• Verification refers to the set of activities that ensure software
correctly implements a specific function (“Are we building the
product right?”)
• Validation refers to the set of activities that ensure the software
that has been built is traceable to customer requirements (“Are we
building the right product?”)
Organizing for Software Testing
• Testing should aim at "breaking" the software
• Common misconceptions
• The developer of software should do no testing at all
• The software should be given to a secret team of testers who will test
it unmercifully
• The testers get involved with the project only when the testing steps
are about to begin
• Reality: Independent test group
• Removes the inherent problems associated with letting the builder test
the software that has been built
• Removes the conflict of interest that may otherwise be present
• Works closely with the software developer during analysis and design
to ensure that thorough testing occurs
A Strategy for Testing Conventional
Software
[Figure: the testing-strategy “spiral.” Unit testing pairs with code,
integration testing with design, validation testing with requirements,
and system testing with system engineering; test scope runs from narrow
(unit) to broad (system), while the engineering artifacts run from
abstract (requirements) to concrete (code).]
Levels of Testing for Conventional
Software
Unit testing
• Concentrates on each component/function of the software
as implemented in the source code.
• Exercises specific paths in a component's control structure
to ensure complete coverage and maximum error detection
• Components are then assembled and integrated
Integration testing
• Focuses on the design and construction of the software
architecture
• Focuses on inputs and outputs, and how well the
components fit together and work together
Levels of Testing for Conventional
Software (Cont’d)
Validation testing
• Provides final assurance that the constructed software meets all
functional, behavioral, and performance requirements
System testing
• Verifies that all system elements (software, hardware, people,
databases) mesh properly and that overall system function and
performance are achieved
• The software and other system elements are tested as a whole
When is Testing Complete?
• There is no definitive answer to this question
• Every time a user executes the software, the
program is being tested
• Sadly, testing usually stops when a project is
running out of time, money, or both
• One approach is to divide the test results into
various severity levels
• Then consider testing to be complete when certain
levels of errors no longer occur or have been
repaired or eliminated
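As a minimal sketch of this severity-level approach, the snippet below
buckets open defects by severity and declares testing complete only when
the serious buckets are empty; the severity scale and the completion rule
are invented for illustration, not a standard.

```python
# Sketch of a severity-based completion check. The severity levels and
# the rule (no critical or major defects open) are assumptions.
from collections import Counter

open_defects = ["minor", "cosmetic", "minor"]   # hypothetical defect log

def testing_complete(defects, blocking=("critical", "major")):
    counts = Counter(defects)
    return all(counts[level] == 0 for level in blocking)

print(testing_complete(open_defects))   # True: only low-severity defects remain
```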
Test Strategies for
Conventional Software
Software Testing Definition
• Testing is the process of exercising a program with the specific
intent of finding errors prior to delivery to the end user
Unit Testing
• Focuses testing on the function or software module
• Concentrates on the internal processing logic and data
structures
• Is simplified when a module is designed with high
cohesion
• Reduces the number of test cases
• Allows errors to be more easily predicted and uncovered
• Concentrates on critical modules and those with high
cyclomatic complexity when testing resources are
limited
Targets for Unit Test Cases
Module interface
• Ensure that information flows properly into and out of the module
Local data structures
• Ensure that data stored temporarily maintains its integrity during all
steps in an algorithm execution
Boundary conditions
• Ensure that the module operates properly at boundary values
established to limit or restrict processing
Independent paths (basis paths)
• Paths are exercised to ensure that all statements in a module have
been executed at least once
Error handling paths
• Ensure that the algorithms respond correctly to specific error
conditions
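A minimal pytest sketch of three of these targets: the module interface,
a boundary condition, and an error-handling path. The withdraw() function
and its rules are hypothetical, invented purely for illustration.

```python
import pytest

def withdraw(balance: float, amount: float) -> float:
    """Return the new balance; reject invalid or overdrawing amounts."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

def test_interface_normal_flow():
    # Module interface: information flows into and out of the module
    assert withdraw(100.0, 30.0) == 70.0

def test_boundary_exact_balance():
    # Boundary condition: withdrawing exactly the balance is allowed
    assert withdraw(50.0, 50.0) == 0.0

def test_error_handling_overdraw():
    # Error-handling path: overdrawing raises a specific error
    with pytest.raises(ValueError):
        withdraw(10.0, 20.0)
```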
Common Computational Errors in
Execution Paths
• Misunderstood or incorrect arithmetic
precedence
• Mixed mode operations (e.g., int, float, char)
• Incorrect initialization of values
• Precision inaccuracy and round-off errors
• Incorrect symbolic representation of an
expression (int vs. float)
Other Errors to Uncover
• Comparison of different data types
• Incorrect logical operators or precedence
• Expectation of equality when precision error makes equality unlikely
(using == with float types)
• Incorrect comparison of variables
• Improper or nonexistent loop termination
• Failure to exit when divergent iteration is encountered
• Improperly modified loop variables
• Boundary value violations
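The float-equality pitfall noted above takes only a few lines to
demonstrate; math.isclose is the standard-library way to compare floats
within a tolerance instead of using ==.

```python
import math

total = 0.1 + 0.2                 # binary floats cannot represent 0.1 exactly
print(total == 0.3)               # False: precision error defeats ==
print(math.isclose(total, 0.3))   # True: compare within a tolerance instead
```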
Problems to Uncover in Error Handling
• Error description is unintelligible
• Error noted does not correspond to the error actually encountered
• Error condition causes system intervention prior to error handling
• Exception-condition processing is incorrect
• Error description does not provide enough information to help
locate the cause of the error
Non-incremental Integration
Testing
• Commonly called the “Big Bang” approach
• All components are combined in advance
• The entire program is tested as a whole
• Chaos results
• Many seemingly unrelated errors are encountered
• Correction is difficult because isolating causes is complicated
• Once a set of errors is corrected, more errors occur,
and testing appears to enter an endless loop
Incremental Integration Testing
• Three kinds
• Top-down integration
• Bottom-up integration
• Sandwich integration
• The program is constructed and tested in small
increments
• Errors are easier to isolate and correct
• Interfaces are more likely to be tested completely
• A systematic test approach is applied
Top-down Integration
• Modules are integrated by moving downward through the control
hierarchy, beginning with the main module
• Subordinate modules are incorporated in either a depth-first or breadth-
first fashion
• Depth-first: all modules on a major control path are integrated
• Breadth-first: all modules directly subordinate at each level are integrated
• Advantages
• This approach verifies major control or decision points early in the
test process
• Disadvantages
• Stubs need to be created to substitute for modules that have not been
built or tested yet; this code is later discarded
• Because stubs are used to replace lower level modules, no significant
data flow can occur until much later in the integration/testing process
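As a minimal sketch of the stub idea, the snippet below substitutes a
canned tax-rate reply for an unbuilt lower-level module so the upper-level
control logic can be verified early; every name and value is hypothetical.

```python
def tax_rate_stub(region: str) -> float:
    """Stub for the unbuilt lower-level tax module."""
    return 0.25   # canned reply; this throwaway code is discarded later

def compute_total(price: float, region: str, tax_lookup=tax_rate_stub) -> float:
    """Upper-level module under test; the tax dependency is injected."""
    return price * (1.0 + tax_lookup(region))

# Major control logic is exercised even though the real module is missing
assert compute_total(100.0, "EU") == 125.0
```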
Bottom-up Integration
• Integration and testing start with the most atomic modules
in the control hierarchy
Advantages
• This approach verifies low-level data processing early in
the testing process
• Need for stubs is eliminated
Disadvantages
• Driver modules need to be built to test the lower-level
modules; this code is later discarded or expanded into a
full-featured version
• Drivers inherently do not contain the complete algorithms
that will eventually use the services of the lower-level
modules; consequently, testing may be incomplete or
more testing may be needed later when the upper level
modules are available
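Conversely, a driver is throwaway code that exercises a finished
low-level module before its real callers exist. A minimal sketch, with a
hypothetical parse_record module and invented sample data:

```python
def parse_record(line: str) -> dict:
    """Low-level (atomic) module that has already been built and tested."""
    name, value = line.split(",")
    return {"name": name.strip(), "value": int(value)}

def driver() -> None:
    """Throwaway driver: feeds inputs to parse_record until real callers exist."""
    for line in ["alpha, 1", "beta, 2"]:
        record = parse_record(line)
        assert set(record) == {"name", "value"}   # low-level data verified early
    print("driver: all parse_record checks passed")

if __name__ == "__main__":
    driver()
```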
Sandwich Integration
• Consists of a combination of both top-down and bottom-up integration
• Occurs at both the highest-level and the lowest-level modules
• Proceeds using functional groups of modules, with each group completed
before the next
• High and low-level modules are grouped based on the control and
data processing they provide for a specific program feature
• Integration within the group progresses in alternating steps between
the high and low level modules of the group
• When integration for a certain functional group is complete,
integration and testing moves on to the next group
• Reaps the advantages of both types of integration while minimizing the
need for drivers and stubs
• Requires a disciplined approach so that integration doesn’t tend towards
the “big bang” scenario
Regression Testing
• Each new addition or change to baselined software may cause problems
with functions that previously worked flawlessly
• Regression testing re-executes a small subset of tests that have already
been conducted
• Ensures that changes have not propagated unintended side effects
• Helps to ensure that changes do not introduce unintended behavior or
additional errors
• May be done manually or through the use of automated
capture/playback tools
• Regression test suite contains three different classes of test cases
• A representative sample of tests that will exercise all software
functions
• Additional tests that focus on software functions that are likely to be
affected by the change
• Tests that focus on the actual software components that have been
changed
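One way to realize these three classes is with custom pytest markers, so
that a small, change-focused subset can be re-run after every
modification. The marker names below are invented, not pytest built-ins,
and would need to be registered in pytest.ini.

```python
import pytest

@pytest.mark.representative        # class 1: broad sample of all functions
def test_login_basic():
    assert True                    # placeholder for a real check

@pytest.mark.affected              # class 2: functions likely hit by the change
def test_session_timeout():
    assert True

@pytest.mark.changed_component     # class 3: components actually modified
def test_new_password_rules():
    assert True

# Re-run only the change-focused subset, for example:
#   pytest -m "affected or changed_component"
```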
Smoke Testing
• Smoke testing, also known as “Build Verification Testing” or
“Build Acceptance Testing,” is typically performed on each new
build, early in the testing process, to ensure that the most
critical functions of the application are working correctly.
• It is used to quickly identify and fix any major
issues with the software before more detailed
testing is performed.
• The goal of smoke testing is to determine whether
the build is stable enough to proceed with further
testing.
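A smoke suite is typically a handful of fast, critical-path checks run
against every build. A minimal sketch, using an in-memory SQLite database
and standard-library modules as stand-ins for the real application:

```python
import sqlite3

def test_app_imports():
    # The build is broken outright if core modules cannot even be imported
    import json, html              # stand-ins for the application's own packages
    assert json and html

def test_database_reachable():
    # Critical path: open a connection and run a trivial query
    conn = sqlite3.connect(":memory:")   # stand-in for the real database
    assert conn.execute("SELECT 1").fetchone() == (1,)
    conn.close()
```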
Goal of Smoke Testing
• Expose “show-stopper” errors that have the highest likelihood of
throwing the project behind schedule
• Minimize integration risk and simplify error diagnosis and correction
• Make progress easier to assess, since the build is exercised frequently
Validation Testing
Background
• Validation testing follows integration testing
• Focuses on user-visible actions and user-recognizable output from the system
• Designed to ensure that
• All functional requirements are satisfied
• All behavioral characteristics are achieved
• All performance requirements are attained
• Documentation is correct
• Usability and other requirements are met (e.g., transportability, compatibility, error recovery,
maintainability)
• After each validation test
• The function or performance characteristic conforms to specification and is accepted
• A deviation from specification is uncovered and a deficiency list is created
• A configuration review or audit ensures that all elements of the software configuration have been
properly developed, cataloged, and have the necessary detail for entering the support phase of the
software life cycle
Alpha and Beta Testing
Alpha testing
• Conducted at the developer’s site by end users
• Software is used in a natural setting with developers
watching intently
• Testing is conducted in a controlled environment
Beta testing
• Conducted at end-user sites
• Developer is generally not present
• It serves as a live application of the software in an
environment that cannot be controlled by the developer
• The end-user records all problems that are encountered
and reports these to the developers at regular intervals
• After beta testing is complete, software engineers make
software modifications and prepare for release of the
software product to the entire customer base
System Testing
Different Types of System Testing
• Recovery testing
• Tests for recovery from system faults
• Forces the software to fail in a variety of ways and verifies that
recovery is properly performed
• Tests reinitialization, checkpointing mechanisms, data recovery,
and restart for correctness
• Security testing
• Verifies that protection mechanisms built into a system will, in fact,
protect it from improper access
• Stress testing
• Executes a system in a manner that demands resources in abnormal
quantity, frequency, or volume
• Performance testing
• Tests the run-time performance of software within the context of an
integrated system
• Often coupled with stress testing and usually requires both hardware
and software instrumentation
• Can uncover situations that lead to degradation and possible system
failure
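A minimal sketch of a combined stress/performance check using only the
standard library: drive the unit under test at abnormal volume and fail
if a time budget is exceeded. The workload and the budget are invented.

```python
import time

def process(batch):                # stand-in for the system under test
    return sorted(batch)

def test_large_batch_within_budget():
    batch = list(range(1_000_000, 0, -1))    # abnormal volume: one million items
    start = time.perf_counter()
    process(batch)
    elapsed = time.perf_counter() - start
    assert elapsed < 2.0           # invented budget; tune per environment
```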
The Art of Debugging
Debugging Process
• Debugging occurs as a consequence of successful testing
• It is still very much an art rather than a science
• Good debugging ability may be an innate human trait
• Large variances in debugging ability exist
• The debugging process begins with the execution of a test case
• Results are assessed and the difference between expected and
actual performance is encountered
• This difference is a symptom of an underlying cause that lies
hidden
• The debugging process attempts to match symptom with cause,
thereby leading to error correction
Why is Debugging so Difficult?
• The symptom and the cause may be geographically remote from each other
• The symptom may disappear (temporarily) when another error is corrected
• The symptom may actually be caused by nonerrors (e.g., round-off
inaccuracies)
• The symptom may be caused by human error that is not easily traced
• The symptom may be a result of timing problems rather than processing
problems
• It may be difficult to accurately reproduce the input conditions (e.g.,
in asynchronous real-time applications)
• The symptom may be intermittent, especially in embedded systems
• The symptom may be due to causes distributed across a number of tasks
running on different processors
Debugging Strategies
• Objective of debugging is to find and correct the
cause of a software error
• Bugs are found by a combination of systematic
evaluation, intuition, and luck
• Debugging methods and tools are not a substitute for
careful evaluation based on a complete design model
and clear source code
• There are three main debugging strategies
• Brute force
• Backtracking
• Cause elimination
Strategy #1: Brute Force
• Most commonly used and least efficient method
• Used when all else fails
• Involves the use of memory dumps, run-time traces,
and output statements
• Leads many times to wasted effort and time
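In practice, the output statements of brute force are better written
with the logging module than with bare prints, so the trace can be
silenced once the bug is found. A minimal sketch:

```python
import logging

logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
log = logging.getLogger(__name__)

def average(values):
    log.debug("average() called with %r", values)       # run-time trace
    total = sum(values)
    log.debug("total=%s count=%s", total, len(values))  # spot len == 0 early
    return total / len(values)

print(average([2, 4, 6]))
```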
Strategy #2: Backtracking
• Can be used successfully in small programs
• The method starts at the location where a symptom has been
uncovered
• The source code is then traced backward (manually) until the
location of the cause is found
• In large programs, the number of potential backward paths
may become unmanageably large
Strategy #3: Cause Elimination
• Involves the use of induction or deduction and introduces the concept
of binary partitioning
• Induction (specific to general): reason from the data gathered about
specific error occurrences to a hypothesis about the general cause
• Deduction (general to specific): begin from the set of all possible
causes and eliminate them until a specific conclusion remains
• Data related to the error occurrence are organized to isolate potential
causes
• A cause hypothesis is devised, and the aforementioned data are used
to prove or disprove the hypothesis
• Alternatively, a list of all possible causes is developed, and tests are
conducted to eliminate each cause
• If initial tests indicate that a particular cause hypothesis shows
promise, data are refined in an attempt to isolate the bug
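Binary partitioning can be mechanized: repeatedly split the suspect
input (or change history) in half and keep whichever half still
reproduces the failure; git bisect applies the same idea to commit
history. A minimal sketch with a hypothetical failure oracle:

```python
def fails(data):
    return 13 in data              # invented bug: element 13 crashes the system

def minimize(data):
    """Shrink a failing input to a small sublist that still triggers the bug."""
    while len(data) > 1:
        mid = len(data) // 2
        left, right = data[:mid], data[mid:]
        if fails(left):
            data = left            # cause isolated to the left half
        elif fails(right):
            data = right           # ... or to the right half
        else:
            break                  # failure needs both halves; stop splitting
    return data

print(minimize(list(range(100))))  # -> [13]
```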
Three Questions to ask Before
Correcting the Error
• Is the cause of the bug reproduced in another part of the program?
• Similar errors may be occurring in other parts of the program
• What next bug might be introduced by the fix that I’m about to make?
• The source code (and even the design) should be studied to assess the
coupling of logic and data structures related to the fix
• What could we have done to prevent this bug in the first place?
• This is the first step toward software quality assurance
• By correcting the process as well as the product, the bug will be
removed from the current program and may be eliminated from all
future programs