Software Testing Strategies
TESTING STRATEGIES
➢ Software testing strategies integrate the design of test cases into the different phases of software development
➢ The strategy incorporates test planning, test case design, test execution, and test result collection and evaluation
Verification and Validation
➢ Software testing is part of a broader group of activities called verification and validation that are involved in software quality assurance
➢ Verification (Are the algorithms coded correctly?)
o The set of activities that ensure that software correctly implements a specific function or algorithm
➢ Validation (Does it meet user requirements?)
o The set of activities that ensure that the software that has been built is traceable to customer requirements
Levels of Testing
UNIT TESTING:
➢ Unit testing focuses on testing each individual module separately
➢ This concentrates on the internal processing logic and data structures
➢ Unit testing is simplified when a module is designed with high cohesion
o Reduces the number of test cases
o Allows errors to be more easily predicted and uncovered
Unit test considerations
➢ Module interface
▪ This is tested to ensure that information flows properly into and out of the module
➢ Local data structures
▪ This ensures that data stored temporarily maintains its integrity during all steps in an algorithm's execution
➢ Boundary conditions
▪ This ensures that the module operates properly at boundary values
➢ Independent paths
▪ Paths are exercised to ensure that all statements in a module have been executed at least once
➢ Error handling paths
▪ This ensures that the algorithms respond correctly to specific error conditions
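These considerations can be illustrated with a small sketch. Assuming a hypothetical module containing a single function `safe_divide` (not from the original text), a unit test can exercise its interface, its boundary values, and its error-handling path:

```python
import unittest

def safe_divide(a, b):
    """Hypothetical module under test: divide a by b."""
    if b == 0:
        raise ValueError("division by zero")  # error-handling path
    return a / b

class TestSafeDivide(unittest.TestCase):
    def test_interface(self):
        # Module interface: valid inputs flow in, correct output flows out
        self.assertEqual(safe_divide(10, 2), 5)

    def test_boundary_conditions(self):
        # Boundary values: zero numerator, negative operands
        self.assertEqual(safe_divide(0, 5), 0)
        self.assertEqual(safe_divide(-9, 3), -3)

    def test_error_handling(self):
        # Error-handling path: division by zero must raise, not crash silently
        with self.assertRaises(ValueError):
            safe_divide(1, 0)

if __name__ == "__main__":
    unittest.main(exit=False, argv=["unit-test-demo"])
```

Because `safe_divide` has high cohesion (it does exactly one thing), the full set of test cases stays small, as the notes above suggest.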
Common Errors in Execution Paths
➢ Misunderstood or incorrect arithmetic precedence
➢ Mixed mode operations (e.g., int, float, char)
➢ Incorrect initialization of values
➢ Precision inaccuracy and round-off errors
➢ Incorrect symbolic representation of an expression (int vs. float)
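Several of these errors are easy to demonstrate. A minimal Python sketch of precision inaccuracy, round-off error, and an int-vs-float pitfall:

```python
import math

# Round-off error: 0.1 and 0.2 have no exact binary representation,
# so their sum is not exactly 0.3
total = 0.1 + 0.2
print(total)          # 0.30000000000000004
print(total == 0.3)   # False

# A safer comparison uses a tolerance instead of strict equality
print(math.isclose(total, 0.3))  # True

# Mixed int/float surprise: true division vs. floor division
print(7 / 2)   # 3.5
print(7 // 2)  # 3
```

Tests that rely on exact floating-point equality are a classic source of the "precision inaccuracy" errors listed above.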
INTEGRATION TESTING
Integration testing is defined as a systematic technique for constructing the software architecture while also conducting tests to uncover errors associated with interfaces
➢ The objective of this testing is to take unit-tested modules and build a program structure based on the prescribed design
➢ Two approaches are used:
1. Non-incremental Integration Testing or Big Bang Approach
2. Incremental Integration Testing
1. Big Bang Approach
➢ In this approach all the components are combined in advance and the entire program is tested as a whole
➢ This approach is usually inappropriate and results in chaos
➢ Many seemingly unrelated errors are encountered in this testing
➢ Correction is difficult because isolation of causes is complicated
➢ Once one set of errors is corrected, more errors occur, and testing appears to enter an endless loop
2. Incremental Integration Testing
➢ Three kinds: top-down integration, bottom-up integration, and sandwich integration
➢ In this approach the program is constructed and tested in small increments
➢ Errors are easier to isolate and correct
➢ Interfaces are more likely to be tested completely
➢ A systematic test approach is applied
Top-down Integration
➢ Modules are integrated by moving downward through the control hierarchy, beginning with the main module
➢ Subordinate modules are incorporated in either a depth-first or breadth-first fashion
➢ Depth first: all modules on a major control path are integrated
➢ Breadth first: all modules directly subordinate at each level are integrated
Steps followed in top-down integration:
1. The main control module is used as a test driver and stubs are substituted for all components directly subordinate to the main control module.
2. Depending on the integration approach selected (i.e., depth or breadth first), subordinate stubs are replaced one at a time with actual components.
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced with the real component.
5. Regression testing may be conducted to ensure that new errors have not been introduced.
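Step 1 can be sketched as follows. The names `MainControl` and `SubordinateStub` are hypothetical, not from the original text; the point is that a stub stands in for an unbuilt subordinate so the top-level module can be tested first:

```python
class SubordinateStub:
    """Stub: stands in for a subordinate component that is not built yet.
    It returns a fixed, known answer so the caller can be tested."""
    def process(self, data):
        return "stubbed-result"

class MainControl:
    """Top-level control module under test; its subordinate is injected,
    so a stub can later be swapped for the real component (step 2)."""
    def __init__(self, subordinate):
        self.subordinate = subordinate

    def run(self, data):
        # Delegates real work downward; here the stub answers instead
        return "main handled: " + self.subordinate.process(data)

# Test the main control module against the stub (steps 1 and 3)
main = MainControl(SubordinateStub())
result = main.run("payload")
print(result)  # main handled: stubbed-result
```

When the real subordinate is ready, it replaces the stub (step 4) and the same tests are re-run, together with regression tests (step 5).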
➢ Depth-first integration integrates the left-side components M1, M2, and M5 first. Next, M8 or (if necessary for proper functioning of M2) M6 would be integrated. Then, the central and right-hand control paths are built.
➢ In breadth-first integration the components M2, M3, and M4 would be integrated first, then the next control level: M5, M6, and so on.
Bottom-up Integration
➢ Integration and testing starts with the most atomic modules in the control hierarchy
Steps for bottom-up integration:
1. Low-level components are combined into clusters (sometimes called builds) that perform a specific software subfunction.
2. A driver is written to coordinate test case input and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined moving upward in the program structure.
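The driver of step 2 can be sketched like this, assuming a hypothetical cluster of two low-level functions (`parse_record` and `summarize` are illustrative names, not from the original text). The driver feeds inputs and checks outputs until real higher-level callers exist:

```python
# Hypothetical low-level cluster: two atomic functions forming one build
def parse_record(line):
    """Parse 'name, value' into a (name, int) pair."""
    name, value = line.split(",")
    return name.strip(), int(value)

def summarize(records):
    """Sum the values of all parsed records."""
    return sum(value for _, value in records)

def driver():
    """Test driver: coordinates test case input and output for the cluster.
    It is discarded once real callers are integrated (step 4)."""
    lines = ["a, 1", "b, 2", "c, 3"]
    records = [parse_record(line) for line in lines]
    assert records[0] == ("a", 1)
    assert summarize(records) == 6
    return "cluster OK"

print(driver())  # cluster OK
```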
Sandwich Integration:
➢ This consists of a combination of both top-down and bottom-up integration
➢ Testing occurs both at the highest-level modules and at the lowest-level modules
➢ Proceeds using functional groups of modules, with each group completed before the next
o High- and low-level modules are grouped based on the control and data processing they provide for a specific program feature
o Integration within the group progresses in alternating steps between the high- and low-level modules of the group
o When integration for a certain functional group is complete, integration and testing moves on to the next group
Regression Testing
➢ Any addition or change to the software may cause problems with the
functions that previously worked flawlessly
➢ Regression testing re-executes a small subset of tests that have already
been conducted
➢ This helps to ensure that changes have not propagated unexpected
errors in the program
➢ This testing can be done manually or through automated tools
➢ A regression test suite contains three different classes of test cases:
– A sample of tests that will exercise all software functions
– Additional tests that focus on software functions that are likely to be affected by the change
– Tests that focus on the actual software components that have been changed
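One simple way to organize such a suite is to label every test case with its class and re-execute only the relevant subset after a change. The sketch below uses hypothetical test names and a plain dictionary; real projects usually rely on a test runner's tagging or marker features instead:

```python
# Hypothetical registry mapping each test to its regression class
REGRESSION_SUITE = {
    "test_login":  {"class": "all_functions"},       # broad sample
    "test_report": {"class": "likely_affected"},     # near the change
    "test_export": {"class": "changed_component"},   # the change itself
}

def select_tests(classes):
    """Return the names of tests belonging to the requested classes."""
    return sorted(name for name, meta in REGRESSION_SUITE.items()
                  if meta["class"] in classes)

# After modifying the export component, run the focused subsets first
subset = select_tests({"changed_component", "likely_affected"})
print(subset)  # ['test_export', 'test_report']
```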
Smoke Testing
➢ Smoke testing is designed as a pacing mechanism for time-critical projects
➢ This testing allows the software team to assess its project on a frequent basis
➢ This includes the following activities:
– The software is compiled and linked into a build
– A series of breadth tests is designed to expose errors that will keep the build from properly performing its function
– The build is integrated with other builds and the entire product is smoke tested daily
• Daily testing gives managers and practitioners a realistic assessment of the progress of the integration testing
– After a smoke test is completed, detailed test scripts are executed
Benefits of Smoke Testing
➢ Integration risk is minimized
➢ The quality of the end-product is improved
➢ Error diagnosis and correction are simplified
➢ Progress is easier to assess
SYSTEM TESTING
Different types of system testing
Recovery testing
➢ This tests for recovery from system faults
➢ Recovery testing forces the software to fail in a variety of ways and verifies that recovery is properly performed
➢ Tests reinitialization, checkpointing mechanisms, data recovery, and restart for correctness
Security testing
➢ This verifies whether the protection mechanisms built into the system will protect it from improper access
➢ During security testing, the tester plays the role of the hacker: attempting to acquire passwords, purposely causing system errors, and browsing through insecure data, hoping to find the key to system entry.
Stress testing
➢ Stress tests are designed to confront programs with abnormal situations
➢ This executes a system in a manner that demands resources in abnormal quantity, frequency, or volume
Performance testing
➢ Tests the run-time performance of software within the context of an integrated system
➢ Performance testing is often combined with stress testing and usually requires both hardware and software instrumentation
➢ This testing can uncover situations that lead to degradation and possible system failure
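Software instrumentation of the kind mentioned above can be as simple as timing a critical operation and asserting against a budget. In this sketch, `critical_operation` and the one-second threshold are arbitrary illustrative choices, not values from the original text:

```python
import time

def critical_operation(n):
    """Stand-in for the operation whose run-time performance matters."""
    return sum(i * i for i in range(n))

# Instrument the operation with a high-resolution timer
start = time.perf_counter()
result = critical_operation(100_000)
elapsed = time.perf_counter() - start

print("result =", result, "elapsed =", round(elapsed, 6), "s")

# A performance test compares the measurement to a budget; the
# threshold here is an example, not a recommended value.
BUDGET_SECONDS = 1.0
assert elapsed < BUDGET_SECONDS, "performance degradation detected"
```

Repeating such measurements under stress-level load (abnormal quantity, frequency, or volume) combines performance testing with stress testing, as the notes describe.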
Deployment Testing
➢ Deployment testing is also called configuration testing
➢ Software should be able to execute on a variety of platforms and under more than one operating system environment. Deployment testing exercises the software in each environment in which it is to operate.
➢ Deployment testing examines all installation procedures and specialized installation software that will be used by customers, and all documentation that will be used to introduce the software to end users.
DEBUGGING
Debugging Process
➢ Debugging is the process of removing defects
➢ It occurs as a consequence of successful testing
➢ The debugging process begins with the execution of a test case; the results are assessed, and a difference between expected and actual performance is encountered
➢ The debugging process matches the symptom with the cause and leads to error correction
➢ The debugging process will usually have one of two outcomes:
(1) The cause will be found and corrected
(2) The cause will not be found. In this case, the debugger may suspect a cause, design a test case to help validate that suspicion, and work toward error correction in an iterative fashion.
Debugging Strategies
➢ The main objective of debugging is to find the cause of the error and correct it.
➢ Bugs can be found by systematic evaluation and intuition
➢ There are three main debugging strategies: brute force, backtracking, and cause elimination
Brute Force
➢ Brute force is the most commonly used and least efficient method; it is used when all else fails
➢ This approach involves the use of memory dumps, run-time traces, and output statements
➢ This leads to wasted effort and time
Backtracking
➢ This can be used successfully in small programs
➢ The method starts at the location where a symptom has been uncovered
➢ The source code is then traced backward until the location of the cause is found
➢ In large programs, the number of potential backward paths may become unmanageably large
Cause Elimination
➢ This approach involves the use of induction or deduction and introduces the concept of binary partitioning
– Induction (specific to general): prove that a specific starting value is true; then prove the general case is true
– Deduction (general to specific): show that a specific conclusion follows from a set of general premises
➢ Data related to the error occurrence are organized to isolate potential causes
➢ A cause hypothesis is defined, and the hypothesis is proved or disproved using the data collected
➢ Alternatively, a list of all possible causes is developed, and tests are conducted to eliminate each cause
➢ If initial tests indicate that a particular cause hypothesis shows promise, data are refined in an attempt to isolate the bug
Correcting the Error
➢ Three questions to ask before correcting the error:
• Is the cause of the bug reproduced in another part of the program?
– Similar errors may be occurring in other parts of the program
• What next bug might be introduced by the fix that I'm about to make?
• What could we have done to prevent this bug in the first place?