CS502PC SE Unit 4
Testing Strategies: A strategic approach to software testing, test strategies for conventional software, Black-Box and White-Box testing, Validation testing, System testing, the art of Debugging.
Product Metrics: Software Quality, Metrics for the Analysis Model, Metrics for the Design Model, Metrics for source code, Metrics for testing, Metrics for maintenance.
Metrics for Process and Products: Software Measurement, Metrics for software quality.
Testing Strategies
Software is tested to uncover errors introduced during design and construction. Testing often accounts for more project effort than any other software engineering activity, typically around 40% of total project cost, so it must be done carefully using a testing strategy. The strategy is developed by the project manager, software engineers, and testing specialists. Testing is the process of executing a program with the intention of finding errors.
A testing strategy provides a road map that describes the steps to be conducted as part of testing. It should incorporate test planning, test case design, test execution, and the collection and evaluation of the resultant data.
Validation refers to a different set of activities that ensure the software is traceable to customer requirements. Verification and validation (V&V) encompass a wide array of Software Quality Assurance activities.
A Strategic Approach to Software Testing
Testing is a set of activities that can be planned in advance and conducted systematically. A testing strategy should have the following characteristics:
-- usage of Formal Technical Reviews (FTR)
-- begins at the component level and works outward to cover the entire system
-- different techniques are appropriate at different points in time
-- conducted by the developer and, for large projects, an independent test group
-- should include debugging
Software testing is one element of verification and validation.
Verification refers to the set of activities that ensure that software correctly implements a specific function. (Are we building the product right?)
Validation refers to the set of activities that ensure that the software built is traceable to customer requirements. (Are we building the right product?)
Testing Strategy
Low-level tests verify small code segments; high-level tests validate major system functions against customer requirements.
Test Strategies for Conventional Software:
Testing strategies for conventional software can be viewed as a spiral consisting of four levels of testing:
1) Unit Testing
2) Integration Testing
3) Validation Testing
4) System Testing
Unit Testing begins at the vortex of the spiral and concentrates on each unit of the software as implemented in source code. It uses testing techniques that exercise specific paths in a component and its control structure to ensure complete coverage and maximum error detection, focusing on the internal processing logic and data structures. Test cases should be designed to uncover errors. Boundary testing should also be done, as s/w usually fails at its boundaries; a sketch of such a boundary test appears below. Unit tests can be designed before coding begins or after source code is generated.
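Below is a minimal sketch of a unit test with boundary checks, using Python's built-in unittest module. The clip_percentage function is a hypothetical example, not part of the course material.

import unittest

def clip_percentage(value):
    """Clamp a number into the valid percentage range 0..100."""
    if value < 0:
        return 0
    if value > 100:
        return 100
    return value

class ClipPercentageTest(unittest.TestCase):
    def test_typical_value(self):
        self.assertEqual(clip_percentage(50), 50)

    def test_boundaries(self):
        # Software usually fails at its boundaries, so test at and around them.
        self.assertEqual(clip_percentage(0), 0)      # lower boundary
        self.assertEqual(clip_percentage(100), 100)  # upper boundary
        self.assertEqual(clip_percentage(-1), 0)     # just below lower
        self.assertEqual(clip_percentage(101), 100)  # just above upper

if __name__ == "__main__":
    unittest.main()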
Integration Testing: Here the focus is on the design and construction of the software architecture. It addresses the issues associated with the dual problems of verification and program construction by testing inputs and outputs. Even though modules may function properly on their own, problems can arise because of interfacing; this technique uncovers errors associated with interfacing. We can use top-down integration, wherein modules are integrated by moving downward through the control hierarchy beginning with the main control module, with lower-level modules temporarily replaced by stubs. The other strategy is bottom-up, which begins construction and testing with atomic modules that are combined into clusters as we move up the hierarchy. A combined approach called the sandwich strategy can also be used: top-down for higher-level modules and bottom-up for lower-level modules. A sketch of top-down integration with a stub appears below.
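The following is a minimal sketch of top-down integration, assuming a hypothetical report module: the main control module is integrated first, and the lower-level database module it depends on is replaced by a stub that returns canned data until the real module is ready.

def fetch_records_stub():
    """Stub standing in for the not-yet-integrated database module."""
    return [("alice", 90), ("bob", 75)]

def generate_report(fetch_records=fetch_records_stub):
    """Main control module: formats whatever the lower-level module returns."""
    rows = fetch_records()
    return "\n".join(f"{name}: {score}" for name, score in rows)

# Top-down test: exercise the main module against the stub; later, the real
# fetch_records is passed in and the same test is re-run.
assert generate_report() == "alice: 90\nbob: 75"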
System Testing: In system testing, s/w and other system elements are tested as a whole. This is the last high-order testing step, which falls in the context of computer system engineering. Software is combined with other system elements like h/w, people, and databases, and the overall functioning is checked by conducting a series of tests that fully exercise the computer-based system. The types of tests are:
1. Recovery testing: Systems must recover from faults and resume processing within a
prespecified time.
It forces the system to fail in a variety of ways and verifies that recovery is properly
performed. Here the Mean Time To Repair (MTTR) is evaluated to see if it is within
acceptable limits.
2. Security Testing: This verifies that protection mechanisms built into a system will protect it from improper penetration. The tester plays the role of a hacker. In reality, given enough resources and time, it is possible to ultimately penetrate any system; the role of the system designer is to make the cost of penetration greater than the value of the information that would be obtained.
3. Stress testing: It executes a system in a manner that demands resources in abnormal
quantity, frequency or volume and tests the robustness of the system.
4. Performance Testing: This is designed to test the run-time performance of s/w within the context of an integrated system. It requires both h/w and s/w instrumentation; a sketch of simple timing instrumentation follows.
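Below is a minimal sketch of run-time performance instrumentation in Python, assuming a hypothetical sort_records function and an arbitrary 0.5-second budget.

import time

def sort_records(records):
    return sorted(records)

records = list(range(100_000, 0, -1))

start = time.perf_counter()
sort_records(records)
elapsed = time.perf_counter() - start

# Performance test: fail if the operation exceeds its time budget.
assert elapsed < 0.5, f"sort took {elapsed:.3f}s, exceeding the 0.5s budget"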
Testing Tactics:
The goal of testing is to find errors, and a good test is one that has a high probability of finding an error. A good test is not redundant and should be neither too simple nor too complex. There are two major categories of software testing:
Black-box testing: examines some fundamental aspect of a system, testing whether each function of the product is fully operational.
White-box testing: examines the internal operations of a system and its procedural detail.
1) Graph-based testing method: Testing begins by creating a graph of important objects and their relationships, and then devising a series of tests that will cover the graph so that each object and relationship is exercised and errors are uncovered.
(Figure: graph notation showing objects as nodes connected by links that represent their relationships)
2) Equivalence partitioning: This divides the input domain of a program into classes of data from which test cases can be derived. Test cases are defined so that each uncovers a whole class of errors, reducing the number of test cases needed and hence the cost of testing. The method is based on equivalence classes, each of which represents a set of valid or invalid states for an input condition.
Example: if the valid input consists of 1 to 10, the classes are n < 1, 1 <= n <= 10, and n > 10. Choose one valid class with a value within the allowed range and two invalid classes whose values are greater than the maximum and smaller than the minimum.
Example: if 0.0 <= x <= 1.0, then the test cases are (0.0, 1.0) for valid input and (-0.1, 1.1) for invalid input. A sketch deriving such cases appears below.
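Here is a minimal sketch of equivalence partitioning for the range example above (valid input: 1 <= n <= 10). One representative value is drawn from the valid class and one from each invalid class; accepts_count is a hypothetical function under test.

def accepts_count(n):
    """Hypothetical validator for the input domain 1..10."""
    return 1 <= n <= 10

# One test case per equivalence class instead of one per possible input.
test_cases = [
    (5, True),    # valid class:   1 <= n <= 10
    (0, False),   # invalid class: n < 1
    (11, False),  # invalid class: n > 10
]

for value, expected in test_cases:
    assert accepts_count(value) == expected, f"failed for n={value}"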
Control Structure Testing:
This broadens testing coverage and improves the quality of testing. It uses the following methods:
a) Condition testing: Exercises the logical conditions contained in a program module, focusing on testing each condition in the program to ensure that it does not contain errors.
A simple condition has the form E1 <relational operator> E2; a compound condition has the form simple condition <Boolean operator> simple condition.
Types of errors include operator errors, variable errors, arithmetic expression errors, etc. A sketch covering a compound condition appears below.
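Below is a minimal sketch of condition testing for the compound condition (a < b) and (c > d): each simple condition is driven to both true and false at least once. The eligible function is a hypothetical example.

def eligible(a, b, c, d):
    return (a < b) and (c > d)

# Four cases covering every true/false combination of the two simple
# conditions, so an incorrect relational or Boolean operator is exposed.
assert eligible(1, 2, 5, 3) is True    # T and T
assert eligible(1, 2, 3, 5) is False   # T and F
assert eligible(2, 1, 5, 3) is False   # F and T
assert eligible(2, 1, 3, 5) is False   # F and F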
b) Data flow testing:
This selects test paths according to the locations of definitions and uses of variables in a program. It aims to ensure that the definition of each variable and its subsequent use are tested. First, a definition-use graph is constructed from the control flow of the program.
DEF (definition): definition of a variable on the left-hand side of an assignment statement.
USE: computational use of a variable, such as in a read or write, or on the right-hand side of an assignment statement.
Every DU (definition-use) chain should be tested at least once; the sketch below annotates DEF and USE points in a small function.
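A minimal sketch annotating DEF and USE points so that every DU chain can be covered by at least one test path; running_total is a hypothetical example.

def running_total(values):
    total = 0              # DEF of total
    for v in values:       # DEF of v (loop assignment)
        total = total + v  # USE of total and v, then new DEF of total
    return total           # USE of total

# Two tests cover both DU chains of total: the chain that bypasses the
# loop body (empty input) and the chain that flows through it.
assert running_total([]) == 0
assert running_total([1, 2, 3]) == 6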
c) Loop testing:
This focuses on the validity of loop constructs. Four categories of loops can be defined:
1. Simple loops
2. Nested loops
3. Concatenated loops
4. Unstructured loops
A sketch of the test values used for a simple loop appears below.
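A minimal sketch of simple-loop testing: for a loop that executes at most n passes, exercise 0, 1, 2, a typical number, n-1, and n passes. first_n_sum is a hypothetical loop-bearing function.

def first_n_sum(values, limit):
    total = 0
    for i, v in enumerate(values):
        if i >= limit:  # loop executes at most `limit` passes
            break
        total += v
    return total

data = [1, 2, 3, 4, 5]
for passes in (0, 1, 2, 3, 4, 5):  # 0, 1, 2, typical, n-1, n
    assert first_n_sum(data, passes) == sum(data[:passes])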
The Art of Debugging:
Debugging occurs as a consequence of successful testing. It is an innate human trait: some people are good at it and some are not.
Debugging Strategies:
The objective of debugging is to find and correct the cause of a software error, which is realized by a combination of systematic evaluation, intuition, and luck. Three strategies are proposed:
1) Brute Force Method
2) Back Tracking
3) Cause Elimination
Brute Force: The most common and least efficient method for isolating the cause of a s/w error; it is applied when all else fails. Memory dumps are taken, run-time traces are invoked, and the program is loaded with output statements, in the hope that the mass of information produced will somehow reveal the cause. This usually leads to a waste of time and effort.
Back Tracking: Beginning at the site where a symptom has been uncovered, the source code is traced backward until the location of the cause is found. This works for small programs but becomes unmanageable as the number of possible backward paths grows.
Cause Elimination: Data related to the error occurrence are organized to isolate potential causes, a cause hypothesis is devised, and the data are used to prove or disprove the hypothesis until the actual cause is located.
Automated Debugging: This supplements the above approaches with debugging tools that provide semi-automated support, such as debugging compilers, dynamic debugging aids, test case generators, and mapping tools.
Regression Testing: When a new module is added as part of integration testing, the software changes; this may cause problems with functions that previously worked properly. Regression testing is the re-execution of some subset of tests that have already been conducted, to ensure that the changes have not propagated unintended side effects or introduced new errors. It can be done manually or with automated tools; a minimal automated sketch appears below.
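A minimal sketch of automated regression testing: a recorded suite of input/expected pairs is re-run after every change so that new code does not break functions that worked before. The discount function and its suite are hypothetical examples.

def discount(price, percent):
    return price - price * percent / 100

regression_suite = [
    ((100, 10), 90.0),
    ((200, 0), 200.0),
    ((50, 50), 25.0),
]

def run_regression():
    for args, expected in regression_suite:
        result = discount(*args)
        assert result == expected, f"regression failure: discount{args} -> {result}"

run_regression()  # re-executed after each new module is integrated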
Software Quality
Software quality is conformance to explicitly stated functional and performance requirements, explicitly documented development standards, and implicit characteristics that are expected of all professionally developed software.
Factors that affect software quality can be categorized in two broad groups:
Factors that can be directly measured (e.g. defects uncovered during testing)
Factors that can be measured only indirectly (e.g. usability or maintainability)
McCall's quality factors fall into three categories:
1. Product Operation
Correctness
Reliability
Efficiency
Integrity
Usability
2. Product Revision
Maintainability
Flexibility
Testability
3. Product Transition
Portability
Reusability
Interoperability
The ISO 9126 standard identifies six key quality attributes:
1. Functionality
2. Reliability
3. Usability
4. Efficiency
5. Maintainability
6. Portability
Product Metrics
Metrics for the Analysis Model: function-based metrics use the function point (FP) as a normalized measure of software functionality. Each information domain value is counted and multiplied by a weighting factor chosen according to its complexity:

Information domain value            Simple  Average  Complex
External Inputs (EIs)                  3       4        6
External Outputs (EOs)                 4       5        7
External Inquiries (EQs)               3       4        6
Internal Logical Files (ILFs)          7      10       15
External Interface Files (EIFs)        5       7       10
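Below is a minimal sketch of the function point computation using the weighting table above with average weights. The domain counts and the sum of the 14 value-adjustment factors (Fi) are hypothetical; the adjustment formula FP = count total x [0.65 + 0.01 x sum(Fi)] is the standard one.

AVERAGE_WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

counts = {"EI": 12, "EO": 8, "EQ": 5, "ILF": 3, "EIF": 2}  # hypothetical counts
count_total = sum(counts[k] * AVERAGE_WEIGHTS[k] for k in counts)

f_sum = 42  # hypothetical sum of the 14 value-adjustment factors (each 0..5)
fp = count_total * (0.65 + 0.01 * f_sum)
print(f"count total = {count_total}, FP = {fp:.1f}")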
Metrics for the Design Model: the Design Structure Quality Index is computed as DSQI = sum(wi x Di) for i = 1 to 6, where the Di are intermediate design values computed from architectural and data structure counts of the program and wi is the relative weighting of the importance of each value, with the wi summing to 1.
Metrics for Source Code: Halstead's software science is based on primitive measures that may be derived after code is generated or estimated once design is complete: n1 (the number of distinct operators), n2 (the number of distinct operands), N1 (the total number of operator occurrences), and N2 (the total number of operand occurrences). A sketch computing Halstead's derived measures appears below.
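A minimal sketch of Halstead's derived measures from small hypothetical counts, using the standard formulas: length N = N1 + N2, vocabulary n = n1 + n2, estimated length = n1*log2(n1) + n2*log2(n2), and volume V = N*log2(n).

import math

n1, n2 = 10, 15   # distinct operators / distinct operands (hypothetical)
N1, N2 = 40, 60   # total operator / operand occurrences (hypothetical)

N = N1 + N2                                      # program length
n = n1 + n2                                      # program vocabulary
est_length = n1 * math.log2(n1) + n2 * math.log2(n2)  # estimated length
V = N * math.log2(n)                             # program volume

print(f"N={N}, n={n}, estimated length={est_length:.1f}, volume={V:.1f}")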