Unit-IV A Strategic Approach for Software Testing: Validation
Software Testing
• Two major categories of software testing
Black box testing
White box testing
Loop Testing
Focuses on the validity of loop constructs
Four categories can be defined
Simple loops
Nested loops
Concatenated loops
Unstructured loops
Testing of simple loops
-- N is the maximum number of allowable passes through the loop (a test sketch follows this list)
Skip the loop entirely
Only one pass through the loop
Two passes through the loop
m passes through the loop, where m < N
N-1, N, and N+1 passes through the loop
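A minimal sketch of the simple-loop test cases above, in Python. The function sum_first() and the value of N are hypothetical examples, not from the notes; the point is that the test exercises 0, 1, 2, m < N, and N-1, N, N+1 passes through the loop.

```python
# Hypothetical loop under test: iterates at most n times over a list.
def sum_first(values, n):
    """Sum the first n elements of values (the simple loop under test)."""
    total = 0
    for i in range(min(n, len(values))):
        total += values[i]
    return total

def test_simple_loop():
    N = 10                          # maximum number of allowable passes
    data = list(range(1, N + 1))
    # Skip the loop entirely; one pass; two passes; m < N passes;
    # and N-1, N, N+1 passes through the loop.
    for passes in (0, 1, 2, 5, N - 1, N, N + 1):
        expected = sum(data[:min(passes, N)])
        assert sum_first(data, passes) == expected, f"failed at {passes} passes"

test_simple_loop()
print("all simple-loop boundary tests passed")
```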
Nested Loops
Start at the innermost loop. Set all other loops to minimum values.
Conduct simple loop tests for the innermost loop while holding the outer loops at their
minimum iteration parameter values.
Work outward, conducting tests for the next loop, but keeping all other outer loops at
minimum values.
Concatenated Loops
Follow the approach defined for simple loops if each loop is independent of the others.
If the loops are not independent, follow the approach defined for nested loops.
Unstructured Loops
Redesign the program to avoid unstructured loops.
Validation Testing
Validation succeeds when the software functions in a manner that can reasonably be expected by
the customer.
1) Validation Test Criteria
2) Configuration Review
3) Alpha and Beta Testing
System Testing
Its primary purpose is to test the complete software.
1) Recovery Testing
2) Security Testing
3) Stress Testing
4) Performance Testing
The Art of Debugging
Debugging occurs as a consequence of successful testing.
Debugging Strategies
1) Brute Force Method
2) Backtracking
3) Cause Elimination
4) Automated Debugging
Brute force
Most common and least efficient
Applied when all else fails
Memory dumps are taken
The cause of the error is sought in the mass of information produced
Backtracking
Common debugging approach
Useful for small programs
Beginning at the site where the symptom has been uncovered, the source code is traced
backward until the cause is found.
Cause Elimination
Based on the concept of binary partitioning.
A list of all possible causes is developed, and tests are conducted to eliminate each (a short sketch of the idea follows).
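A short sketch of the binary-partitioning idea behind cause elimination, in Python. The predicate reproduces_failure() is hypothetical: it stands for re-running the failing scenario with only a subset of suspected causes active. The sketch assumes a single culprit and a reliable reproduction step.

```python
def isolate_cause(causes, reproduces_failure):
    """Binary partitioning: repeatedly halve the list of suspected causes,
    keeping whichever half still reproduces the failure."""
    while len(causes) > 1:
        mid = len(causes) // 2
        first_half = causes[:mid]
        causes = first_half if reproduces_failure(first_half) else causes[mid:]
    return causes[0]

# Toy usage: pretend the real cause is "bad_config".
suspects = ["stale_cache", "bad_config", "race_condition", "overflow"]
print(isolate_cause(suspects, lambda subset: "bad_config" in subset))  # bad_config
```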
Software Quality
Conformance to explicitly stated functional and performance requirements, explicitly
documented development standards, and implicit characteristics that are expected of all
professionally developed software.
Factors that affect software quality can be categorized in two broad groups:
1. Factors that can be directly measured (e.g., defects uncovered during testing)
2. Factors that can be measured only indirectly (e.g., usability or maintainability)
McCall's quality factors
Product operation
Correctness
Reliability
Efficiency
Integrity
Usability
Product Revision
Maintainability
Flexibility
Testability
Product Transition
Portability
Reusability
Interoperability
ISO 9126 Quality Factors
1. Functionality
2. Reliability
3. Usability
4. Efficiency
5. Maintainability
6. Portability
Product metrics
Product metrics for computer software help us to assess quality.
Measure
Provides a quantitative indication of the extent, amount, dimension, capacity or size of some attribute
of a product or process
Metric (IEEE 93 definition)
A quantitative measure of the degree to which a system, component, or process possesses a given attribute
Indicator
A metric or a combination of metrics that provide insight into the software process, a software project
or a product itself
Product Metrics for Analysis, Design, Test, and Maintenance
Product metrics for the Analysis model
Function point Metric
First proposed by Albrecht
Measures the functionality delivered by the system
FP is computed from the following parameters:
Number of external inputs (EIs)
Number of external outputs (EOs)
Number of external inquiries (EQs)
Number of internal logical files (ILFs)
Number of external interface files (EIFs)
Each parameter is classified as simple, average, or complex, and weights are assigned as follows (a worked example follows the table):
Information Domain Count            Simple   Average   Complex
External inputs (EIs)                  3        4         6
External outputs (EOs)                 4        5         7
External inquiries (EQs)               3        4         6
Internal logical files (ILFs)          7       10        15
External interface files (EIFs)        5        7        10
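A worked function point sketch in Python using the average weights from the table above. The information-domain counts and the ratings of the fourteen value-adjustment factors are illustrative assumptions; the adjustment formula FP = count_total x (0.65 + 0.01 x sum(Fi)) is the commonly cited one.

```python
# Average weights taken from the table above.
WEIGHTS_AVERAGE = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

# Hypothetical information-domain counts for some system (not from the notes).
counts = {"EI": 12, "EO": 8, "EQ": 6, "ILF": 4, "EIF": 2}

# Unadjusted count total: sum of (count x weight) over the five parameters.
count_total = sum(counts[p] * WEIGHTS_AVERAGE[p] for p in counts)

# Fourteen value-adjustment factors, each rated 0 (no influence) to 5 (essential);
# assume an "average" rating of 3 for every factor here.
value_adjustment_factors = [3] * 14

fp = count_total * (0.65 + 0.01 * sum(value_adjustment_factors))
print(count_total, round(fp, 1))   # 166 -> 177.6
```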
SOFTWARE MEASUREMENT
Software measurement can be categorized in two ways.
Direct measures of the software engineering process include cost and effort applied. Direct
measures of the product include lines of code (LOC) produced, execution speed, memory size,
and defects reported over some set period of time.
Indirect measures of the product include functionality, quality, complexity, efficiency,
reliability, maintainability, and many other "–abilities"
Size-Oriented Metrics
Size-oriented software metrics are derived by normalizing quality and/or productivity measures
by considering the size of the software that has been produced.
To develop metrics that can be assimilated with similar metrics from other projects, we choose lines of
code as our normalization value. From rudimentary project data (size, effort, cost, documentation, errors,
and defects), a set of simple size-oriented metrics can be developed for each project:
Errors per KLOC (thousand lines of code).
Defects per KLOC.
$ per LOC.
Pages of documentation per KLOC.
In addition, other interesting metrics can be computed (a short computation sketch follows these lists):
Errors per person-month.
LOC per person-month.
$ per page of documentation.
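A quick Python sketch of size-oriented normalization. All project figures are hypothetical placeholders for the kind of rudimentary data the text refers to (size, effort, cost, documentation, errors, defects).

```python
# Hypothetical project data (illustrative values only).
project = {
    "loc": 12_100,            # lines of code produced
    "effort_pm": 24,          # person-months of effort
    "cost_dollars": 168_000,
    "doc_pages": 365,
    "errors": 134,            # errors found before release
    "defects": 29,            # defects reported after release
}

kloc = project["loc"] / 1000
print("Errors per KLOC:      ", round(project["errors"] / kloc, 2))
print("Defects per KLOC:     ", round(project["defects"] / kloc, 2))
print("$ per LOC:            ", round(project["cost_dollars"] / project["loc"], 2))
print("Doc pages per KLOC:   ", round(project["doc_pages"] / kloc, 2))
print("LOC per person-month: ", round(project["loc"] / project["effort_pm"], 1))
```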
Function-Oriented Metrics
Function-oriented software metrics use a measure of the functionality delivered by the application as a
normalization value. Since 'functionality' cannot be measured directly, it must be derived indirectly using other
direct measures. Function-oriented metrics were first proposed by Albrecht, who suggested a measure called the
function point. Function points are derived using an empirical relationship based on countable (direct)
measures of software's information domain and assessments of software complexity.
Proponents claim that FP is programming-language independent, making it ideal for applications
using conventional and nonprocedural languages, and that it is based on data that are more likely
to be known early in the evolution of a project, making FP more attractive as an estimation
approach.
Opponents claim that the method requires some "sleight of hand" in that the computation is based
on subjective rather than objective data, that counts of the information domain can be difficult
to collect after the fact, and that FP has no direct physical meaning; it's just a number.
Typical Function-Oriented Metrics (a short computation sketch follows this list):
Errors per FP
Defects per FP
$ per FP
Pages of documentation per FP
FP per person-month
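The same normalization idea with FP as the denominator, again in Python with hypothetical figures; only the normalization value changes relative to the size-oriented sketch above.

```python
# Hypothetical totals for one project (illustrative values only).
fp, errors, defects, cost, doc_pages, effort_pm = 310, 134, 29, 168_000, 365, 24

print("Errors per FP:       ", round(errors / fp, 2))
print("Defects per FP:      ", round(defects / fp, 2))
print("$ per FP:            ", round(cost / fp, 2))
print("Doc pages per FP:    ", round(doc_pages / fp, 2))
print("FP per person-month: ", round(fp / effort_pm, 2))
```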