Unit 4 & 5

The document outlines various software testing strategies, including black-box and white-box testing, and emphasizes the importance of a strategic approach to software testing and debugging. It discusses different testing techniques, metrics for assessing software quality, and the significance of validation and system testing. Additionally, it covers the art of debugging and the metrics used for analysis, design, source code, testing, and maintenance.


UNIT 4

Testing Strategies

A strategic approach to software testing


Black-Box Testing
White-Box testing
Validation testing
System testing
The art of Debugging

Product metrics
Software Quality
Metrics for analysis model
Metrics for design model
Metrics for source code
Metrics for testing
Metrics for maintenance
Metrics for process and products
Software measurement
Metrics for software quality
A Strategic Approach to Software Testing

Software Testing

• One of the important phases of software development
• Testing is the process of executing a program with the intention of finding errors
• Typically consumes about 40% of total project cost
Characteristics

• To perform effective testing, a software team should conduct effective formal technical reviews (FTRs)
• Testing begins at the component level and moves outward toward the integration of the entire component-based system
• Different testing techniques are appropriate at different points in time
• Testing can be conducted by the software developer and by an independent test group
• Testing and debugging are different activities; debugging follows testing

Note:
Verification: Are we building the product right?
Validation: Are we building the right product?
Criteria for Completion of Software Testing

• Nobody is absolutely certain that software will not fail
• Testing can never be completed; rather, the responsibility for testing shifts from the software engineer to the end user
A Strategic Approach to Software Testing
Testing Strategy

• A road map that incorporates test planning, test case design, test execution, and resultant data collection and evaluation
• Validation refers to a different set of activities that ensure the software is traceable to customer requirements
Testing Strategies for Conventional
Software
1) Unit Testing
2) Integration Testing
3) Validation Testing
4) System Testing
Spiral Representation for
Conventional Software

Software Testing
• Two major categories of software testing:
  Black box testing
  White box testing
Black box testing
  Treats the system as a black box whose behavior can be determined by studying its inputs and the related outputs
  Not concerned with the internal structure of the program
Black Box Testing
• Focuses on the functional requirements of the software, i.e., it enables the software engineer to derive sets of input conditions that fully exercise all functional requirements for the program
• Concerned with functionality, not implementation
1) Graph-based testing method
2) Equivalence partitioning
Graph-Based Testing
• Draw a graph of objects and relations
• Devise test cases to cover the graph so that each object and its relationships are exercised
Graph-Based Testing

Fig a: A graph whose nodes represent objects (object #1, object #2, object #3) and whose links represent relationships. A directed link with a link weight connects object #1 to object #2; each node may carry a node weight; object #3 is connected by an undirected link and by parallel links.
Equivalence Partitioning
• Divides all possible inputs into classes so that there is a finite number of equivalence classes
• Equivalence class: a set of objects that can be linked by a relationship
• Reduces the cost of testing
Equivalence Partitioning
Example
• Valid input consists of the integers 1 to 10
• The classes are then n < 1, 1 <= n <= 10, and n > 10
• Choose one valid class with a value within the allowed range, and two invalid classes: one with a value greater than the maximum and one with a value smaller than the minimum
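The 1-to-10 example above can be sketched as a tiny test in Python (the function name `accept` and the chosen representative values are illustrative, not from the slides):

```python
def accept(n):
    """Return True when n lies in the valid input range 1..10."""
    return 1 <= n <= 10

# One representative value per equivalence class:
# n < 1 (invalid), 1 <= n <= 10 (valid), n > 10 (invalid)
representatives = {0: False, 5: True, 11: False}

for value, expected in representatives.items():
    assert accept(value) == expected
```

Three test cases stand in for the entire input space, which is exactly how partitioning reduces the cost of testing.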
Boundary Value Analysis
• Select inputs from the equivalence classes such that each input lies at the edge of its equivalence class
• The set of test data lies on the edge (boundary) of a class of input data, or generates data that lies at the boundary of a class of output data
Boundary Value Analysis

Example
• If 0.0 <= x <= 1.0
• Then choose test cases (0.0, 1.0) for valid input and (-0.1, 1.1) for invalid input
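A minimal sketch of the boundary cases above (the validator `in_range` is a hypothetical stand-in for the program under test):

```python
def in_range(x):
    """Return True when 0.0 <= x <= 1.0."""
    return 0.0 <= x <= 1.0

valid_boundaries = [0.0, 1.0]      # values exactly on the edges of the class
invalid_boundaries = [-0.1, 1.1]   # values just outside the edges

assert all(in_range(v) for v in valid_boundaries)
assert not any(in_range(v) for v in invalid_boundaries)
```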
Orthogonal Array Testing
• Applies to problems in which the input domain is relatively small, but too large for exhaustive testing
Example
• Three inputs A, B, C, each having three values, would require 27 test cases for exhaustive testing
• An L9 orthogonal array reduces the number of test cases to 9, as shown below
Orthogonal Array Testing
A B C
1 1 1
1 2 2
1 3 3
2 1 2
2 2 3
2 3 1
3 1 3
3 2 1
3 3 2
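The defining property of an L9(3^3) array is that every pair of factors exhibits every pair of values. This can be verified in Python (the rows below are one standard form of the array, hard-coded for the check):

```python
from itertools import combinations, product

# One standard L9(3^3) orthogonal array: 9 runs over 3 factors, 3 levels each
L9 = [(1, 1, 1), (1, 2, 2), (1, 3, 3),
      (2, 1, 2), (2, 2, 3), (2, 3, 1),
      (3, 1, 3), (3, 2, 1), (3, 3, 2)]

def covers_all_pairs(rows, levels=(1, 2, 3)):
    """True if every pair of factors covers every combination of level values."""
    for i, j in combinations(range(3), 2):
        seen = {(r[i], r[j]) for r in rows}
        if seen != set(product(levels, repeat=2)):
            return False
    return True

assert covers_all_pairs(L9)   # 9 runs give full pairwise coverage
```

This is why 9 cases can stand in for 27: any fault triggered by a pair of input values is still exposed.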
White Box Testing
• Also called glass box testing
• Involves knowing the internal workings of a program
• Guarantees that all independent paths are exercised at least once
• Exercises all logical decisions on their true and false sides
White Box Testing
• Executes all loops
• Exercises internal data structures to ensure their validity
• White box testing techniques:
1. Basis path testing
2. Control structure testing
Basis Path Testing
• Proposed by Tom McCabe
• Defines a basis set of execution paths based on the logical complexity of a procedural design
• Guarantees that every statement in the program executes at least once
Basis Path Testing
• Steps of basis path testing:
1. Draw the flow graph from the flow chart of the program
2. Calculate the cyclomatic complexity of the resultant flow graph
3. Prepare test cases that will force execution of each path
Basis Path Testing
• Three ways to compute the cyclomatic complexity number V(G):
1. V(G) = E - N + 2 (E is the number of edges, N is the number of nodes)
2. V(G) = number of regions
3. V(G) = number of predicate nodes + 1
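The edge/node formula is easy to apply mechanically. Below is a sketch for a hypothetical flow graph (an if-else nested inside a while loop; the node numbering is made up for illustration):

```python
def cyclomatic_complexity(edges, nodes):
    """V(G) = E - N + 2 for a connected flow graph."""
    return len(edges) - len(nodes) + 2

# Hypothetical flow graph: node 2 is the while test, node 5 joins the
# if-else branches (nodes 3 and 4) and either loops back or exits to 6.
nodes = [1, 2, 3, 4, 5, 6]
edges = [(1, 2), (2, 3), (2, 4), (3, 5), (4, 5), (5, 2), (5, 6)]

v = cyclomatic_complexity(edges, nodes)   # 7 - 6 + 2 = 3
assert v == 3
# Cross-check with rule 3: two predicate nodes (the while and the if) + 1 = 3
```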
Control Structure Testing
• Basis path testing is simple and effective, but it is not sufficient by itself
• Control structure testing broadens basic test coverage and improves the quality of white box testing
• Condition testing
• Data flow testing
• Loop testing
Condition Testing
-- Exercises the logical conditions contained in a program module
-- Focuses on testing each condition in the program to ensure that it does not contain errors
-- Simple condition:
   E1 <relational operator> E2
-- Compound condition:
   simple condition <Boolean operator> simple condition
Data Flow Testing
• Selects test paths according to the locations of definitions and uses of variables in a program
• Aims to ensure that the definition of each variable and its subsequent uses are tested
• First construct a definition-use graph from the control flow of the program
Data Flow Testing
• Def (definition): definition of a variable on the left-hand side of an assignment statement
• C-use: computational use of a variable, e.g., in a read or write, or on the right-hand side of an assignment statement
• P-use: predicate use of a variable in a condition
• Every DU (definition-use) chain should be tested at least once
Loop Testing
• Focuses on the validity of loop constructs
• Four categories can be defined:
1. Simple loops
2. Nested loops
3. Concatenated loops
4. Unstructured loops
Loop Testing
• Testing of simple loops
-- N is the maximum number of allowable passes through the loop
1. Skip the loop entirely
2. Only one pass through the loop
3. Two passes through the loop
4. m passes through the loop, where m < N
5. N-1, N, and N+1 passes through the loop
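The simple-loop schedule above can be sketched against a toy loop (the function `sum_first`, the bound N = 5, and the data are all made up for illustration):

```python
def sum_first(values, limit):
    """Sum at most `limit` leading elements; the loop is bounded by N = limit."""
    total = 0
    for i, v in enumerate(values):
        if i >= limit:        # loop cannot make more than `limit` passes
            break
        total += v
    return total

N = 5
data = [1, 2, 3, 4, 5, 6, 7]
# Exercise 0, 1, 2, m (with m < N), N-1, N, and N+1 passes through the loop
for passes in (0, 1, 2, 3, N - 1, N, N + 1):
    expected = sum(data[:min(passes, N)])
    assert sum_first(data[:passes], N) == expected
```

The N+1 case is the interesting one: it probes what happens when the loop's own bound, not the data, terminates iteration.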
Loop Testing
Nested loops
1. Start at the innermost loop; set all other loops to minimum values
2. Conduct simple loop tests for the innermost loop while holding the outer loops at their minimum iteration parameter values
3. Work outward, conducting tests for the next loop while keeping all other loops at minimum values
Loop Testing
Concatenated loops
• Follow the approach defined for simple loops if each loop is independent of the others
• If the loops are not independent, follow the approach for nested loops
Unstructured loops
• Redesign the program to avoid unstructured loops
Validation Testing
• Succeeds when the software functions in a manner that can be reasonably expected by the customer

1) Validation test criteria
2) Configuration review
3) Alpha and beta testing
System Testing
• Its primary purpose is to test the complete, integrated software

1) Recovery testing
2) Security testing
3) Stress testing
4) Performance testing
The Art of Debugging
• Debugging occurs as a consequence of successful testing

Debugging strategies
1) Brute force
2) Backtracking
3) Cause elimination
The Art of Debugging
• Brute force
-- Most common and least efficient method
-- Applied when all else fails
-- Memory dumps are taken
-- Tries to find the cause in the resulting mass of information

• Backtracking
-- Common debugging approach
-- Useful for small programs
-- Beginning at the site where the symptom has been uncovered, the source code is traced backward until the cause is found
The Art of Debugging
• Cause elimination
-- A list of all possible causes is developed, and tests are conducted to eliminate each one

Why is debugging difficult?
The Art of Debugging
The debugging process

Fig: Test cases are executed and the results are examined. Where results differ from expectations, suspected causes are identified; additional tests and debugging narrow these down to identified causes, corrections are made, and regression tests are run.
Software Quality
• Conformance to explicitly stated functional and performance requirements, explicitly documented development standards, and implicit characteristics that are expected of all professionally developed software

• Factors that affect software quality can be categorized in two broad groups:
1. Factors that can be directly measured
2. Factors that can be measured only indirectly
Software Quality
• McCall's quality factors
1. Product operation
   a. Correctness
   b. Reliability
   c. Efficiency
   d. Integrity
   e. Usability
2. Product revision
   a. Maintainability
   b. Flexibility
   c. Testability
3. Product transition
   a. Portability
   b. Reusability
   c. Interoperability

ISO 9126 quality factors
1. Functionality
2. Reliability
3. Usability
4. Efficiency
5. Maintainability
6. Portability
Metrics for the Analysis Model
Function point metric
• Measures the functionality delivered by the system
• FP is computed from the following parameters:
1) Number of external inputs (EIs)
2) Number of external outputs (EOs)
3) Number of external inquiries (EQs)
4) Number of internal logical files (ILFs)
5) Number of external interface files (EIFs)
Each parameter is classified as simple, average, or complex, and weights are assigned as follows.
Metrics for the Analysis Model

Information Domain Count   Simple   Average   Complex
EI                            3        4         6
EO                            4        5         7
EQ                            3        4         6
ILF                           7       10        15
EIF                           5        7        10

FP = count total * [0.65 + 0.01 * sum(Fi)]
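The weight table and FP formula above can be turned into a small calculator. A minimal sketch in Python; the counts and the fourteen Fi ratings below are invented purely for illustration:

```python
# Weights per information-domain count: (simple, average, complex)
WEIGHTS = {"EI": (3, 4, 6), "EO": (4, 5, 7), "EQ": (3, 4, 6),
           "ILF": (7, 10, 15), "EIF": (5, 7, 10)}

def function_points(counts, complexity, fi):
    """counts: {name: n}; complexity: {name: 0|1|2} for simple/average/complex;
    fi: the 14 value-adjustment ratings Fi, each 0..5."""
    count_total = sum(counts[k] * WEIGHTS[k][complexity[k]] for k in counts)
    return count_total * (0.65 + 0.01 * sum(fi))

# Hypothetical system: all counts rated average, all 14 Fi rated 3
counts = {"EI": 10, "EO": 8, "EQ": 6, "ILF": 4, "EIF": 2}
complexity = {k: 1 for k in counts}
fp = function_points(counts, complexity, [3] * 14)
# count total = 40 + 40 + 24 + 40 + 14 = 158; FP = 158 * 1.07 = 169.06
```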
Metrics for the Design Model
• DSQI (Design Structure Quality Index)
• Compute S1 to S7 from the data and architectural design:
• S1: total number of modules
• S2: number of modules whose correct function depends on the data input
• S3: number of modules whose correct function depends on prior processing
• S4: number of database items
Metrics for the Design Model
• S5: number of unique database items
• S6: number of database segments
• S7: number of modules with single entry and single exit
• Calculate D1 to D6 from S1 to S7 as follows:
• D1 = 1 if a standard design method was followed, otherwise D1 = 0
Metrics for the Design Model
• D2 (module independence) = 1 - (S2/S1)
• D3 (modules not dependent on prior processing) = 1 - (S3/S1)
• D4 (database size) = 1 - (S5/S4)
• D5 (database compartmentalization) = 1 - (S6/S4)
• D6 (module entry/exit characteristic) = 1 - (S7/S1)
• DSQI = sum of Wi * Di
Metrics for the Design Model
• For i = 1 to 6, Wi is the weight assigned to Di
• The DSQI of the present design can be compared with past DSQI values; if DSQI is significantly lower than the average, further design work and review are indicated
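The D1..D6 definitions above compose directly into a DSQI function. A sketch, assuming equal weights Wi = 1/6 when none are supplied (the weights and the S-counts below are hypothetical):

```python
def dsqi(s, w=None):
    """Design Structure Quality Index from the S1..S7 counts (slide notation)."""
    d1 = 1.0 if s["standard_design"] else 0.0   # standard design method used?
    d2 = 1 - s["s2"] / s["s1"]                  # module independence
    d3 = 1 - s["s3"] / s["s1"]                  # independence of prior processing
    d4 = 1 - s["s5"] / s["s4"]                  # database size
    d5 = 1 - s["s6"] / s["s4"]                  # database compartmentalization
    d6 = 1 - s["s7"] / s["s1"]                  # module entry/exit characteristic
    ws = w or [1 / 6] * 6                       # equal weights if none given
    return sum(wi * di for wi, di in zip(ws, [d1, d2, d3, d4, d5, d6]))

# Hypothetical counts for a 50-module design
s = {"standard_design": True, "s1": 50, "s2": 10, "s3": 5,
     "s4": 100, "s5": 80, "s6": 20, "s7": 45}
score = dsqi(s)   # (1.0 + 0.8 + 0.9 + 0.2 + 0.8 + 0.1) / 6
```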
METRICS FOR SOURCE CODE
• Primitive measures that may be derived after the code is generated, or estimated once design is complete
• n1 = the number of distinct operators that appear in a program
• n2 = the number of distinct operands that appear in a program
• N1 = the total number of operator occurrences
• N2 = the total number of operand occurrences
• Overall program length N can be estimated as:
  N = n1 log2 n1 + n2 log2 n2
• Program volume:
  V = N log2 (n1 + n2)
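The length and volume formulas above can be checked numerically. A minimal sketch with illustrative counts (n1 = n2 = 16, chosen so the logs come out exactly):

```python
import math

def halstead_length_volume(n1, n2):
    """Estimated program length N and volume V (Halstead measures)."""
    N = n1 * math.log2(n1) + n2 * math.log2(n2)
    V = N * math.log2(n1 + n2)
    return N, V

N, V = halstead_length_volume(n1=16, n2=16)
# N = 16*4 + 16*4 = 128; V = 128 * log2(32) = 128 * 5 = 640
assert N == 128 and V == 640
```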
METRICS FOR TESTING
• n1 = the number of distinct operators that appear in a program
• n2 = the number of distinct operands that appear in a program
• N1 = the total number of operator occurrences
• N2 = the total number of operand occurrences

• Program level and effort:
• PL = 1 / [(n1 / 2) x (N2 / n2)]
• e = V / PL
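The program level and effort formulas compose as follows; the counts and the volume V passed in below are invented for illustration:

```python
def program_level_effort(n1, n2, N2, V):
    """PL = 1 / ((n1/2) * (N2/n2)); effort e = V / PL."""
    PL = 1.0 / ((n1 / 2) * (N2 / n2))
    return PL, V / PL

PL, e = program_level_effort(n1=10, n2=20, N2=40, V=500.0)
# PL = 1 / (5 * 2) = 0.1; e = 500 / 0.1 = 5000
```

A lower program level means a harder-to-understand program, so effort e grows as PL shrinks.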
METRICS FOR MAINTENANCE
• Mt = the number of modules in the current release
• Fc = the number of modules in the current release that have been changed
• Fa = the number of modules in the current release that have been added
• Fd = the number of modules from the preceding release that were deleted in the current release

• The Software Maturity Index, SMI, is defined as:
• SMI = [Mt - (Fa + Fc + Fd)] / Mt
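The SMI formula is a one-liner; the release numbers below are made up for illustration. As SMI approaches 1.0, the product begins to stabilize:

```python
def smi(mt, fa, fc, fd):
    """Software Maturity Index: SMI = [Mt - (Fa + Fc + Fd)] / Mt."""
    return (mt - (fa + fc + fd)) / mt

# Hypothetical release: 50 modules, 3 added, 5 changed, 2 deleted
value = smi(mt=50, fa=3, fc=5, fd=2)   # (50 - 10) / 50 = 0.8
```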