Lect 22 To 26 Testing - BB Testing and Levels

Testing

Organization of this lecture
• Important concepts in program testing
• Black-box testing:
  - equivalence partitioning
  - boundary value analysis
• White-box testing
• Unit, Integration, and System testing
• Summary

Testing

• The aim of testing is to identify all defects in a software product.
• However, in practice, even after thorough testing:
  - one cannot guarantee that the software is error-free.

Testing

• The input data domain of most software products is very large:
  - it is not practical to test the software exhaustively with each input data value.

Testing

• Testing does however expose many errors:
  - testing provides a practical way of reducing defects in a system
  - it increases the users' confidence in a developed system.

Testing
• Testing is an important development phase:
  - it requires the maximum effort among all development phases.
• In a typical development organization:
  - the largest number of software engineers can be found engaged in testing activities.

Testing

• Many engineers have the wrong impression:
  - testing is a secondary activity
  - it is intellectually not as stimulating as the other development activities, etc.

Testing
• Testing a software product is in fact:
  - as challenging as initial development activities such as specification, design, and coding.
• Also, testing involves a lot of creative thinking.

How do you test a program?

• Input test data to the program.
• Observe the output:
  - check if the program behaved as expected.

How do you test a system?
• If the program does not behave as expected:
  - note the conditions under which it failed,
  - later debug and correct.

Basic Concepts and Terminologies
Errors, Faults, and Failures
• An error is a mistake committed by the development team during any of the development phases.
• An error is sometimes referred to as a fault, a bug, or a defect.

Basic Concepts and Terminologies
Errors, Faults, and Failures
• A failure is a manifestation of an error (aka defect or bug):
  - the mere presence of an error may not lead to a failure.

Basic Concepts and Terminologies
Test Cases and Test Suites
• A test case is a triplet [I, S, O], where:
  - I is the data to be input to the system,
  - S is the state of the system at which the data will be input,
  - O is the expected output of the system.

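To make the triplet concrete, here is a minimal sketch of how a test case might be represented in C; the struct, its field names, and the example values are illustrative assumptions, not part of the lecture's notation.

    #include <stdio.h>

    /* Hypothetical C rendering of a test case [I, S, O];
       the struct and field names are illustrative only. */
    struct test_case {
        int         input;     /* I: data to be input to the system      */
        const char *state;     /* S: system state when the data is input */
        double      expected;  /* O: expected output of the system       */
    };

    int main(void) {
        /* e.g. in a "ready" state, input 4 should yield 2.0 */
        struct test_case tc = { 4, "ready", 2.0 };
        printf("[I=%d, S=%s, O=%.1f]\n", tc.input, tc.state, tc.expected);
        return 0;
    }
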
Basic Concepts and Terminologies
Test Cases and Test Suites
• A software product is tested using a set of carefully designed test cases:
  - the set of all test cases is called the test suite.

Verification versus Validation

• Verification is the process of determining:
  - whether the output of one phase of development conforms to the output of the previous phase.
• Validation is the process of determining:
  - whether a fully developed system conforms to its SRS document.

Verification versus Validation

• Aim of verification:
  - phase containment of errors.
• Aim of validation:
  - the final product is error-free.

Verification versus Validation

• Verification:
  - are we building the product right?
• Validation:
  - are we building the right product?

Design of Test Cases
• Exhaustive testing of any non-trivial system is impractical:
  - the input data domain is extremely large.
• Design an optimal test suite:
  - of reasonable size, and
  - one that uncovers as many errors as possible.

Design of Test Cases
• If test cases are selected randomly:
  - many test cases would not contribute to the significance of the test suite,
  - i.e. they would not detect errors not already detected by other test cases in the suite.
• The number of test cases in a randomly selected test suite:
  - is not an indication of the effectiveness of the testing.

Design of Test Cases

• Testing a system using a large number of randomly selected test cases:
  - does not mean that many errors in the system will be uncovered.
• Consider an example for finding the maximum of two integers x and y.

Design of Test Cases

• The code has a simple programming error:

      if (x > y) max = x;
      else max = x;    /* error: the else branch should assign y */

  - the test suite {(x=3,y=2); (x=2,y=3)} can detect the error, since the second case yields max = 2 instead of 3,
  - whereas a larger test suite {(x=3,y=2); (x=4,y=3); (x=5,y=1)} does not detect the error, since x > y in every case.

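The example can be reproduced with a short, runnable C sketch; the function name buggy_max and the printed expectations are ours, but the two test suites are the ones from the slide.

    #include <stdio.h>

    /* Buggy version of max: the else branch should return y. */
    int buggy_max(int x, int y) {
        if (x > y) return x;
        else return x;   /* error: should be "return y" */
    }

    int main(void) {
        /* Suite 1: {(3,2), (2,3)} -- the second case exposes the bug. */
        printf("max(3,2) = %d (expected 3)\n", buggy_max(3, 2));
        printf("max(2,3) = %d (expected 3)\n", buggy_max(2, 3));  /* prints 2 */

        /* Suite 2: {(3,2), (4,3), (5,1)} -- x > y throughout, so the
           buggy else branch is never exercised and the bug stays hidden. */
        printf("max(4,3) = %d (expected 4)\n", buggy_max(4, 3));
        printf("max(5,1) = %d (expected 5)\n", buggy_max(5, 1));
        return 0;
    }
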
Design of Test Cases

• Systematic approaches are required to design an optimal test suite:
  - each test case in the suite should detect different errors.

Design of Test Cases

• There are essentially two main approaches to designing test cases:
  - Black-box approach
  - White-box (or glass-box) approach

Black-box Testing
• Test cases are designed using only the functional specification of the software:
  - without any knowledge of the internal structure of the software.
• For this reason, black-box testing is also known as functional testing.

White-box Testing
• Designing white-box test cases:
  - requires knowledge of the internal structure of the software,
  - hence white-box testing is also called structural testing.
• In this unit we will not study white-box testing.

Black-box Testing

• There are essentially two main approaches to designing black-box test cases:
  - Equivalence class partitioning
  - Boundary value analysis

Equivalence Class Partitioning

• Input values to a program are partitioned into equivalence classes.
• Partitioning is done such that:
  - the program behaves in similar ways for every input value belonging to an equivalence class.

Why define equivalence classes?

• Test the code with just one representative value from each equivalence class:
  - this is as good as testing with any other value from the same equivalence class.

Equivalence Class Partitioning

• How do you determine the equivalence classes?
  - examine the input data.
  - a few general guidelines for determining the equivalence classes can be given.

Equivalence Class Partitioning
• If the input data to the program is specified by a range of values:
  - e.g. numbers between 1 and 5000,
  - one valid and two invalid equivalence classes are defined.

Equivalence Class Partitioning
• If the input is an enumerated set of values:
  - e.g. {a,b,c},
  - one equivalence class for valid input values
  - and another equivalence class for invalid input values should be defined.

Example
• A program reads an input value in the range of 1 to 5000:
  - and computes the square root of the input number.

Example (cont.)
• There are three equivalence classes:
  - the set of negative integers,
  - the set of integers in the range of 1 to 5000,
  - the set of integers larger than 5000.

Example (cont.)
• The test suite must include:
  - representatives from each of the three equivalence classes:
  - a possible test suite is {-5, 500, 6000}.

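A minimal sketch in C of testing one representative per equivalence class; the function sqrt_1_to_5000 and its convention of returning -1.0 for invalid inputs are assumptions made here for illustration.

    #include <math.h>
    #include <stdio.h>

    /* Hypothetical function under test: square root for inputs in
       [1, 5000]; returns -1.0 to signal an invalid input. */
    double sqrt_1_to_5000(int n) {
        if (n < 1 || n > 5000) return -1.0;
        return sqrt((double)n);
    }

    int main(void) {
        /* One representative per equivalence class: {-5, 500, 6000}. */
        printf("sqrt(-5)   -> %.4f (invalid, expect -1.0)\n", sqrt_1_to_5000(-5));
        printf("sqrt(500)  -> %.4f (valid, expect ~22.3607)\n", sqrt_1_to_5000(500));
        printf("sqrt(6000) -> %.4f (invalid, expect -1.0)\n", sqrt_1_to_5000(6000));
        return 0;
    }

(When compiling, the math library may need to be linked, e.g. with -lm.)
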
Boundary Value Analysis

• Some typical programming errors occur:
  - at the boundaries of equivalence classes,
  - which might be purely due to psychological factors.
• Programmers often fail to see:
  - the special processing required at the boundaries of equivalence classes.

Boundary Value Analysis

• Programmers may improperly use < instead of <=.
• Boundary value analysis:
  - select test cases at the boundaries of the different equivalence classes.

Example

• For a function that computes the square root of an integer in the range of 1 to 5000:
  - the test cases must include the values {0, 1, 5000, 5001}.

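The boundary values can be checked the same way; this sketch reuses the assumed sqrt_1_to_5000 function from the previous example.

    #include <math.h>
    #include <stdio.h>

    /* Same assumed function as in the equivalence-class sketch. */
    double sqrt_1_to_5000(int n) {
        if (n < 1 || n > 5000) return -1.0;
        return sqrt((double)n);
    }

    int main(void) {
        /* Boundary values just outside and just inside [1, 5000]. */
        int inputs[] = { 0, 1, 5000, 5001 };
        for (int i = 0; i < 4; i++) {
            /* 0 and 5001 should be rejected (-1.0); 1 and 5000 accepted. */
            printf("sqrt(%d) -> %.4f\n", inputs[i], sqrt_1_to_5000(inputs[i]));
        }
        return 0;
    }
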
Levels of Testing

• Software products are tested at three levels:
  - Unit testing
  - Integration testing
  - System testing

Unit testing
• During unit testing, modules are tested in isolation:
  - if all modules were to be tested together,
  - it may not be easy to determine which module has the error.

Unit testing

• Unit testing reduces debugging effort severalfold.
• Programmers carry out unit testing immediately after they complete the coding of a module.

Integration testing

• After the different modules of a system have been coded and unit tested:
  - the modules are integrated in steps according to an integration plan,
  - and the partially integrated system is tested at each integration step.

System Testing

• System testing involves:
  - validating a fully developed system against its requirements.

Integration Testing

• Develop the integration plan by examining the structure chart:
  - big bang approach
  - top-down approach
  - bottom-up approach
  - mixed approach

Big bang Integration Testing
• The big bang approach is the simplest integration testing approach:
  - all the modules are simply put together and tested.
  - this technique is used only for very small systems.

Big bang Integration Testing
• Main problems with this approach:
  - if an error is found, it is very difficult to localize:
  - the error may potentially belong to any of the modules being integrated.
  - errors found during big bang integration testing are therefore very expensive to fix.

Bottom-up Integration Testing
• Integrate and test the bottom-level modules first.
• A disadvantage of bottom-up testing arises:
  - when the system is made up of a large number of small subsystems.
  - this extreme case corresponds to the big bang approach.

Top-down integration testing
• Top-down integration testing starts with the main routine:
  - and one or two subordinate routines in the system.
• After the top-level 'skeleton' has been tested:
  - the immediate subordinate modules of the 'skeleton' are combined with it and tested.

Mixed integration testing

• Mixed (or sandwiched) integration testing:
  - uses both top-down and bottom-up testing approaches.
• It is the most common approach.

Integration Testing
• In the top-down approach:
  - testing waits until all top-level modules are coded and unit tested.
• In the bottom-up approach:
  - testing can start only after the bottom-level modules are ready.

Phased versus Incremental Integration Testing

• Integration can be incremental or phased.
• In incremental integration testing:
  - only one new module is added to the partial system each time.

Phased versus Incremental Integration Testing

• In phased integration:
  - a group of related modules is added to the partially integrated system each time.
• Big bang testing:
  - a degenerate case of phased integration testing.

Phased versus Incremental Integration Testing
• Phased integration requires fewer integration steps:
  - compared to the incremental integration approach.
• However, when failures are detected:
  - it is easier to debug when using incremental testing,
  - since errors are very likely to be in the newly integrated module.

System Testing

 There are three main kinds of


system testing:
 Alpha Testing
 Beta Testing
 Acceptance Testing

75
Alpha Testing

 System testing is carried out by


the test team within the
developing organization.

76
Beta Testing

 System testing performed by a select


group of friendly customers.

77
Acceptance Testing

 System testing performed by the


customer himself:
 to determine whether the system
should be accepted or rejected.

78
System Testing

 During system testing, in addition


to functional tests:
 performance tests are performed.

79
Performance Testing

• Addresses non-functional requirements.
• May sometimes involve testing hardware and software together.
• There are several categories of performance testing.

Stress testing

• Evaluates system performance:
  - when stressed for short periods of time.
• Stress testing is also known as endurance testing.

Stress testing
• Stress tests are black-box tests:
  - designed to impose a range of abnormal and even illegal input conditions,
  - so as to stress the capabilities of the software.

Stress Testing
• If the requirement is to handle a specified number of users or devices:
  - stress testing evaluates system performance when all the users or devices are busy simultaneously.

Stress Testing
• If an operating system is supposed to support 15 multiprogrammed jobs:
  - the system is stressed by attempting to run 15 or more jobs simultaneously.
• A real-time system might be tested:
  - to determine the effect of the simultaneous arrival of several high-priority interrupts.

Volume Testing
• Addresses the handling of large amounts of data in the system:
  - whether data structures (e.g. queues, stacks, arrays, etc.) are large enough to handle all possible situations.
• Fields, records, and files are stressed to check if their sizes can accommodate all possible data volumes.

Configuration Testing
• Analyze system behavior:
  - in the various hardware and software configurations specified in the requirements.
• Sometimes systems are built in various configurations for different users:
  - for instance, a minimal system may serve a single user,
  - while other configurations serve additional users.

Compatibility Testing

• These tests are needed when the system interfaces with other systems:
  - check whether the interface functions as required.

Compatibility testing example
• If a system is to communicate with a large database system to retrieve information:
  - a compatibility test examines the speed and accuracy of retrieval.

Recovery Testing

• These tests check the response to:
  - the presence of faults or the loss of data, power, devices, or services.
• Subject the system to the loss of resources:
  - check if the system recovers properly.

Maintenance Testing

• Verify that:
  - all artifacts required for maintenance exist,
  - and that they function properly.

Documentation tests

• Check that required documents exist and are consistent:
  - user guides,
  - maintenance guides,
  - technical documents.

Documentation tests

• Sometimes the requirements specify:
  - the format and audience of specific documents.
  - the documents are then evaluated for compliance.

Usability tests

• All aspects of user interfaces are tested:
  - display screens,
  - messages,
  - report formats,
  - navigation and selection problems.

Regression Testing

• Does not belong to unit testing, integration testing, or system testing.
• Instead, it is a separate dimension to these three forms of testing.

Regression testing

• Regression testing is the running of the test suite:
  - after each change to the system or after each bug fix.
  - it ensures that no new bug has been introduced due to the change or the bug fix.

Regression testing

• Regression tests assure:
  - that the new system's performance is at least as good as the old system's.
• Regression testing is always used during phased system development.

How many errors are still remaining?

• Seed the code with some known errors:
  - artificial errors are introduced into the program.
• Check how many of the seeded errors are detected during testing.

Error Seeding
• Let:
  - N be the total number of errors in the system,
  - n of these errors be found by testing,
  - S be the total number of seeded errors,
  - s of the seeded errors be found during testing.

Error Seeding

• Assuming that seeded errors and real errors are equally likely to be detected:
  - n/N = s/S
  - hence N = S × n/s
• Remaining defects:
  - N − n = n × ((S − s)/s)

Example

• 100 errors were seeded (introduced) into the program.
• 90 of these seeded errors were found during testing.
• 50 other errors were also found.
• Remaining errors = 50 × (100 − 90)/90 ≈ 6.

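The arithmetic of this estimate, using the slide's notation, can be checked with a few lines of C; note that 50 × 10 / 90 is about 5.6, which the slide rounds to 6.

    #include <stdio.h>

    int main(void) {
        int S = 100;  /* seeded errors introduced            */
        int s = 90;   /* seeded errors found during testing  */
        int n = 50;   /* real (unseeded) errors found        */

        /* Estimated total real errors: N = S * n / s, assuming seeded
           and real errors are equally likely to be detected. */
        double N = (double)S * n / s;

        /* Estimated errors still remaining: N - n = n * (S - s) / s */
        double remaining = (double)n * (S - s) / s;

        printf("Estimated total errors N  = %.2f\n", N);          /* 55.56 */
        printf("Estimated remaining N - n = %.2f\n", remaining);  /* 5.56  */
        return 0;
    }
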
Error Seeding

• The kinds of seeded errors should match closely with the existing errors:
  - however, it is difficult to predict the types of errors that exist.
• The categories of remaining errors:
  - can be estimated by analyzing historical data from similar projects.

