
UNIT-5

TESTING
INTRODUCTION

• The aim of program testing is to identify all defects in a program.


• However, in practice, even after satisfactory completion of the testing phase, it is not
possible to guarantee that a program is error free.
• This is because the input data domain of most programs is very large, and it is not
practical to test the program exhaustively with respect to each value that the input can
assume.
• We must remember that careful testing can expose a large percentage of the defects
existing in a program, and therefore provides a practical way of reducing defects in a
system.

How to test a program?


• Testing a program involves executing the program with a set of test inputs and observing
if the program behaves as expected.
• If the program fails to behave as expected, then the input data and the conditions under
which it fails are noted for later debugging and error correction.

TERMINOLOGIES

MISTAKE
• A mistake is essentially any programmer action that later shows up as an incorrect result
during program execution.
• A programmer may commit a mistake in almost any development activity.
• For example, during coding a programmer might commit the mistake of not initializing a
certain variable, or might overlook the errors that might arise in some exceptional
situations such as division by zero in an arithmetic operation. Both these mistakes can lead
to an incorrect result.
ERROR
• An error is the result of a mistake committed by a developer in any of the development
activities.
• One example of an error is a call made to a wrong function.
• The terms error, fault, bug, and defect are considered to be synonyms in the area of
program testing.

FAILURE
• A failure of a program essentially denotes an incorrect behaviour exhibited by the program
during its execution.
• An incorrect behaviour is observed either as an incorrect result produced or as an
inappropriate activity carried out by the program.
• Every failure is caused by some bugs present in the program.
• The number of possible ways in which a program can fail is extremely large. Three
representative examples are:
• The result computed by a program is 0, when the correct result is 10.
• A program crashes on an input.
• A robot fails to avoid an obstacle and collides with it.
• It may be noted that mere presence of an error in a program code may not necessarily lead
to a failure during its execution.
TEST CASE
• A test case is a triplet [I , S, R], where I is the data input to the program under test, S is the
state of the program at which the data is to be input, and R is the result expected to be
produced by the program.
• The state of a program is also called its execution mode.
• Execution modes—edit, view, create, and display.
• A test case is a set of test inputs, the mode in which the input is to be applied, and the
results that are expected during and after the execution of the test case.
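The [I, S, R] triplet described above can be sketched as a simple data structure. This is only an illustrative sketch: the type and function names, and the "edit"/"view" modes used below, are hypothetical, not part of any standard testing API.

```c
#include <string.h>

/* A test case as the triplet [I, S, R]: input data, program state
   (execution mode), and expected result. Names are illustrative. */
struct test_case {
    const char *input;    /* I: data input to the program          */
    const char *state;    /* S: mode in which the input is applied */
    const char *expected; /* R: result expected from the program   */
};

/* Hypothetical program under test: echoes its input in "edit" mode
   and produces nothing in any other mode. */
static const char *run_program(const char *input, const char *state)
{
    return strcmp(state, "edit") == 0 ? input : "";
}

/* A test case passes when the actual result matches R. */
static int run_test(const struct test_case *tc)
{
    return strcmp(run_program(tc->input, tc->state), tc->expected) == 0;
}
```

Running a test case then amounts to applying I in state S and comparing the actual result with R.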
TEST SCENARIO
• A test scenario is an abstract test case in the sense that it only identifies the aspects of the
program that are to be tested without identifying the input, state, or output.
• A test case can be said to be an implementation of a test scenario.
• An important automatic test case design strategy is to first design test scenarios through an
analysis of some program abstraction (model) and then implement the test scenarios as test
cases
TEST SCRIPT
• A test script is an encoding of a test case as a short program.
• Test scripts are developed for automated execution of the test cases.
• A test case is said to be a positive test case if it is designed to test whether the software
correctly performs a required functionality.
• A test case is said to be a negative test case if it is designed to test whether the software
does something that is not required of the system.
• Consider, as an example, a program that manages user login. A positive test case can be
designed to check whether the login system validates a user with the correct user name and
password.
• A negative test case can be one that checks whether the login functionality
validates and admits a user with a wrong user name or password.
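The login example above can be sketched in code. The validator, the user name, and the password below are hypothetical stand-ins for whatever the real system would use; the point is only the shape of a positive versus a negative test case.

```c
#include <string.h>

/* Hypothetical login validator: accepts exactly one credential pair. */
static int validate_login(const char *user, const char *password)
{
    return strcmp(user, "alice") == 0 && strcmp(password, "s3cret") == 0;
}

/* Positive test case: correct credentials must be admitted. */
static int positive_login_test(void)
{
    return validate_login("alice", "s3cret") == 1;
}

/* Negative test case: wrong credentials must NOT be admitted. */
static int negative_login_test(void)
{
    return validate_login("alice", "wrong") == 0;
}
```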
TEST SUITE
• A test suite is the set of all test cases that have been designed by a tester to test a given
program.
TESTABILITY
• Testability of a requirement denotes the extent to which it is possible to determine
whether an implementation of the requirement conforms to it in both functionality and
performance.
• A failure mode of a software denotes an observable way in which it can fail.
• As an example of the failure modes of a software, consider a railway ticket booking
software that has three failure modes—failing to book an available seat, incorrect seat
booking (e.g., booking an already booked seat), and system crash.
• Equivalent faults denote two or more bugs that result in the system failing in the same
failure mode.
Consider the following two faults in a C program—division by zero and illegal memory access.
These two are equivalent faults, since each of them leads to a program crash.

Verification vs Validation

Verification is the process of determining whether the output of one phase of software
development conforms to that of its previous phase, whereas validation is the process of
determining whether a fully developed system conforms to its requirements specification. Thus,
while verification is concerned with phase containment of errors, the aim of validation is that the
final product be error free.

Differentiate between verification and validation.

• The objectives of both verification and validation techniques are very similar since both
these techniques are designed to help remove errors in a software.
• Verification is concerned with phase containment of errors, whereas validation is
concerned with checking whether the deliverable software is error free.
• Error detection techniques = Verification techniques + Validation techniques

Testing Activities
Testing involves performing the following main activities
• Test suite design: The set of test cases using which a program is to be tested is designed by
using several test case design techniques
• Running test cases and checking the results to detect failures: Each test case is run and the
results are compared with the expected results.
• A mismatch between the actual result and expected results indicates a failure.
• The test cases for which the system fails are noted down for later debugging.
• Locate error: In this activity, the failure symptoms are analysed to locate the errors.
• For each failure observed during the previous activity, the statements that are in error are
identified.
• Error correction: After the error is located during debugging, the code is appropriately
changed to correct the error.

Why design Testcase

• Testing a software using a large collection of randomly selected test cases does not
guarantee that all (or even most) of the errors in the system will be uncovered.
• Consider the following example code segment which determines the greater of two integer
values x and y. This code segment has a simple programming error:
• if (x>y) max = x;
• else max = x;
• For the given code segment, the test suite {(x=3,y=2);(x=2,y=3)} can detect the error,
whereas a larger test suite {(x=3,y=2);(x=4,y=3); (x=5,y=1)} does not detect the error.
• For effective testing, the test suite should be carefully designed rather than picked
randomly.
• A minimal test suite is a carefully designed set of test cases such that each test case helps
detect different errors.

There are essentially two main approaches to systematically design test cases
• Black-box approach
• White-box (or glass-box) approach
• These two approaches to test case design are complementary.
• That is, a program has to be tested using test cases designed by both the approaches,
and testing using one approach does not substitute for testing using the other.

Testing in the large vs. testing in the small

Software products are normally tested first at the individual component (or unit) level. This is
referred to as testing in the small. After testing all the components individually, the components
are slowly integrated and tested at each level of integration (integration testing). Finally, the fully
integrated system is tested (called system testing). Integration and system testing are known as
testing in the large.

Unit testing

Unit testing is undertaken after a module has been coded and successfully reviewed. Unit testing
(or module testing) is the testing of different units (or modules) of a system in isolation.
In order to test a single module, a complete environment is needed to provide all that is necessary
for execution of the module. That is, besides the module under test itself, the following are
needed in order to be able to test the module:
• The procedures belonging to other modules that the module under test calls.
• Nonlocal data structures that the module accesses.
• A procedure to call the functions of the module under test with appropriate parameters.

The modules required to provide the necessary environment (which either call or are called by the
module under test) are usually not available until they too have been unit tested. Therefore, stubs
and drivers are designed to provide the complete environment for a module. The role of stub and
driver modules is pictorially shown in fig. 10.1. A stub procedure is a dummy procedure that has
the same I/O parameters as the given procedure but has a highly simplified behaviour. For
example, a stub procedure may produce the expected behaviour using a simple table lookup
mechanism. A driver module contains the nonlocal data structures accessed by the module under
test, and would also have the code to call the different functions of the module with appropriate
parameter values.
Black box testing
In the black-box testing, test cases are designed from an examination of the input/output values
only and no knowledge of design, or code is required. The following are the two main approaches
to designing black box test cases.

• Equivalence class partitioning


• Boundary value analysis

Equivalence Class Partitioning


In this approach, the domain of input values to a program is partitioned into a set of equivalence
classes. This partitioning is done such that the behavior of the program is similar for every input
data belonging to the same equivalence class. The main idea behind defining the equivalence
classes is that testing the code with any one value belonging to an equivalence class is as good as
testing the software with any other value belonging to that equivalence class. Equivalence classes
for a software can be designed by examining the input data and output
data. The following are some general guidelines for designing the equivalence classes:

1. If the input data values to a system can be specified by a range of values, then one valid and
two invalid equivalence classes should be defined.
2. If the input data assumes values from a set of discrete members of some domain, then one
equivalence class for valid input values and another equivalence class for invalid input values
should be defined.

Example#1: For a software that computes the square root of an input integer which can assume
values in the range of 0 to 5000, there are three equivalence classes: The set of negative integers,
the set of integers in the range of 0 and 5000, and the integers larger than 5000. Therefore, the test
cases must include representatives for each of the three equivalence classes and a possible test set
can be: {-5,500,6000}.
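The square-root example can be sketched as follows. The routine `int_sqrt` below is a hypothetical implementation of the program under test (it returns -1 for out-of-range input, an assumed convention); the test exercises one representative from each of the three equivalence classes.

```c
/* Hypothetical program under test: integer square root for inputs
   in the valid range 0..5000; returns -1 for invalid input
   (an assumed error convention). */
int int_sqrt(int n)
{
    if (n < 0 || n > 5000)
        return -1;            /* the two invalid equivalence classes */
    int r = 0;
    while ((r + 1) * (r + 1) <= n)
        r++;                  /* largest r with r*r <= n */
    return r;
}

/* One representative per equivalence class: {-5, 500, 6000}. */
int equivalence_class_tests(void)
{
    return int_sqrt(-5) == -1     /* negative integers        */
        && int_sqrt(500) == 22    /* integers in 0..5000      */
        && int_sqrt(6000) == -1;  /* integers larger than 5000 */
}
```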

Example#2: Design the black-box test suite for the following program. The program computes the
intersection point of two straight lines and displays the result. It reads two integer pairs (m1, c1)
and (m2, c2) defining the two straight lines of the form y=mx + c.

The equivalence classes are the following:


• Parallel lines (m1=m2, c1≠c2)
• Intersecting lines (m1≠m2)
• Coincident lines (m1=m2, c1=c2)

Now, selecting one representative value from each equivalence class, the test
suite {(2, 2)(2, 5), (5, 5)(7, 7), (10, 10)(10, 10)} is obtained.
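The three equivalence classes for the line-intersection program can be captured by a small classification function. This is a sketch of the classification logic only, not the full intersection-point computation; the enum and function names are illustrative.

```c
enum line_relation { PARALLEL, INTERSECTING, COINCIDENT };

/* Classifies two lines y = m1*x + c1 and y = m2*x + c2 into the
   three equivalence classes identified above. */
enum line_relation classify(int m1, int c1, int m2, int c2)
{
    if (m1 != m2)
        return INTERSECTING;            /* m1 != m2                */
    return (c1 == c2) ? COINCIDENT      /* m1 = m2, c1 = c2        */
                      : PARALLEL;       /* m1 = m2, c1 != c2       */
}
```

Applying the representative test cases from each class exercises all three outcomes.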

Boundary Value Analysis


A type of programming error frequently occurs at the boundaries of different equivalence classes
of inputs. The reason behind such errors might purely be due to psychological factors.
Programmers often fail to see the special processing required by the input values that lie at the
boundary of the different equivalence classes. For example, programmers may improperly use <
instead of <=, or conversely <= for <. Boundary value analysis leads to selection of test cases at
the boundaries of the different equivalence classes.

Example: For a function that computes the square root of integer values in the range of 0 to
5000, the test cases must include the following values: {0, -1, 5000, 5001}.
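The boundary values can be exercised directly against a hypothetical implementation of the square-root routine (the -1 error convention below is an assumption for illustration). Note that each boundary is probed from both sides.

```c
/* Hypothetical square-root routine for the valid range 0..5000;
   returns -1 for out-of-range input (assumed convention). */
int int_sqrt(int n)
{
    if (n < 0 || n > 5000)
        return -1;
    int r = 0;
    while ((r + 1) * (r + 1) <= n)
        r++;
    return r;
}

/* Boundary value analysis: test on both sides of each boundary. */
int boundary_value_tests(void)
{
    return int_sqrt(0) == 0        /* lower boundary, valid      */
        && int_sqrt(-1) == -1      /* just below lower boundary  */
        && int_sqrt(5000) == 70    /* upper boundary, valid      */
        && int_sqrt(5001) == -1;   /* just above upper boundary  */
}
```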

Test cases for equivalence class testing and boundary value analysis for a problem

Let's consider a function that computes the square root of integer values in the range of 0 to
5000. For this particular problem, test cases corresponding to equivalence class testing and
boundary value analysis have been worked out earlier.

White box testing


One white-box testing strategy is said to be stronger than another strategy if all the types of
errors detected by the second strategy are also detected by the first strategy, and the first
strategy additionally detects some more types of errors. When two testing strategies detect
types of errors that are at least partly different, they are called complementary. The concepts
of stronger and complementary testing are schematically illustrated in fig. 10.2.

Statement coverage
The statement coverage strategy aims to design test cases so that every statement in a
program is executed at least once. The principal idea governing the statement coverage
strategy is that unless a statement is executed, it is very hard to determine if an error
exists in that statement. Unless a statement is executed, it is very difficult to observe
whether it causes failure due to some illegal memory access, wrong result computation,
etc. However, executing some statement once and observing that it behaves properly for
that input value is no guarantee that it will behave correctly for all input values. In the
following, the design of test cases using the statement coverage strategy is shown.

Example: Consider Euclid's GCD computation algorithm:

int compute_gcd(int x, int y)
{
1    while (x != y) {
2        if (x > y)
3            x = x - y;
4        else y = y - x;
5    }
6    return x;
}

By choosing the test set {(x=3, y=3), (x=4, y=3), (x=3, y=4)}, we can exercise the
program such that all statements are executed at least once.
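The statement-coverage test set can be run directly against the algorithm. The harness below is a minimal sketch: test case (3, 3) skips the loop entirely, (4, 3) drives execution through the if branch, and (3, 4) through the else branch, so together the three cases execute every statement at least once.

```c
int compute_gcd(int x, int y)
{
    while (x != y) {
        if (x > y)
            x = x - y;
        else
            y = y - x;
    }
    return x;
}

/* Statement coverage test set {(3,3), (4,3), (3,4)}:
   (3,3) -> loop not entered; (4,3) -> if branch; (3,4) -> else branch. */
int statement_coverage_tests(void)
{
    return compute_gcd(3, 3) == 3
        && compute_gcd(4, 3) == 1
        && compute_gcd(3, 4) == 1;
}
```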

Branch coverage
In the branch coverage-based testing strategy, test cases are designed to make each
branch condition to assume true and false values in turn. Branch testing is also known as
edge testing as in this testing scheme, each edge of a program’s control flow graph is
traversed at least once.

It is obvious that branch testing guarantees statement coverage and thus is a


stronger testing strategy compared to the statement coverage-based testing. For Euclid’s
GCD computation algorithm , the test cases for branch coverage can be {(x=3, y=3),
(x=3, y=2), (x=4, y=3), (x=3, y=4)}.

Condition coverage
In this structural testing, test cases are designed to make each component of a composite
conditional expression to assume both true and false values. For example, in the
conditional expression ((c1.and.c2).or.c3), the components c1, c2 and c3 are each made
to assume both true and false values.
Branch testing is probably the simplest condition testing strategy where only the
compound conditions appearing in the different branch statements are made to assume
the true and false values. Thus, condition testing is a stronger testing strategy than branch
testing and branch testing is stronger testing strategy than the statement coverage-based
testing. For a composite conditional expression of n components, 2ⁿ test cases are
required for condition coverage. Thus, for condition coverage, the number of test
cases increases exponentially with the number of component conditions. Therefore, a
condition coverage-based testing technique is practical only if n (the number of
conditions) is small.
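For the expression ((c1 && c2) || c3) with n = 3 components, condition coverage requires all 2³ = 8 combinations of truth values. The sketch below simply enumerates those combinations, which is the exhaustive case-generation step implied by the text; of the 8 combinations, 5 make the composite condition true.

```c
/* Enumerates all 2^3 combinations of (c1, c2, c3) and counts how
   many make the composite condition ((c1 && c2) || c3) true. */
int count_true_outcomes(void)
{
    int count = 0;
    for (int bits = 0; bits < 8; bits++) {  /* 2^n cases, n = 3 */
        int c1 = (bits >> 2) & 1;
        int c2 = (bits >> 1) & 1;
        int c3 = bits & 1;
        if ((c1 && c2) || c3)
            count++;
    }
    return count;
}
```

The exponential growth of this enumeration with n is exactly why condition coverage is practical only for small n.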
Path coverage
The path coverage-based testing strategy requires us to design test cases such that all
linearly independent paths in the program are executed at least once. A linearly
independent path can be defined in terms of the control flow graph (CFG) of a program.

Control Flow Graph (CFG)


A control flow graph describes the sequence in which the different instructions of a
program get executed. In other words, a control flow graph describes how the control
flows through the program. In order to draw the control flow graph of a program, all the
statements of a program must be numbered first. The different numbered statements serve
as nodes of the control flow graph (as shown in fig. 10.3). An edge from one node to
another node exists if the execution of the statement representing the first node can result
in the transfer of control to the other node.

The CFG for any program can be easily drawn by knowing how to represent the sequence,
selection, and iteration type of statements in the CFG. After all, a program is made up from these
types of statements. Fig. 10.3 summarizes how the CFG for these three types of statements can be
drawn. It is important to note that for the iteration type of constructs such as the while construct,
the loop condition is tested only at the beginning of the loop and therefore the control flow from
the last statement of the loop is always to the top of the loop. Using these basic ideas, the CFG of
Euclid’s GCD computation algorithm can be drawn as shown.
Path
A path through a program is a node and edge sequence from the starting node to a
terminal node of the control flow graph of a program. There can be more than one
terminal node in a program. Writing test cases to cover all the paths of a typical program
is impractical. For this reason, the path-coverage testing does not require coverage of
all paths but only coverage of linearly independent paths.

Linearly independent path


A linearly independent path is any path through the program that introduces at least one
new edge that is not included in any other linearly independent paths. If a path has one
new node compared to all other linearly independent paths, then the path is also linearly
independent. This is because, any path having a new node automatically implies that it
has a new edge. Thus, a path that is subpath of another path is not considered to be a
linearly independent path.
Control flow graph
In order to understand the path coverage-based testing strategy, it is very much necessary
to understand the control flow graph (CFG) of a program. Control flow graph (CFG) of a
program has been discussed earlier.
Linearly independent path
The path-coverage testing does not require coverage of all paths but only coverage of
linearly independent paths. Linearly independent paths have been discussed earlier.

Cyclomatic complexity
For more complicated programs it is not easy to determine the number of independent
paths of the program. McCabe’s cyclomatic complexity defines an upper bound for the
number of linearly independent paths through a program. Also, the McCabe’s cyclomatic
complexity is very simple to compute. Thus, the McCabe’s cyclomatic complexity metric
provides a practical way of determining the maximum number of linearly independent
paths in a program. Though the McCabe's metric does not directly identify the linearly
independent paths, it indicates approximately how many paths to look for.

There are three different ways to compute the cyclomatic complexity. The
answers computed by the three methods are guaranteed to agree.

Method 1:
Given a control flow graph G of a program, the cyclomatic complexity V(G) can
be computed as:
V(G) = E – N + 2
where N is the number of nodes of the control flow graph and E is the number of
edges in the control flow graph.

For the CFG of example shown in fig. 10.4, E=7 and N=6. Therefore, the
cyclomatic complexity = 7-6+2 = 3.
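The formula is trivial to code. The sketch below also includes Method 3 (decision count + 1), described later in this section; for the GCD example, E = 7, N = 6 and there are 2 decision statements (the while and the if), so both methods give V(G) = 3.

```c
/* Method 1: V(G) = E - N + 2, where E is the number of edges and
   N the number of nodes of the control flow graph. */
int cyclomatic_complexity(int edges, int nodes)
{
    return edges - nodes + 2;
}

/* Method 3: V(G) = number of decision statements + 1. */
int cyclomatic_from_decisions(int decisions)
{
    return decisions + 1;
}
```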

Method 2:
An alternative way of computing the cyclomatic complexity of a program from an
inspection of its control flow graph is as follows:
V(G) = Total number of bounded areas + 1
In the program's control flow graph G, any region enclosed by nodes and edges
is called a bounded area. This is an easy way to determine the McCabe's
cyclomatic complexity. But, what if the graph G is not
planar, i.e. however you draw the graph, two or more edges intersect? Actually, it
can be shown that structured programs always yield planar graphs. But, presence
of GOTO’s can easily add intersecting edges. Therefore, for non-structured
programs, this way of computing the McCabe’s cyclomatic complexity cannot be
used.
The number of bounded areas increases with the number of decision
paths and loops. Therefore, the McCabe’s metric provides a quantitative measure
of testing difficulty and the ultimate reliability. For the CFG example shown in
fig. 10.4, from a visual examination of the CFG the number of bounded areas is 2.
Therefore the cyclomatic complexity, computing with this method is also 2+1 = 3.
This method provides a very easy way of computing the cyclomatic complexity of
CFGs, just from a visual examination of the CFG. On the other hand, the first
method of computing the cyclomatic complexity is more amenable to automation,
i.e. it can be easily coded into a program which can be used to determine the
cyclomatic complexities of arbitrary CFGs.

Method 3:
The cyclomatic complexity of a program can also be easily computed by
computing the number of decision statements of the program. If N is the number
of decision statements in a program, then the McCabe's metric is equal to N + 1.

Integration Testing

• Integration testing is carried out after all the modules have been unit tested.
• The objective of integration testing is to detect errors at the module interfaces, that is,
to check whether the different modules of a program interface with each other properly.
• The integration plan specifies the steps and the order in which modules are combined to
realise the full system.
• After each integration step, the partially integrated system is tested.
The following approaches can be used to develop the test plan:
• Big-bang approach to integration testing
• Top-down approach to integration testing
• Bottom-up approach to integration testing
• Mixed (also called sandwiched ) approach to integration testing
Big-bang approach to integration testing
• Big-bang testing is the most obvious approach to integration testing.
• In this approach all the unit tested modules of the system are simply linked together and
tested.
• This technique can be used only for very small systems.
• If any defect is found, it becomes difficult to identify where the defect has occurred.

Top-down approach to integration testing


• In top-down integration testing, first the main module is tested, and then testing moves
down to integrate and test its lower-level modules.
• This testing is continued until all modules at the lowest level have been integrated and tested.
• The main advantage is that high-level design decisions can be checked early, before the
lower-level modules are tested.
• The disadvantage is that the testing of a module may be delayed if its lower-level modules
are not available.

Bottom-up approach to integration testing


• In bottom-up integration testing, first the modules for each subsystem are integrated.
• Thus, the subsystems can be integrated separately and independently.
Mixed (also called sandwiched ) approach to integration testing
• The mixed (also called sandwiched ) integration testing follows a combination of top-
down and bottom-up testing approaches.
• In top-down approach, testing can start only after the top-level modules have been coded
and unit tested.
• Similarly, bottom-up testing can start only after the bottom level modules are ready.
• The mixed approach overcomes this shortcoming of the top-down and bottom-up
approaches.
• In the mixed testing approach, testing can start as and when modules become available
after unit testing.
• Therefore, this is one of the most commonly used integration testing approaches.

Phased versus Incremental Integration Testing

• Big-bang integration testing is carried out in a single step of integration.


• In contrast, in the other strategies, integration is carried out over several steps.
• In these later strategies, modules can be integrated either in a phased or incremental
manner.
A comparison of these two strategies is as follows:
• In incremental integration testing, only one new module is added to the partially
integrated system each time.
• In phased integration, a group of related modules are added to the partial system each
time.
• Phased integration requires fewer integration steps compared to the incremental
integration approach.
• But in the incremental testing approach, the errors can be traced more easily.
In the extreme case where all modules are integrated in a single phase, phased integration testing reduces to big-bang testing.
System Testing
• After all the units of a program have been integrated together and tested, system testing is
taken up.
• System tests are designed to validate a fully developed system to assure that it meets its
requirements.
• The test cases are designed based on the SRS document.
• The system testing procedures are the same for both object-oriented and procedural
programs.
• There are essentially three main kinds of system testing depending on who carries out
testing:
1. Alpha Testing: Alpha testing refers to the system testing carried out by the test team within the
developing organisation.
2. Beta Testing: Beta testing is the system testing performed by a select group of friendly
customers.
3. Acceptance Testing: Acceptance testing is the system testing performed by the customer to
determine whether to accept the delivery of the system.

Black Box Testing is a software testing method in which the internal
structure/design/implementation of the item being tested is not known to the tester. Only the
external design and structure are tested.

White Box Testing is a software testing method in which the internal
structure/design/implementation of the item being tested is known to the tester. The
implementation and the internal workings of the code are tested.

Differences between Black Box Testing and White Box Testing:

1. Black box testing is a way of testing software in which the internal structure, code, or
program is hidden and nothing is known about it. In white box testing, the tester has
knowledge of the internal structure, code, or program of the software.
2. Knowledge of the code's implementation is not needed for black box testing; it is
necessary for white box testing.
3. Black box testing is mostly done by software testers; white box testing is mostly done by
software developers.
4. No knowledge of implementation is needed for black box testing; it is required for white
box testing.
5. Black box testing can be referred to as outer or external software testing; white box
testing is inner or internal software testing.
6. Black box testing is a functional test of the software; white box testing is a structural test.
7. Black box testing can be initiated based on the requirements specification document;
white box testing is started after the detailed design document is available.
8. No knowledge of programming is required for black box testing; for white box testing it
is mandatory.
9. Black box testing is behaviour testing of the software; white box testing is logic testing of
the software.
10. Black box testing is applicable to the higher levels of software testing; white box testing
is generally applicable to the lower levels.
11. Black box testing is also called closed testing; white box testing is also called clear box
testing.
12. Black box testing is the least time consuming; white box testing is the most time
consuming.
13. Black box testing is not suitable or preferred for algorithm testing; white box testing is
suitable for algorithm testing.
14. Black box testing can be done by trial-and-error ways and methods; in white box testing,
data domains along with inner or internal boundaries can be better tested.
15. Example of black box testing: searching something on Google using keywords. Example
of white box testing: checking and verifying loops by giving inputs.
16. Black-box test design techniques: decision table testing, all-pairs testing, equivalence
partitioning, error guessing. White-box test design techniques: control flow testing, data
flow testing, branch testing.
17. Types of black box testing: functional testing, non-functional testing, regression testing.
Types of white box testing: path testing, loop testing, condition testing.
18. Black box testing is less exhaustive as compared to white box testing; white box testing
is comparatively more exhaustive than black box testing.
