
Module I: Testing Activities

By: Dimple Bohra


Topics to be covered

● Testing activities and levels

● Sources of Information for Test Case Selection

● Introduction to Testing techniques


Testing Activities
● In order to test a program, a test engineer must perform a sequence of testing activities.
● Most of these activities are shown in the Figure.
● These explanations focus on a single test case.
Testing Activities

1. Identify an objective to be tested
2. Select inputs
3. Compute the expected outcome
4. Set up the execution environment of the program
5. Execute the program
6. Analyze the test result
Testing Activities
● Identify an objective to be tested: The first activity is to
identify an objective to be tested. The objective defines the
intention, or purpose of designing one or more test cases to
ensure that the program supports the objective. A clear purpose
must be associated with every test case.
● Select inputs: The second activity is to select test inputs.
Selection of test inputs can be based on the requirements
specification, the source code, or our expectations. Test inputs
are selected by keeping the test objective in mind.
Testing Activities
● Compute the expected outcome: The third activity is
to compute the expected outcome of the program with
the selected inputs. In most cases, this can be done from
an overall, high-level understanding of the test objective
and the specification of the program under test.
● Set up the execution environment of the program:
The fourth step is to prepare the right execution
environment of the program. In this step all the
assumptions external to the program must be satisfied.
Testing Activities
● Execute the program: In the fifth step, the test engineer executes
the program with the selected inputs and observes the actual
outcome of the program. To execute a test case, inputs may be
provided to the program at different physical locations at different
times. The concept of test coordination is used in synchronizing
different components of a test case.
● Analyze the test result: The final test activity is to analyze the
result of test execution. Here, the main task is to compare the
actual outcome of program execution with the expected outcome.
The complexity of comparison depends on the complexity of the
data to be observed.
Testing Activities
● At the end of the analysis step, a test verdict is assigned to the
program.
● There are three major kinds of test verdicts, namely, pass, fail,
and inconclusive.
● If the program produces the expected outcome and the purpose
of the test case is satisfied, then a pass verdict is assigned.
● If the program does not produce the expected outcome, then a
fail verdict is assigned.
● However, in some cases it may not be possible to assign a clear
pass or fail verdict. In such cases, an inconclusive test verdict is
assigned.
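The verdict logic above can be sketched as a small helper (a minimal illustration; the `observable` flag is a hypothetical stand-in for any situation where the outcome cannot be judged):

```python
def assign_verdict(actual, expected, observable=True):
    """Compare the actual outcome against the expected outcome.

    Returns one of the three verdicts: "pass", "fail", or
    "inconclusive" (when no clear judgment is possible).
    """
    if not observable:
        return "inconclusive"
    return "pass" if actual == expected else "fail"

# Example: expected factorial(5) = 120
print(assign_verdict(120, 120))                      # pass
print(assign_verdict(100, 120))                      # fail
print(assign_verdict(None, 120, observable=False))   # inconclusive
```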
Testing Activities
● A test report must be written after analyzing the test result.
● The motivation for writing a test report is to get the fault fixed
if the test revealed a fault.
● To be informative, a test report contains the following items:
1. An explanation of how to reproduce the failure.
2. An analysis of the failure so that it can be described precisely.
3. A pointer to the actual outcome and the test case, complete with the
input, the expected outcome, and the execution environment.
Test Levels
● The first three levels of testing are performed by a number of
different stakeholders in the development organization, whereas
acceptance testing is performed by the customers.
● The four stages of testing are illustrated in the form of what is
called the classical V model in the Figure.
Test Levels
● In unit testing, programmers test individual program units,
such as procedures, functions, methods, or classes, in
isolation.
● After ensuring that individual units work to a satisfactory extent,
modules are assembled to construct larger subsystems by
following integration testing techniques.
● Integration testing is jointly performed by software
developers and integration test engineers.
● The objective of integration testing is to construct a reasonably
stable system that can withstand the rigor of system-level
testing.
Test Levels
● System-level testing includes a wide spectrum of testing,
such as functionality testing, security testing, robustness testing,
load testing, stability testing, stress testing, performance
testing, and reliability testing.
● System testing is a critical phase in a software development
process because of the need to meet a tight schedule close to
delivery date, to discover most of the faults, and to verify that
fixes are working and have not resulted in new faults.
Test Levels
● Regression testing is another level of testing that is performed
throughout the life cycle of a system.
● Regression testing is performed whenever a component of the system is
modified. The key idea in regression testing is to ascertain that the
modification has not introduced any new faults in the portion that was
not subject to modification.
● To be precise, regression testing is not a distinct level of testing.
● Rather, it is considered as a subphase of unit, integration, and system-
level testing, as illustrated in Figure
Test Levels
● In regression testing, new tests are not designed. Instead, tests
are selected, prioritized, and executed from the existing pool of
test cases to ensure that nothing is broken in the new version of
the software.
● Regression testing is an expensive process and accounts for a
predominant portion of testing effort in the industry.
● It is desirable to select a subset of the test cases from the
existing pool to reduce the cost.
● A key question is how many, and which, test cases should be
selected so that they are more likely to uncover new faults.
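One simple selection strategy, sketched below with a hypothetical test pool (the test names and component sets are illustrative), is to keep only the tests that exercise a modified component:

```python
# Hypothetical pool: each test records the components it exercises.
test_pool = [
    {"name": "t1", "components": {"parser", "lexer"}},
    {"name": "t2", "components": {"scheduler"}},
    {"name": "t3", "components": {"parser", "codegen"}},
]

def select_regression_tests(pool, modified):
    """Select tests that touch at least one modified component."""
    return [t["name"] for t in pool if t["components"] & modified]

print(select_regression_tests(test_pool, {"parser"}))  # ['t1', 't3']
```

Real regression-selection techniques also prioritize the chosen tests, e.g. by past fault-detection history; this sketch shows only the selection step.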
Test levels
● After the completion of system-level testing, the product is delivered to the
customer.
● The customer performs their own series of tests, commonly known as
acceptance testing.
● The objective of acceptance testing is to measure the quality of the
product, rather than to search for defects, which is the objective of
system testing.
● A key notion in acceptance testing is the customer’s expectations from the
system.
● By the time of acceptance testing, the customer should have developed
their acceptance criteria based on their own expectations from the system.
● There are two kinds of acceptance testing as explained in the following:
1. User acceptance testing (UAT)
2. Business acceptance testing (BAT)
Test Levels
● User acceptance testing is conducted by the customer to ensure
that the system satisfies the contractual acceptance criteria before
being signed off as meeting user needs.
● On the other hand, Business Acceptance Testing is undertaken
within the supplier’s development organization.
● The idea in having a BAT is to ensure that the system will eventually
pass the user acceptance test.
● It is a rehearsal of UAT at the supplier’s premises.
Test Levels
● UAT means testing a software product from the perspective of the
end user, to ensure it meets their needs and functions correctly in
real-world scenarios, while BAT focuses on whether the software
aligns with broader business goals and objectives, not just user
functionality. Essentially, UAT checks whether users can effectively
use the system, while BAT checks whether the system delivers the
desired business outcomes.
SOURCES OF INFORMATION FOR TEST CASE SELECTION

● Designing test cases remains a focus of both the research
community and practitioners.
● A software development process generates a large body of
information, such as requirements specification, design document,
and source code.
● In order to generate effective tests at a lower cost, test designers
analyze the following sources of information:
1. Requirements and functional specifications
2. Source code
3. Input and output domains
4. Operational profile
5. Fault model
Requirements and Functional
Specifications
● The process of software development begins by capturing user
needs. The nature and amount of user needs identified at the
beginning of system development will vary depending on the
specific life-cycle model to be followed.
● The requirements might have been specified in an informal
manner, such as a combination of plaintext, equations, figures,
and flowcharts. Though this form of requirements specification
may be ambiguous, it is easily understood by customers.
● For some systems, requirements may have been captured in the
form of use cases, entity–relationship diagrams, and class
diagrams.
● Sometimes the requirements of a system may have been specified
in a formal language or notation, such as Z, SDL, or finite-state
machine.
● Both the informal and formal specifications are prime sources of
test cases.
Source Code
● While the requirements specification describes the intended behavior
of a system, the source code describes its actual behavior.
● High-level assumptions and constraints take concrete form in an
implementation.
● Though a software designer may produce a detailed design,
programmers may introduce additional details into the system.
● For example, a step in the detailed design can be “sort array A.”
To sort an array, there are many sorting algorithms with different
characteristics, such as iteration, recursion, and temporarily
using another array.
● Therefore, test cases must be designed based on the program as
actually implemented, not only on its detailed design.
Input and Output Domains
● Some values in the input domain of a program have special
meanings, and hence must be treated separately.
● To illustrate this point, let us consider the factorial function.
● The factorial of a nonnegative integer n is computed as follows:
1. factorial(0) = 1;
2. factorial(1) = 1;
3. factorial(n) = n * factorial(n-1);
A programmer may wrongly implement the factorial function as
factorial(n) = 1 * 2 * ... * n;
without considering the special case of n = 0.
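The fault described above can be reproduced with a runnable sketch (the buggy variant is one plausible way the n = 0 case gets missed; ordinary inputs hide the fault, and only the special value exposes it):

```python
def factorial(n):
    """Correct implementation, handling the special case n = 0."""
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

def factorial_buggy(n):
    """Faulty variant: builds the product starting from n itself,
    so it never considers the special case n = 0."""
    result = n
    for i in range(1, n):
        result *= i
    return result

# Ordinary inputs hide the fault; the special input 0 exposes it.
assert factorial(5) == factorial_buggy(5) == 120
assert factorial(0) == 1
assert factorial_buggy(0) == 0  # wrong: should be 1
```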
Input and Output Domains
● Sometimes even some output values have special meanings, and a
program must be tested to ensure that it produces the special values in
all such cases.
● In the above example, the output value 1 has special significance: (i) it is
the minimum value computed by the factorial function and (ii) it is the
only value produced for two different inputs.
● In the integer domain, the values 0 and 1 exhibit special characteristics if
arithmetic operations are performed.
● These characteristics are 0 × x = 0 and 1 × x = x for all values of x.
Therefore, all the special values in the input and output domains of a
program must be considered while testing the program.
Operational Profile
● As the term suggests, an operational profile is a quantitative
characterization of how a system will be used.
● It was created to guide test engineers in selecting test cases (inputs)
using samples of system usage.
● The ways test engineers assign probability and select test cases to
operate a system may significantly differ from the ways actual users
operate a system.
● However, for accurate estimation of the reliability of a system it is
important to test a system by considering the ways it will actually be
used in the field.
● This concept is being used to test web applications, where the user
session data are collected from the web servers to select test cases.
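A minimal sketch of profile-driven input selection, assuming a hypothetical profile of three operations with illustrative probabilities:

```python
import random

# Hypothetical operational profile: relative usage frequency per operation.
profile = {"search": 0.60, "browse": 0.30, "checkout": 0.10}

def sample_operations(profile, k, seed=42):
    """Draw k operations to test, weighted by their usage probability."""
    rng = random.Random(seed)  # fixed seed for reproducible test runs
    ops = list(profile)
    weights = [profile[op] for op in ops]
    return rng.choices(ops, weights=weights, k=k)

sample = sample_operations(profile, 1000)
# Frequent operations dominate the selected tests, mirroring field usage.
```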
Fault Model
● Previously encountered faults are an excellent source of
information in designing new test cases.
● The known faults are classified into different classes, such as
initialization faults, logic faults, and interface faults,
and stored in a repository.
● Test engineers can use these data in designing tests to
ensure that a particular class of faults is not resident in the
program.
● There are three types of fault-based testing: error
guessing, fault seeding, and mutation analysis.
Fault Model
In error guessing, a test engineer applies his experience to
● Assess the situation and guess where and what kinds of
faults might exist,
● Design tests to specifically expose those kinds of faults.
In fault seeding, known faults are injected into a program, and
the test suite is executed to assess the effectiveness of the test
suite.
Fault seeding makes an assumption that a test suite that finds
seeded faults is also likely to find other faults.
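The seeding assumption can be turned into a simple effectiveness measure (a sketch; the numbers below are illustrative):

```python
def seeding_effectiveness(seeded, seeded_found):
    """Fraction of seeded faults detected by the test suite, used as a
    proxy for its ability to reveal real (unknown) faults."""
    if seeded == 0:
        raise ValueError("no faults were seeded")
    return seeded_found / seeded

# 20 faults seeded, 15 found by the suite -> effectiveness 0.75
print(seeding_effectiveness(20, 15))
```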
Fault Model
Mutation analysis is similar to fault seeding, except that
mutations to program statements are made in order to
determine the fault detection capability of the test suite.
If the test cases are not capable of revealing such faults, the test
engineer may specify additional test cases to reveal the faults.
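A minimal mutation-analysis sketch: one operator is mutated, and the suite "kills" the mutant if some test case observes a wrong outcome (the function and test values are illustrative):

```python
def add(a, b):
    return a + b

def add_mutant(a, b):
    return a - b  # mutation: '+' replaced by '-'

# Test suite: (arguments, expected outcome) pairs.
suite = [((0, 0), 0), ((2, 2), 4)]

def killed(program, suite):
    """A mutant is killed if any test case produces a wrong outcome."""
    return any(program(*args) != expected for args, expected in suite)

assert not killed(add, suite)     # original passes every test
assert killed(add_mutant, suite)  # the input (2, 2) exposes the mutation
```

Note that the test (0, 0) alone would not kill this mutant, since 0 + 0 and 0 - 0 agree; a suite that kills no mutants is the signal to add test cases.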
Fault Model
● Mutation testing is based on the idea of fault simulation, whereas
fault seeding is based on the idea of fault injection.
● In the fault injection approach, a fault is inserted into a program,
and an oracle is available to assert that the inserted fault indeed
made the program incorrect.
● On the other hand, in fault simulation, a program modification is not
guaranteed to lead to a faulty program.
● In fault simulation, one may modify an incorrect program and turn it
into a correct program.
Testing Techniques
● Software testing techniques are methods used to design
and execute tests to evaluate software applications
● Two broad concepts in testing, based on the sources of
information for test design, are white-box and black-box testing.
● White-box testing techniques are also called structural
testing techniques, whereas black-box testing techniques
are called functional testing techniques.
Structural Testing
● In structural testing, one primarily examines source code
with a focus on control flow and data flow.
● Control flow refers to flow of control from one instruction to
another.
● Conditional statements alter the normal, sequential flow of
control in a program.
● Data flow refers to the propagation of values from one
variable or constant to another variable.
● Definitions and uses of variables determine the data flow
aspect in a program.
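A small example of what structural testing examines: control flow through the two branches, and data flow through the definitions and use of `label`:

```python
def classify(x):
    # Control flow: the if statement creates two paths through the code.
    if x >= 0:
        label = "non-negative"   # definition of 'label' on the true path
    else:
        label = "negative"       # definition of 'label' on the false path
    return label                 # use of 'label' (a def-use pair)

# Branch coverage requires at least one input for each path:
assert classify(5) == "non-negative"
assert classify(-3) == "negative"
```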
Functional Testing
● In functional testing, one does not have access to the internal details of a
program and the program is treated as a black box.
● A test engineer is concerned only with the part that is accessible outside the
program, that is, just the input and the externally visible outcome.
● A test engineer applies input to a program, observes the externally visible
outcome of the program, and determines whether or not the program
outcome is the expected outcome.
● Inputs are selected from the program’s requirements specification and
properties of the program’s input and output domains.
● A test engineer is concerned only with the functionality and the features
found in the program’s specification.
● Examples: Equivalence partitioning, BVA
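A sketch of both example techniques for a hypothetical specification that accepts ages 18 through 60 inclusive (the range and representative values are illustrative, not from the slides):

```python
LOW, HIGH = 18, 60  # hypothetical valid range from the specification

def is_valid_age(age):
    return LOW <= age <= HIGH

# Equivalence partitioning: one representative input per partition.
representatives = {"below range": 5, "in range": 35, "above range": 70}

# Boundary value analysis: values at and immediately around each boundary.
bva_inputs = [LOW - 1, LOW, LOW + 1, HIGH - 1, HIGH, HIGH + 1]

assert [is_valid_age(v) for v in bva_inputs] == [
    False, True, True, True, True, False
]
```

Both techniques select inputs purely from the specification, never from the source code, which is what makes them black-box (functional) techniques.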
Scopes of Structural and Functional Testing

● At this point it is useful to identify a distinction between the
scopes of structural testing and functional testing.
● One applies structural testing techniques to individual units of a
program, whereas functional testing techniques can be applied to
both an entire system and the individual program units.
● Since individual programmers know the details of the source code
they write, they themselves perform structural testing on the
individual program units they write.
● On the other hand, functional testing is performed at the external
interface level of a system, and it is conducted by a separate
software quality assurance group.
Thank You !!!!
