Testing

How to test a program?


• Testing a program involves executing the program with a set of test
inputs and observing if the program behaves as expected.
• If the program fails to behave as expected, then the input data and
the conditions under which it fails are noted for later debugging and
error correction.
Basic Terminologies
1. Test Case
• A test case is a triplet [I , S, R], where I is the data input to the program
under test, S is the state of the program (execution mode) at which the
data is to be input, and R is the result expected to be produced by the
program.
• For example, consider the different execution modes of a certain text editor
software. The text editor can at any time during its execution assume any
of the following execution modes—edit, view, create, and display.
• An example of a test case is—[input: “abc”, state: edit, result: abc is
displayed], which essentially means that the input abc needs to be applied
in the edit mode, and the expected result is that the string abc would be
displayed.
Contd..
2. A test script is an encoding of a test case as a short program. Test
scripts are developed for automated execution of the test cases.
3. A test case is said to be a positive test case if it is designed to test
whether the software correctly performs a required functionality.
4. A test case is said to be a negative test case if it is designed to
test whether the software does something that is not required of
the system.
5. A test suite is the set of all test cases that have been designed by a
tester to test a given program.
Why Design Test Cases?
• Testing software using a large collection of randomly selected test
cases does not guarantee that all (or even most) of the errors in the
system will be uncovered.
• Consider the following example code segment which determines the
greater of two integer values x and y. This code segment has a simple
programming error:
For the given code segment, the test suite
{(x=3, y=2), (x=2, y=3)} can detect the error,
whereas a larger test suite
{(x=3, y=2), (x=4, y=3), (x=5, y=1)} does not.
Contd..
• A minimal test suite is a carefully designed set of test cases such that
each test case helps detect different errors. This is in contrast to
testing using some random input values.
• There are essentially two main approaches to systematically design
test cases:
• Black-box approach
• White-box (or glass-box) approach
Contd..
• In the black-box approach, test cases are designed using only the functional specification
of the software.
• That is, test cases are designed solely based on an analysis of the input/output
behaviour (that is, the functional behaviour), which does not require any
knowledge of the internal structure of the program.
• For this reason, black-box testing is also known as functional testing.
• Designing white-box test cases requires a thorough knowledge of the internal structure
of a program, and therefore white-box testing is also called structural testing.
• Black-box test cases are designed solely based on the input-output behaviour of a
program.
• In contrast, white-box test cases are based on an analysis of the code. These two
approaches to test case design are complementary.
Strategic Approach to Software Testing
• Testing is a set of activities that can be planned in advance and
conducted systematically.
• Testing begins at the component level and works “outward” toward
the integration of the entire computer-based system.
• Testing and debugging are different activities, but debugging must be
accommodated in any testing strategy.
• A strategy for software testing must accommodate low-level tests
that are necessary to verify that a small source code segment has
been correctly implemented as well as high-level tests that validate
major system functions against customer requirements.
Contd..
• Verification and Validation:
• Verification: “Are we building the product right?”
• Validation: “Are we building the right product?”
Software Testing Strategy—The Big Picture
• System engineering defines the role of
software and leads to software
requirements analysis, where the
information domain, function,
behavior, performance, constraints,
and validation criteria for software are
established.
• Unit testing begins at the vortex of the
spiral and concentrates on each unit
(e.g., component, class) of the
software as implemented in source
code.
Contd…
• Integration testing, where the focus is on design and the construction
of the software architecture.
• Validation testing, where requirements established as part of
requirements modeling are validated against the software that has
been constructed.
• System testing, where the software and other system elements are
tested as a whole.
Contd..
• Testing within the context of software engineering is a series of four
steps that are implemented sequentially.
Test Plan
• Test Plan is a detailed document that describes the test strategy,
objectives, schedule, estimation and deliverables, and resources
required for testing.
• Test Plan helps us determine the effort needed to validate the quality
of the application under test.
• The test plan serves as a blueprint to conduct software testing
activities as a defined process which is minutely monitored and
controlled by the test manager.
Example Test Plan – Banking Web Application
1. Introduction
The Test Plan is designed to prescribe the scope, approach, resources,
and schedule of all testing activities of the project.
The plan identifies the items to be tested, the features to be tested, the
types of testing to be performed, the personnel responsible for testing,
the resources and schedule required to complete testing, and the risks
associated with the plan.
1.1.1 In Scope
• All the features of the application that were defined in the software
requirement specification need to be tested.
Contd..
1.1.2 Out of Scope
• These features are not to be tested because they are not included in the
software requirement specification:
• User Interfaces
• Hardware Interfaces
• Software Interfaces
• Database logical
• Communications Interfaces
• Website Security and Performance
Contd..
1.2 Quality Objective
• The test objective is to verify the functionality of the Banking Web
Application. The project should focus on testing banking operations
such as Account Management, Withdrawal, and Balance, etc., to
guarantee that all these operations work normally in a real business
environment.
Contd..
2. Test Methodology
2.1 Test Levels
• In this project, four levels of testing will be conducted:
• Unit Testing: Testing individual modules to check their functionality
• Integration Testing: Individual software modules are combined and tested as
a group
• System Testing: Conducted on a complete, integrated system to evaluate the
system’s compliance with its specified requirements
• API Testing: Test all the APIs created for the software under test
Contd..
2.2 Bug Triage
2.3 Suspension Criteria and Resumption Requirements
• If team members report that 40% of the test cases have failed,
suspend testing until the development team fixes all the failed cases.
2.4 Test Completeness
• Specifies the criteria that denote a successful completion of a test
phase.
• The run rate must be 100% unless a clear reason is given.
• The pass rate must be 80%; achieving the pass rate is mandatory.
Contd..
2.5 Project task and estimation and schedule
Contd..
3. Test Deliverables
Test deliverables are provided below:
• Before testing phase
• Test plans document.
• Test cases documents
• Test Design specifications.
• During the testing
• Test Tool Simulators.
• Test Data
• Test Trace-ability Matrix – Error logs and execution logs.
• After the testing cycle is over
• Test Results/reports
• Defect Report
• Installation/ Test procedures guidelines
• Release notes
Contd..
4 Resource & Environment Needs
4.1 Testing Tools
Contd..
4.2 Test Environment
• It mentions the minimum hardware and software requirements that
will be used to test the Application.
• The following software is required in addition to client-specific
software:
• Windows 11 and above
• Office 2021 and above, etc.
Unit testing
• A testing strategy that is chosen by most software teams takes an
incremental view of testing, beginning with the testing of individual
program units, moving to tests designed to facilitate the integration of
the units, and culminating with tests that exercise the constructed
system.
Unit Testing
• Unit testing focuses verification effort on the smallest unit of software
design—the software component or module.
• Using the component-level design description as a guide, important
control paths are tested to uncover errors within the boundary of the
module
Contd..
Driver and stub modules
• In order to test a single module, we need a complete environment to
provide all relevant code that is necessary for execution of the
module.
• That is, besides the module under test, the following are needed to
test the module:
• The procedures belonging to other modules that the module under test calls.
• Non-local data structures that the module accesses.
• A procedure to call the functions of the module under test with appropriate
parameters.
Contd..
• Modules required to provide the necessary environment (which
either call or are called by the module under test) are usually not
available until they too have been unit tested.
• In this context, stubs and drivers are designed to provide the
complete environment for a module so that testing can be carried
out.
• Stub: A stub procedure is a dummy procedure that has the same I/O
parameters as the function called by the unit under test but has a
highly simplified behaviour; for example, it may produce the
expected result using a simple table look-up.
Contd..
• Driver: A driver module should contain the non-local data structures
accessed by the module under test. Additionally, it should also have
the code to call the different functions of the unit under test with
appropriate parameter values for testing.
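A minimal sketch of a stub and a driver (the module and function names below are hypothetical):

```c
/* Stub: stands in for get_rate(), which belongs to another module
 * that has not yet been unit tested.  It has the same I/O
 * parameters as the real function but a highly simplified
 * behaviour: it returns a canned value instead of consulting
 * the real rate table. */
double get_rate(int account_type) {
    (void)account_type;     /* parameter ignored by the stub */
    return 0.25;            /* canned rate */
}

/* Module under test: calls get_rate() from the other module. */
double compute_interest(double balance, int account_type) {
    return balance * get_rate(account_type);
}

/* Driver: calls the unit under test with chosen parameter values
 * and checks the result against the expected value. */
int run_driver(void) {
    return compute_interest(1000.0, 1) == 250.0;
}
```

The driver supplies the test parameters and expected results; the stub merely keeps the module executable until the real get_rate() module has itself been unit tested.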
BLACK-BOX TESTING
• In black-box testing, test cases are designed from an examination of
the input/output values only and no knowledge of design or code is
required.
• The following are the two main approaches available to design black
box test cases:
• Equivalence class partitioning
• Boundary value analysis
Equivalence Class Partitioning
• In the equivalence class partitioning approach, the domain of input
values to the program under test is partitioned into a set of
equivalence classes.
• The partitioning is done such that for every input data belonging to
the same equivalence class, the program behaves similarly.
• The main idea behind defining equivalence classes of input data is
that testing the code with any one value belonging to an equivalence
class is as good as testing the code with any other value belonging to
the same equivalence class.
Contd..
• The following are two general guidelines for designing the equivalence classes:
1. If the input data values to a system can be specified by a range of
values, then one valid and two invalid equivalence classes need to be
defined.
For example, if the equivalence class is the set of integers in the range 1 to 10 (i.e., [1,10]),
then the invalid equivalence classes are [−∞,0], [11,+∞].
2. If the input data assumes values from a set of discrete members of some domain, then
one equivalence class for the valid input values and another equivalence class for the
invalid input values should be defined.
For example, if the valid equivalence classes are {A,B,C}, then the invalid equivalence class
is X - {A,B,C}, where X is the universe of possible input values.
Examples
• For a software that computes the square root of an input integer that
can assume values in the range of 0 and 5000. Determine the
equivalence classes and the black box test suite.
• Answer: There are three equivalence classes—The set of negative
integers, the set of integers in the range of 0 and 5000, and the set of
integers larger than 5000.
• Therefore, the test cases must include representatives for each of the
three equivalence classes. A possible test suite can be: {–5,500,6000}.
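A sketch of a function with this input domain (the range check and the −1 error convention are assumptions) shows how each element of {−5, 500, 6000} lands in a different equivalence class:

```c
/* Valid input range is [0, 5000]; returns -1 for invalid input,
 * otherwise the floor of the square root. */
int safe_sqrt(int n) {
    int r = 0;
    if (n < 0 || n > 5000)          /* the two invalid classes */
        return -1;
    while ((r + 1) * (r + 1) <= n)  /* the valid class [0, 5000] */
        r++;
    return r;
}
```

Here −5 exercises the negative-integer class, 500 the valid range, and 6000 the class of integers larger than 5000.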
Example
• Design equivalence class partitioning test suite for a function that
reads a character string of size less than five characters and displays
whether it is a palindrome.

Answer: The equivalence classes are the leaf-level classes shown in the
figure: palindromes, non-palindromes, and invalid inputs. Selecting one
representative value from each equivalence class gives the required
test suite: {abc, aba, abcdef}.
Boundary Value Analysis
• Boundary value analysis-based test suite design involves designing test
cases using the values at the boundaries of different equivalence classes.
• To design boundary value test cases, it is required to examine the
equivalence classes to check if any of the equivalence classes contains a
range of values.
• For those equivalence classes that are not a range of values (i.e., consist of
a discrete collection of values) no boundary value test cases can be
defined.
• For an equivalence class that is a range of values, the boundary values
need to be included in the test suite.
• For example, if an equivalence class contains the integers in the range 1 to
10, then the boundary value test suite is {0,1,10,11}.
Example
• For a function that computes the square root of the integer values in
the range of 0 and 5000, determine the boundary value test suite.
• Answer: There are three equivalence classes—the set of negative
integers, the set of integers in the range 0 to 5000, and the set of
integers larger than 5000. The boundary value-based test suite is:
{0, −1, 5000, 5001}.
Summary of the Black-box Test Suite Design
Approach
• Examine the input and output values of the program.
• Identify the equivalence classes.
• Design equivalence class test cases by picking one representative
value from each equivalence class.
• Design the boundary value test cases as follows.
• Examine if any equivalence class is a range of values.
• Include the values at the boundaries of such equivalence classes in
the test suite.
WHITE-BOX TESTING
• A white-box testing strategy can either be coverage-based or fault
based.
• Fault-based testing: A fault-based testing strategy targets detecting
certain types of faults. The faults that a test strategy focuses on
constitute the fault model of the strategy.
• Coverage-based testing : A coverage-based testing strategy attempts
to execute (or cover) certain elements of a program. Popular
examples of coverage-based testing strategies are statement
coverage, branch coverage, multiple condition coverage, and path
coverage-based testing.
Contd..
• Testing criterion for coverage-based testing : A coverage-based
testing strategy typically targets to execute (i.e., cover) certain
program elements for discovering failures.
• The set of specific program elements that a testing strategy targets to
execute is called the testing criterion of the strategy.
• For example, if a testing strategy requires all the statements of a
program to be executed at least once, then we say that the testing
criterion of the strategy is statement coverage.
Types of Coverage Based Testing
• Statement Coverage
• The statement coverage strategy aims to design test cases so as to
execute every statement in a program at least once.
• The principal idea governing the statement coverage strategy is that
unless a statement is executed, there is no way to determine whether
an error exists in that statement.
• A weakness of the statement-coverage strategy is that executing a
statement once and observing that it behaves properly for one input
value is no guarantee that it will behave correctly for all input values.
Contd..
• In the case of a flowchart, every node must be traversed at least
once.
Example
• Design statement coverage-based test suite for the following Euclid’s
GCD computation program:
To design the test cases for the statement coverage, the
conditional expression of the while statement needs to be
made true and the conditional expression of the if statement
needs to be made both true and false.
By choosing the test set {(x = 3, y = 3), (x = 4, y = 3), (x = 3, y
= 4)}, all statements of the program would be executed at
least once.
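The program listing is not reproduced on the slide; Euclid's GCD by repeated subtraction is conventionally written as follows (a sketch):

```c
/* Euclid's GCD computation by repeated subtraction. */
int gcd(int x, int y) {
    while (x != y) {    /* made true by (4,3) and (3,4) */
        if (x > y)      /* true for (4,3), false for (3,4) */
            x = x - y;
        else
            y = y - x;
    }
    return x;           /* (3,3) skips the loop and reaches here */
}
```

With the test set {(x=3, y=3), (x=4, y=3), (x=3, y=4)}, the while condition takes both outcomes and the if condition takes both outcomes, so every statement is executed at least once.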
Contd..
• Branch Coverage
• A test suite satisfies branch
coverage, if it makes each branch
condition in the program to assume
true and false values in turn.
• Branch testing is also known as edge
testing, since in this testing scheme,
each edge of a program’s control
flow graph is traversed at least once.
Control Flow Graphs for (a) sequence, (b)
selection, and (c) iteration type of constructs.
Contd..

Four test cases are required such that all branches of all decisions are
covered, i.e., all edges of the flowchart are covered.
Contd..
• Condition Coverage
• In this technique, each individual condition must be covered, i.e.,
each simple condition is made to evaluate to both true and false.
Contd..
• Multiple Condition Coverage
• In the multiple condition (MC) coverage-based
testing, test cases are designed to make each
component of a composite conditional
expression to assume both true and false
values.
• For example, consider the composite
conditional expression ((c1 and c2 ) or c3). A
test suite would achieve MC coverage, if all the
component conditions c1, c2 and c3 are each
made to assume both true and false values.
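A sketch for the composite condition ((c1 and c2) or c3); the particular four-case suite below is one possible MC-covering choice (an assumption, not the only one):

```c
/* Composite conditional expression ((c1 && c2) || c3). */
int cond(int c1, int c2, int c3) {
    return (c1 && c2) || c3;
}
```

The four cases (T,T,T), (F,F,F), (T,F,F), (F,T,T) give each of the component conditions c1, c2, and c3 both a true and a false value, so the suite achieves MC coverage.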
Basis Path Testing
• Basis path testing is a white-box testing technique.
• The basis path method enables the test-case designer to derive a
logical complexity measure of a procedural design and use this
measure as a guide for defining a basis set of execution paths.
• Test cases derived to exercise the basis set are guaranteed to execute
every statement in the program at least one time during testing.
• Control flow graph (CFG)
• A control flow graph describes the sequence in which the different
instructions of a program get executed
Contd..
• In order to draw the control flow graph of a
program, we need to first number all the
statements of a program.
• The different numbered statements serve
as nodes of the control flow graph.
• There exists an edge from one node to
another, if the execution of the statement
representing the first node can result in the
transfer of control to the other node.
Contd..
• A CFG is a directed graph consisting of a set of nodes and edges (N, E),
such that each node N corresponds to a unique program statement
and an edge exists between two nodes if control can transfer from
one node to the other.
• Note: For iteration constructs such as the while construct, the
loop condition is tested only at the beginning of the loop, and
therefore control always flows from the last statement of the loop
back to the top of the loop. That is, the loop construct terminates
from the first statement (once the loop condition is found to be
false) and never exits the loop at its last statement.
Contd..
• A sequence of process boxes and a decision diamond can map into a
single node.
• The arrows on the flow graph, called edges or links, represent flow of
control and are analogous to flowchart arrows.
• An edge must terminate at a node, even if the node does not
represent any procedural statements (e.g., see the flow graph symbol
for the if-then-else construct).
• Areas bounded by edges and nodes are called regions. When counting
regions, we include the area outside the graph as a region
Contd..
• Path
• A path through a program is any node and edge sequence from the
start node to a terminal node of the control flow graph of a program.
• A program can have more than one terminal node when it contains
multiple exit or return type statements.
• Writing test cases to cover all paths of a typical program is impractical
since there can be an infinite number of paths through a program in
presence of loops.
Contd..
• Independent Program Paths
• A set of paths for a given program is called set of independent
program paths (or the set of basis paths or simply the basis set), if
each path in the set introduces at least one new edge that is not
included in any other path in the set.
• An independent path must move along at least one edge that has not
been traversed before the path is defined.
Contd..
• For example, a set of four independent paths (paths 1 through 4) can
be identified for the flow graph illustrated in Figure 18.2b.
Contd..
• Paths 1 through 4 constitute a basis set for the flow graph in Figure 18.2b.
• If you can design tests to force execution of these paths (a basis set), every
statement in the program will have been guaranteed to be executed at least one
time and every condition will have been executed on its true and false sides.
• Cyclomatic complexity is a software metric that provides a quantitative measure
of the logical complexity of a program.
• When used in the context of the basis path testing method, the value computed
for cyclomatic complexity defines the number of independent paths in the basis
set of a program and provides you with an upper bound for the number of tests
that must be conducted to ensure that all statements have been executed at least
once.
Contd..
• Complexity is computed in one of three ways:
1. The number of regions of the flow graph.
2. V(G) = E − N + 2, where E is the number of flow graph edges and N is the number of flow graph nodes.
3. V(G) = P + 1, where P is the number of predicate nodes in the flow graph.
Contd..
• Referring once more to the flow graph in Figure 18.2b, the cyclomatic
complexity can be computed using each of the algorithms just noted:
1. The flow graph has four regions.
2. V(G) = 11 edges - 9 nodes + 2 = 4.
3. V(G) = 3 predicate nodes + 1 = 4.
• Therefore, the cyclomatic complexity of the flow graph in Figure 18.2b is 4.
• The value for V(G) provides you with an upper bound for the number of
independent paths that form the basis set and, by implication, an upper
bound on the number of tests that must be designed and executed to
guarantee coverage of all program statements.
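Two of the equivalent formulas can be checked against each other mechanically; a small sketch using the Figure 18.2b counts (11 edges, 9 nodes, 3 predicate nodes):

```c
/* Cyclomatic complexity, two of the three equivalent forms. */
int vg_from_edges(int edges, int nodes) {
    return edges - nodes + 2;   /* V(G) = E - N + 2 */
}

int vg_from_predicates(int predicates) {
    return predicates + 1;      /* V(G) = P + 1 */
}
```

For the flow graph of Figure 18.2b both forms give V(G) = 4, matching the region count.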
Integration Testing
• Integration testing is a systematic technique for constructing the
program structure while at the same time conducting tests to uncover
errors associated with interfacing.
• The objective is to take unit tested components and build a program
structure that has been dictated by design.
Contd..
• Top-down Integration
• Top-down integration testing is an incremental approach to
construction of program structure.
• Modules are integrated by moving downward through the control
hierarchy, beginning with the main control module (main program).
• Modules subordinate (and ultimately subordinate) to the main
control module are incorporated into the structure in either a depth-
first or breadth-first manner.
Contd..
• Depth-first integration would integrate all
components on a major control path of the
structure.
• Selection of a major path is somewhat arbitrary
and depends on application-specific
characteristics.
• For example, selecting the lefthand path,
components M1, M2 , M5 would be integrated
first. Next, M8 or (if necessary for proper
functioning of M2) M6 would be integrated.
• Then, the central and righthand control paths
are built.
Contd..
• Breadth-first integration incorporates all components directly
subordinate at each level, moving across the structure horizontally.
• From the figure, components M2, M3, and M4 would be integrated
first.
• The next control level, M5, M6, and so on, follows.
Contd..
• The integration process is performed in a series of five steps:
1. The main control module is used as a test driver and stubs are substituted
for all components directly subordinate to the main control module.
2. Depending on the integration approach selected (i.e., depth or breadth
first), subordinate stubs are replaced one at a time with actual components.
3. Tests are conducted as each component is integrated.
4. On completion of each set of tests, another stub is replaced with the real
component.
5. Regression testing may be conducted to ensure that new errors have not
been introduced.
The process continues from step 2 until the entire program structure is built.
Contd…
• Bottom-up approach
• Large software products are often made up of several subsystems.
• A subsystem might consist of many modules which communicate among each
other through well-defined interfaces.
• In bottom-up integration testing, first the modules for each subsystem are
integrated.
• Thus, the subsystems can be integrated separately and independently.
Contd..
• Bottom-up integration testing, thus begins construction and testing
with atomic modules (i.e., components at the lowest levels in the
program structure).
• Because components are integrated from the bottom up, processing
required for components subordinate to a given level is always
available and the need for stubs is eliminated.
Contd..
• A bottom-up integration strategy may be implemented with the
following steps:
1. Low-level components are combined into clusters (sometimes called builds)
that perform a specific software sub-function.
2. A driver (a control program for testing) is written to coordinate test case input
and output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined moving upward in the program
structure.
Contd..
• Integration follows the pattern
illustrated in Figure 18.7.
• Components are combined to form
clusters 1, 2, and 3.
• Each of the clusters is tested using a
driver (shown as a dashed block).
• Components in clusters 1 and 2 are
subordinate to Ma.
• Drivers D1 and D2 are removed and the
clusters are interfaced directly to Ma.
• Similarly, driver D3 for cluster 3 is
removed prior to integration with
module Mb.
• Both Ma and Mb will ultimately be
integrated with component Mc, and so
forth.
Control Flow Testing
• Condition Testing
• Condition testing is a test case design method that exercises the logical
conditions contained in a program module.
• A simple condition is a Boolean variable or a relational expression, possibly
preceded with one NOT (¬) operator.
• A relational expression takes the form
E1 <relational-operator> E2
• where E1 and E2 are arithmetic expressions and <relational-operator> is
one of the following: <, ≤, =, ≠, >, or ≥.
Contd…
• A compound condition is composed of two or more simple conditions, Boolean
operators, and parentheses.
• Boolean operators allowed in a compound condition include OR (| ), AND (&) and NOT
(¬).
• If a condition is incorrect, then at least one component of the condition is incorrect.
Therefore, types of errors in a condition include the following:
• Boolean operator error (incorrect/missing/extra Boolean operators).
• Boolean variable error.
• Boolean parenthesis error.
• Relational operator error.
• Arithmetic expression error.
Contd..
• The condition testing method focuses on testing each condition in the
program.
• The purpose of condition testing is to detect not only errors in the
conditions of a program but also other errors in the program.
• If a test set for a program P is effective for detecting errors in the
conditions contained in P, it is likely that this test set is also effective
for detecting other errors in P.
Contd..
• Branch testing is the simplest condition testing strategy. For a
compound condition C, the true and false branches of C and every
simple condition in C need to be executed at least once.
• Domain testing requires three or four tests to be derived for a
relational expression. For a relational expression of the form
E1 <relational-operator> E2
three tests are required, to make the value of E1 greater than, equal
to, and less than that of E2, respectively.
Contd..
• If relational operators are incorrect and E1 and E2 are correct, then
these three tests guarantee the detection of the relational operator
error.
• The BRO (branch and relational operator) testing technique guarantees
the detection of branch and relational operator errors in a condition,
provided that all Boolean variables and relational operators in the
condition occur only once and have no common variables.
• The BRO strategy uses condition constraints for a condition C.
Contd..
• For a Boolean variable, B, we specify a constraint on the outcome of B that states
that B must be either true (t) or false (f). Similarly, for a relational expression, the
symbols >, =, < are used to specify constraints on the outcome of the expression.
• As an example, consider the condition C1: B1 & B2 where B1 and B2 are Boolean
variables.
• The condition constraint for C1 is of the form (D1, D2), where each of D1 and D2
is t or f.
• The value (t, f) is a condition constraint for C1 and is covered by the test that
makes the value of B1 to be true and the value of B2 to be false.
• The BRO testing strategy requires that the constraint set {(t, t), (f, t), (t, f)} be
covered by executions of C1. If C1 is incorrect due to one or more Boolean
operator errors, at least one member of the constraint set will force C1 to fail.
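A sketch of how this constraint set catches a Boolean operator error in C1: B1 & B2 (the faulty variant, with & replaced by |, is an assumed illustration):

```c
/* Correct condition C1: B1 & B2. */
int c1(int b1, int b2) {
    return b1 && b2;
}

/* A faulty version with a Boolean operator error (& -> |). */
int c1_bad(int b1, int b2) {
    return b1 || b2;
}
```

Covering {(t, t), (f, t), (t, f)}: on the constraint (f, t), c1 evaluates false while the faulty c1_bad evaluates true, so the operator error is forced to reveal itself.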
Contd..
• As a second example, a condition of the form C2: B1 & (E3 = E4)
where B1 is a Boolean expression and E3 and E4 are arithmetic
expressions.
• A condition constraint for C2 is of the form (D1, D2), where each of
D1 is t or f and D2 is >, =, < or >. By replacing (t, t) and (f, t) with (t, =)
and (f, =), respectively, and by replacing (t, f) with (t,< ) and (t,>) the
resulting constraint set for C2 is {(t, =), (f, =), (t,< ), (t,>)}.
Contd..
• Data Flow Testing
• Data flow based testing method selects test paths of a program
according to the definitions and uses of different variables in a
program.
• Consider a program P. For a statement numbered S of P, let
• DEF(S) = {X | statement S contains a definition of X}
• USES(S) = {X | statement S contains a use of X}
Contd..
• For the statement S: a=b+c;, DEF(S)={a}, USES(S)={b, c}. The definition of variable X at
statement S is said to be live at statement S1 , if there exists a path from statement S to
statement S1 which does not contain any definition of X .
• All definitions criterion is a test coverage criterion that requires that an adequate test set
should cover all definition occurrences in the sense that, for each definition occurrence,
the testing paths should cover a path through which the definition reaches a use of the
definition.
• The all-uses criterion requires that all uses of a definition should be covered.
Contd..
• The definition-use chain (or DU chain) of a variable X is of the form [X,
S, S1], where S and S1 are statement numbers, such that X∈ DEF(S)
and X∈ USES(S1) and the definition of X in the statement S is live at
statement S1 .
• One simple data flow testing strategy is to require that every DU
chain be covered at least once.
• Data flow testing strategies are especially useful for testing programs
containing nested if and loop statements.
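To make the DU-chain idea concrete, here is a small fragment annotated with its DEF and USES sets (the fragment and its statement numbering are assumptions):

```c
int du_example(int b, int c) {
    int a = b + c;  /* S1: DEF(S1) = {a}, USES(S1) = {b, c} */
    if (a > 10)     /* S2: USES(S2) = {a} -- DU chain [a, S1, S2] */
        a = a - 10; /* S3: DEF(S3) = {a}, USES(S3) = {a} */
    return a;       /* S4: USES(S4) = {a} -- chains [a, S1, S4]
                     *     (branch skipped) and [a, S3, S4] */
}
```

Covering every DU chain requires inputs that both take and skip the if branch, e.g., (b=8, c=7) and (b=1, c=2).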
Contd..
• Loop Testing
• Loop testing is a white-box testing technique that focuses
exclusively on the validity of loop constructs.
• Four different classes of loops can be defined: simple loops,
concatenated loops, nested loops, and unstructured loops.
• Simple loops: The following set of tests can be applied
to simple loops, where n is the maximum number of
allowable passes through the loop.
• 1. Skip the loop entirely.
• 2. Only one pass through the loop.
• 3. Two passes through the loop.
• 4. m passes through the loop, where m < n.
• 5. n − 1, n, and n + 1 passes through the loop.
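The simple-loop test values can be sketched against a concrete loop (the summing function below is a hypothetical unit under test):

```c
/* Sums the first k elements of a[]; k may range from 0 to n,
 * the maximum number of allowable passes through the loop. */
int sum_first(const int a[], int k) {
    int i, s = 0;
    for (i = 0; i < k; i++)   /* simple loop: executes k passes */
        s += a[i];
    return s;
}
```

With n = 5, the heuristic exercises k = 0 (skip the loop), 1, 2, a typical m < n such as 3, and the boundary passes n − 1 = 4, n = 5, and n + 1 = 6 (the last probing behaviour just beyond the allowable maximum).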
Contd..
• Nested loops: Steps are
1. Start at the innermost loop. Set all other loops to minimum
values.
2. Conduct simple loop tests for the innermost loop while holding the
outer loops at their minimum iteration parameter values.
3. Add other tests for out-of-range or excluded values.
4. Work outward, conducting tests for the next loop, but keeping all
other outer loops at minimum values and other nested loops to
"typical" values.
5. Continue until all loops have been tested.
Contd..
• Concatenated loops : Concatenated loops can be tested using the
approach defined for simple loops, if each of the loops is independent
of the other. When the loops are not independent, the approach
applied to nested loops is recommended.
• Unstructured loops: Whenever possible, this class of loops should be
redesigned to reflect the use of the structured programming
constructs.
Smoke Testing
• Smoke testing is an integration testing approach that is carried out before initiating system testing, in order to determine whether system testing would be meaningful or whether many parts of the software would fail.
• The idea behind smoke testing is that if the integrated program
cannot pass even the basic tests, it is not ready for rigorous testing.
Contd..
• The smoke testing approach encompasses the following activities:
• Software components that have been translated into code are integrated
into a “build.” A build includes all data files, libraries, reusable modules,
and engineered components that are required to implement one or
more product functions.
• A series of tests is designed to expose errors that will keep the build
from properly performing its function. The intent should be to uncover
“show stopper” errors that have the highest likelihood of throwing the
software project behind schedule.
• The build is integrated with other builds and the entire product (in its
current form) is smoke tested daily. The integration approach may be
top down or bottom up.
Contd..
• For smoke testing, a few test cases are designed to check whether the basic
and major functionalities are working.
• For example, for a library automation system, the smoke tests may check
whether books can be created and deleted, whether member records can be
created and deleted, and whether books can be loaned and returned.
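The library example can be sketched as a minimal daily smoke test; the Library class and every one of its methods are hypothetical, standing in for the integrated build:

```python
# A toy library automation build; all names are illustrative.
class Library:
    def __init__(self):
        self.books, self.members, self.loans = {}, {}, {}

    def add_book(self, isbn, title):
        self.books[isbn] = title

    def add_member(self, mid, name):
        self.members[mid] = name

    def loan(self, isbn, mid):
        self.loans[isbn] = mid

    def return_book(self, isbn):
        self.loans.pop(isbn, None)

def smoke_test():
    lib = Library()
    lib.add_book("978-0", "Software Engineering")  # basic create works
    lib.add_member("m1", "Asha")
    lib.loan("978-0", "m1")                        # major functionality works
    assert lib.loans["978-0"] == "m1"
    lib.return_book("978-0")                       # and its inverse works
    assert "978-0" not in lib.loans
    return "build passes smoke test"

smoke_test()
```

If any of these few checks fails, the build has a "show stopper" and deeper testing is pointless until it is fixed.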
Regression testing
• Regression testing spans unit, integration, and system testing.
• Regression testing is the practice of running an old test suite after each change to the
system or after each bug fix to ensure that no new bug has been introduced due to the
change or the bug fix.
• However, if only a few statements are changed, then the entire test suite need not be run; only those test cases that exercise the functions likely to be affected by the change need to be run.
• Whenever a software is changed to either fix a bug, or enhance or remove a feature,
regression testing is carried out.
Contd..
• The regression test suite (the subset of tests to be executed) contains
three different classes of test cases:
• A representative sample of tests that will exercise all software functions.
• Additional tests that focus on software functions that are likely to be affected
by the change.
• Tests that focus on the software components that have been changed.
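The three classes above can be sketched as a simple tag-based suite selection; the test names and the tagging scheme are illustrative assumptions, not a prescribed mechanism:

```python
# Each test is tagged with the regression class it belongs to;
# untagged tests are outside the regression suite for this change.
all_tests = [
    {"name": "test_login",        "tags": {"representative"}},  # exercises core functions
    {"name": "test_checkout",     "tags": {"affected"}},        # likely affected by the change
    {"name": "test_discount_fix", "tags": {"changed"}},         # targets the changed component
    {"name": "test_reports",      "tags": set()},               # unrelated to the change
]

def select_regression_suite(tests):
    wanted = {"representative", "affected", "changed"}
    return [t["name"] for t in tests if t["tags"] & wanted]

assert select_regression_suite(all_tests) == [
    "test_login", "test_checkout", "test_discount_fix"
]
```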
VALIDATION TESTING
• Validation succeeds when software functions in a manner that can be
reasonably expected by the customer.
• Reasonable expectations are defined in the Software Requirements
Specification— a document that describes all user-visible attributes of
the software.
Contd..
• Validation Test Criteria
• Software validation is achieved through a series of black-box tests that
demonstrate conformity with requirements.
• A test plan outlines the classes of tests to be conducted and a test
procedure defines specific test cases that will be used to demonstrate
conformity with requirements.
• Both the plan and procedure are designed to ensure that all functional
requirements are satisfied, all behavioral characteristics are achieved, all
performance requirements are attained, documentation is correct, and
human-engineered and other requirements are met (e.g., transportability,
compatibility, error recovery, maintainability).
Contd..
• After each validation test case has been conducted, one of two possible conditions exists:
(1) the function or performance characteristics conform to specification and are accepted, or
(2) a deviation from specification is uncovered and a deficiency list is created.
• A deviation or error discovered at this stage in a project can rarely be corrected prior to scheduled delivery.
• It is often necessary to negotiate with the customer to establish a
method for resolving deficiencies.
Contd..
• Configuration Review
• An important element of the validation process is a configuration
review.
• The intent of the review is to ensure that all elements of the software
configuration have been properly developed, are cataloged, and have
the necessary detail to bolster the support phase of the software life
cycle.
Contd..
• Acceptance Tests
• When custom software is built for one customer, a series of acceptance tests is conducted to enable the customer to validate all requirements.
• Conducted by the end-user rather than software engineers, an
acceptance test can range from an informal "test drive" to a planned
and systematically executed series of tests.
• In fact, acceptance testing can be conducted over a period of weeks
or months, thereby uncovering cumulative errors that might degrade
the system over time.
Contd..
• Alpha and Beta Testing
• If software is developed as a product to be used by many customers, it is
impractical to perform formal acceptance tests with each one.
• Most software product builders use a process called alpha and beta testing
to uncover errors that only the end-user seems able to find.
• The alpha test is conducted at the developer's site by a customer. The
software is used in a natural setting with the developer "looking over the
shoulder" of the user and recording errors and usage problems. Alpha tests
are conducted in a controlled environment.
Contd..
• The beta test is conducted at one or more customer sites by the end-user of the
software.
• Unlike alpha testing, the developer is generally not present.
• Therefore, the beta test is a "live" application of the software in an environment that
cannot be controlled by the developer.
• The customer records all problems (real or imagined) that are encountered during beta
testing and reports these to the developer at regular intervals.
• As a result of problems reported during beta tests, software engineers make
modifications and then prepare for release of the software product to the entire
customer base.
SYSTEM TESTING
• System testing is actually a series of different tests whose primary purpose is to fully
exercise the computer-based system.
• Although each test has a different purpose, all work to verify that system elements have
been properly integrated and perform allocated functions.
• Following are the different types of system tests.
• Recovery Testing
• Security Testing
• Stress Testing
• Performance Testing
Contd..
• Recovery Testing
• Many computer-based systems must recover from faults and resume processing within a
pre-specified time.
• In some cases, a system must be fault tolerant; that is, processing faults must not cause
overall system function to cease.
• In other cases, a system failure must be corrected within a specified period of time or
severe economic damage will occur.
• Recovery testing is a system test that forces the software to fail in a variety of ways and
verifies that recovery is properly performed.
• If recovery is automatic (performed by the system itself), re-initialization, check-pointing
mechanisms, data recovery, and restart are evaluated for correctness.
• If recovery requires human intervention, the mean-time-to-repair (MTTR) is evaluated to
determine whether it is within acceptable limits.
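Automatic recovery (re-initialization, check-pointing, and restart) can be sketched with a toy service; the CounterService class and its checkpoint file format are hypothetical:

```python
import json
import os
import tempfile

class CounterService:
    def __init__(self, ckpt_path):
        self.ckpt_path = ckpt_path
        self.count = 0
        if os.path.exists(ckpt_path):          # restart path: recover saved state
            with open(ckpt_path) as f:
                self.count = json.load(f)["count"]

    def increment(self):
        self.count += 1
        with open(self.ckpt_path, "w") as f:   # check-pointing mechanism
            json.dump({"count": self.count}, f)

ckpt = os.path.join(tempfile.mkdtemp(), "ckpt.json")
svc = CounterService(ckpt)
for _ in range(3):
    svc.increment()
del svc                                        # simulate the forced failure

recovered = CounterService(ckpt)               # re-initialization after "crash"
assert recovered.count == 3                    # recovery performed correctly
```

A real recovery test would kill the process (or the machine) rather than delete an object, but the check is the same: restart and verify that no committed state was lost.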
Contd..
• Security Testing
• Any computer-based system that manages sensitive information or causes
actions that can improperly harm (or benefit) individuals is a target for improper
or illegal penetration.
• Penetration spans a broad range of activities: hackers who attempt to penetrate
systems for sport; disgruntled employees who attempt to penetrate for revenge;
dishonest individuals who attempt to penetrate for illicit personal gain.
• Security testing attempts to verify that protection mechanisms built into a system
will, in fact, protect it from improper penetration.
Contd..
• During security testing, the tester plays the role(s) of the individual who desires to penetrate the system.
• The tester may attempt to acquire passwords through external clerical means; may attack the system with
custom software designed to break down any defenses that have been constructed; may overwhelm the
system, thereby denying service to others; may purposely cause system errors, hoping to penetrate during
recovery; may browse through insecure data, hoping to find the key to system entry.
• Given enough time and resources, good security testing will ultimately penetrate a system.
• The role of the system designer is to make penetration cost more than the value of the information that will
be obtained.
Contd..
• Stress Testing
• Stress tests are designed to confront programs with abnormal situations.
• In essence, the tester who performs stress testing asks: "How high can we
crank this up before it fails?"
• Stress testing executes a system in a manner that demands resources in
abnormal quantity, frequency, or volume.
Contd..
• For example
• 1. special tests may be designed that generate ten interrupts per second,
when one or two is the average rate.
• 2. input data rates may be increased by an order of magnitude to determine
how input functions will respond.
• 3. test cases that require maximum memory or other resources are executed.
• 4. test cases that may cause thrashing in a virtual operating system are
designed.
• 5. test cases that may cause excessive hunting for disk-resident data are
created.
• Essentially, the tester attempts to break the program.
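One of the scenarios above (abnormal input volume) can be sketched as a small stress test; the queue capacity and the burst size are illustrative:

```python
from queue import Queue, Full

def stress_queue(events_per_burst, capacity=100):
    """Drive a bounded queue far past its normal input rate."""
    q = Queue(maxsize=capacity)
    dropped = 0
    for i in range(events_per_burst):  # burst ~10x the normal volume
        try:
            q.put_nowait(i)
        except Full:                   # under overload the system must
            dropped += 1               # degrade gracefully, not crash
    return q.qsize(), dropped

accepted, dropped = stress_queue(1000)
assert accepted == 100 and dropped == 900  # overload handled without failure
```

The pass criterion of a stress test is not that every input is served, but that the system fails predictably and recovers once the load subsides.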
Contd..
• A variation of stress testing is a technique called sensitivity testing.
• In some situations (the most common occur in mathematical algorithms), a very small
range of data contained within the bounds of valid data for a program may cause
extreme and even erroneous processing or profound performance degradation.
• Sensitivity testing attempts to uncover data combinations within valid input classes that
may cause instability or improper processing.
Contd..
• Performance Testing
• Performance testing is designed to test the run-time performance of software
within the context of an integrated system.
• Performance testing occurs throughout all steps in the testing process.
• Even at the unit level, the performance of an individual module may be assessed
as white-box tests are conducted.
• However, it is not until all system elements are fully integrated that the true
performance of a system can be ascertained.
Contd..
• Performance tests are often coupled with stress testing and usually
require both hardware and software instrumentation.
• That is, it is often necessary to measure resource utilization (e.g.,
processor cycles) in an exacting fashion. External instrumentation can
monitor execution intervals, log events (e.g., interrupts) as they occur,
and sample machine states on a regular basis.
• By instrumenting a system, the tester can uncover situations that lead
to degradation and possible system failure.
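A minimal software-instrumentation sketch in this spirit measures an execution interval with a timer; the timed helper and the workload are illustrative:

```python
import time

def timed(fn, *args):
    """Run fn(*args) and report its execution interval in seconds."""
    start = time.perf_counter()
    result = fn(*args)
    elapsed = time.perf_counter() - start  # measured execution interval
    return result, elapsed

def workload(n):
    return sum(i * i for i in range(n))

result, elapsed = timed(workload, 100_000)
assert result == sum(i * i for i in range(100_000))
assert elapsed >= 0.0  # sanity check: perf_counter is monotonic
```

Hardware instrumentation (cycle counters, bus monitors) works the same way at a finer grain; either way, the measurements are compared against the stated performance requirements.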