Software Testing Answer Book
2 Marks:
Testing consumes at least half of the time and work required to produce a
functional program.
• Experience shows that even well-written programs still have 1-3 bugs per hundred statements.
i. Unit-Level Bugs
ii. System-Level Integration Bugs
iii. Out-of-bounds bugs
iv. Functional errors
v. Syntax errors
vi. Logic errors
vii. Calculation errors
The domain span is the set of numbers between (and including) the smallest
value and the largest value. For every input variable we want (at least):
compatible domain spans and compatible closures (Compatible but need not be
Equal).
Bugs are more insidious (deceiving but harmful) than ever we expect them to
be. An unexpected test result may lead us to change our notion of what a bug
is and our model of bugs.
A domain is also defined as the set of all possible input values; the output value depends on the input drawn from it. The range, on the other hand, is defined as the set of all possible output values, and a value in the range can only be calculated from a value in the domain.
The decision table is a software testing technique which is used for testing
the system behavior for different input combinations. This is a systematic
approach where the different input combinations and their corresponding
system behavior are captured in a tabular form.
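The idea above can be sketched in code. This is a minimal illustration with a hypothetical login feature (the rules, inputs, and `login` function are assumptions, not from the source): each entry of the table pairs an input combination with the expected system behavior, and the test exercises every rule.

```python
# Decision table: (valid_user, valid_password) -> expected behavior.
# The rules here are hypothetical, chosen only to illustrate the technique.
decision_table = {
    (True,  True):  "grant access",
    (True,  False): "show password error",
    (False, True):  "show user error",
    (False, False): "show user error",
}

def login(valid_user, valid_password):
    """Hypothetical system under test."""
    if not valid_user:
        return "show user error"
    if not valid_password:
        return "show password error"
    return "grant access"

# Systematically exercise every input combination in the table.
for inputs, expected in decision_table.items():
    assert login(*inputs) == expected
```

Each column (rule) of a real decision table becomes one test case, which is what makes the approach systematic rather than ad hoc.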
The objective of path testing is to ensure that each independent path through
the program is executed at least once. An independent program path is one
that traverses at least one new edge in the flow graph. In program terms,
this means exercising one or more new conditions.
If you can’t find a solution to any of the sets of inequalities, the path is
unachievable. The act of finding a set of solutions to the path predicate
expression is called path sensitization.
i. Error
ii. Bugs
iii. Accidental changes in program
i. Unit Testing
ii. Integration Testing
iii. System Testing
iv. Acceptance Testing
The main goal of software testing is to find bugs as early as possible, fix them, and make sure that the delivered software is as close to bug-free as possible.
It is a software testing technique that divides the input data of the application under test into partitions of equivalent data from which test cases can be derived; each partition is exercised at least once.
To slice and dice is to break a body of information down into smaller parts or
to examine it from different viewpoints so that you can understand it better.
Any expression that consists of path names and "OR"s and which denotes a set
of paths between two nodes is called a path expression.
5 Marks:
Environment:
* For online systems, the environment may include communication lines, other
systems, terminals and operators.
* The environment also includes all programs that interact with and are used
to create the program under test - such as OS, linkage editor, loader,
compiler, utility routines.
* Because the hardware and firmware are stable, it is not smart to blame the environment for bugs.
Program:
Bugs:
* Bugs are more insidious (deceiving but harmful) than ever we expect them to
be.
* An unexpected test result may lead us to change our notion of what a bug is
and our model of bugs.
* If sufficient time is not spent in quality assurance, the reject rate will be
high and so will be the net cost. If inspection is good and all errors are
caught as they occur, inspection costs will dominate, and again the net cost
will suffer.
The biggest part of software cost is the cost of bugs: the cost of detecting
them, the cost of correcting them, the cost of designing tests that discover
them, and the cost of running those tests.
For software, quality and productivity are indistinguishable because the cost
of a software copy is trivial.
The strategy for state testing is analogous to that used for path testing flow
graphs.
Just as it's impractical to go through every possible path in a flow graph, it's
impractical to go through every path in a state graph.
Even though more state testing is done as a single case in a grand tour, it's
impractical to do it that way for several reasons.
In the early phases of testing, you will never complete the grand tour because
of bugs. Later, in maintenance, testing objectives are understood, and only a
few of the states and transitions have to be tested. A grand tour is a waste of
time.
There is so much history in a long test sequence and so much has happened that verification is difficult.
Define a set of covering input sequences that get back to the initial state
when starting from the initial state.
For each step in each input sequence, define the expected next state, the
expected transition, and the expected output code.
Output sequences
State transition coverage in a state graph model does not guarantee complete
testing.
Chow defines a hierarchy of paths and methods for combining paths to produce covers of state graphs.
The simplest is called a "0-switch", which corresponds to testing each transition individually. The next level consists of testing transition sequences of two transitions, called "1-switches".
The maximum-length switch is the "(n-1)-switch", where n is the number of states.
Situations in which state testing is useful:
Any processing where the output is based on the occurrence of one or more
sequences of events, such as detection of specified input sequences, sequential
format validation, parsing, and other situations in which the order of inputs is
important.
Device drivers such as for tapes and discs that have complicated retry and
recovery procedures if the action depends on the state.
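The 0-switch and 1-switch levels described above can be sketched for a hypothetical two-state toggle machine (the states and inputs are assumptions for illustration): a 0-switch test exercises each single transition, and a 1-switch test exercises each pair of consecutive transitions.

```python
# Hypothetical state machine: OFF/ON toggled by a "press" input.
transitions = {            # (state, input) -> next state
    ("OFF", "press"): "ON",
    ("ON",  "press"): "OFF",
}

def run(state, inputs):
    """Drive the machine through a sequence of inputs."""
    for i in inputs:
        state = transitions[(state, i)]
    return state

# 0-switch: test every single transition individually.
for (state, inp), expected in transitions.items():
    assert run(state, [inp]) == expected

# 1-switch: test every sequence of two consecutive transitions.
for (s1, i1), s2 in transitions.items():
    for (s2b, i2), s3 in transitions.items():
        if s2b == s2:  # second transition must start where the first ended
            assert run(s1, [i1, i2]) == s3
```

For a machine with n states, the same nesting extended to depth n-1 yields the maximum-length (n-1)-switch cover.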
ii. Path testing techniques are the oldest of all structural test techniques.
iii. Path testing is most applicable to new software for unit testing. It is a
structural test technique.
iv. The effectiveness of path testing rapidly decreases as the size of the software aggregate under test increases.
Extract the program's control flow graph and select a set of tentative covering paths.
For any path in that set, interpret the predicates along the path as needed to
express them in terms of the input vector. In general, individual predicates
are compound or may become compound as a result of interpretation.
ADFGHIJKL+AEFGHIJKL+BCDFGHIJKL+BCEFGHIJKL
Each product term denotes a set of inequalities that if solved will yield an
input vector that will drive the routine along the designated path.
Solve any one of the inequality sets for the chosen path and you have found a
set of input values for the path.
If you can’t find a solution to any of the sets of inequalities, the path is unachievable.
The act of finding a set of solutions to the path predicate expression is called
PATH SENSITIZATION.
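A minimal sketch of path sensitization, using a hypothetical routine with two predicates along the chosen path (the routine and its predicates are assumptions, not from the source): interpreting the predicates gives the inequality set {x > 0, x + y <= 10}, and any solution to it is an input vector that drives execution down that path.

```python
def routine(x, y):
    """Hypothetical routine whose path we want to sensitize."""
    if x > 0:            # predicate 1: take the TRUE branch
        if x + y <= 10:  # predicate 2: take the TRUE branch
            return "target path"
        return "other path A"
    return "other path B"

# One solution to the inequality set {x > 0, x + y <= 10} is x=1, y=2.
assert routine(1, 2) == "target path"

# An input that violates predicate 1 sensitizes a different path instead.
assert routine(-1, 0) == "other path B"
```

If no values of x and y could satisfy both predicates simultaneously, the path would be unachievable.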
a. Frequency: How often does that kind of bug occur? Pay more attention to the more frequent bug types.
a. Correction Cost: What does it cost to correct the bug after it is
found. The cost is the sum of 2 factors: (1) the cost of discovery
(2) the cost of correction. These costs go up dramatically later in
the development cycle when the bug is discovered. Correction cost
also depends on system size.
b. Installation Cost: Installation cost depends on the number of
installations: small for a single user program but more for
distributed systems. Fixing one bug and distributing the fix could
exceed the entire system's development cost.
c. Consequences: What are the consequences of the bug? Bug
consequences can range from mild to catastrophic.
iii. A reasonable metric for bug importance is:
Importance ($) = Frequency * (Correction cost + Installation cost + Consequential cost)
For interface testing, bugs are more likely to concern single variables
rather than peculiar combinations of two or more variables.
iii. Start with the called routine's domains and generate test points in accordance with the domain-testing strategy used for that routine in component testing.
Syntax testing, a black-box testing technique, involves testing the system inputs; it is usually automated because syntax testing produces a large number of tests. Internal and external inputs have to conform to formats such as:
i. File formats.
State transition testing is a black-box testing technique in which outputs are triggered by changes to the input conditions or changes to the 'state' of the system. In other words, tests are designed to execute valid and invalid state transitions.
When to use?
When we have sequence of events that occur and associated conditions that
apply to those events
When the proper handling of a particular event depends on the events and
conditions that have occurred in the past
It is used for real time systems with various states and transitions involved
Understand the various states and transitions and mark each valid and invalid state
Each visited state and traversed transition should be noted down
The previous two steps should be repeated until all states have been visited and all transitions traversed
For test cases to have a good coverage, actual input values and the actual
output values have to be generated.
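The procedure above can be sketched for the card-retry behaviour of a hypothetical ATM (the state names, events, and transition table are assumptions made for illustration): valid and invalid PIN entries drive transitions, and a covering sequence walks the machine until every state has been visited.

```python
# Hypothetical ATM card-retry state machine.
transitions = {
    ("START",  "valid_pin"):   "SESSION",
    ("START",  "invalid_pin"): "RETRY1",
    ("RETRY1", "valid_pin"):   "SESSION",
    ("RETRY1", "invalid_pin"): "RETRY2",
    ("RETRY2", "valid_pin"):   "SESSION",
    ("RETRY2", "invalid_pin"): "EATEN",   # card retained after third failure
}

def step(state, event):
    """Take one transition; raises KeyError for an undefined (invalid) one."""
    return transitions[(state, event)]

# Covering sequence 1: three bad PINs visit every retry state, noting each
# visited state as we go.
state = "START"
visited = [state]
for event in ["invalid_pin", "invalid_pin", "invalid_pin"]:
    state = step(state, event)
    visited.append(state)
assert visited == ["START", "RETRY1", "RETRY2", "EATEN"]

# Covering sequence 2: the valid transition out of a retry state.
assert step("RETRY1", "valid_pin") == "SESSION"
```

Actual input values (the PIN entries) and actual expected outputs (the resulting states) are generated for each step, which is what gives the test cases good coverage.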
TERMINOLOGY:
i. Definition-Clear Path Segment, with respect to variable X, is a
connected sequence of links such that X is (possibly) defined on
the first link and not redefined or killed on any subsequent link of
that path segment. All paths in Figure 3.9 are definition-clear
because variables X and Y are defined only on the first link (1,3)
and not thereafter. In Figure 3.10, we have a more complicated
situation. The following path segments are definition-clear:
(1,3,4), (1,3,5), (5,6,7,4), (7,8,9,6,7), (7,8,9,10), (7,8,10),
(7,8,10,11). Subpath (1,3,4,5) is not definition-clear because
the variable is defined on (1,3) and again on (4,5). For practice,
try finding all the definition-clear subpaths for this routine (i.e.,
for all variables).
ii. Loop-Free Path Segment is a path segment for which every node
in it is visited at most once. For example, path (4,5,6,7,8,10) in
Figure 3.10 is loop free, but path (10,11,4,5,6,7,8,10,11,12) is
not because nodes 10 and 11 are each visited twice.
iii. Simple path segment is a path segment in which at most one node
is visited twice. For example, in Figure 3.10, (7,4,5,6,7) is a
simple path segment. A simple path segment is either loop-free or
if there is a loop, only one node is involved.
Halstead refers to n1* and n2* as the minimum possible number of operators
and operands for a module and a program respectively. This minimum number
would be embodied in the programming language itself, in which the required
operation would already exist (for example, in C language, any program must
contain at least the definition of the function main()), possibly as a function
or as a procedure: n1* = 2, since at least 2 operators must appear for any
function or procedure : 1 for the name of the function and 1 to serve as an
assignment or grouping symbol, and n2* represents the number of
parameters, without repetition, which would need to be passed on to the
function or the procedure.
NJ = log2(n1!) + log2(n2!)
NB = n1 * log2n2 + n2 * log2n1
NC = n1 * sqrt(n1) + n2 * sqrt(n2)
NS = (n * log2n) / 2
Halstead Vocabulary – The total number of unique operators and
unique operands.
n = n1 + n2
Program Volume – Proportional to program size, represents the size,
in bits, of space necessary for storing the program. This parameter
is dependent on specific algorithm implementation. The properties V,
N, and the number of lines in the code are shown to be linearly
connected and equally valid for measuring relative program size.
V = N * log2(n) = (program length) * log2(vocabulary)
The unit of measurement of volume is the common unit for size,
"bits"; it is the actual size of a program if a uniform binary
encoding for the vocabulary is used. The estimated number of
delivered bugs is B = V / 3000.
The value of L ranges between zero and one, with L=1 representing
a program written at the highest possible level (i.e., with minimum
size).
And the estimated program level is L^ = (2 * n2) / (n1 * N2)
Program Difficulty – This parameter shows how difficult to handle
the program is.
D = (n1 / 2) * (N2 / n2)
D = 1 / L
As the volume of the implementation of a program increases, the
program level decreases and the difficulty increases. Thus,
programming practices such as redundant usage of operands, or the
failure to use higher-level control constructs will tend to increase
the volume as well as the difficulty.
Programming Effort – Measures the amount of mental activity
needed to translate the existing algorithm into implementation in the
specified program language.
E = V / L = D * V = Difficulty * Volume
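The formulas above can be sketched end to end for assumed token counts (the values of n1, n2, N1, N2 below are arbitrary assumptions chosen only to exercise the arithmetic):

```python
import math

n1, n2 = 10, 15      # unique operators / unique operands (assumed)
N1, N2 = 40, 60      # total operator / operand occurrences (assumed)

n = n1 + n2                      # vocabulary
N = N1 + N2                      # program length
V = N * math.log2(n)             # volume, in bits
L_hat = (2 * n2) / (n1 * N2)     # estimated program level
D = (n1 / 2) * (N2 / n2)         # difficulty
E = D * V                        # programming effort (E = V / L = D * V)
B = V / 3000                     # estimated delivered bugs

# Difficulty is the reciprocal of program level (D = 1 / L).
assert math.isclose(D * L_hat, 1.0)
```

Note how the level and difficulty move in opposite directions: any practice that inflates N2 relative to n2 (redundant operand usage) raises D and lowers L_hat, exactly as the text describes.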
Any testing strategy based on paths must at least both exercise every
instruction and take branches in all directions.
A set of tests that does this is not complete in an absolute sense, but it is
complete in the sense that anything less must leave something untested.
Execute all possible control flow paths through the program: typically, this is
restricted to all possible entry/exit paths through the program.
Execute all statements in the program at least once under some test. If we
do enough tests to achieve this, we are said to have achieved 100%
statement coverage.
This is the weakest criterion in the family: testing less than this for new software is unconscionable (unprincipled, unacceptable) and should be criminalized.
Execute enough tests to assure that every branch alternative has been
exercised at least once under some test.
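A minimal sketch of why statement coverage is weaker than branch coverage, using a hypothetical routine (the `clamp` function is an assumption for illustration): one test executes every statement, yet the FALSE branch of the predicate is never taken until a second test is added.

```python
def clamp(x):
    """Hypothetical routine: limit x to at most 100."""
    if x > 100:      # the TRUE branch has a statement; the FALSE branch is empty
        x = 100
    return x

# This single test reaches every statement (100% statement coverage)...
assert clamp(150) == 100

# ...but branch coverage additionally requires exercising the untaken
# FALSE branch of the predicate.
assert clamp(50) == 50
```

A bug hiding on the empty FALSE branch (for example, a missing lower-bound check) would survive 100% statement coverage but not branch coverage.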
Ask the designers to relate every flow to the specification and to show how
that transaction, directly or indirectly, follows from the requirements:
Make transaction flow testing the cornerstone of system functional testing just as path testing is the cornerstone of unit testing. Select additional flow paths for loops, extreme values, and domain boundaries. Design more test cases to validate all births and deaths. Publish and distribute the selected test paths through the transaction flows as early as possible so that they will exert the maximum beneficial effect on the project.
PATH SELECTION:
Select a set of covering paths (c1+c2) using the analogous criteria you used
for structural path testing. Select a covering set of paths based on
functionally sensible transactions as you would for control flow graphs. Try to
find the most tortuous, longest, strangest path from the entry to the exit
of the transaction flow.
PATH SENSITIZATION:
Most of the normal paths are very easy to sensitize; 80%-95% transaction
flow coverage (c1+c2) is usually easy to achieve. The remaining small
percentage is often very difficult. Sensitization is the act of defining the
transaction. If there are sensitization problems on the easy paths, then bet
on either a bug in transaction flows or a design bug.
PATH INSTRUMENTATION:
10 Marks:
A unit is the smallest testable piece of software, usually the work of one programmer, and consists of several hundred or fewer lines of code. Unit testing is the testing we do to show that the unit does not satisfy its functional specification or that its implemented structure does not match the intended design structure.
USAGE:
Transaction flows are indispensable for specifying requirements of
complicated systems, especially online systems.
A big system such as an air traffic control or airline reservation system
has not hundreds, but thousands of different transaction flows.
The flows are represented by relatively simple flow graphs, many of
which have a single straight-through path.
Loops are infrequent compared to control flow graphs.
The most common loop is used to request a retry after user input
errors. An ATM system, for example, allows the user to try, say,
three times, and will take the card away the fourth time.
COMPLICATIONS:
In simple cases, the transactions have a unique identity from the time
they're created to the time they're completed.
In many systems, the transactions can give birth to others, and
transactions can also merge.
For variable X and Y: In Figure 3.9, because variables X and Y are used only
on link (1,3), any test that starts at the entry satisfies this criterion (for
variables X and Y, but not for all variables as required by the strategy).
For variable Z: The situation for variable Z (Figure 3.10) is more complicated
because the variable is redefined in many places. For the definition on link
(1,3) we must exercise paths that include sub paths (1,3,4) and (1,3,5). The
definition on link (4,5) is covered by any path that includes (5,6), such as
sub path (1,3,4,5,6, ...). The (5,6) definition requires paths that include
sub paths (5,6,7,4) and (5,6,7,8).
For variable V: Variable V (Figure 3.11) is defined only once on link (1,3).
Because V has a predicate use at node 12 and the subsequent path to the
end must be forced for both directions at node 12, the all-du-paths
strategy for this variable requires that we exercise all loop-free entry/exit
paths and at least one path that includes the loop caused by (11,4). Note
that we must test paths that include both sub paths (3,4,5) and (3,5) even
though neither of these has V definitions. They must be included because
they provide alternate du paths to the V use on link (5,6). Although (7,4) is
not used in the test set for variable V, it will be included in the test set
that covers the predicate uses of array variable V() and U.
The all-du-paths strategy is a strong criterion, but it does not take as many
tests as it might seem at first because any one test simultaneously satisfies
the criterion for several definitions and uses of several different variables.
The all uses strategy is that at least one definition clear path from every
definition of every variable to every use of that definition be exercised
under some test. Just as we reduced our ambitions by stepping down from all
paths (P) to branch coverage (C2), say, we can reduce the number of test
cases by asking that the test set should include at least one path segment
from every definition to every use that can be reached by that definition.
For variable V: In Figure 3.11, ADUP requires that we include sub paths
(3,4,5) and (3,5) in some test because subsequent uses of V, such as on link
(5,6), can be reached by either alternative. In AU either (3,4,5) or (3,5)
can be used to start paths, but we don't have to use both. Similarly, we can
skip the (8,10) link if we've included the (8,9,10) sub path. Note the hole.
We must include (8,9,10) in some test cases because that's the only way to
reach the c use at link (9,10) - but suppose our bug for variable V is on link
(8,10) after all? Find a covering set of paths under AU for Figure 3.11.
For every variable and every definition of that variable, include at least one
definition-clear path from the definition to every predicate use; if there are
definitions of the variable that are not covered by the above prescription,
then add computational-use test cases as required to cover every definition.
For variable Z: In Figure 3.10, for APU+C we can select paths that all take
the upper link (12,13) and therefore we do not cover the c-use of Z: but
that's okay according to the strategy's definition because every definition is
covered. Links (1,3), (4,5), (5,6), and (7,8) must be included because they
contain definitions for variable Z. Links (3,4), (3,5), (8,9), (8,10), (9,6),
and (9,10) must be included because they contain predicate uses of Z. Find a
covering set of test cases under APU+C for all variables in this example - it
only takes two tests.
For every variable and every definition of that variable, include at least one
definition-clear path from the definition to every computational use; if there
are definitions that are not covered, add as many predicate-use cases as are
needed to assure that every definition is included in some test.
The all definitions strategy asks only that every definition of every variable
be covered by at least one use of that variable, be that use a computational
use or a predicate use.
The all predicate uses strategy is derived from APU+C strategy by dropping
the requirement that we include a c-use for the variable if there are no p-
uses for the variable. The computational uses strategy is derived from
ACU+P strategy by dropping the requirement that we include a p-use for the
variable if there are no c-uses for the variable.
It is intuitively obvious that ACU should be weaker than ACU+P and that APU
should be weaker than APU+C.
The bug assumption for the domain testing is that processing is okay but the
domain definition is wrong. An incorrectly implemented domain means that
boundaries are wrong, which may in turn mean that control flow predicates
are wrong.
Many different bugs can result in domain errors. Some of them are:
Domain Errors:
A floating point number can equal zero only if the previous definition of that
number set it to zero or if it is subtracted from itself or multiplied by zero.
So the floating-point zero check should be done against an epsilon value.
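A minimal sketch of that epsilon check (the tolerance value is an assumption chosen for illustration): a computed value that is mathematically zero usually is not exactly 0.0 in floating point.

```python
EPSILON = 1e-9  # tolerance chosen for illustration

def is_zero(x, eps=EPSILON):
    """Compare a float against zero with a tolerance instead of ==."""
    return abs(x) < eps

residue = 0.1 + 0.2 - 0.3   # mathematically zero, but not exactly 0.0

assert residue != 0.0        # the exact comparison fails
assert is_zero(residue)      # the epsilon comparison succeeds
```

An exact `== 0.0` predicate here would carve out a domain boundary that almost no real input can hit, which is precisely the kind of domain bug described above.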
Contradictory Domains:
Ambiguous domains:
Over-specified Domains:
The domain can be overloaded with so many conditions that the result is a
null domain. Another way to put it is to say that the domain's path is
unachievable.
Closure Reversal:
Faulty Logic:
Refer 5 mark
Refer 5 mark
Functional Bugs
Content Bugs
Content bugs relate to the actual content of websites or apps: text, labels,
pictures, videos, icons, links, data, etc. Hence, typical content bugs are:
Broken links or images (404s) (unless located in the navigation menu,
header, footer, or a breadcrumb navigation, which are low functional
bugs)
Defective redirections in general
Missing text, for example in an empty tooltip
Missing content, for example an empty content area
Missing content, for example when 4 out of 5 icons have a tooltip and 1 doesn't
Missing translations, for example some buttons on an English website having
French labels
Some products are missing in search results but the search function
itself works
Missing data
Visual Bugs
Visual bugs relate to the graphical user interfaces of websites or apps, e.g.:
Layout framework problems such as misaligned texts/elements