
UCS1010 SOFTWARE TESTING

UNIT III
TEST ADEQUACY CRITERIA AND TEST ENHANCEMENT

PART A
1. Define test adequacy criteria. Give an example. Also mention the application
scope of adequacy criteria.
Adequacy is measured for a given test set T designed to test a program P to determine
whether or not P meets its requirements R. This measurement is done against a given
criterion C. A test set is considered adequate with respect to criterion C when it satisfies C.
Example:
Consider a Program sumProduct must meet the following requirements:
R1 Input two integers, say x and y , from the standard input device.
R2.1 Find and print to the standard output device the sum of x and y if x<y .
R2.2 Find and print to the standard output device the product of x and y if x≥y.
Suppose now that the test adequacy criterion C is specified as:
C : A test T for program ( P, R ) is considered adequate if for each requirement r
in R there is at least one test case in T that tests the correctness of P with respect
to r .
Obviously, T = { t: <x=2, y=3> } is inadequate with respect to C for the program
sumProduct. The lone test case t in T tests R1 and R2.1, but not R2.2.
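A minimal sketch of sumProduct is shown below (in Java; the class name, I/O details, and
output format are assumptions, since the question states only the requirements):

import java.util.Scanner;

public class SumProduct {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        int x = in.nextInt();          // R1: read two integers from standard input
        int y = in.nextInt();
        if (x < y) {
            System.out.println(x + y); // R2.1: print the sum when x < y
        } else {
            System.out.println(x * y); // R2.2: print the product when x >= y
        }
    }
}

To satisfy C, T would also need at least one test case with x ≥ y, for example <x=3, y=1>,
so that R2.2 is exercised.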
Application scope of adequacy criteria:
✓ Helping testers to select properties of a program to focus on during test.
✓ Helping testers to select a test data set for a program based on the selected
properties.
✓ Supporting testers with the development of quantitative objectives for testing.

2. Define coverage domain.


For each adequacy criterion C , we derive a finite set known as the coverage
domain and denoted as Ce . We want to measure the adequacy of T. Given that Ce has
n≥ 0 elements, we say that T covers Ce if for each element e' in Ce there is at least one
test case in T that tests e'. T is considered adequate with respect to C if it covers all
elements in the coverage domain. T is considered inadequate with respect to C if it
covers k elements of Ce where k<n .
The fraction k/n is a measure of the extent to which T is adequate with respect
to C . This fraction is also known as the coverage of T with respect to C , P , and R .
Example:
Consider finite set of elements Ce={R1, R2.1, R2.2}. T covers R1 and R2.1 but not
R2.2. Hence T is not adequate with respect to C . The coverage of T with respect to C,
P, and R is 2/3 = 0.66. (66%)

3. What do you mean by control flow graph? Give an example.


A control flow graph (or flow graph) G is defined as a finite set N of nodes
and a finite set E of edges. An edge (i, j) in E connects two nodes ni and nj in N. We
often write G= (N, E) to denote a flow graph G with nodes given by N and edges by
E.
In a flow graph of a program, each basic block becomes a node and edges are
used to indicate the flow of control between blocks. Blocks and nodes are labeled
such that block bi corresponds to node ni. An edge (i, j) connecting basic blocks bi and
bj implies that control can go from block bi to block bj.
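For example, a minimal sketch (a hypothetical Java-like fragment; the grouping into basic
blocks is what matters):

x = in.nextInt();          // basic block b1 -> node n1
if (x < 0)                 // (the decision ends block b1)
    x = -x;                // basic block b2 -> node n2
System.out.println(x);     // basic block b3 -> node n3

The corresponding flow graph is G = (N, E) with N = {n1, n2, n3} and
E = {(1, 2), (1, 3), (2, 3)}: edge (1, 2) is taken when x < 0 and edge (1, 3) when x ≥ 0.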
4. Differentiate decision coverage and condition coverage.
Condition coverage requires that each individual condition within a decision (e.g., in
if, while, or for statements) be evaluated to both true and false. It focuses on testing
the outcome of each condition within a decision, regardless of the overall result of the
decision. It ensures that all possible conditions have been tested independently.
Example: For a decision like (A && B), condition coverage would require tests where
A is true and false and where B is true and false.

Decision coverage (also known as branch coverage) requires that each possible
outcome of a decision (e.g., the true or false result of an entire if or while statement) is
tested. It focuses on testing each path that a program’s flow can take, ensuring that
both true and false results of the decision as a whole are covered. It ensures all
branches of the control flow are tested.
Example: For the decision (A && B), decision coverage requires testing only the
overall decision’s true and false outcomes, not each individual condition separately.

Condition coverage generally requires more test cases than decision coverage because
it tests individual conditions within compound decisions.
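A minimal sketch with hypothetical test values for the decision if (A && B):

// Decision coverage: the decision as a whole must evaluate to true in one test
// and to false in another
t1: A = true,  B = true     // decision is true
t2: A = false, B = true     // decision is false
// Condition coverage: A and B must each evaluate to both true and false, e.g.
t1: A = true,  B = true
t2: A = false, B = false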

5. Define path coverage. Give an example.


Path coverage is a code coverage metric that ensures all possible paths in a program's
control flow are executed at least once. This type of coverage aims to test every
unique sequence of decisions or branches from the start to the end of a program or a
specific function, which helps detect any issues in complex branching.
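A minimal sketch (hypothetical fragment) showing how paths multiply across decisions:

if (a > 0) x = 1; else x = 2;   // decision D1
if (b > 0) y = 1; else y = 2;   // decision D2
// Branch coverage needs only two tests (e.g. a=1, b=1 and a=-1, b=-1),
// but path coverage needs all four start-to-end paths:
// (D1 true, D2 true), (D1 true, D2 false), (D1 false, D2 true), (D1 false, D2 false)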
6. Define mutation and mutant.
Mutation testing is a type of software testing where small modifications, known as
mutations, are made to a program's code to create slightly different versions, called
mutants. These mutants are then tested to see if the existing test cases can detect the
modifications (i.e., cause the test cases to fail). Mutation testing aims to assess the
quality and effectiveness of test cases by checking if they can detect faults.
Examples of mutations:
Changing an arithmetic operator (+ to -).
Replacing a conditional operator (< to <=).
Switching variable values.
Example of mutant: If the original line in the code is int result = a + b;, a mutant
might be int result = a - b;

7. Outline the principles of mutation testing. List the two major assumptions of
mutation testing.
The principles of mutation testing guide the creation and analysis of mutants to
evaluate the effectiveness of a test suite. These principles ensure that mutation testing
accurately measures a test suite's ability to detect faults by simulating potential real-
world defects. Two key principles are
Competent Programmer Hypothesis
This principle assumes that developers generally write code that is close to correct,
containing only small, minor errors rather than severe or illogical mistakes. Mutants
are created by making small, realistic changes to the code, simulating the types of
mistakes competent programmers might make (e.g., changing operators, modifying
conditions).

Coupling Effect Hypothesis


The coupling effect suggests that if a test suite can detect simple faults (single small
mutations), it will likely also detect more complex, compounded faults. By testing
small, single changes (or simple mutants), mutation testing helps ensure that the test
suite is robust enough to catch more complex issues in the program.

8. Define equivalent mutant.


An equivalent mutant is a mutant that, despite containing a code modification,
behaves identically to the original program and does not change the output for any
input. As a result, equivalent mutants cannot be detected (or “killed”) by any test case,
making them problematic in mutation testing because they artificially lower the
effectiveness of the test suite without providing useful insights into test coverage.
Equivalent mutants occur when a mutation does not alter the program's logic or
functionality. They are often difficult to identify automatically and require manual
inspection, which can make mutation testing labor-intensive.
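A classic illustration (hypothetical fragment): because i starts at 0, increases by exactly 1,
and a.length is never negative, the loop conditions i < a.length and i != a.length terminate
at the same iteration, so the mutant below cannot be killed by any test case:

// Original code
for (int i = 0; i < a.length; i++) { sum += a[i]; }
// Equivalent mutant: behaves identically for every possible array a
for (int i = 0; i != a.length; i++) { sum += a[i]; }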

9. Summarize mutation operator.


Mutation operators are rules or techniques used in mutation testing to create mutants
by introducing small changes in the source code. These operators are designed to
simulate common types of errors that developers might make, allowing testers to
evaluate the effectiveness of their test cases. Some commonly used mutation operators
are
• Changing arithmetic operators, relational operators, logical operators, unary
operators, assignment operators, and so on.
• Constant replacement
• Variable replacement
• Statement deletion
• Negation of condition
• Return value mutation
10. What do you mean by fault-based mutation testing?
Fault-based mutation testing is a specific approach in mutation testing that focuses on
creating mutants to mimic realistic and likely faults in the software. Instead of
generating arbitrary mutations, fault-based mutation testing uses knowledge of typical
programming errors and common fault patterns to produce mutants that represent
realistic defects. This approach helps evaluate a test suite's effectiveness in detecting
real-world issues, making the testing process more targeted and efficient.

PART B
11. Illustrate with an example of test adequacy criteria based on control flow.
Adequacy criteria based on control flow in software testing are focused on the
execution paths, branches, and structures within a program's flow of control. These
criteria determine the completeness of testing by analyzing how thoroughly test cases
exercise the various control structures in the code. The most common control flow
adequacy criteria include statement coverage, branch coverage, path coverage,
condition coverage, and loop coverage.
Statement Coverage
This is the most basic control flow criterion. It requires that every executable
statement in the program be executed at least once by the test cases. This ensures that
all parts of the code are at least touched, but it does not guarantee thorough testing of
decisions or conditions.
Any program written in a procedural language consists of a sequence of statements.
Some of these statements are declarative, such as the #define and int statements in C,
while others are executable, such as the assignment, if, and while statements in C and
Java.
A basic block is a sequence of consecutive statements that has exactly one entry point
and one exit point.
The statement coverage of T with respect to (P, R) is computed as Sc/(Se-Si) , where
Sc is the number of statements covered, Si is the number of unreachable statements,
and Se is the total number of statements in the program, i.e. the size of the coverage
domain.
T is considered adequate with respect to the statement coverage criterion if the
statement coverage of T with respect to (P, R) is 1.
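A small sketch (hypothetical code and test set) of how the measure is applied:

// P has four executable statements, so Se = 4 and Si = 0
int abs(int x) {
    int r = x;      // s1
    if (x < 0)      // s2
        r = -x;     // s3
    return r;       // s4
}
// T = { t1: x = 5 } executes s1, s2, and s4 but not s3, so Sc = 3 and the
// statement coverage is 3/(4 - 0) = 0.75; adding t2: x = -5 raises it to 1.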

Block Coverage
The block coverage of T with respect to (P, R) is computed as Bc/(Be - Bi), where Bc is the
number of blocks covered, Bi is the number of unreachable blocks, and Be is the total number
of blocks in the program, i.e. the size of the block coverage domain. T is considered adequate
with respect to the block coverage criterion if the block coverage of T with respect to (P, R)
is 1.

Path Coverage
Path coverage requires that every possible path through the program's control flow is
executed. This is a more exhaustive criterion because it considers all possible paths, including
different combinations of decisions. Path coverage can be challenging in complex programs,
as the number of paths increases exponentially with the number of decision points.
To achieve path coverage, you need to test all possible paths.

Branch Coverage (Decision Coverage)


Branch coverage requires that each decision point (like if, else, or switch statements) in the
code is tested so that both the true and false outcomes of each decision are executed at least
once. This is more thorough than statement coverage, as it ensures all branches are covered.
A decision is considered covered if the flow of control has been diverted to all possible
destinations that correspond to this decision, i.e. all outcomes of the decision have been
taken. This implies that, for example, the expression in the if or a while statement has
evaluated to true in some execution of the program under test and to false in the same or
another execution.

Decision coverage can help reveal an error that is not revealed by a test set adequate with
respect to statement and block coverage. The decision coverage of T with respect to (P, R) is
computed as Dc/(De - Di), where Dc is the number of decisions covered, Di is the number of
infeasible decisions, and De is the total number of decisions in the program, i.e. the size of
the decision coverage domain. T is considered adequate with respect to the decision coverage
criterion if the decision coverage of T with respect to (P, R) is 1.
The domain of decision coverage consists of all decisions in the program under test. Note that
each if and each while contributes one decision, whereas a switch contributes more than one.
A decision can be composed of a simple condition such as x<0 , or of a more
complex condition, such as (( x<0 AND y<0 ) OR ( p≥q )).
AND, OR, XOR are the logical operators that connect two or more simple
conditions to form a compound condition.
A simple condition is considered covered if it evaluates to true and false in one or
more executions of the program in which it occurs. A compound condition is
considered covered if each simple condition it is comprised of is also covered.

Decision coverage is concerned with the coverage of decisions regardless of whether a
decision corresponds to a simple or a compound condition. Thus, in a two-line statement of
the form

1  if (x<0 AND y<0)
2      // statement executed when the decision is true

there is only one decision, which leads control to line 2 if the compound condition inside the
if evaluates to true. However, a compound condition might evaluate to true or false in one of
several ways.

The condition at line 1 evaluates to false when x≥0, regardless of the value of y. Another
condition, such as x<0 OR y<0, evaluates to true when x<0, regardless of the value of y.

Condition Coverage
Condition coverage is more detailed than branch coverage. It requires that each individual
condition in a decision is evaluated to both true and false. This criterion applies when there
are multiple conditions in a decision statement, such as if (a && b).

The condition coverage of T with respect to (P, R) is computed as Cc/(Ce - Ci), where Cc is
the number of simple conditions covered, Ci is the number of infeasible simple conditions,
and Ce is the total number of simple conditions in the program, i.e. the size of the condition
coverage domain.
T is considered adequate with respect to the condition coverage criterion if the condition
coverage of T with respect to (P, R) is 1.
Multiple Condition Coverage
Multiple condition coverage goes beyond condition coverage and requires that all
combinations of conditions in a decision are tested. For example, in an if (a && b) statement,
both a and b must be true and false in all possible combinations.

Consider a compound condition with two or more simple conditions. Using condition
coverage on some compound condition C implies that each simple condition within C
has been evaluated to true and false.
However, it does not imply that all combinations of the values of the individual simple
conditions in C have been exercised; multiple condition coverage addresses exactly this.

The multiple condition coverage of T with respect to (P, R) is computed as Cc/(Ce - Ci),
where Cc is the number of combinations covered, Ci is the number of infeasible
combinations, and Ce is the total number of combinations in the program.
T is considered adequate with respect to the multiple condition coverage criterion if the
multiple condition coverage of T with respect to (P, R) is 1.
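For the decision if (a && b), a minimal sketch of the difference (with hypothetical test
values):

// Condition coverage: two tests can suffice, since a and b each take both values
t1: a = true,  b = true
t2: a = false, b = false
// Multiple condition coverage: all 2^2 = 4 combinations are required
t1: a = true,  b = true
t2: a = true,  b = false
t3: a = false, b = true
t4: a = false, b = false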
12. Discuss with an example of test adequacy criteria based on data flow.

Adequacy criteria based on data flow in software testing focus on the paths of
variables in a program, particularly how they are defined, used, and modified. These
criteria ensure that test cases thoroughly cover the interactions of variables with the
program's logic, identifying issues like uninitialized variables, unnecessary assignments,
or improper usage.

A program written in a procedural language, such as C and Java, contains variables.


Variables are defined by assigning values to them and are used in expressions.
Statement x=y+z defines variable x and uses variables y and z.

Declaration int x, y, A[10]; defines three variables.


Statement scanf("%d %d", &x, &y) defines variables x and y.
Statement printf("Output: %d \n", x+y) uses variables x and y.

Definition (Def): A point in the code where a variable is assigned a value.


Example: x = 10; is a definition of x.
Usage (Use): A point in the code where the value of a variable is accessed or used in
some computation.
Computation Use (C-Use): The variable is used in an expression or computation.
Example: y = x + 5; uses the value of x in the computation.

Predicate Use (P-Use):


The variable is used in a conditional statement.
The occurrence of a variable in an expression used as a condition in a branch
statement, such as an if or a while, is considered a p-use. The "p" in p-use
stands for predicate.
Example: if (x > 0) uses x in a decision.

Definition-Use Pair (DU-Pair):


A pair of locations where a variable is defined and subsequently used before it is
redefined. The goal is to ensure that all definitions of a variable are tested by some
path to their respective uses.
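A short sketch (hypothetical fragment) with the defs, uses, and def-use pairs marked:

int x = in.nextInt();      // def of x
int y = 0;                 // first def of y (d1)
if (x > 0)                 // p-use of x
    y = x + 5;             // c-use of x; second def of y (d2)
System.out.println(y);     // c-use of y
// (def of x, p-use of x at the if) and (def of x, c-use of x in y = x + 5) are du-pairs.
// For y, d2 reaches the println along a def-clear path when x > 0, while d1 reaches
// it only along the path where x <= 0 (d2 redefines y on the other path).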

Data Flow Graph


A data-flow graph of a program, also known as def-use graph, captures the
flow of definitions (also known as defs) across basic blocks in a program. It is similar
to a control flow graph of a program in that the nodes, edges, and all paths through
the control flow graph are preserved in the data flow graph. It is constructed as follows:
given a program, find its basic blocks and compute the defs, c-uses, and p-uses in
each block. Each block becomes a node in the def-use graph (this is similar to the
control flow graph).
Attach the defs, c-uses, and p-uses to each node in the graph. Label each edge with the
condition which, when true, causes the edge to be taken.
We use di(x) to refer to the definition of variable x at node i. Similarly, ui(x) refers to
the use of variable x at node i.

Def Clear Path


Consider a data-flow graph in which variable z is defined at nodes 1 and 2 and used at node 5.

Any path starting from a node at which variable x is defined and ending at a node at which
x is used, without redefining x anywhere else along the path, is a def-clear path for x.
Path 2-5 is def-clear for variable z defined at node 2 and used at node 5. Path 1-2-5 is NOT
def-clear for variable z defined at node 1 and used at node 5. Thus, definition of
z at node 2 is live at node 5 while that at node 1 is not live at node 5.

Definition-Use (DU) Coverage


This criterion ensures that every variable definition is followed by test cases that reach all
possible uses of that variable (both C-Uses and P-Uses) without the variable being redefined
before reaching the use.
Def of a variable at line l1 and its use at line l2 constitute a def-use pair. l1 and l2
can be the same.
dcu(di(x)) denotes the set of all nodes where di(x) is live and used.
dpu (di(x)) denotes the set of all edges (k, l) such that there is a def-clear path from node i
to edge (k, l) and x is used at node k.
We say that a def-use pair (di(x), uj(x)) is covered when a def-clear path that includes
nodes i and j is executed. If uj(x) is a p-use, then all edges of the kind (j, k) must also
be taken during some executions.

Data flow based adequacy


CU: the total number of c-uses in a program.
PU: the total number of p-uses in a program.
Given a total of n variables v1, v2, ..., vn, each defined at one or more nodes, CU and PU
are obtained by summing the numbers of c-uses and p-uses over all definitions of all variables.
Coverage of a c- or a p-use requires a path to be traversed through the program.
However, if this path is infeasible, then some c- and p-uses that require this path to be
traversed might also be infeasible. Infeasible uses are often difficult to determine
without some hint from a test tool.

All-Defs Coverage
This criterion requires that each definition of a variable is tested with at least
one test case that follows a path where that definition is used (either in computation or
as part of a decision). Not every use of the variable needs to be tested, but at least one
for each definition must be covered.

All-Uses Coverage
This criterion ensures that for every definition of a variable, test cases cover
all uses (both C-Use and P-Use) of that variable, including all possible paths from the
definition to the use.
All-DU-Paths Coverage
This criterion is more stringent than all-uses coverage. It requires that test
cases not only cover all possible uses of a variable but also every distinct path from
the variable's definition to its use. This helps to ensure that every possible interaction
between a variable’s definition and its subsequent usage is tested.

All-P-Uses/Some-C-Uses Coverage
This criterion requires that all predicate uses of variables are covered (all
decisions or conditions where a variable is involved are tested), while some
computational uses are tested. This ensures that variables affecting decisions are
thoroughly tested.

All-C-Uses/Some-P-Uses Coverage
This is the reverse of the previous criterion. It ensures that all computational uses
of variables are covered (variables used in expressions or calculations), while some of the
predicate uses are tested.
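Using the same style of fragment (hypothetical), the stronger and weaker criteria can be
contrasted:

int z = in.nextInt();      // def of z
if (z > 10)                // p-use of z
    total = total + z;     // c-use of z (total is assumed to be defined earlier)
// All-Defs: one test that reaches either use of z from its def suffices (e.g. z = 15).
// All-Uses: the p-use (both outgoing edges of the decision) and the c-use must all be
//           reached from the def, e.g. z = 15 and z = 5.
// All-DU-Paths: in addition, every distinct def-clear path from the def of z to each of
//           its uses must be executed.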

13. Explain the principle of mutation testing with an example.


Mutation testing is a software testing technique used to evaluate the effectiveness and
quality of a test suite by introducing small, controlled modifications, known as
mutants, into the program's source code. These mutants simulate potential faults or
common mistakes that a developer might make. Mutation testing then checks if the
existing test cases can detect these injected faults.

The principles of mutation testing establish a foundation for generating mutants and
analyzing test effectiveness. These principles help ensure that mutation testing is
meaningful, efficient, and provides useful insights into the quality of the test suite.

Competent Programmer Hypothesis (CPH)


• This principle assumes that developers generally write code that is close to
being correct, with only small mistakes that can lead to bugs.
• Mutants are generated by making small, realistic changes that simulate
common mistakes made by competent programmers. This ensures that mutants
reflect errors that might realistically occur, making the mutation testing
process more practical and focused.
• CPH states that given a problem statement, a programmer writes a program P
that is in the general neighborhood of the set of correct programs.
• An extreme interpretation of CPH is that when asked to write a program to
find the account balance, given an account number, a programmer is unlikely
to write a program that deposits money into an account. Of course, while such
a situation is unlikely to arise, a devious programmer might certainly write
such a program.
• A more reasonable interpretation of the CPH is that the program written to
satisfy a set of requirements will be a few mutants away from a correct
program.
• The CPH assumes that the programmer knows of an algorithm to solve the
problem at hand, and if not, will find one prior to writing the program.
• It is thus safe to assume that when asked to write a program to sort a list of
numbers, a competent programmer knows of, and makes use of, at least one
sorting algorithm. Mistakes will lead to a program that can be corrected by
applying one or more first-order mutations.
Coupling Effect
• The coupling effect hypothesis posits that a test suite capable of detecting
simple faults will likely also detect more complex faults.
• By generating simple mutants (with single changes), mutation testing helps
ensure that the test suite is robust enough to identify more complicated bugs.
This is based on the idea that simple and complex faults are “coupled,”
meaning that tests which detect simpler faults are likely to detect more
complex combinations of those faults.
• Test data that distinguishes all programs differing from a correct one by only
simple errors is so sensitive that it also implicitly distinguishes more complex
errors
• For some input, a non-equivalent mutant forces a slight perturbation in the
state space of the program under test. This perturbation takes place at the point
of mutation and has the potential of infecting the entire state of the program.
• It is during an analysis of the behavior of the mutant in relation to that of its
parent that one discovers complex faults.

Killable Mutant
• A mutant is considered “killed” if a test case causes it to behave differently
from the original program, resulting in different outputs.
• This principle allows mutation testing to evaluate a test suite’s effectiveness
by measuring how many mutants it can kill, highlighting gaps in test coverage
where improvements may be needed.

Equivalent Mutant
• Equivalent mutants are mutants that behave identically to the original program
for all possible inputs. They cannot be killed by any test case and are generally
undesirable in mutation testing.
• While it is challenging to detect equivalent mutants automatically, mutation
testing seeks to minimize their impact. Identifying and removing these
mutants helps improve the efficiency and accuracy of mutation testing results,
as equivalent mutants artificially reduce the mutation score.

Selective Mutant
• Instead of generating all possible mutants, selective mutation focuses on
creating only those mutants that are most likely to reveal weaknesses in the
test suite.
• Selective mutation reduces the computational costs of mutation testing by
using a subset of mutation operators that are known to represent common
types of errors. This makes the process more efficient while still providing
useful insights into the quality of the test suite.

Mutation score
• The mutation score is calculated as the ratio of killed mutants to the total
number of non-equivalent mutants, providing a measure of the test suite's
effectiveness (a worked example follows this list).
• A high mutation score indicates that the test suite is robust and capable of
detecting a wide range of faults, while a low mutation score suggests that the
test suite may need additional test cases to cover untested scenarios.
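For example (illustrative numbers): if 100 mutants are generated and 10 of them turn out to
be equivalent, a test suite that kills 72 of the remaining 90 mutants has a mutation score of
72/90 ≈ 0.8, i.e. 80%.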
Realistic Fault Simulation
• This principle emphasizes that mutants should reflect realistic bugs that could
be introduced into the code, such as off-by-one errors or incorrect operator
usage.
• By focusing on realistic faults, mutation testing becomes more practical and
relevant, enhancing its value as a measure of test effectiveness for real-world
applications.
Test Suite Effectiveness
• Mutation testing is designed to measure the effectiveness of a test suite by
determining whether it can detect and isolate bugs represented by the
generated mutants.
• This principle underscores that mutation testing is a tool to assess and improve
the quality of the test suite, rather than the correctness of the code itself.

14. Discuss in detail about mutation operators.


Mutation operators are specific rules or transformations applied to the program code
to create mutants during mutation testing. These operators simulate common
programming errors and represent various types of faults that can occur in code,
helping test the robustness of the test suite.
A mutation operator models a simple mistake that could be made by a programmer.
Several error studies have revealed that programmers, novices and experts alike, make
simple mistakes. For example, instead of using x<y+1 one might use x<y.
While programmers make complex mistakes too, mutation operators model simple
mistakes.

Let S1 and S2 denote two sets of mutation operators for language L. Based on the
effectiveness criteria, we say that S1 is superior to S2 if mutants generated using S1
guarantee a larger number of errors detected over a set of erroneous programs.

1. Arithmetic Operators
Simulate common arithmetic errors.
Examples: Replace + with -, * with /, % with *, etc.
Mutation Example:
// Original code
int result = a + b;
// Mutant
int result = a - b;

2. Relational Operators
Simulate logic errors in conditions.
Examples: Replace < with <=, > with <, == with !=, etc.
Mutation Example:
// Original code
if (x > y) { /* ... */ }
// Mutant
if (x >= y) { /* ... */ }
3. Logical Operators
Test logical conditions by changing logical operators.
Examples: Replace && with ||, || with &&.
Mutation Example:
// Original code
if (a > 5 && b < 10) { /* ... */ }
// Mutant
if (a > 5 || b < 10) { /* ... */ }
4. Conditional Operators
Test expressions within conditional statements.
Examples: Change the ternary condition ? :.
Mutation Example:
// Original code
int max = (a > b) ? a : b;
// Mutant
int max = (a > b) ? b : a;

5. Unary Operators
Simulate errors with incrementing and decrementing.
Examples: Change ++ to --, + to - for unary operations.
Mutation Example:
// Original code
x++;
// Mutant
x--;
6. Assignment Operators
Test errors with assignments and compound operators.
Examples: Replace += with -=, *= with /=.
Mutation Example:
// Original code
a += b;
// Mutant
a -= b;
7. Constant Replacement
Simulate errors by altering constant values.
Examples: Replace constants with 0, 1, or another value.
Mutation Example:
// Original code
int maxAttempts = 5;
// Mutant
int maxAttempts = 1;

8. Variable Replacement
Check for errors due to incorrect variable usage.
Examples: Replace one variable with another variable of the same type.
Mutation Example:
// Original code
int result = a + b;
// Mutant
int result = a + c;

9. Statement Deletion
Simulate missing code lines, such as skipped initializations or increments.
Examples: Delete critical lines like increment statements in loops.
Mutation Example:
// Original code
x = x + 1;
int y = x * 2;
// Mutant (deleted statement)
int y = x * 2;

10. Negation of Conditions


Simulate logical errors by negating conditions.
Examples: Change if (condition) to if (!condition).
Mutation Example:
// Original code
if (isAvailable) { /* ... */ }
// Mutant
if (!isAvailable) { /* ... */ }

11. Return Value Mutation


Test if the test suite catches incorrect return values.
Examples: Change return values to incorrect defaults.
Mutation Example:
// Original code
return true;
// Mutant
return false;

15. Illustrate fault-based mutation testing with an example.

Fault-based mutation testing is a specific approach in mutation testing that focuses on


creating mutants to mimic realistic and likely faults in the software. Instead of
generating arbitrary mutations, fault-based mutation testing uses knowledge of typical
programming errors and common fault patterns to produce mutants that represent
realistic defects. This approach helps evaluate a test suite's effectiveness in detecting
real-world issues, making the testing process more targeted and efficient.

Characteristics of Fault-Based Mutation Testing


Realistic Fault Simulation: Instead of random or arbitrary changes, mutations target
common and realistic errors that developers frequently make.
Faults might include off-by-one errors, boundary condition errors, missing null
checks, and incorrect arithmetic or logical operations.
Focused Mutation Operators: Uses a specialized set of mutation operators that are
designed to introduce likely faults based on prior experience or common error types.
These operators might simulate specific faults in arithmetic expressions, conditional
logic, loops, exception handling, and more.
Higher Accuracy in Fault Detection: By focusing on common fault patterns, fault-
based mutation testing can be more effective in highlighting weaknesses in the test
suite.
It helps ensure that the test cases cover likely faults and increases the relevance of the
mutation testing process.
Reduction of Equivalent Mutants: Since fault-based mutations are closely aligned
with actual bugs, they tend to avoid the creation of equivalent mutants that don’t
impact program behavior. This results in more efficient testing by reducing time spent
on analyzing mutants that cannot be killed by any test case.

Fault-Based Mutation Testing Example


Let’s look at an example of fault-based mutation testing for a program that checks if a
number is a prime number:

Consider a code
boolean isPrime(int number) {
if (number <= 1) return false;
for (int i = 2; i < number; i++) {
if (number % i == 0) return false;
}
return true;
}
Using fault-based mutation, we can introduce specific faults that are common in
similar code:

Boundary Condition Fault: Modify the loop condition to test for off-by-one errors, a
common boundary fault.
Mutant:
for (int i = 2; i <= number; i++) { // Changed < to <=
if (number % i == 0) return false;
}

Logic Error in Condition:

Change the comparison operator to simulate a logic error in checking divisibility.


Mutant:
if (number % i != 0) return false; // Changed == to !=

Common Initialization Fault: Start the loop with i = 1 instead of i = 2 to simulate an


initialization error.
Mutant:
for (int i = 1; i < number; i++) { // Changed i = 2 to i = 1
if (number % i == 0) return false;
}
Missing Edge Case Check: Skip the initial check of number <= 1, simulating a
missing edge case.
Mutant:
// Removed the line: if (number <= 1) return false;
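As an illustration, a minimal test driver (plain Java, assuming isPrime is available as a
static method in the same class; the inputs and expected values are illustrative) that would
kill the four mutants above:

public static void main(String[] args) {
    // isPrime(7) should be true: the boundary mutant (i <= number), the i = 1
    // initialization mutant, and the negated-divisibility mutant all wrongly
    // return false for 7, so this single test kills the first three mutants.
    System.out.println(isPrime(7) ? "pass" : "fail");

    // isPrime(1) should be false: the mutant that removes the number <= 1 check
    // skips the loop entirely and wrongly returns true, so this test kills it.
    System.out.println(!isPrime(1) ? "pass" : "fail");
}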

Benefits of Fault-Based Mutation Testing


• Efficient Use of Resources: By focusing only on realistic faults, fewer mutants
are generated, reducing the computational cost of mutation testing.
• Increased Relevance of Test Coverage: Test cases that can kill fault-based
mutants are more likely to be robust against real-world faults.
• Improved Test Suite Quality: This approach helps create test cases that are
highly effective in catching potential bugs, enhancing overall software quality.
