An activity diagram is basically a flow chart representing the flow from one activity to
another. An activity can be described as an operation of the system, so the control flow is drawn
from one operation to another. This flow can be sequential, branched or concurrent. Activity
diagrams deal with all types of flow by using elements like fork, join, etc.
Contents
Fork
A fork represents the splitting of a single flow of control into two or more concurrent
flows of control. A fork may have one incoming transition and two or more outgoing
transitions, each of which represents an independent flow of control. Below the fork, the activities
associated with each of these paths continue in parallel.
Join
A join represents the synchronization of two or more concurrent flows of control. A join
may have two or more incoming transitions and one outgoing transition. Above the join, the
activities associated with each of these paths continue in parallel.
Branching
A branch specifies alternative paths taken based on some Boolean expression. A branch is
represented by a diamond. A branch may have one incoming transition and two or more outgoing
transitions; on each outgoing transition you place a Boolean expression. The guard expressions
should not overlap, but together they should cover all possibilities.
Swimlane
Swimlanes are useful when we model the workflows of business processes, to partition the
activity states of an activity diagram into groups, each group representing the business
organization responsible for those activities. These groups are called swimlanes.
Activity diagram for ATM
Practical 7
Design an ER diagram for a student course-enrollment system with multiplicity.
Student-Course Relationship:
Multiplicity: Many-to-Many
Enrollment-Student Relationship:
Each enrollment is associated with one student.
Multiplicity: One-to-Many
Enrollment-Course Relationship:
Each enrollment is associated with one course.
Multiplicity: One-to-Many
Practical 8
Introduction
A visual representation of the flow of control within a program may help the developer perform
static analysis of the code. One could break the program down into multiple basic blocks, and
connect them with directed edges to draw a Control Flow Graph (CFG). A CFG of a program
helps in identifying how complex a program is. It also helps to estimate the maximum number
of test cases one might require to test the code.
In this experiment, we will learn about basic blocks and how to draw a CFG using them. We
will look into paths and linearly independent paths in the context of a CFG. Finally, we will
learn about McCabe's cyclomatic complexity, and classify a given program based on that.
A control flow graph (CFG) is a directed graph where the nodes represent different instructions
of a program, and the edges define the sequence of execution of such instructions. Figure 1 shows
a small snippet of code (computing the square of an integer) along with its CFG. For simplicity,
each node in the CFG has been labeled with the line numbers of the program containing the
instructions. A directed edge from node #1 to node #2 in figure 1 implies that after execution of
the first statement, the control of execution is transferred to the second instruction.
x_2 = x * x;
return x_2;
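For readers who wish to compile and experiment with this fragment, a minimal runnable sketch
is given below. The function name square and the enclosing main() are assumptions added only
for illustration; the two statements of figure 1 form nodes #1 and #2 of the CFG.

#include <stdio.h>

/* Minimal runnable sketch of the figure-1 snippet; the wrapper function
   "square" and main() are assumed for illustration only. */
int square(int x) {
    int x_2 = x * x;   /* node #1: compute the square */
    return x_2;        /* node #2: return the result  */
}

int main(void) {
    printf("%d\n", square(5));   /* prints 25 */
    return 0;
}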
A program, however, doesn't always consist of only sequential statements. There could be
branching and looping involved in it as well. Figure 2 shows how a CFG would look if
there are sequential, selection and iteration statements, in that order.
A real-life application could seldom be written in a few lines. In fact, it might consist of thousands
of lines. A CFG for such a program is likely to become very large, and it would contain mostly
straight-line connections. To simplify such a graph, different sequential statements could be
grouped together to form a basic block. A basic block is a maximal sequence [ii,iii] of program
instructions I1, I2, ..., In such that for any two adjacent instructions Ik and Ik+1, the following holds
true: Ik+1 is always executed immediately after Ik, and Ik is always executed immediately before
Ik+1. In other words, control enters a basic block only through its first instruction and leaves it
only after its last instruction.
The size of a CFG could be reduced by representing each basic block with a node. To illustrate
this, let's consider the following example.
sum = 0;
i = 1;
while (i <= n) {
    sum += i;
    ++i;
}
printf("%d", sum);
if (sum > 0) {
    printf("Positive");
}
The CFG with basic blocks is shown for the above code in figure 3.
Figure 3: Basic blocks in a CFG
The first statement of a basic block is termed its leader. Any node x in a CFG is said to dominate
another node y (written as x dom y) if all possible execution paths that go through node y must
pass through node x. The node x is then said to be a dominator [ii]. In the above example,
line #s 1, 3, 4, 6, 7, 9, 10 are leaders. The node containing lines 7, 8 dominates the node containing
line # 10. The block containing line #s 1, 2 is said to be the entry block; the block containing
line # 10 is said to be the exit block.
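To make the x dom y relation concrete, below is a minimal sketch of the standard iterative
dominator computation, run on a small hypothetical four-node CFG (entry, loop header, loop
body, exit). The graph, its node numbering and the bit-mask representation of node sets are
assumptions chosen only for illustration.

#include <stdio.h>

#define N 4   /* number of basic blocks in the assumed CFG */

int main(void) {
    /* pred[i] lists the predecessors of block i; -1 terminates each list.
       Assumed CFG: 0 = entry, 1 = loop header, 2 = loop body, 3 = exit.
       Edges: 0->1, 1->2, 2->1 (back edge), 1->3. */
    int pred[N][N] = {
        { -1 },          /* block 0: no predecessors                */
        { 0, 2, -1 },    /* block 1: entry block and loop back edge */
        { 1, -1 },       /* block 2: loop header                    */
        { 1, -1 }        /* block 3: loop header (loop exit)        */
    };

    unsigned dom[N];                 /* dom[i] is a bit set of dominators of block i */
    unsigned all = (1u << N) - 1;

    dom[0] = 1u << 0;                          /* the entry block dominates only itself */
    for (int i = 1; i < N; ++i) dom[i] = all;

    int changed = 1;
    while (changed) {                          /* iterate until a fixed point is reached */
        changed = 0;
        for (int i = 1; i < N; ++i) {
            unsigned meet = all;
            for (int k = 0; pred[i][k] != -1; ++k)
                meet &= dom[pred[i][k]];       /* intersect the predecessors' dominator sets */
            unsigned updated = meet | (1u << i);
            if (updated != dom[i]) { dom[i] = updated; changed = 1; }
        }
    }

    for (int i = 0; i < N; ++i) {              /* e.g. dom(3) = { 0 1 3 } */
        printf("dom(%d) = {", i);
        for (int d = 0; d < N; ++d)
            if (dom[i] & (1u << d)) printf(" %d", d);
        printf(" }\n");
    }
    return 0;
}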
If any block (or sub-graph) in a CFG is not connected with the sub-graph containing the entry
block, it signifies that the concerned block contains code which is unreachable while the program
is executed. Such unreachable code can be safely removed from the program. To illustrate this,
let's consider a modified version of our previous code:
sum = 0;
i = 1;
while (i <= n) {
    sum += i;
    ++i;
}
return sum;
if (sum < 0) {
    return 0;
}
Figure 4 shows the corresponding CFG. The sub-graph containing line #s 8, 9, 10 is disconnected
from the graph containing the entry block. The code in the disconnected sub-graph would never
get executed and, therefore, could be discarded.
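Such disconnected blocks can be detected mechanically with a simple reachability check. Below
is a minimal sketch that marks every block reachable from the entry block by depth-first search
and reports the remaining blocks as unreachable; the five-node adjacency matrix used here is an
assumed example, not the CFG of figure 4.

#include <stdio.h>

#define N 5   /* number of basic blocks in the assumed CFG */

/* Assumed edges: 0->1, 1->2, 1->3, 2->1 (back edge); block 4 has an outgoing
   edge but is never reached from the entry block 0. */
int adj[N][N] = {
    [0][1] = 1,
    [1][2] = 1, [1][3] = 1,
    [2][1] = 1,
    [4][3] = 1
};
int visited[N];

void dfs(int u) {
    visited[u] = 1;
    for (int v = 0; v < N; ++v)
        if (adj[u][v] && !visited[v])
            dfs(v);
}

int main(void) {
    dfs(0);                                  /* start from the entry block */
    for (int u = 0; u < N; ++u)
        if (!visited[u])
            printf("block %d is unreachable and can be removed\n", u);
    return 0;
}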
Terminologies
Path
A path in a CFG is a sequence of nodes and edges that starts from the initial node (or entry block)
and ends at a terminal node. The CFG of a program could have more than one terminal node.
Linearly Independent Path
A linearly independent path is any path through the CFG that introduces at least one new edge
not already present in any other linearly independent path. In the CFG of figure 3, the following
four linearly independent paths can be identified:
1 - 3 - 6 - (7, 8) - 10
1 - 3 - 6 - (7, 8) - 9 - 10
1 - 3 - (4, 5) - 6 - (7, 8) - 10
1 - 3 - (4, 5) - 6 - (7, 8) - 9 - 10
Note that 1 - 3 - (4, 5) - 3 - (4, 5) - 6 - (7, 8) - 10, for instance, won't qualify as a linearly
independent path because it contains no new edge that is not already present in one of the above
four linearly independent paths.
McCabe applied graph-theoretic analysis to determine the complexity of a program module.
The Cyclomatic complexity metric, as proposed by McCabe, provides an upper bound for the
number of linearly independent paths that could exist through a given program module. The
complexity of a module increases as the number of such paths in the module increases. Thus, if
the Cyclomatic complexity of a program module is 7, there could be up to seven linearly
independent paths in the module.
Let G be a given CFG. Let E denote the number of edges, and N denote the number of nodes.
Let V(G) denote the Cyclomatic complexity of the CFG. V(G) can be obtained in any of the
following three ways:
Method #1: V(G) = E - N + 2
Method #2: V(G) could be computed directly by a visual inspection of the CFG:
V(G) = total number of bounded areas + 1. It may be noted here that structured programming
would always lead to a planar CFG.
Method #3: If LN is the total number of loops and decision statements in a program,
then V(G) = LN + 1.
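As a quick illustration, the following is a minimal sketch (not part of the original experiment
material) that evaluates the three formulas. The sample values E = 8, N = 7, two bounded areas
and two loop/decision statements correspond to the worked example given further below; all
three methods agree on V(G) = 3.

#include <stdio.h>

/* Helper functions assumed only for illustrating the three formulas. */
int vg_method1(int edges, int nodes)       { return edges - nodes + 2; }       /* V(G) = E - N + 2 */
int vg_method2(int bounded_areas)          { return bounded_areas + 1; }       /* V(G) = areas + 1 */
int vg_method3(int loops_and_decisions)    { return loops_and_decisions + 1; } /* V(G) = LN + 1    */

int main(void) {
    printf("Method #1: V(G) = %d\n", vg_method1(8, 7));  /* E = 8, N = 7    -> 3 */
    printf("Method #2: V(G) = %d\n", vg_method2(2));     /* 2 bounded areas -> 3 */
    printf("Method #3: V(G) = %d\n", vg_method3(2));     /* 1 loop + 1 if   -> 3 */
    return 0;
}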
A set of threshold values for Cyclomatic complexity has been presented in [vii], which we
reproduce below.
Merits
Gives an idea about the maximum number of test cases to be executed (hence, the
required effort) for a given module
Demerits
McCabe's Cyclomatic complexity was originally proposed for procedural languages. One may
look in [xi] to get an idea of how the complexity calculation could be modified for object-oriented
languages. In fact, one may also wish to make use of the Chidamber-Kemerer metrics [x] (or any
other similar metrics), which have been designed for object-oriented programming.
while (x != y) {
    if (x > y)
        x = x - y;
    else
        y = y - x;
}
return x;
Method #1
N = No. of nodes = 7
E = No. of edges = 8
V(G) = E - N + 2 = 8 - 7 + 2 = 3
Method #2
V(G) = Total no. of non-overlapping areas + 1 = 2 + 1 = 3
Let us determine the Cyclomatic complexity for the "ReissueBook" method as shown
below:
ID transactionID = null;
user.incrementReissueCount(bookID);
transaction.save();
transactionID = transaction.getID();
return transactionID;
The Control Flow Graph for the above module is shown in figure 1. The CFG has six nodes and
seven edges. So, the Cyclomatic complexity is V(G) = 7 - 6 + 2 = 3. It can be verified with the
other two formulae as well: # of regions + 1 = 2 + 1 = 3. Also, # of decision points = 2. So, V(G)
= 2 + 1 = 3. However, as mentioned in the theory section, for methods of classes we add an extra
1 to the V(G). So, the Cyclomatic complexity of this method becomes 4, which is good.
Figure 1. CFG for "ReissueBook" method
Note that in line # 3 two decisions have been short-circuited. Taking this into account, V(G) for
the module would become 5, which is OK. This implies that the method could have up to five
linearly independent paths. By looking at figure 1 we can easily identify three such paths.
However, as mentioned, line # 3 consists of two decision points, which results in another
"implicit" path. Based on these, we can design four test cases that assign Boolean values to the
sequence { user.canIssueNow, Book.IsAvailable, count < REISSUE_LIMIT }. The four such
cases are shown below:
1. user.canIssueNow = false (the remaining conditions are short-circuited)
2. user.canIssueNow = true, Book.IsAvailable = false
3. user.canIssueNow = true, Book.IsAvailable = true, count < REISSUE_LIMIT = false
4. user.canIssueNow = true, Book.IsAvailable = true, count < REISSUE_LIMIT = true
Now let us focus on the "IssueManager" class. For simplicity, let's assume it has only two
methods: IssueBook and ReissueBook, as shown below.
Book.SetStatusIssued(bookID);
user.incrementIssueCount(bookID);
transaction.save();
transactionID = transaction.getID();
return transactionID;
ID transactionID = null;
user.incrementReissueCount(bookID);
transaction.save();
transactionID = transaction.getID();
return transactionID;
}
"IssueBook" has two decision points (if and &&). So, V(GIssueBook) = (2 + 1) + 1 = 4. We have
already determined V(GReissueBook) to be 5. So, the total Cyclomatic complexity of this class
(having two methods) becomesV(G) = (4 + 5) - 2 + 1 = 8
Practical 9
To design an experiment for creating a test suite using the equivalence class
partitioning technique, you can follow these steps:
1. Objectives:
o Determine the effectiveness of equivalence class partitioning in identifying
representative test cases.
o Evaluate the fault detection capability of the test suite derived from equivalence
class partitioning.
o Analyze the relationship between the number of test cases and the fault
detection rate.
2. Setup:
o Select a software program or module for testing.
o Identify the input conditions and their valid and invalid ranges or values.
o Choose a programming language and develop a faulty version(s) of the program
with seeded defects.
3. Equivalence Class Partitioning:
o Define equivalence classes for each input condition based on valid, invalid, and
boundary value ranges.
o Derive test cases by selecting representative values from each equivalence class.
o Document the equivalence classes, test cases, and expected results.
4. Test Suite Execution:
o Execute the derived test suite against the original (correct) version of the
program.
o Record the number of test cases that passed and failed.
o Execute the test suite against the faulty version(s) of the program.
o Record the number of test cases that detected faults and those that did not.
5. Data Collection and Analysis:
o Collect data on the number of test cases derived from each equivalence class.
o Analyze the fault detection rate of the test suite.
o Investigate the relationship between the number of test cases and the fault
detection rate.
o Identify any missing or inadequate test cases based on the faults detected or not
detected.
6. Comparison and Evaluation:
o Compare the fault detection rate of the equivalence class partitioning test suite
with other testing techniques (e.g., boundary value analysis, random testing).
o Evaluate the effectiveness and efficiency of equivalence class partitioning in
identifying representative test cases and detecting faults.
7. Reporting and Recommendations:
o Document the experiment methodology, results, and conclusions.
o Provide recommendations for improving the equivalence class partitioning
process or combining it with other testing techniques.
o Suggest areas for further investigation or additional experiments.
8. Validation and Replication:
o Validate the experiment by repeating it on different programs or modules.
o Replicate the experiment with different input conditions, equivalence classes,
or seeded faults to ensure the reliability of the results.
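The following is a minimal sketch of the workflow described in steps 2, 4 and 5 above: a correct
module, a version of it with one seeded defect, and an equivalence-class-based test suite executed
against both. The module grade(), its valid input range [0, 100], the pass mark of 40 and the
seeded off-by-one fault are all assumptions made only for illustration.

#include <stdio.h>

/* Correct version: returns 'P' for marks in [40, 100], 'F' for [0, 40), -1 otherwise. */
int grade_correct(int marks) {
    if (marks < 0 || marks > 100) return -1;
    return marks >= 40 ? 'P' : 'F';
}

/* Faulty version: seeded off-by-one defect at the pass boundary (> instead of >=). */
int grade_faulty(int marks) {
    if (marks < 0 || marks > 100) return -1;
    return marks > 40 ? 'P' : 'F';
}

int main(void) {
    /* One representative test case per equivalence class:
       invalid low, fail, pass boundary, pass, invalid high. */
    struct { int input; int expected; } suite[] = {
        { -5, -1 }, { 20, 'F' }, { 40, 'P' }, { 70, 'P' }, { 150, -1 }
    };
    int n = sizeof suite / sizeof suite[0];

    int failures_on_correct = 0, detections_on_faulty = 0;
    for (int i = 0; i < n; ++i) {
        if (grade_correct(suite[i].input) != suite[i].expected) ++failures_on_correct;
        if (grade_faulty(suite[i].input)  != suite[i].expected) ++detections_on_faulty;
    }
    printf("Failures on correct version      : %d of %d test cases\n", failures_on_correct, n);
    printf("Faults detected on faulty version: %d of %d test cases\n", detections_on_faulty, n);
    return 0;
}

Note that in this sketch the seeded fault is detected only by the test case chosen at the pass/fail
boundary, which is exactly the kind of relationship between test-case selection and fault
detection rate that step 5 asks you to analyze.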
Equivalence class partitioning is a software testing technique that divides the input domain of
a program into classes of equivalent inputs. Each class represents a set of inputs that should
produce the same output from the system under test.
Input Parameters: Identify the input parameters of the system under test. These could be user
inputs, configuration settings, or any other relevant data.
Equivalence Classes: Divide the possible values of each input parameter into equivalence
classes. An equivalence class is a set of inputs that should produce the same output and are likely
to exhibit similar behavior from the system. The equivalence classes should cover both valid and
invalid inputs.
Representative Values: Choose representative values from each equivalence class to create test
cases. These values should adequately represent the behavior of the entire class.
Test Cases: For each input parameter, create test cases that cover different combinations of
values from their respective equivalence classes. Ensure that you include both boundary values
and values from the middle of each range.
Execute Test Cases: Execute the designed test cases on the system under test and observe the
outputs. Compare the actual outputs with expected outputs to determine if the system behaves as
expected.
Evaluate Coverage: Assess the coverage achieved by the test suite by analyzing how many
equivalence classes have been tested and how thoroughly they have been covered. Adjust the test
suite as necessary to improve coverage.
Scenario: Testing a login system with two input parameters: username and password.
Input Parameters:
Username (string)
Password (string)
Equivalence Classes:
Username: valid usernames, invalid usernames
Password: valid passwords (e.g., minimum length, maximum length), invalid passwords
Execute Test Cases: Execute each test case on the login system and observe the behavior.
Evaluate Coverage:
Check whether all equivalence classes have been covered by the test cases. If not, add additional
test cases to improve coverage.
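As a concrete illustration of the scenario above, here is a minimal sketch of an equivalence-class-
based test suite in C. The length rules (6 to 12 characters) and the function validate_login() are
assumptions, since the actual validation rules of the login system are not specified.

#include <stdio.h>
#include <string.h>

#define MIN_LEN 6    /* assumed minimum credential length */
#define MAX_LEN 12   /* assumed maximum credential length */

/* Hypothetical system under test: accepts credentials whose lengths fall
   within [MIN_LEN, MAX_LEN]. */
int validate_login(const char *username, const char *password) {
    size_t ul = strlen(username), pl = strlen(password);
    return ul >= MIN_LEN && ul <= MAX_LEN && pl >= MIN_LEN && pl <= MAX_LEN;
}

int main(void) {
    /* One representative value per equivalence class. */
    struct { const char *user, *pass; int expected; const char *cls; } tests[] = {
        { "alice123", "secret99",                1, "valid username, valid password" },
        { "bob",      "secret99",                0, "username shorter than MIN_LEN"  },
        { "averyverylongusername", "secret99",   0, "username longer than MAX_LEN"   },
        { "alice123", "pw",                      0, "password shorter than MIN_LEN"  },
        { "alice123", "anextremelylongpassword", 0, "password longer than MAX_LEN"   },
    };
    int n = sizeof tests / sizeof tests[0];

    for (int i = 0; i < n; ++i) {
        int actual = validate_login(tests[i].user, tests[i].pass);
        printf("%-32s : %s\n", tests[i].cls,
               actual == tests[i].expected ? "PASS" : "FAIL");
    }
    return 0;
}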
Practical 10
Boundary value analysis (BVA) is a software testing technique that focuses on testing
boundaries of input domains. It aims to identify errors at the boundaries rather than in the
centre of input ranges. Here's how you can design test cases using boundary value analysis:
Input Parameters: Determine the input parameters or variables that have boundaries that need to
be tested.
Boundary Values: For each input parameter, identify the boundaries and determine the values
immediately before and after those boundaries. These values are the focus of boundary value
analysis.
Test Cases: Create test cases that cover the boundary values identified in step 2. Each test case
should test one boundary value at a time, focusing on the transitions from one state to another.
Additional Test Cases: In addition to the boundary values, include test cases for values just inside
and just outside the boundary values. This ensures thorough testing of the system's behaviour
around the boundaries.
Execute Test Cases: Execute the designed test cases on the system under test and observe the
behaviour at the boundaries.
Evaluate Coverage: Assess the coverage achieved by the test suite by analyzing how many
boundary values have been tested and how thoroughly they have been covered. Adjust the test
suite as necessary to improve coverage.
Scenario: Testing a system that calculates the area of a rectangle based on its length and width.
Input Parameters:
Length (numeric)
Width (numeric)
Boundary Values:
For Length:
Lower boundary: 0
Execute Test Cases: Execute each test case on the system and observe the behavior, specifically
focusing on how the system handles values at the boundaries.
Evaluate Coverage:
Check whether all boundary values have been covered by the test cases. If not, add additional
test cases to improve coverage.
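As a concrete illustration of the scenario above, here is a minimal sketch of boundary-value test
cases in C. The function rect_area() and the assumed upper limit of 100 are illustrative only; the
scenario above specifies just the lower boundary of 0 for the length.

#include <stdio.h>

#define MAX_SIDE 100   /* assumed upper boundary; only the lower boundary (0) is given above */

/* Hypothetical system under test: accepts sides in the range (0, MAX_SIDE]
   and returns the area, or -1 for out-of-range input. */
int rect_area(int length, int width) {
    if (length <= 0 || length > MAX_SIDE || width <= 0 || width > MAX_SIDE)
        return -1;
    return length * width;
}

int main(void) {
    /* Boundary values for length (width held at a nominal value), together
       with the values just inside and just outside each boundary. */
    int length_values[] = { -1, 0, 1, MAX_SIDE - 1, MAX_SIDE, MAX_SIDE + 1 };
    int nominal_width = 10;
    int n = sizeof length_values / sizeof length_values[0];

    for (int i = 0; i < n; ++i) {
        int area = rect_area(length_values[i], nominal_width);
        printf("length = %4d, width = %3d -> %s (result = %d)\n",
               length_values[i], nominal_width,
               area < 0 ? "rejected" : "accepted", area);
    }
    return 0;
}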