Module 04
• Binary search trees (BST), sometimes called ordered or sorted binary trees, are a
particular type of container: a data structure that stores "items" (such as numbers, names,
etc.) in memory. They allow fast lookup, addition and removal of items, and can be used
to implement either dynamic sets of items or lookup tables that allow finding an item
by its key (e.g., finding the phone number of a person by name).
• Binary search trees keep their keys in sorted order, so that lookup and other operations
can use the principle of binary search: when looking for a key in a tree (or a place to
insert a new key), they traverse the tree from root to leaf, making comparisons to keys
stored in the nodes of the tree and deciding, based on the comparison, to continue
searching in the left or right subtrees. On average, this means that each comparison
allows the operations to skip about half of the tree, so that each lookup, insertion or
deletion takes time proportional to the logarithm of the number of items stored in the
tree. This is much better than the linear time required to find items by key in an
(unsorted) array, but slower than the corresponding operations on hash tables.
• A binary search tree (Fig.4.1) T is a binary tree; either it is empty or each node in the
tree contains an identifier and:
(i) all identifiers in the left subtree of T are less (numerically or alphabetically) than the
identifier in the root node T;
(ii) all identifiers in the right subtree of T are greater than the identifier in the root node T;
(iii) the left and right subtrees of T are also binary search trees.
Structure:
struct node
{
int info;
struct node *llink;
struct node *rlink;
};
typedef struct node *NODE;
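As a quick illustration of properties (i)-(iii), an inorder traversal of a binary search tree visits the keys in sorted order. The following small sketch (not part of the original listing; it assumes <stdio.h> is included) prints the keys of a tree in ascending order.

void inorder(NODE root)
{
    if (root == NULL)
        return;
    inorder(root->llink);          /* visit the smaller keys first */
    printf("%d ", root->info);     /* then the key at the root     */
    inorder(root->rlink);          /* visit the larger keys last   */
}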
Figure 4.12: Creating a binary search tree
NODE insert(NODE root, int item)
{
    NODE temp, cur, prev;
    temp = (NODE) malloc(sizeof(struct node));
    temp->info = item;
    temp->llink = temp->rlink = NULL;
    if (root == NULL)
        return temp;                  /* empty tree: the new node becomes the root */
    cur = root;
    prev = NULL;
    while (cur != NULL)               /* find the parent of the insertion point */
    {
        prev = cur;
        cur = (temp->info < cur->info) ? cur->llink : cur->rlink;
    }
    if (temp->info < prev->info)
        prev->llink = temp;           /* attach as the left child  */
    else if (temp->info > prev->info)
        prev->rlink = temp;           /* attach as the right child */
    else
        free(temp);                   /* duplicate key: ignore it  */
    return root;
}
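A possible usage sketch (the keys here are illustrative, not necessarily those of Figure 4.12): repeated calls to insert() build the tree, and the inorder() sketch above prints the keys in sorted order. It assumes <stdio.h> and <stdlib.h> are included.

int main(void)
{
    int keys[] = {45, 30, 60, 25, 35, 50, 70};
    NODE root = NULL;
    int i;
    for (i = 0; i < 7; i++)
        root = insert(root, keys[i]);   /* grow the BST one key at a time */
    inorder(root);                      /* prints 25 30 35 45 50 60 70    */
    return 0;
}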
NODE search(NODE root, int key)
{
    NODE cur = root;
    int i = 0;                        /* set to 1 when the key is found */
    while (cur != NULL)
    {
        if (key == cur->info) { printf("Key found\n"); i = 1; break; }
        else if (key < cur->info)
            cur = cur->llink;
        else
            cur = cur->rlink;
    }
    if (i == 0)
        printf("Key not found\n");
    return root;
}
4.1.4 Delete Function:
The delete function deletes a node from the binary search tree. However, utmost care should
be taken that the properties of the binary search tree are not violated and nodes are not lost in
the process.
Case 1: Deleting a Node that has No Children
Look at the binary search tree given in Fig. 4.14. If we have to delete node 78, we can simply
remove this node without any issue. This is the simplest case of deletion.
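A minimal sketch of Case 1 only, assuming the node structure defined earlier and <stdlib.h>; the function name and the choice to leave the tree unchanged when the key is absent or the node is not a leaf are illustrative.

NODE deleteLeaf(NODE root, int key)
{
    NODE cur = root, prev = NULL;
    while (cur != NULL && cur->info != key)      /* locate the node and its parent */
    {
        prev = cur;
        cur = (key < cur->info) ? cur->llink : cur->rlink;
    }
    if (cur == NULL || cur->llink != NULL || cur->rlink != NULL)
        return root;                             /* key absent, or not a leaf      */
    if (prev == NULL)
        root = NULL;                             /* the leaf was the only node     */
    else if (prev->llink == cur)
        prev->llink = NULL;                      /* unlink from the parent         */
    else
        prev->rlink = NULL;
    free(cur);
    return root;
}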
In a binary search tree, the following additional operations are useful in certain applications.
(a) threeWayJoin(small, mid, big): This creates a binary search tree consisting of the pairs
initially in the binary search trees small and big, as well as the pair mid. It is assumed that
each key in small is smaller than mid.key and that each key in big is greater than mid.key.
Following the join, both small and big are empty.
(b) twoWayJoin(small, big): This joins the two binary search trees small and big to obtain a
single binary search tree that contains all the pairs originally in small and big. It is assumed
that all keys of small are smaller than all keys of big and that following the join both small
and big are empty.
(c) split(theTree, k, small, mid, big): The binary search tree theTree is split into three parts:
small is a binary search tree that contains all pairs of theTree that have key less than k; mid is
the pair (if any) in theTree whose key is k; and big is a binary search tree that contains all
pairs of theTree that have key larger than k. Following the split operation, theTree is empty.
When theTree has no pair whose key is k, mid.key is set to -1 (this assumes that -1 is not a
valid key for a dictionary pair).
• To perform a split, we first make the following observation about splitting at the
root. In this case small is the left subtree of theTree, mid is the pair in the root, and big is
the right subtree of theTree.
• If k is smaller than the key at the root, then the root together with its right subtree is to
be in big.
• If k is larger than the key at the root, then the root together with its left subtree is to be
in small.
• As we move down, we construct the two search trees small and big. Sketches of the join
and split functions are given below.
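The following is a minimal sketch of the three operations, using the node structure defined earlier and assuming <stdlib.h>; returning mid as an int (with -1 meaning "key absent") follows the convention described above, while the function names and the recursive formulation of split are illustrative choices rather than the textbook code.

NODE threeWayJoin(NODE small, int mid, NODE big)
{
    /* every key in small < mid < every key in big, so mid becomes the root */
    NODE t = (NODE) malloc(sizeof(struct node));
    t->info = mid;
    t->llink = small;
    t->rlink = big;
    return t;
}

NODE twoWayJoin(NODE small, NODE big)
{
    NODE prev = NULL, cur;
    if (small == NULL) return big;
    if (big == NULL) return small;
    for (cur = small; cur->rlink != NULL; cur = cur->rlink)
        prev = cur;                       /* cur is now the largest key in small */
    if (prev != NULL)
    {
        prev->rlink = cur->llink;         /* detach the largest node             */
        cur->llink = small;               /* it becomes the new root             */
    }
    cur->rlink = big;
    return cur;
}

void split(NODE theTree, int k, NODE *small, int *mid, NODE *big)
{
    if (theTree == NULL)                  /* nothing left to split               */
    {
        *small = *big = NULL;
        *mid = -1;
        return;
    }
    if (k == theTree->info)               /* split at the root                   */
    {
        *small = theTree->llink;
        *big = theTree->rlink;
        *mid = theTree->info;
        free(theTree);
    }
    else if (k < theTree->info)           /* root and its right subtree go to big */
    {
        split(theTree->llink, k, small, mid, &theTree->llink);
        *big = theTree;
    }
    else                                  /* root and its left subtree go to small */
    {
        split(theTree->rlink, k, &theTree->rlink, mid, big);
        *small = theTree;
    }
}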
Unless care is taken, the height of a binary search tree with n elements can become as large as
n. This is the case, for instance, when we use the insert function to insert the keys 1, 2, 3, ..., n, in
that order, into an initially empty binary search tree. However, when insertions and deletions
are made at random using the above functions, the height of the binary search tree is
O(log2 n), on the average.
4.2.1 Introduction:
Suppose we have k ordered sequences, called runs, that are to be merged into a single
ordered sequence. Each run consists of some records and is in non-decreasing order of a
designated field called the key. Let n be the number of records in all k runs together. The
merging task can be accomplished by repeatedly outputting the record with the smallest key.
The smallest has to be found from k possibilities, and it could be the leading record in any of
the k runs. The most direct way to merge k runs is to make k-1 comparisons to determine the
next record to output. For k > 2, we can achieve a reduction in the number of comparisons
needed to find the next smallest element by using the selection tree data structure. There are
two kinds of selection trees: winner trees and loser trees.
4.2.2 Winner Tree:
A winner tree is a binary tree in which each node represents the smaller of its two children.
Thus, the root node represents the smallest node in the tree. Figure 5.32 illustrates a winner
tree for the case k = 8.
The construction of this winner tree may be compared to the playing of a tournament in
which the winner is the record with the smaller key. Then, each nonleaf node in the tree
represents the winner of a tournament, and the root node represents the overall winner, that
is, the record with the smallest key. Each leaf node represents the first record in the
corresponding run. Since the records being merged are generally large, each node will
contain only a pointer to the record it represents. Thus, the root node contains a pointer to
the first record in run 4. A winner tree may be represented using the sequential allocation
scheme for binary trees that results from Lemma 5.4. The number above each node in
Figure 5.32 is the address of the node in this sequential representation. The record pointed
to by the root has the smallest key and so may be output. Now, the next record from run 4
enters the winner tree. It has a key value of 15. To restructure the tree, the tournament has to
be replayed only along the path from node 11 to the root. Thus, the winner from nodes 10
and 11 is again node 11 (15 < 20), the winner from nodes 4 and 5 is node 4 (9 < 15), and the
winner from nodes 2 and 3 is node 3 (8 < 9). The new tree is shown in Figure 5.33. Each
tournament is played between sibling nodes and the result is put in the parent node. Lemma
5.4 may be used to compute the addresses of sibling and parent nodes efficiently. Each new
tournament takes place at the next higher level in the tree.
Figure 5.33: Winner tree of Figure 5.32 after one record has been output and the tree restructured.
After the record with the smallest key value is output, the winner tree of Figure 5.32 is to be
restructured. Since the record with the smallest key value is in run 4, this restructuring
involves inserting the next record from this run into the tree. The next record has key value
15. Tournaments are played between sibling nodes along the path from node 11 to the root.
Since these sibling nodes represent the losers of tournaments played earlier, we can simplify
the process by placing in each nonleaf node a pointer to the record that loses the tournament
rather than to the winner of the tournament. A selection tree in which each nonleaf node
retains a pointer to the loser is called a loser tree. Figure 5.34 shows the loser tree that
corresponds to the winner tree of Figure 5.32. For convenience, each node contains the key
value of a record rather than a pointer to the record represented. The leaf nodes represent the
first record in each run. An additional node, node 0, has been added to represent the overall
winner of the tournament. Following the output of the overall winner, the tree is restructured
by playing tournaments along the
path from node 11 to node 1. The records with which these tournaments are to be played are
readily available from the parent nodes. As a result, sibling nodes along the path from 11 to 1
are not accessed.
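A compact winner-tree sketch for merging k runs, loosely following the description above; it assumes k is a power of two, that each run is a sorted array terminated by the sentinel INT_MAX, and that each internal node stores the index of the run that wins the tournament of its subtree. All names and the replay-from-the-leaf build are illustrative choices.

#include <stdio.h>
#include <limits.h>
#define K 4

int *runs[K];            /* runs[r] points to a sorted, INT_MAX-terminated array */
int pos[K];              /* current position within run r                        */
int tree[2 * K];         /* internal nodes 1..K-1; tree[1] is the overall winner */

int key(int r) { return runs[r][pos[r]]; }

/* Replay the tournaments along the path from run r's leaf (node K + r) to the root. */
void replay(int r)
{
    int node, sib, other, winner = r;
    for (node = K + r; node > 1; node /= 2)
    {
        sib = node ^ 1;                               /* sibling at this level      */
        other = (sib >= K) ? sib - K : tree[sib];     /* run represented by sibling */
        if (key(other) < key(winner))
            winner = other;
        tree[node / 2] = winner;                      /* record the winner above    */
    }
}

int main(void)
{
    static int r0[] = {10, 16, 28, INT_MAX};
    static int r1[] = {9, 20, 38, INT_MAX};
    static int r2[] = {20, 20, 30, INT_MAX};
    static int r3[] = {6, 15, 25, INT_MAX};
    int r;
    runs[0] = r0; runs[1] = r1; runs[2] = r2; runs[3] = r3;
    for (r = 0; r < K; r++)              /* build the initial winner tree      */
        replay(r);
    while (key(tree[1]) != INT_MAX)      /* k-way merge                        */
    {
        printf("%d ", key(tree[1]));     /* output the overall winner          */
        pos[tree[1]]++;                  /* advance that run ...               */
        replay(tree[1]);                 /* ... and replay only along its path */
    }
    printf("\n");
    return 0;
}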
4.3: Forest
Definition: A forest is a set of n >= 0 disjoint trees. When we remove the root of a
tree we obtain a forest. For example, removing the root of any binary tree produces a forest of
two trees.
Three-tree forest
Definition: If T1, ..., Tn is a forest of trees, then the binary tree corresponding to this forest,
denoted by B(T1, ..., Tn),
(1) is empty, if n = 0;
(2) has root equal to root(T1); has left subtree equal to B(T11, T12, ..., T1m), where T11, ...,
T1m are the subtrees of root(T1); and has right subtree B(T2, ..., Tn).
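A small sketch of this correspondence, assuming each forest node is stored with a leftmost-child pointer and a next-sibling pointer (the struct and function names are illustrative) and that <stdlib.h> is included.

struct tnode                          /* a node of a general (forest) tree      */
{
    int data;
    struct tnode *child;              /* leftmost child                         */
    struct tnode *sibling;            /* next sibling / next tree of the forest */
};

struct bnode                          /* a binary tree node                     */
{
    int data;
    struct bnode *llink, *rlink;
};

/* Build B(T1, ..., Tn): the left subtree comes from the subtrees of root(T1),
   the right subtree from the remaining trees T2, ..., Tn. */
struct bnode *forestToBinary(struct tnode *t)
{
    struct bnode *b;
    if (t == NULL)                    /* n = 0: empty forest, empty binary tree */
        return NULL;
    b = (struct bnode *) malloc(sizeof(struct bnode));
    b->data = t->data;                            /* root(T1)         */
    b->llink = forestToBinary(t->child);          /* B(T11, ..., T1m) */
    b->rlink = forestToBinary(t->sibling);        /* B(T2, ..., Tn)   */
    return b;
}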
Preorder Traversal:
The preorder traversal of T is equivalent to visiting the nodes of F in forest preorder, which is
defined as follows:
(a) if F is empty then return;
(b) visit the root of the first tree of F;
(c) traverse the subtrees of the first tree in forest preorder;
(d) traverse the remaining trees of F in forest preorder.
Inorder Traversal:
Inorder traversal of T is equivalent to visiting the nodes of F in forest inorder, which is defined
as follows:
(a) if F is empty then return;
(b) traverse the subtrees of the first tree of F in forest inorder;
(c) visit the root of the first tree;
(d) traverse the remaining trees of F in forest inorder.
Postorder Traversal:
There is no natural analog for the postorder traversal of the corresponding binary tree of a
forest. Nevertheless, we can define the postorder traversal of a forest, F, as follows:
(a) if F is empty then return;
(b) traverse the subtrees of the first tree of F in forest postorder;
(c) traverse the remaining trees of F in forest postorder;
(d) visit the root of the first tree.
4.4.1 Introduction
In this section we study the use of trees in the representation of sets. We assume that the elements
of the sets are the numbers 0, 1, 2, ..., n-1. In practice, these numbers might be indices into a
symbol table that stores the actual names of the elements.
For example, if we have 10 elements numbered 0 through 9, we may partition them into three
disjoint sets, S1 = {0, 6, 7, 8}, S2 = {1, 4, 9}, and S3 = {2, 3, 5}.
To obtain the union of S1 and S2, since we have linked the nodes from children to parent, we
simply make one of the trees a subtree of the other.
To implement the set union operation, we simply set the parent field of one of the roots to the
other root. We can accomplish this easily if, with each set name, we keep a pointer to the root
of the tree representing that set.
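A minimal sketch of the two basic operations, assuming the usual parent-array representation in which parent[i] is negative when i is a root (the array size and function names are illustrative).

#define MAX_ELEMENTS 100
int parent[MAX_ELEMENTS];             /* initialise every entry to -1              */

int simpleFind(int i)
{
    while (parent[i] >= 0)            /* climb to the root of i's tree             */
        i = parent[i];
    return i;
}

void simpleUnion(int i, int j)        /* i and j must be roots of different trees  */
{
    parent[i] = j;                    /* make j the parent of root i               */
}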
Rather than using the set name S1, we refer to this set as 0. The transition to set names is easy.
We assume that a table, name[ ], holds the set names. If i is an element in a tree with root j,
and j has a pointer to entry k in the set name table, then the set name is just name[k].
Definition: Weighting rule for union(i, j). If the number of nodes in tree i is less than the
number in tree j then make j the parent of i; otherwise make i the parent of j.
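A sketch of union with the weighting rule, using the parent[] array above and the common convention that a root stores the negative of the number of nodes in its tree (so a singleton starts at -1); the names are illustrative.

void weightedUnion(int i, int j)      /* i and j are roots of different trees   */
{
    int total = parent[i] + parent[j];        /* -(combined number of nodes)    */
    if (parent[i] > parent[j])                /* tree i has fewer nodes         */
    {
        parent[i] = j;                        /* make j the parent of i         */
        parent[j] = total;
    }
    else
    {
        parent[j] = i;                        /* make i the parent of j         */
        parent[i] = total;
    }
}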
Definition [collapsing rule]: If j is a node on the path from i to its root and parent[i] !=
root(i), then set parent[j] to root(i).
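A sketch of find with the collapsing rule: after the root is located, every node on the search path is re-attached directly to the root, so later finds on the same elements are faster.

int collapsingFind(int i)
{
    int root, trail, lead;
    for (root = i; parent[root] >= 0; root = parent[root])
        ;                                     /* first pass: locate the root        */
    for (trail = i; trail != root; trail = lead)
    {
        lead = parent[trail];
        parent[trail] = root;                 /* collapse: point directly at root   */
    }
    return root;
}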
The equivalence classes to be generated may be regarded as sets. These sets are disjoint
since no polygon can be in two equivalence classes. Initially, all n polygons are in an
equivalence class of their own; thus parent[i] = -1, 0 <= i < n. If an equivalence pair, i ≡ j,
is to be processed, we must first determine the sets containing i and j. If they are different,
then we replace the two sets by their union. If the two sets are the same, then we do
nothing, since the relation i ≡ j is redundant: i and j are already in the same equivalence
class. To process each equivalence pair, we need to perform two finds and at most one
union.
If n = 0 or n = 1, there is only one binary tree. If n = 2, then there are two distinct binary
trees, and if n = 3, there are five.
Suppose we have the preorder sequence ABCDEFGHI and the inorder sequence
BCAEDGHFI of a binary tree. To construct the binary tree from these sequences, we
look at the first letter in the preorder sequence, A. This letter must be the root of the tree
by definition of the preorder traversal {VLR}. We also know by definition of the inorder
traversal {LVR} that all nodes preceding A in the inorder sequence {B, C} are in the left
subtree, while the remaining nodes {E, D, G, H, F, I} are in the right subtree. Figure 5.49(a) is
our first approximation to the correct tree. Moving right in the preorder sequence, we find
B as the next root. Since no node precedes B in the inorder sequence, B has an empty left
subtree, which means that C is in its right subtree. Figure 5.49(b) is the next
approximation. Continuing in this way, we arrive at the binary tree of Figure 5.49(c). By
formalizing this argument (see the exercises for this section), we can verify that every
binary tree has a unique pair of preorder-inorder sequences.
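A hedged sketch of this construction, reusing the node structure from earlier in this module (info/llink/rlink, with the characters stored as ints); the function name and the pointer-arithmetic style of passing subsequences are illustrative. Calling buildTree("ABCDEFGHI", "BCAEDGHFI", 9) rebuilds the tree described above.

NODE buildTree(const char pre[], const char in[], int n)
{
    NODE root;
    int i = 0;
    if (n <= 0)
        return NULL;
    root = (NODE) malloc(sizeof(struct node));
    root->info = pre[0];                      /* first preorder symbol is the root   */
    while (in[i] != pre[0])
        i++;                                  /* locate the root in the inorder part */
    root->llink = buildTree(pre + 1, in, i);                     /* left subtree  */
    root->rlink = buildTree(pre + 1 + i, in + i + 1, n - 1 - i); /* right subtree */
    return root;
}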
• If the nodes of the tree are numbered such that its preorder permutation is 1, 2, ..., n,
then from our earlier discussion it follows that distinct binary trees define
distinct inorder permutations.
• Thus, the number of distinct binary trees is equal to the number of distinct inorder
permutations obtainable from binary trees having the preorder permutation 1, 2, ..., n.
Using the concept of an inorder permutation, we can show that the number of distinct
permutations obtainable by passing the numbers 1 to n through a stack and
deleting in all possible ways is equal to the number of distinct binary trees with n
nodes (see the exercises). If we start with the numbers 1, 2, 3, then the possible
permutations obtainable by a stack are: (1, 2, 3), (1, 3, 2), (2, 1, 3), (2, 3, 1), (3, 2, 1).
Obtaining (3, 1, 2) is impossible.
The number of distinct ways to obtain M1,i and Mi+1,n are bi and bn-i, respectively. Therefore,
letting b1 = 1, we have
bn = Σ (i = 1 to n-1) bi * bn-i , n > 1.
Then we see that bn is the sum of all the possible binary trees formed in the following way: a
root and two subtrees with bi and bn-i-1 nodes, for 0 <= i < n. This explanation says that
bn = Σ (i = 0 to n-1) bi * bn-i-1 , n >= 1, and b0 = 1.
Therefore, the number of binary trees with n nodes, the number of permutations of 1 to n
obtainable with a stack, and the number of ways to multiply n + 1 matrices are all equal.
To obtain the number of distinct binary trees with n nodes, we must solve this recurrence.
To begin, we let
B(x) = Σ (i >= 0) bi * x^i ,
which is the generating function for the number of binary trees. Next observe that by the
recurrence relation we get the identity
x * B(x)^2 = B(x) - 1.
Using the formula to solve quadratics and the fact that B(0) = b0 = 1, we get
B(x) = (1 - sqrt(1 - 4x)) / (2x).
Expanding this by the binomial theorem and equating coefficients gives
bn = (1/(n+1)) * C(2n, n),
which is approximately
bn = O(4^n / n^(3/2)).
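A small sketch that simply evaluates the recurrence bn = Σ bi * bn-1-i with b0 = 1 to tabulate the number of distinct binary trees for the first few values of n.

#include <stdio.h>

int main(void)
{
    long long b[16];
    int n, i;
    b[0] = 1;
    for (n = 1; n <= 15; n++)
    {
        b[n] = 0;
        for (i = 0; i < n; i++)
            b[n] += b[i] * b[n - 1 - i];   /* a root plus (i, n-1-i) subtrees */
    }
    for (n = 0; n <= 5; n++)
        printf("b(%d) = %lld\n", n, b[n]); /* prints 1, 1, 2, 5, 14, 42       */
    return 0;
}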
5. GRAPHS
5.1.1 Introduction
In Koenigsberg, the Pregel river flows around the island of Kneiphof. There are four land
areas, labelled A through D in Figure 6.1, that have this river on their border. Seven bridges,
labelled a through g, connect the land areas. The Koenigsberg bridge problem is as follows:
Starting at some land area, is it possible to return to our starting location after walking across
each of the bridges exactly once?
This walk does not cross all bridges exactly once, nor does it return to the starting land
area B.
Euler solved the problem by using a graph (actually a multigraph) in which the land areas
are vertices and the bridges are edges. His solution is not only elegant, it applies to all
graphs.
Euler defined the degree of a vertex as the number of edges incident on it. He then
showed that there is a walk starting at any vertex, going through each edge exactly once,
and terminating at the starting vertex iff the degree of each vertex is even. We now call a
walk that does this an Eulerian walk. Since this first application, graphs have been used in
a wide variety of applications, including analysis of electrical circuits, finding shortest
routes, project planning, and the identification of chemical compounds. Indeed graphs
may be the most widely used of all mathematical structures.
5.1.2 Definitions
5.1.3 Terminologies
• Complete graph: A graph G is said to be complete if all its nodes are fully
connected, that is, if there is an edge between every pair of distinct nodes in the graph. A
complete graph has n(n-1)/2 edges, where n is the number of nodes in G. An example is
G1 in Fig. 5.5.
• Adjacent and incident: If (v1, v2) is an edge in E(G), then we shall say the vertices
v1 and v2 are adjacent and that the edge (v1, v2) is incident on vertices v1 and v2. The
vertices adjacent to vertex 2 in G2 are 4, 5 and 1. The edges incident on vertex 3 in
G2 are (1,3), (3,6) and (3,7). If <v1, v2> is a directed edge, then vertex v1 will be said
to be adjacent to v2 while v2 is adjacent from v1. The edge <v1, v2> is incident to v1
and v2. In G3 the edges incident to vertex 2 are <1,2>, <2,1> and <2,3>.
• A subgraph: A subgraph of G is a graph G' such that V(G')⊆ V(G) and E(G') ⊆
E(G). Figure 5.6 shows some of the subgraphs of G1 and G3 in the figure 5.5.
• Path: A path from vertex vp to vertex vq in graph G is a sequence of vertices vp, vi1,
vi2, ..., vin, vq such that (vp, vi1), (vi1, vi2), ..., (vin, vq) are edges in E(G). If G is
directed, then the path consists of the edges <vp, vi1>, <vi1, vi2>, ..., <vin, vq> in E(G).
• Length: length of a path is the number of edges on it.
• Simple Path: A simple path is a path in which all vertices except possibly the first and
last are distinct. A path such as (1,2), (2,4), (4,3) is written as 1,2,4,3. Paths 1,2,4,3 and
1,2,4,2 are both of length 3 in G1. The first is a simple path while the second is not.
1,2,3 is a simple directed path in G3. 1,2,3,2 is not a path in G3, as the edge <3,2> is not in
E(G3).
23
DATA STRUCTURES AND APPLICATIONS (BCS304)
• Cycle: A cycle is a simple path in which the first and last vertices are the same.
1,2,3,1 is a cycle in G1. 1,2,1 is a cycle in G3. For the case of directed graphs we
normally add on the prefix "directed" to the terms cycle and path.
• Connected graph: In an undirected graph, G, two vertices v1 and v2 are said to be
connected if there is a path in G from v1 to v2 (since G is undirected, this means there
must also be a path from v2 to v1). An undirected graph is said to be connected if for
every pair of distinct vertices vi, vj in V(G) there is a path from vi to vj in G. Graphs
G1 and G2 are connected while G4 of figure 5.7 is not. A connected component, or
simply a component, of an undirected graph is a maximal connected subgraph. G4 has
two components, H1 and H2.
• Connected acyclic graph: A tree is a connected acyclic (i.e., has no cycles) graph.
• Strongly connected: A directed graph G is said to be strongly connected if for every
pair of distinct vertices vi, vj in V(G) there is a directed path from vi to vj and also
from vj to vi. The graph G3 is not strongly connected as there is no path from v3 to
v2. A strongly connected component is a maximal subgraph that is strongly
connected. G3 has two strongly connected components.
• Degree: The degree of a vertex is the number of edges incident to that vertex. The
degree of vertex 1 in G1 is 3. In case G is a directed graph, we define the in-degree of
a vertex v to be the number of edges for which v is the head. The out-degree is
defined to be the number of edges for which v is the tail. Vertex 2 of G3 has in-degree
1, out-degree 2 and degree 3. If di is the degree of vertex i in a graph G with n vertices
and e edges, then it is easy to see that e = (1/2) Σ (i = 0 to n-1) di.
2.Adjacency Lists:
In this representation the n rows of the adjacency matrix are represented as n linked lists.
There is one list for each vertex in G. The nodes in list i represent the vertices that are
adjacent from vertex i. Each node has at least two fields: VERTEX and LINK. The VERTEX
fields contain the indices of the vertices adjacent to vertex i. The adjacency lists for G1 and
G3 are shown in figure 5.10. Each list has a head node. The head nodes are stored sequentially,
providing easy random access to the adjacency list for any particular vertex. In the case of an
undirected graph with n vertices and e edges, this representation requires n head nodes and 2e
list nodes. Each list node has 2 fields.
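A minimal sketch of this representation with sequentially stored head nodes; the names (graphnode, headnodes, insertEdge) are illustrative, and <stdlib.h> is assumed.

#define MAX_VERTICES 50

struct graphnode                       /* one list node per adjacent vertex     */
{
    int vertex;
    struct graphnode *link;
};
struct graphnode *headnodes[MAX_VERTICES];   /* headnodes[i]: list of vertex i  */

/* For an undirected graph, edge (u, v) is inserted on both lists,
   which is why n head nodes and 2e list nodes are needed in all. */
void insertEdge(int u, int v)
{
    struct graphnode *p = (struct graphnode *) malloc(sizeof(struct graphnode));
    p->vertex = v; p->link = headnodes[u]; headnodes[u] = p;
    p = (struct graphnode *) malloc(sizeof(struct graphnode));
    p->vertex = u; p->link = headnodes[v]; headnodes[v] = p;
}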
3.Adjacency Multilists:
In the adjacency list representation of an undirected graph, we represent each edge, (vi, vj),
by two entries: one entry is on the list for vi, and the other is on the list for vj. In the adjacency
multilist representation, there is exactly one node for each edge, but this node is on the
adjacency list of each of the two vertices it is incident to. Multilists are lists in which nodes
may be shared among several lists. The node structure is given in Figure 5.11.
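Since Figure 5.11 is not reproduced here, the following is a hedged sketch of what such a shared edge node might look like; the field names are illustrative.

struct edgenode
{
    short int marked;                 /* set once the edge has been examined       */
    int vertex1, vertex2;             /* the two vertices the edge is incident to  */
    struct edgenode *path1;           /* next edge on vertex1's adjacency list     */
    struct edgenode *path2;           /* next edge on vertex2's adjacency list     */
};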
4.Weighted Edges:
The edges of a graph are assigned weights. These weights may represent the distance from
one vertex to another or the cost of going from one vertex to an adjacent vertex. A graph with
weighted edges is called a network.
Given an undirected graph, G = (V, E), and a vertex, v, in V(G), we wish to visit all vertices
in G that are reachable from v, that is, all vertices that are connected to v. There are two ways
of doing this: depth first search and breadth first search.
Depth first search is similar to a preorder tree traversal. We begin the search by visiting the
start vertex, v. Visiting consists of printing the node's vertex field. Next, we select an
unvisited vertex, w, from v's adjacency list and carry out a depth first search on w.
Eventually our search reaches a vertex, u, that has no unvisited vertices on its adjacency list.
At this point, we remove a vertex from the stack and continue processing its adjacency list.
Previously visited vertices are discarded; unvisited vertices are visited and placed on the
stack. The search terminates when the stack is empty.
Program to check whether a given graph is connected or not using the DFS method
#include<stdio.h>
int a[20][20],n,i,j,visited[20],count;
/********Connectivity using DFS***********/
// visit every vertex reachable from v, counting the tree edges traversed
void dfs(int v)
{
int i;
visited[v]=1;
for(i=1;i<=n;i++)
{
if(a[v][i] && !visited[i])
{
printf("\n %d->%d",v,i);
count++;
dfs(i);
}
}
}
void main()
{
int choice;
printf("\n Enter the number of cities: ");
scanf("%d",&n);
printf("\n Enter graph data in matrix form:\n");
for(i=1;i<=n;i++)
for(j=1;j<=n;j++)
scanf("%d",&a[i][j]);
while(1)
{
printf("\n1.Test for connectivity") ;
printf("\n2.Exit");
printf("\nEnter Your Choice: ");
scanf("%d",&choice);
switch(choice)
{
case 1: count=0;
for(i=1;i<=n;i++)
visited[i]=0;
dfs(1); /* start the search from vertex 1 */
if(count==n-1)
printf("\nGraph is connected\n");
else
printf("\nGraph is not connected\n");
break;
case 2: return;
default:printf("\nEnter proper Choice");
}
}}
Output
Breadth first search resembles a level order tree traversal. Breadth first search starts at vertex
v and marks it as visited. It then visits each of the vertices on v’s adjacency list. When we
have visited all the vertices on v’s adjacency list, we visit all the unvisited vertices that are
adjacent to the first vertex on v’s adjacency list. To implement this scheme, as we visit each
vertex, we place the vertex in a queue. When we have exhausted an adjacency list, we
remove a vertex from the queue and proceed by examining each of the vertices on its
adjacency list. Unvisited vertices are visited and then placed on the queue; visited vertices are
ignored. We have finished the search when the queue is empty.
Program to print all the nodes reachable from a given starting node in a digraph using BFS
method
#include<stdio.h>
int a[20][20], q[20], visited[20];
int n, i, j, f=0, r=-1;
void create_graph();
void bfs(int v);
void main()
{
int v, choice;
while(1)
{
printf("\n1. Create a Digraph of N cities using Adjacency Matrix");
printf("\n2. Print all the nodes reachable from a given starting node in a digraph using
BFS method") ;
printf("\n3. Exit");
printf("\n Enter Your Choice: ");
scanf("%d",&choice);
switch(choice)
{
case 1: create_graph();
break;
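/* The remaining cases and the helper functions were not included above; the
   following is a hedged reconstruction. The names create_graph() and bfs()
   come from the menu and the problem statement, but their bodies are assumptions. */
case 2: printf("\n Enter the starting vertex: ");
scanf("%d",&v);
f=0; r=-1;                     /* reset the queue */
for(i=1;i<=n;i++)
visited[i]=0;
bfs(v);
break;
case 3: return;
default: printf("\nEnter proper Choice");
}
}
}
void create_graph()
{
printf("\n Enter the number of cities: ");
scanf("%d",&n);
printf("\n Enter graph data in matrix form:\n");
for(i=1;i<=n;i++)
for(j=1;j<=n;j++)
scanf("%d",&a[i][j]);
}
void bfs(int v)
{
int u;
printf("\n Nodes reachable from %d:",v);
visited[v]=1;
q[++r]=v;                      /* enqueue the start vertex */
while(f<=r)                    /* until the queue is empty */
{
u=q[f++];                      /* dequeue */
for(i=1;i<=n;i++)
{
if(a[u][i] && !visited[i])
{
printf(" %d",i);
visited[i]=1;
q[++r]=i;                      /* enqueue each newly visited vertex */
}
}
}
}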
output:
A spanning tree is any tree that consists solely of edges in G and that includes all the vertices
in G.
we may use either dfs or bfs to create a spanning tree. When dfs is used, the resulting
spanning tree is known as a depth first spanning tree. When bfs is used, the resulting
spanning tree is called a breadth first spanning tree.
A spanning tree is a minimal subgraph, G', of G such that V(G') = V(G) and G' is connected.
We define a minimal subgraph as one with the fewest number of edges. Any connected graph
with n vertices must have at least n - 1 edges, and all connected graphs with n - 1 edges are
trees. Therefore, we conclude that a spanning tree has n - 1 edges.
• An articulation point is a vertex v of G such that the deletion of v, together with all
edges incident on v, produces a graph, G', that has at least two connected
components. For example, the connected graph of the figure has four articulation
points, vertices 1, 3, 5, and 7.
• A biconnected component of a connected undirected graph G is a maximal
biconnected subgraph, H, of G. By maximal, we mean that G contains no other
subgraph that is both biconnected and properly contains H. For example, the graph
of Figure (a) contains the six biconnected components shown in Figure (b).
• The broken lines in Figure 6.23(b) represent nontree edges. A nontree edge (u, v) is
a back edge if either u is an ancestor of v or v is an ancestor of u. From the
definition of depth first search, it follows that all nontree edges are back edges.
• This means that the root of a depth first spanning tree is an articulation point iff it
has at least two children. In addition, any other vertex u is an articulation point iff it
has at least one child w such that we cannot reach an ancestor of u using a path that
consists of only w, descendants of w, and a single back edge. These observations
lead us to define a value, low, for each vertex of G such that low(u) is the lowest
depth first number that we can reach from u using a path of descendants followed
by at most one back edge:
low(u) = min{ dfn(u), min{ low(w) : w is a child of u }, min{ dfn(w) : (u, w) is a back edge } }.
Therefore, we can say that u is an articulation point if u is either the root of the spanning tree
and has two or more children, or u is not the root and u has a child w such that low(w) >=
dfn(u). Figure 6.24 shows the dfn and low values for each vertex of the spanning tree of
Figure 6.23(b). From this table we can conclude that vertex 1 is an articulation point since it
has a child 0 such that low(0) = 4 >= dfn(1) = 3. Vertex 7 is also an articulation point since
low(8) = 9 >= dfn(7) = 7, as is vertex 5 since low(6) = 5 >= dfn(5) = 5. Finally, we note
that the root, vertex 3, is an articulation point because it has more than one child.
We can easily modify dfs to compute dfn and low for each vertex of a connected undirected
graph. The program is given below.
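Since the program itself is not reproduced here, the following is a hedged sketch of how dfs can be modified to compute dfn and low; it assumes the adjacency-list representation sketched earlier (headnodes[]) and that every dfn[] entry is initialised to -1 before the first call. MIN2 and the function name dfnlow are illustrative.

#define MIN2(x, y) ((x) < (y) ? (x) : (y))
int dfn[MAX_VERTICES], low[MAX_VERTICES], num = 0;

void dfnlow(int u, int v)              /* u: current vertex, v: its parent (-1 for the root) */
{
    struct graphnode *p;
    int w;
    dfn[u] = low[u] = num++;           /* number u and initialise low(u)                     */
    for (p = headnodes[u]; p != NULL; p = p->link)
    {
        w = p->vertex;
        if (dfn[w] < 0)                /* (u, w) is a tree edge                              */
        {
            dfnlow(w, u);
            low[u] = MIN2(low[u], low[w]);
            /* here, if u is not the root and low[w] >= dfn[u], u is an articulation point   */
        }
        else if (w != v)               /* (u, w) is a back edge                              */
            low[u] = MIN2(low[u], dfn[w]);
    }
}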