
Module-2 : Decrease and Conquer

1. Decrease and Conquer Approach
2. Insertion Sort
3. Graph Searching Algorithms
   a) Depth-first Search
   b) Breadth-first Search
4. Topological Sorting
1. Decrease and Conquer Approach
Decrease-and-conquer is a general algorithm design technique, based on exploiting a
relationship between a solution to a given instance of a problem and a solution to a smaller
instance of the same problem. Once such a relationship is established, it can be exploited either top
down (usually recursively) or bottom up.
There are three major variations of decrease-and-conquer:
1) decrease-by-a-constant, most often by one (e.g., insertion sort)
2) decrease-by-a-constant-factor, most often by the factor of two (e.g., binary search)
3) variable-size-decrease (e.g., Euclid’s algorithm)
In the decrease-by-a-constant variation, the size of an instance is reduced by the same constant on
each iteration of the algorithm. Typically, this constant is equal to one although other constant size
reductions do happen occasionally.
Consider, as an example, the exponentiation problem of computing a^n where a ≠ 0 and n is a
nonnegative integer. The relationship between a solution to an instance of size n and an instance of
size n − 1 is given by the obvious formula a^n = a^(n−1) · a. So the function f(n) = a^n can be
computed either “top down” by using its recursive definition or “bottom up” by multiplying 1 by a,
n times.
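As a small illustration (a sketch, not part of the original notes), the two orders of computation can be written in C; power_topdown and power_bottomup are illustrative names:

#include <stdio.h>

/* Decrease-by-one, top down: a^n = a^(n-1) * a, computed recursively. */
double power_topdown(double a, unsigned int n)
{
    if (n == 0)
        return 1.0;                      /* a^0 = 1 */
    return power_topdown(a, n - 1) * a;
}

/* Decrease-by-one, bottom up: multiply 1 by a, n times. */
double power_bottomup(double a, unsigned int n)
{
    double result = 1.0;
    for (unsigned int i = 0; i < n; i++)
        result *= a;
    return result;
}

int main(void)
{
    printf("%.0f %.0f\n", power_topdown(2.0, 10), power_bottomup(2.0, 10));  /* prints 1024 1024 */
    return 0;
}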
The decrease-by-a-constant-factor technique suggests reducing a problem instance by the same
constant factor on each iteration of the algorithm. For an example, let us revisit the exponentiation
problem. If the instance of size n is to compute a^n, the instance of half its size is to compute a^(n/2),
with the obvious relationship between the two: a^n = (a^(n/2))^2. But since we consider here instances
with integer exponents only, this formula does not work for odd n. If n is odd, we have to compute
a^(n−1) by using the rule for even-valued exponents and then multiply the result by a.
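A minimal C sketch of this halving rule (power_halving is an illustrative name); it needs only O(log n) multiplications:

#include <stdio.h>

/* Decrease-by-a-constant-factor (halving): a^n = (a^(n/2))^2 for even n;
 * for odd n, compute a^(n-1) by the even rule and multiply the result by a. */
double power_halving(double a, unsigned int n)
{
    if (n == 0)
        return 1.0;
    if (n % 2 == 0) {
        double half = power_halving(a, n / 2);
        return half * half;
    }
    return power_halving(a, n - 1) * a;   /* odd n: reuse the even-exponent rule */
}

int main(void)
{
    printf("%.0f\n", power_halving(3.0, 5));   /* prints 243 */
    return 0;
}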

In the variable-size-decrease variety of decrease-and-conquer, the size-reduction pattern varies
from one iteration of an algorithm to another. Euclid’s algorithm for computing the greatest
common divisor provides a good example of such a situation. Recall that this algorithm is based on
the formula gcd(m, n) = gcd(n, m mod n).
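Euclid’s algorithm can be sketched in C as follows; note how the amount by which the instance shrinks (the value of m mod n) changes from one step to the next:

#include <stdio.h>

/* Euclid's algorithm: gcd(m, n) = gcd(n, m mod n), with gcd(m, 0) = m. */
unsigned int gcd(unsigned int m, unsigned int n)
{
    while (n != 0) {
        unsigned int r = m % n;   /* the size reduction varies from step to step */
        m = n;
        n = r;
    }
    return m;
}

int main(void)
{
    printf("%u\n", gcd(60, 24));   /* prints 12 */
    return 0;
}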

2. Insertion Sort

To sort an array of size n in ascending order, iterate over the array and compare the current element
(the key) with its predecessor; if the key is smaller than its predecessor, compare it with the elements
before that, moving the greater elements one position up to make room for the key. In decrease-by-one
terms, we assume that the smaller problem of sorting A[0..n−2] has already been solved, so we only
need to find an appropriate position for A[n − 1] among the sorted elements and insert it there. This is
usually done by scanning the sorted subarray from right to left until the first element smaller than or
equal to A[n − 1] is encountered, and inserting A[n − 1] right after that element. The resulting
algorithm is called insertion sort. A sketch of the algorithm is given below.
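A minimal C sketch of insertion sort (the function name insertion_sort and the sample array are illustrative); the comparison A[j] > v in the while loop is the one referred to in the complexity discussion below:

#include <stdio.h>

/* Insertion sort: for each i, insert the key v = A[i] into the already
 * sorted subarray A[0..i-1] by scanning it right to left and shifting
 * every element greater than v one position up. */
void insertion_sort(int A[], int n)
{
    for (int i = 1; i < n; i++) {
        int v = A[i];                /* the key element */
        int j = i - 1;
        while (j >= 0 && A[j] > v) {
            A[j + 1] = A[j];         /* shift the greater element up */
            j--;
        }
        A[j + 1] = v;                /* insert the key after the first element <= v */
    }
}

int main(void)
{
    int A[] = {89, 45, 68, 90, 29, 34, 17};
    int n = (int)(sizeof A / sizeof A[0]);
    insertion_sort(A, n);
    for (int i = 0; i < n; i++)
        printf("%d ", A[i]);         /* prints 17 29 34 45 68 89 90 */
    printf("\n");
    return 0;
}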
Time Complexity:

Best Case: in the best case, the comparison A[j] > v is executed only once on every iteration of
the outer loop. This happens for an already sorted array, giving n − 1 comparisons, i.e., Θ(n).

Worst Case: for the worst-case input, we get A[0] > A[1] (for i = 1), A[1] > A[2] (for i = 2), . . . ,
A[n − 2] > A[n − 1] (for i = n − 1). In other words, the worst-case input is an array of strictly
decreasing values, and the number of comparisons is 1 + 2 + . . . + (n − 1) = n(n − 1)/2, i.e., Θ(n²).

Average Case: on randomly ordered arrays, insertion sort makes on average about half as many
comparisons as on decreasing arrays, i.e., roughly n²/4 comparisons, which is still Θ(n²).
3. Graph searching algorithms

a) Depth-first Search

The algorithm starts at a root node (in a graph, any arbitrary vertex can serve as the root) and
explores each branch as far as possible before backtracking. Ex.


Example of a DFS traversal. (a) Graph. (b) Traversal’s stack (the first subscript number indicates
the order in which a vertex is visited, i.e., pushed onto the stack; the second one indicates the order
in which it becomes a dead-end, i.e., popped off the stack). (c) DFS with the tree and back edges
shown with solid and dashed lines, respectively.

ALGORITHM DFS(A[0...n-1][0...n-1], n)
{
    for (i=0; i<n; i++)
        V[i]=0                        // mark every vertex as unvisited
    for (i=0; i<n; i++)
        if (V[i] == 0)
            dfs(A, i, V, n)           // start a new DFS tree from each unvisited vertex
}

dfs(A[0...n-1][0...n-1], i, V[0...n-1], n)
{
    V[i]=1                            // mark vertex i as visited
    for (j=0; j<n; j++)
    {
        if (V[j] == 0 && A[i][j] == 1)
            dfs(A, j, V, n)           // recursively visit an unvisited neighbour
    }
}
Time Complexity: for the adjacency matrix representation, the traversal time is in Θ(|V|2), and for
the adjacency list representation, it is in Θ(|V| + |E|) where |V| and |E| are the number of the
graph’s vertices and edges, respectively.
b) Breadth-first Search
• It starts at the root of the graph and visits all nodes at the current depth level before moving
on to the nodes at the next depth level.
• Starting from the root, all the nodes at a particular level are visited first, and then the nodes
of the next level are traversed, until all the nodes are visited.
• To do this a queue is used: all the adjacent unvisited nodes of the current level are pushed
into the queue, and the nodes of the current level are marked visited and popped from the queue.
Ex.


ALGORITHM BFS(A[0...n-1][0...n-1], n, s)
{
    for (i=0; i<n; i++)
        V[i]=0                        // mark every vertex as unvisited
    bfs(A, s, V, n)                   // traverse starting from the source vertex s
}

bfs(A[0...n-1][0...n-1], s, V[0...n-1], n)
{
    f=0, r=-1                         // front and rear of the queue q
    r++
    q[r]=s                            // enqueue the start vertex
    V[s]=1                            // and mark it visited
    while (f <= r)
    {
        i=q[f]                        // dequeue the next vertex
        f++
        for (j=0; j<n; j++)
        {
            if (V[j] == 0 && A[i][j] == 1)
            {
                r++
                q[r]=j                // enqueue each unvisited neighbour
                V[j]=1                // and mark it visited
            }
        }
    }
}
Time Complexity: for the adjacency matrix representation, the traversal time is in Θ(|V|2), and for
the adjacency list representation, it is in Θ(|V| + |E|) where |V| and |E| are the number of the
graph’s vertices and edges, respectively.
4. Topological Sorting
Consider a set of five required courses {C1, C2, C3, C4, C5} that a part-time student has to take in
some degree program. The courses can be taken in any order as long as the following course
prerequisites are met: C1 and C2 have no prerequisites, C3 requires C1 and C2, C4 requires C3,
and C5 requires C3 and C4. The student can take only one course per term. In which order should
the student take the courses? The situation can be modeled by a digraph in which vertices
represent courses and directed edges indicate prerequisite requirements.
In terms of this digraph, the question is whether we can list its vertices in such an order that for
every edge in the graph, the vertex where the edge starts is listed before the vertex where the edge
ends. In other words, can you find such an ordering of this digraph’s vertices? This problem is
called topological sorting.
Topological Sort
For topological sorting to be possible, the digraph in question must be a dag (Directed Acyclic
Graph); i.e., if a digraph has no directed cycles, the topological sorting problem for it has a
solution.
There are two efficient algorithms that both verify whether a digraph is a dag and, if it is, produce
an ordering of vertices that solves the topological sorting problem. The first one is based on depth-
first search; the second is based on a direct application of the decrease-by-one technique, called the
source-removal technique.
a) Topological Sorting based on DFS
Method
1. Perform a DFS traversal and note the order in which vertices become dead-ends
2. Reversing this order yields a solution to the topological sorting problem, provided, of
course, no back edge has been encountered during the traversal. If a back edge has been
encountered, the digraph is not a dag, and topological sorting of its vertices is impossible.
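As a hedged sketch (not part of the original notes), the DFS-based method can be coded in C using the course digraph from the problem statement; vertex i stands for course Ci+1, names are illustrative, and back-edge detection is omitted since this digraph is known to be a dag:

#include <stdio.h>

#define N 5   /* vertices 0..4 stand for courses C1..C5 */

/* Prerequisite digraph: C1->C3, C2->C3, C3->C4, C3->C5, C4->C5. */
int A[N][N] = {
    {0,0,1,0,0},
    {0,0,1,0,0},
    {0,0,0,1,1},
    {0,0,0,0,1},
    {0,0,0,0,0}
};

int visited[N];
int order[N];          /* vertices in the order they become dead ends */
int count = 0;

/* DFS that records a vertex when it is popped off (becomes a dead end). */
void dfs(int i)
{
    visited[i] = 1;
    for (int j = 0; j < N; j++)
        if (!visited[j] && A[i][j])
            dfs(j);
    order[count++] = i;              /* record the pop-off (dead-end) order */
}

int main(void)
{
    for (int i = 0; i < N; i++)
        if (!visited[i])
            dfs(i);
    /* Reversing the dead-end order gives a topological order. */
    for (int i = N - 1; i >= 0; i--)
        printf("C%d ", order[i] + 1);   /* prints C2 C1 C3 C4 C5 for this graph */
    printf("\n");
    return 0;
}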
Illustration
a) Digraph for which the topological sorting problem needs to be solved.
b) DFS traversal stack with the subscript numbers indicating the popping off order.
c) Solution to the problem. Here we have drawn the edges of the digraph, and they all point
from left to right as the problem’s statement requires. It is a convenient way to check
visually the correctness of a solution to an instance of the topological sorting problem.
b) Source removal technique:
Method: The algorithm is based on a direct implementation of the decrease-(by one)-and-
conquer technique:
1. Repeatedly, identify in a remaining digraph a source, which is a vertex with no incoming
edges, and delete it along with all the edges outgoing from it. (If there are several sources,
break the tie arbitrarily. If there are none, stop because the problem cannot be solved.)
2. The order in which the vertices are deleted yields a solution to the topological sorting
problem.
Illustration: the source-removal algorithm for the topological sorting problem is illustrated here.
On each iteration, a vertex with no incoming edges is deleted from the digraph, together with all
the edges outgoing from it.
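A minimal C sketch of the source-removal technique, again on the course digraph from the problem statement (array and variable names are illustrative):

#include <stdio.h>

#define N 5   /* vertices 0..4 stand for courses C1..C5 */

int A[N][N] = {
    {0,0,1,0,0},
    {0,0,1,0,0},
    {0,0,0,1,1},
    {0,0,0,0,1},
    {0,0,0,0,0}
};

int main(void)
{
    int indegree[N] = {0};
    int removed[N]  = {0};

    /* Count incoming edges for every vertex. */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            if (A[i][j])
                indegree[j]++;

    /* Repeatedly delete a source (a vertex with no incoming edges)
     * together with all of its outgoing edges. */
    for (int k = 0; k < N; k++) {
        int s = -1;
        for (int i = 0; i < N; i++)
            if (!removed[i] && indegree[i] == 0) { s = i; break; }
        if (s == -1) {                 /* no source left: the digraph has a cycle */
            printf("not a dag, no topological order\n");
            return 1;
        }
        removed[s] = 1;
        printf("C%d ", s + 1);         /* the deletion order is the topological order */
        for (int j = 0; j < N; j++)
            if (A[s][j])
                indegree[j]--;         /* delete s's outgoing edges */
    }
    printf("\n");                      /* prints C1 C2 C3 C4 C5 */
    return 0;
}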
