
B.TECH DEGREE EXAMINATION, MAY 2014

CS010 601 DESIGN AND ANALYSIS OF ALGORITHMS


ANSWER KEY
PART-A
1. A recursive algorithm is an algorithm which calls itself with "smaller (or
simpler)" input values, and which obtains the result for the current input by
applying simple operations to the returned value for the smaller (or
simpler) input.
int factorial(int n)
{
if (n == 0)
return 1;
else
return (n * factorial(n-1));
}


2. Control abstraction refers to a procedure whose flow of control is clear but
whose primary operations are specified by other procedures whose precise
meanings are left undefined.
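This can be made concrete with a small runnable sketch (Python is used for illustration; the names divide_and_conquer and list_sum are not from the original answer): the flow of control is fixed in one procedure, while the primary operations are supplied by the caller.

```python
def divide_and_conquer(problem, small, solve_small, divide, combine):
    # The flow of control is fully specified here; the primary operations
    # (small, solve_small, divide, combine) are left to the caller --
    # this is the control abstraction.
    if small(problem):
        return solve_small(problem)
    parts = divide(problem)
    return combine([divide_and_conquer(p, small, solve_small, divide, combine)
                    for p in parts])

def list_sum(xs):
    # One instantiation: summing a list by splitting it in half.
    return divide_and_conquer(
        xs,
        small=lambda p: len(p) <= 1,
        solve_small=lambda p: p[0] if p else 0,
        divide=lambda p: [p[:len(p) // 2], p[len(p) // 2:]],
        combine=sum,
    )
```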
3. MONTE CARLO METHOD
4. A minimum spanning tree is a least-cost subset of the edges of a graph that
connects all the nodes.
Start by picking any node and adding it to the tree.
Repeatedly: pick any least-cost edge from a node in the tree to a node not in
the tree, and add the edge and the new node to the tree.
Stop when all nodes have been added to the tree.
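The three steps above can be sketched in runnable form; the graph encoding (a dict of neighbor-to-weight maps) and the name prim_mst are assumptions made for this illustration.

```python
def prim_mst(graph, start):
    # graph: {node: {neighbor: weight}} for an undirected graph.
    in_tree = {start}            # start by picking any node
    mst_edges = []
    while len(in_tree) < len(graph):
        # pick the least-cost edge from a tree node to a non-tree node
        u, v, w = min(
            ((u, v, w) for u in in_tree
             for v, w in graph[u].items() if v not in in_tree),
            key=lambda e: e[2],
        )
        mst_edges.append((u, v, w))
        in_tree.add(v)           # stop once every node has been added
    return mst_edges
```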
5. An ordering of the vertices in a directed acyclic graph, such that:
If there is a path from u to v, then v appears after u in the ordering.
1. Compute the indegrees of all vertices
2. Find a vertex U with indegree 0 and print it (store it in the ordering)
If there is no such vertex then there is a cycle
and the vertices cannot be ordered. Stop.
3. Remove U and all its edges (U,V) from the graph.
4. Update the indegrees of the remaining vertices.
5. Repeat steps 2 through 4 while there are vertices to be processed.
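Steps 1 through 5 above can be sketched as a runnable Python version; removing a vertex is simulated by decrementing its successors' indegrees rather than deleting edges.

```python
from collections import deque

def topo_sort(graph):
    # graph: {vertex: list of successors}
    # Step 1: compute the indegrees of all vertices.
    indeg = {v: 0 for v in graph}
    for v in graph:
        for w in graph[v]:
            indeg[w] += 1
    # Step 2: collect vertices with indegree 0.
    ready = deque(v for v in graph if indeg[v] == 0)
    order = []
    while ready:
        u = ready.popleft()
        order.append(u)
        # Steps 3-4: "remove" u by updating its successors' indegrees.
        for w in graph[u]:
            indeg[w] -= 1
            if indeg[w] == 0:
                ready.append(w)
    # If some vertex was never ready, the graph has a cycle: no ordering.
    return order if len(order) == len(graph) else None
```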
PART B
6. Space complexity: the space complexity of an algorithm is the amount of
memory needed by the program for its completion.
The space requirement S(P) of any algorithm P may therefore be written
as S(P) = c + Sp, where c is a constant (the fixed part) and Sp is the
instance-dependent part.
Time complexity: the time complexity of an algorithm is the amount of
time needed by the program for its completion.
It can be determined by a step count, or by a steps-per-execution table.
7. Finding the maximum and minimum: the problem is to find the minimum and
maximum items in a list of numbers.
Algorithm StraightMaxMin(a,n,max,min)
// Set max to the maximum and min to the minimum of a[1:n];
{
max=min=a[1];
for i=2 to n do
{
if (a[i]>max) then max=a[i];
if(a[i]<min) then min=a[i];
}
}

The best case occurs when the elements are in increasing order;
the number of element comparisons is (n-1).
The worst case occurs when the elements are in decreasing order;
the number of element comparisons is 2(n-1).
The average number of comparisons is
[(n-1) + (2n-2)]/2 = (3/2)(n-1).
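The comparison counts can be checked with a runnable version of StraightMaxMin that skips the min test when the max test succeeds (the variant the best-case count of n-1 assumes):

```python
def straight_max_min(a):
    # Returns (max, min, number of element comparisons made).
    mx = mn = a[0]
    comparisons = 0
    for x in a[1:]:
        comparisons += 1
        if x > mx:
            mx = x               # increasing input: only this branch runs
        else:
            comparisons += 1     # second comparison needed
            if x < mn:
                mn = x
    return mx, mn, comparisons
```

On increasing input the count is n-1; on decreasing input it is 2(n-1).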
8. Divide-and-conquer algorithms split a problem into separate subproblems, solve the
subproblems, and combine the results for a solution to the original problem.
Example: quicksort, mergesort, binary search.
Divide-and-conquer algorithms can be thought of as top-down algorithms.
In divide and conquer, the subproblems are independent.
Divide-and-conquer solutions are simple compared to dynamic programming solutions.
Divide and conquer can be used for any kind of problem.
Only one decision sequence is ever generated.
Dynamic programming splits a problem into subproblems, some of which are common, solves the
subproblems, and combines the results for a solution to the original problem.
Example: matrix chain multiplication, longest common subsequence.
Dynamic programming can be thought of as bottom-up.
In dynamic programming, the subproblems are not independent.
Dynamic programming solutions can often be quite complex and tricky.
Dynamic programming is generally used for optimization problems.
Many decision sequences may be generated.
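The independence point can be illustrated with Fibonacci numbers (an example chosen for this sketch, not from the original answer): the plain divide-and-conquer version recomputes common subproblems, while the bottom-up version solves each subproblem once.

```python
def fib_dc(n):
    # Divide and conquer: the two subproblems are treated as independent,
    # so common subproblems are recomputed (exponential time).
    return n if n < 2 else fib_dc(n - 1) + fib_dc(n - 2)

def fib_dp(n):
    # Bottom-up dynamic programming: each subproblem solved once (linear time).
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```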
9.

10. DETERMINISTIC ALGORITHMS
Algorithms with the property that the result of every operation is uniquely defined are termed
deterministic.
Such algorithms agree with the way programs are executed on a computer.
Nondeterministic algorithms
In a theoretical framework we can allow algorithms to contain operations whose outcome is
not uniquely defined but is limited to a specified set of possibilities.
The machine executing such operations is allowed to choose any one of these outcomes,
subject to a termination condition.
This leads to the concept of non deterministic algorithms.
PART C
12. ASYMPTOTIC NOTATIONS
Asymptotic efficiency of algorithms is concerned with how the running time of
an algorithm increases with the size of the input in the limit. The notations used
to describe the asymptotic efficiency of an algorithm are called asymptotic
notations. Asymptotic complexity is a way of expressing the main component
of the cost of an algorithm, using idealized units of computational work. Note
that we speak about bounds on the performance of algorithms,
rather than giving exact speeds.
Different asymptotic Notations are,
- Big Oh
- Omega
- Theta
- Little oh
- Little omega
BIG-OH NOTATION (O)
Big-O is the formal method of expressing the upper bound of an
algorithm's running time. It is a measure of the longest amount of time it
could possibly take for the algorithm to complete.
Definition
Given functions f(n) and g(n), we say that f(n) = O(g(n)) if and only if
there are positive constants c and n0 such that f(n) ≤ c g(n) for all n ≥ n0.



Omega Notation (Ω)
Definition
Given functions f(n) and g(n), we say that f(n) = Ω(g(n)) if and only if there are
positive constants c and n0 such that f(n) ≥ c g(n) for all n ≥ n0.
It is the lower bound of the function; hence it denotes the best-case complexity of
an algorithm.




Theta Notation (Θ)
Definition
Given functions f(n) and g(n), we say that f(n) = Θ(g(n)) if and only if there are
positive constants c1, c2 and n0 such that c1 g(n) ≤ f(n) ≤ c2 g(n) for all n ≥ n0.
The theta notation is more precise than both the big-oh and big-omega notations:
f(n) = Θ(g(n)) if and only if g(n) is both a lower and an upper bound of f(n).
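As a concrete example (not part of the original answer), f(n) = 3n^2 + 2n is Θ(n^2): the constants c1 = 3, c2 = 5 and n0 = 1 witness both bounds, which this sketch checks numerically over a finite range.

```python
def f(n):
    return 3 * n * n + 2 * n

def g(n):
    return n * n

# Upper bound: f(n) <= 5 g(n) for all n >= 1, since 2n <= 2n^2 when n >= 1.
# Lower bound: f(n) >= 3 g(n) for all n >= 1.
assert all(3 * g(n) <= f(n) <= 5 * g(n) for n in range(1, 1000))
```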


13. Matrix Multiplication Algorithms:
When multiplying two 2 x 2 matrices [Aij] and [Bij], the resulting matrix [Cij]
is computed by the equations
C11 = A11 B11 + A12 B21
C12 = A11 B12 + A12 B22
C21 = A21 B11 + A22 B21
C22 = A21 B12 + A22 B22
When the size of the matrices is n x n, the amount of computation is O(n^3), because there are
n^2 entries in the product matrix, each of which requires O(n) time to compute.
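The O(n^3) bound follows directly from a runnable version of the naive method (a sketch; the name mat_mult is illustrative):

```python
def mat_mult(A, B):
    # n^2 entries in the product, each an O(n) inner product => O(n^3) total.
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]
```

For example, mat_mult([[1, 2], [3, 4]], [[5, 6], [7, 8]]) yields [[19, 22], [43, 50]].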

Strassen's Matrix Multiplication
Strassen's algorithm (two 2 x 2 matrices):
[c11 c12]   [a11 a12]   [b11 b12]
[c21 c22] = [a21 a22] * [b21 b22]
The seven products are
P = (a11 + a22) * (b11 + b22)
Q = (a21 + a22) * b11
R = a11 * (b12 - b22)
S = a22 * (b21 - b11)
T = (a11 + a12) * b22
U = (a21 - a11) * (b11 + b12)
V = (a12 - a22) * (b21 + b22)
and the entries of the product are
c11 = P + S - T + V
c12 = R + T
c21 = Q + S
c22 = P + R - Q + U
This uses 7 multiplications and 18 additions/subtractions.
The resulting recurrence relation for T(n) is
T(n) = b                  for n ≤ 2
T(n) = 7 T(n/2) + a n^2   for n > 2
where n is a power of 2, n = 2^k for some integer k, and a and b are constants.
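The seven products P through V and the four combining formulas can be checked with a runnable 2 x 2 sketch (the function name is illustrative):

```python
def strassen_2x2(A, B):
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    # The seven Strassen products.
    P = (a11 + a22) * (b11 + b22)
    Q = (a21 + a22) * b11
    R = a11 * (b12 - b22)
    S = a22 * (b21 - b11)
    T = (a11 + a12) * b22
    U = (a21 - a11) * (b11 + b12)
    V = (a12 - a22) * (b21 + b22)
    # Combine into the four entries of the product matrix.
    return [[P + S - T + V, R + T],
            [Q + S, P + R - Q + U]]
```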
14.

Divide: divide array A[0..n-1] into two roughly equal halves and make copies of each half in arrays B
and C.
Conquer:
If the number of elements in B or C is 1, solve it directly;
otherwise, sort arrays B and C recursively.
Combine: merge the sorted arrays B and C into array A:
Repeat the following until no elements remain in one of the arrays:
compare the first elements in the remaining unprocessed portions of arrays
B and C;
copy the smaller of the two into A, while incrementing the index indicating the
unprocessed portion of that array.
Once all elements in one of the arrays are processed, copy the remaining unprocessed
elements from the other array into A.

Algorithm MergeSort(low,high)
//a[low:high] is a global array to be sorted. Small(P) is true if there is only one element to sort. In this
case the list is already sorted.
{
if (low < high) then //if there is more than one element
{
//Divide P into subproblems
// find where to split the set.

mid:= [(low+high)/2];
//Solve the subproblems

MergeSort(low,mid);
MergeSort(mid+1,high);
// Combine the solutions

Merge(low,mid,high)
}
}

Algorithm Merge(low,mid,high)
//a[low:high] is a global array containing two sorted subset in a[low:mid] and in a[mid+1:high]. The goal
is to merge these two sets into a single set residing in a[low:high]. b[] is an auxilary global array.
{
h:=low; i:=low; j:=mid+1;
while ((h ≤ mid) and (j ≤ high)) do
{
if (a[h] ≤ a[j]) then
{
b[i]:=a[h]; h:=h+1;
}
else
{
b[i]:=a[j]; j:=j+1;
}
i:=i+1;
}
if (h > mid) then
for k:=j to high do
{
b[i]:=a[k]; i:=i+1;
}
else
for k:=h to mid do
{
b[i]:=a[k]; i:=i+1;
}
for k:= low to high do
a[k]:=b[k];
}
If the time for the merging operation is proportional to n, then the computing time for
merge sort is
T(n) = a                  for n = 1
T(n) = 2 T(n/2) + c n     for n > 1
where n is a power of 2, n = 2^k for some integer k, and c and a are constants.
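The same scheme in runnable form (a Python sketch that returns a new sorted list instead of using the global arrays a and b of the pseudocode):

```python
def merge_sort(a):
    if len(a) <= 1:           # Small(P): a one-element list is already sorted
        return a
    mid = len(a) // 2         # divide: split the list about in half
    b = merge_sort(a[:mid])   # solve the subproblems recursively
    c = merge_sort(a[mid:])
    out, i, j = [], 0, 0      # combine: merge the two sorted halves
    while i < len(b) and j < len(c):
        if b[i] <= c[j]:
            out.append(b[i]); i += 1
        else:
            out.append(c[j]); j += 1
    return out + b[i:] + c[j:]   # copy whatever remains in either half
```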
15. Basics of Kruskal's Algorithm
Edges are initially sorted by increasing weight.
Start with an empty forest and grow the MST one edge at a time;
intermediate stages usually have a forest of trees (not connected).
At each stage, add the minimum-weight edge among those not yet used that does not create a cycle.
At each stage the edge may:
expand an existing tree
combine two existing trees into a single tree
create a new tree
An efficient way of detecting/avoiding cycles is needed.
The algorithm stops when all vertices are included.
Algorithm Kruskal(E,cost,n,t)
// E is the set of edges in G . G has n vertices. Cost[u,v] is the
// cost of edge(u,v). T is the set of edges in the minimum-cost
//spanning tree. The finalcost is returned
{
construct a heap out of the edge costs using heapify;
for i=1 to n do parent[i]=-1;
// each vertex is in a different set.
i=0; mincost=0.0;
while((i<n-1) and (heap not empty)) do
{
Delete a minimum cost edge(u,v) from the heap
and reheapify using Adjust;
j= Find(u); k=Find(v);

if (j ≠ k) then
{
i=i+1;
t[i,1]=u; t[i,2]=v;
mincost=mincost + cost[u,v];
Union(j,k)
}
}
if (i ≠ n-1) then write("no spanning tree");
else return mincost;
}





When the algorithm terminates, all vertices belong to a single set:
{1, 2, 3, 4, 5, 6, 7}
Analysis of Kruskal
(initialization): O(V)
(sorting): O(E log E)
(set-operation): O(E log E)
Total: O(E log E)
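The pseudocode above can be sketched in runnable form; sorting the edge list stands in for the heap, and the array-based find/union (with path halving, an implementation choice not in the original) stands in for the parent[] sets:

```python
def kruskal(n, edges):
    # n vertices labeled 0..n-1; edges is a list of (weight, u, v) tuples.
    parent = list(range(n))          # each vertex is in a different set

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    mst, mincost = [], 0
    for w, u, v in sorted(edges):    # sorting replaces deleting from a heap
        j, k = find(u), find(v)
        if j != k:                   # edge joins two different trees
            parent[j] = k            # Union(j, k)
            mst.append((u, v, w))
            mincost += w
    if len(mst) != n - 1:
        return None                  # no spanning tree
    return mst, mincost
```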
