
KESHAV MEMORIAL INSTITUTE OF TECHNOLOGY

Narayanguda, Hyderabad.
Sub: DAA Question Bank Yr/Sem: III/II

Unit I

Short Answers
1. Define the term algorithm and state the criteria the algorithm should satisfy.
ALGORITHM:
The word algorithm comes from the name of the Persian mathematician al-Khwarizmi (c. 825 AD).
According to Webster's dictionary, an algorithm is a special method for representing the procedure
to solve a given problem.
An Algorithm is a finite set of instructions that, if followed, accomplishes a
particular task. In addition, all algorithms should satisfy the following criteria.
1. Input. Zero or more quantities are externally supplied.
2. Output. At least one quantity is produced.
3. Definiteness. Each instruction is clear and unambiguous.
4. Finiteness. If we trace out the instructions of an algorithm, then for all cases, the
algorithm terminates after a finite number of steps.
5. Effectiveness. Every instruction must be very basic so that it can be carried out, in
principle, by a person using only pencil and paper.

2. Write the order of an algorithm and the need to analyze the algorithm. [4M]

Algorithms are written and analyzed using the following pseudocode conventions:

1. Comments begin with // and continue until the end of the line.
2. Blocks are indicated with matching braces {and}.
3. An identifier begins with a letter. The data types of variables are not explicitly
declared.
4. Compound data types can be formed with records. Here is an example:
node = record
{
data_type_1 data_1;
...
data_type_n data_n;
node *link;
}
Here link is a pointer to the record type node. Individual data items of a
record can be accessed with -> and period.
5. Assignment of values to variables is done using the assignment statement.
<Variable>:= <expression>;
6. There are two Boolean values TRUE and FALSE.
Logical Operators
AND, OR, NOT
Relational Operators <, <=,>,>=, =, !=
7. The following looping statements are employed.
For, while and repeat-until
While loop:
while <condition> do
{
<statement-1>
...
<statement-n>
}
For loop:
for variable := value-1 to value-2 step step do
{
<statement-1>
...
<statement-n>
}
Here the first step is a keyword; the second step is the increment (or decrement) value.
repeat-until:
repeat
{
<statement-1>
...
<statement-n>
} until <condition>
8. A conditional statement has the following forms.
(1) if <condition> then <statement>
(2) if <condition> then <statement-1> else <statement-2>

9. Input and output are done using the instructions read & write.
10. There is only one type of procedure: Algorithm. The heading takes the form
Algorithm Name(<parameter list>)
As an example, the following algorithm finds and returns the maximum of ‘n’ given
numbers:
Algorithm Max(A,n)
// A is an array of size n
{
Result := A[1];
for I:= 2 to n do
if A[I] > Result then
Result :=A[I];
return Result;
}
In this algorithm (named Max), A and n are procedure parameters; Result and I are
local variables.
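
For comparison, the following is a runnable Python transcription of Max (a sketch; the 1-based pseudocode indices become Python's 0-based indexing, and the name find_max is illustrative):

def find_max(a):
    # Mirrors Algorithm Max(A, n): scan once, keeping the largest seen so far
    result = a[0]
    for i in range(1, len(a)):
        if a[i] > result:
            result = a[i]
    return result

print(find_max([12, 5, 31, 7]))   # prints 31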

3. Define asymptotic notations: big ‘Oh’, omega and theta.

Big oh notation: O
The function f(n) = O(g(n)) (read as “f of n is big oh of g of n”) iff there exist positive
constants c and n0 such that f(n) ≤ c·g(n) for all n ≥ n0.
The value g(n) is an upper bound on f(n).
Example:
3n+2 = O(n), since 3n+2 ≤ 4n for all n ≥ 2 (c = 4, n0 = 2).

Omega notation: Ω
The function f(n) = Ω(g(n)) (read as “f of n is omega of g of n”) iff there exist positive
constants c and n0 such that f(n) ≥ c·g(n) for all n ≥ n0.
The value g(n) is a lower bound on f(n).
Example:
3n+2 = Ω(n), since 3n+2 ≥ 3n for all n ≥ 1 (c = 3, n0 = 1).

Theta notation: θ
The function f(n) = θ(g(n)) (read as “f of n is theta of g of n”) iff there exist positive
constants c1, c2 and n0 such that c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0.
Example:
3n+2 = θ(n), since 3n+2 ≥ 3n and 3n+2 ≤ 4n for all n ≥ 2.
Here c1 = 3, c2 = 4 and n0 = 2.

Little oh: o
The function f(n) = o(g(n)) (read as “f of n is little oh of g of n”) iff
lim (n→∞) f(n)/g(n) = 0.
Example:
3n+2 = o(n²), since lim (n→∞) (3n+2)/n² = 0.

4. If f(n) = 5n² + 6n + 4, then prove that f(n) is O(n²).

For all n ≥ 1 we have 6n ≤ 6n² and 4 ≤ 4n², so
f(n) = 5n² + 6n + 4 ≤ 5n² + 6n² + 4n² = 15n².
By the definition of big oh (f(n) ≤ c·g(n) for all n ≥ n0), taking c = 15, n0 = 1 and
g(n) = n² gives f(n) = O(n²); n² is an upper bound on f(n).

5. List the two different types of recurrence. [2M]

Type 1: Divide and conquer recurrence relations.
Some examples of recurrence relations based on divide and conquer:

T(n) = 2T(n/2) + cn
T(n) = 2T(n/2) + √n

These recurrence relations can be solved easily using the master method.
For T(n) = 2T(n/2) + cn the values are a = 2, b = 2 and k = 1. Here logb(a) = log2(2) = 1 = k,
therefore the complexity is Θ(n log n).
Similarly, for T(n) = 2T(n/2) + √n the values are a = 2, b = 2 and k = 1/2. Here
logb(a) = log2(2) = 1 > k, therefore the complexity is Θ(n).

Type 2: Linear recurrence relations.
An example of a linear recurrence relation:

T(n) = T(n-1) + n for n > 0, with T(0) = 1.

These recurrence relations can be solved easily using the substitution method; unrolling
gives T(n) = 1 + (1 + 2 + ... + n) = n(n+1)/2 + 1 = Θ(n²).
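
As a sanity check, both solutions can be verified by unrolling the recurrences numerically. A minimal Python sketch, assuming c = 1 in the first recurrence:

import math

def t_dc(n):
    # T(n) = 2T(n/2) + n, T(1) = 1   (divide and conquer, c = 1)
    return 1 if n <= 1 else 2 * t_dc(n // 2) + n

def t_lin(n):
    # T(n) = T(n-1) + n, T(0) = 1    (linear recurrence)
    return 1 if n == 0 else t_lin(n - 1) + n

print(t_dc(1024), 1024 * math.log2(1024) + 1024)   # 11264 and 11264.0
print(t_lin(100), 100 * 101 // 2 + 1)              # 5051 and 5051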

6. Give the recurrence equation for the worst case behavior of merge sort. [2M]

We assume that we are sorting a total of n elements in the array.

The divide step takes constant time, regardless of the subarray size: it just computes the
midpoint q of the indices p and r. In big-Θ notation, we write this constant time as Θ(1).
The conquer step recursively sorts two subarrays of approximately n/2 elements each; we
account for that time when we consider the subproblems.
The combine step merges a total of n elements, taking Θ(n) time.

Considering the divide and combine steps together, the Θ(1) running time of the divide step
is a low-order term compared with the Θ(n) running time of the combine step, so we think of
the two together as taking Θ(n) time; concretely, say the divide and combine steps together
take cn time for some constant c.

To keep things simple, assume that if n > 1 then n is always even, so that n/2 is an integer
(accounting for odd n does not change the result in big-Θ terms). The running time of
mergeSort on an n-element subarray is then twice the running time of mergeSort on an
(n/2)-element subarray (the conquer step) plus cn (the divide and combine steps, really just
the merging):

T(n) = 2T(n/2) + cn for n > 1, with T(1) = Θ(1),

which is the worst-case recurrence for merge sort and solves to T(n) = Θ(n log n).

7. Define algorithm correctness. [3M]

An algorithm is correct if, for every valid input, it terminates and produces the required
output; proving correctness means making this claim precise. For example, to frame the
correctness of the constraint-solving algorithm precisely, we must make precise the notions of
well-constrained, over-constrained and under-constrained constraint systems. As discussed in
the section on constraint assignment, each geometric element in the constraint problems we
consider has two degrees of freedom. Each constraint in the problem eliminates one of these;
thus, if there are no fixed geometries in the problem, we expect that
|E| = 2|V| - 3
where |E| is the number of edges and |V| is the number of nodes in the corresponding constraint
graph. Note that the solution will be a rigid body with three remaining degrees of freedom,
because the constraints determine only the relative position of the geometric elements.

8. Discuss the asymptotic notations used for best case, average case and worst case
analysis of an algorithm.

The complexity of an algorithm M is the function f(n) which gives the running time
and/or storage space requirement of the algorithm in terms of the size n of the input
data. Mostly, the storage space required by an algorithm is simply a multiple of the data
size n; complexity here shall refer to the running time of the algorithm.
The function f(n), giving the running time of an algorithm, depends not only on the size n
of the input data but also on the particular data. The complexity function f(n) for certain
cases:
1. Best Case: the minimum possible value of f(n) is called the best case.
2. Average Case: the average value of f(n).
3. Worst Case: the maximum value of f(n) for any possible input.
The field of computer science which studies the efficiency of algorithms is known as
analysis of algorithms.
For the asymptotic notations themselves (O, Ω, θ, o), refer to answer 3 above.

9. Compute the average case time complexity of quick sort. [2M]

Like merge sort, quick sort is recursive, and hence its analysis requires solving a
recurrence formula. We do the analysis assuming a random pivot and take
T(0) = T(1) = 1, as in merge sort.
The running time of quick sort is equal to the running time of the two recursive calls
plus the linear time spent in the partition (the pivot selection takes only constant time).
This gives the basic quick sort relation:

T(n) = T(i) + T(n - i - 1) + cn ... (1)

where i = |S1| is the number of elements in S1.

Worst case analysis: the pivot is the smallest element every time. Then i = 0 and,
ignoring the insignificant T(0) = 1, the recurrence is

T(n) = T(n - 1) + cn, n > 1.

Using this equation repeatedly,
T(n - 1) = T(n - 2) + c(n - 1)
T(n - 2) = T(n - 3) + c(n - 2)
...
T(2) = T(1) + 2c
Adding up all these equations yields T(n) = T(1) + c(2 + 3 + ... + n) = O(n²).
In the average case the pivot splits the list into parts of random size; averaging over
all splits (see the analysis under long answer 6) gives T(n) = O(n log n).

10. What are the drawbacks of the merge sort algorithm? [3M]

1. Sorting is not done in place: merging requires additional O(n) memory, and a client who
needs the original ordering must keep a copy of the elements.
2. The extra work of copying to the temporary array and back slows the sort down in practice.
3. Recursive calls result in additional overhead, making it unsuitable for a small number of
elements.

11. Describe Strassen's matrix multiplication.

Strassen's matrix multiplication algorithm (1969) is the most dramatic example of the
divide and conquer technique.
Let A and B be two n×n matrices. The product matrix C = AB is also an n×n matrix whose
(i, j)th element is formed by taking the elements in the ith row of A and the jth column of B,
multiplying them pairwise and summing, in the usual way:

C(i, j) = Σ (k = 1 to n) A(i, k)·B(k, j), where 1 ≤ i, j ≤ n.

To compute C(i, j) using this formula, we need n multiplications.
The divide and conquer strategy suggests another way to compute the product of two n×n
matrices. For simplicity assume n is a power of 2, that is, n = 2^k for some nonnegative
integer k. If n is not a power of two, then enough rows and columns of zeros can be added to
both A and B so that the resulting dimensions are a power of two.
To multiply two n×n matrices A and B, yielding the result matrix C, imagine that A and B are
each partitioned into four square submatrices, each of dimension (n/2)×(n/2):

| A11 A12 |   | B11 B12 |   | C11 C12 |
| A21 A22 | * | B21 B22 | = | C21 C22 |

Then the Cij can be found by the usual matrix multiplication algorithm:

C11 = A11·B11 + A12·B21
C12 = A11·B12 + A12·B22
C21 = A21·B11 + A22·B21
C22 = A21·B12 + A22·B22

This leads to a divide-and-conquer algorithm which performs the n×n matrix multiplication
by partitioning the matrices into quarters and performing eight (n/2)×(n/2) matrix
multiplications and four (n/2)×(n/2) matrix additions:

T(1) = 1
T(n) = 8T(n/2)

which leads to T(n) = O(n³), where n is a power of 2.

Strassen's insight was to find an alternative method for calculating the Cij, requiring seven
(n/2)×(n/2) matrix multiplications and eighteen (n/2)×(n/2) matrix additions and subtractions:

P = (A11 + A22)(B11 + B22)
Q = (A21 + A22)B11
R = A11(B12 - B22)
S = A22(B21 - B11)
T = (A11 + A12)B22
U = (A21 - A11)(B11 + B12)
V = (A12 - A22)(B21 + B22)

C11 = P + S - T + V
C12 = R + T
C21 = Q + S
C22 = P + R - Q + U

This method is used recursively to perform the seven (n/2)×(n/2) matrix multiplications;
the recurrence for the number of scalar multiplications performed is T(n) = 7T(n/2),
T(1) = 1, which solves to

T(n) = O(n^(log2 7)) = O(n^2.81).

So Strassen's algorithm is asymptotically more efficient than the standard algorithm. In
practice, the overhead of managing the many small matrices does not pay off until n reaches
the hundreds.

12. Use the step count method and analyze the time complexity when two n×n matrices are
added. [2M]

Program for matrix addition with count statements:

Algorithm add(a[][MAX_SIZE], b[][MAX_SIZE], c[][MAX_SIZE], rows, cols)
{
for i := 1 to rows do
{
count++; /* for the i for loop */
for j := 1 to cols do
{
count++; /* for the j for loop */
c[i][j] := a[i][j] + b[i][j];
count++; /* for the assignment statement */
}
count++; /* last time of the j for loop */
}
count++; /* last time of the i for loop */
}

T(n) = 2·rows·cols + 2·rows + 1
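
The same count can be reproduced in runnable Python (a sketch with illustrative names; the counter increments mirror the comments in the pseudocode above):

def add_with_count(a, b, rows, cols):
    # Python version of Algorithm add, counting steps exactly as annotated above
    c = [[0] * cols for _ in range(rows)]
    count = 0
    for i in range(rows):
        count += 1                      # for the i for loop
        for j in range(cols):
            count += 1                  # for the j for loop
            c[i][j] = a[i][j] + b[i][j]
            count += 1                  # for the assignment statement
        count += 1                      # last time of the j for loop
    count += 1                          # last time of the i for loop
    return c, count

rows, cols = 3, 4
a = [[1] * cols for _ in range(rows)]
b = [[2] * cols for _ in range(rows)]
_, steps = add_with_count(a, b, rows, cols)
print(steps, 2 * rows * cols + 2 * rows + 1)   # both print 31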

13. What is meant by divide and conquer? Give the recurrence relation for divide and
conquer. [2M]

In divide and conquer, a problem P is divided into smaller subproblems that are solved
(often recursively) and whose solutions are combined. Small(P) is a Boolean-valued function
that determines whether the input size is small enough that the answer can be computed
without splitting. If so, the function S is invoked; otherwise the problem P is divided into
smaller subproblems P1, P2, ..., Pk, which are solved by recursive application of DAndC.
Combine is a function that determines the solution to P using the solutions to the k
subproblems. If the size of P is n and the sizes of the k subproblems are n1, n2, ..., nk,
respectively, then the computing time of DAndC is described by the recurrence relation

T(n) = g(n), for n small
T(n) = T(n1) + T(n2) + ... + T(nk) + f(n), otherwise

where T(n) is the time for DAndC on any input of size n, g(n) is the time to compute the
answer directly for small inputs, and f(n) is the time for dividing P and combining the
solutions to the subproblems.

Algorithm DAndC(P)
{
if Small(P) then return S(P);
else
{
divide P into smaller instances P1, P2, ..., Pk, k >= 1;
Apply DAndC to each of these subproblems;
return Combine(DAndC(P1), DAndC(P2), ..., DAndC(Pk));
}
}
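
The control abstraction translates naturally into a generic higher-order function. A minimal Python sketch (the parameter names small, solve, divide and combine are illustrative, not part of the original notation):

def d_and_c(p, small, solve, divide, combine):
    # Generic control abstraction: mirrors Algorithm DAndC above
    if small(p):
        return solve(p)
    return combine([d_and_c(q, small, solve, divide, combine)
                    for q in divide(p)])

# Illustrative instantiation: summing a list by splitting it in half
total = d_and_c(
    [3, 1, 4, 1, 5, 9],
    small=lambda p: len(p) <= 1,
    solve=lambda p: p[0] if p else 0,
    divide=lambda p: (p[:len(p) // 2], p[len(p) // 2:]),
    combine=sum,
)
print(total)   # 23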

Long Answers
1. What is algorithm? Explain the properties of algorithm.

ALGORITHM:
The word algorithm comes from the name of the Persian mathematician al-Khwarizmi (c. 825 AD).
According to Webster's dictionary, an algorithm is a special method for representing the
procedure to solve a given problem.

OR

An algorithm is any well-defined computational procedure that takes some value or set of
values as input and produces some value or set of values as output. Thus an algorithm is a
sequence of computational steps that transforms the input into the output.

Formal Definition:

An Algorithm is a finite set of instructions that, if followed, accomplishes a particular


task. In addition, all algorithms should satisfy the following criteria.

1. Input. Zero or more quantities are externally supplied.


2. Output. At least one quantity is produced.
3. Definiteness. Each instruction is clear and unambiguous.
4. Finiteness. If we trace out the instructions of an algorithm, then for all cases, the
algorithm terminates after a finite number of steps.
5. Effectiveness. Every instruction must be very basic so that it can be carried out, in
principle, by a person using only pencil and paper.

Areas of study of Algorithm:

 How to devise or design an algorithm – includes the study of various design techniques
and helps in writing algorithms using existing design techniques such as divide and
conquer.
 How to validate an algorithm – after the algorithm is written, it is necessary to check
its correctness, i.e. that for each input the correct output is produced; this is known
as algorithm validation. The second phase, writing the algorithm as a program, is known
as program proving or program verification.
 How to analyze an algorithm – known as analysis of algorithms or performance analysis;
it refers to the task of calculating the time and space complexity of the algorithm.
 How to test a program – consists of two phases: 1. debugging, the detection and
correction of errors; 2. profiling or performance measurement, determining the actual
amount of time required by the program to compute the result.

Algorithm Specification:

An algorithm can be described in three ways:

1. Natural language like English.
2. Graphic representation, called a flowchart: this method works well when the algorithm
is small and simple.
3. Pseudo-code method: in this method we describe algorithms as programs that resemble a
language like Pascal or Algol.

2. What is performance Analysis? How we do performance Analysis?

There are many criteria by which to judge an algorithm:

– Is it correct?
– Is it readable?
– How does it work?
Performance evaluation can be divided into two major phases.

1. Performance Analysis (machine independent)

– space complexity: The space complexity of an algorithm is the amount of memory it


needs to run for completion.

– time complexity: The time complexity of an algorithm is the amount of computer


time it needs to run to completion.

2. Performance Measurement (machine dependent).

Space Complexity:
The space complexity of any algorithm P is given by S(P) = C + SP(I), where C is a constant.

1. Fixed space requirements (C)
Independent of the characteristics of the inputs and outputs:
– instruction space
– space for simple variables, fixed-size structured variables, constants
2. Variable space requirements (SP(I))
Dependent on the instance characteristic I:
– number, size and values of inputs and outputs associated with I
– recursive stack space, formal parameters, local variables, return address
Examples:
Program 1: Simple arithmetic function
Algorithm abc(a, b, c)
{
return a + b + b * c + (a + b - c) / (a + b) + 4.0;
}
Here SP(I) = 0, hence
S(P) = constant.

Program 2: Iterative function for summing a list of numbers

Algorithm sum(list[], n)
{
tempsum := 0;
for i := 1 to n do
tempsum := tempsum + list[i];
return tempsum;
}

In the above example list[] is dependent on n, hence SP(I) = n. The remaining variables
i, n and tempsum each require one location.
Hence S(P) = 3 + n.

Program 3: Recursive function for summing a list of numbers

Algorithm rsum(list[], n)
{
if (n <= 0) then
return 0.0;
else
return rsum(list, n - 1) + list[n];
}

In the above example the recursion stack space includes space for formal parameters, local
variables and the return address. Each call to rsum requires 3 locations, i.e. for list[], n
and the return address. As the depth of recursion is n + 1,

S(P) >= 3(n + 1)
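
The linear growth of the recursion stack can be observed directly. A small Python sketch that instruments rsum with a depth counter (the instrumentation is illustrative, not part of the original algorithm):

depth = 0
max_depth = 0

def rsum(lst, n):
    # Recursive sum of the first n elements, tracking stack depth
    global depth, max_depth
    depth += 1
    max_depth = max(max_depth, depth)
    result = 0.0 if n <= 0 else rsum(lst, n - 1) + lst[n - 1]
    depth -= 1
    return result

data = list(range(1, 11))              # 1..10
print(rsum(data, 10), max_depth)       # 55.0 and 11: n+1 activation records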

Time complexity:
T(P) = C + TP(I)

It is the combination of the compile time C, which is independent of the instance
characteristics, and the run (execution) time TP, which depends on the instance
characteristics.
Time complexity is calculated in terms of program steps, as it is difficult to know the
complexities of individual operations.

Definition: A program step is a syntactically or semantically meaningful program
segment whose execution time is independent of the instance characteristics.

Program steps are assigned to statements as follows: a comment counts as zero steps; an
assignment statement is one step; for iterative statements such as for, while and
repeat-until, we count steps based on the control expression.

Methods to compute the step count:

1) Introduce a variable count into the program.
2) Tabular method:
– determine the total number of steps contributed by each statement as
steps per execution (s/e) × frequency
– add up the contributions of all statements
Program 1 with count statements:

Algorithm sum(list[], n)
{
tempsum := 0; count++; /* for assignment */
for i := 1 to n do
{
count++; /* for the for loop */
tempsum := tempsum + list[i]; count++; /* for assignment */
}
count++; /* last execution of for */
count++; /* for return */
return tempsum;
}

Hence T(n) = 2n + 3.

Program 2: Recursive sum with count statements

Algorithm rsum(list[], n)
{
count++; /* for the if conditional */
if (n <= 0) then
{
count++; /* for return */
return 0.0;
}
else
{
count++; /* for return and rsum invocation */
return rsum(list, n - 1) + list[n];
}
}

T(n) = 2n + 2
Program 3: Matrix addition with count statements

Algorithm add(a[][MAX_SIZE], b[][MAX_SIZE], c[][MAX_SIZE], rows, cols)
{
for i := 1 to rows do
{
count++; /* for the i for loop */
for j := 1 to cols do
{
count++; /* for the j for loop */
c[i][j] := a[i][j] + b[i][j];
count++; /* for the assignment statement */
}
count++; /* last time of the j for loop */
}
count++; /* last time of the i for loop */
}

T(n) = 2·rows·cols + 2·rows + 1

II. Tabular method.

Complexity is determined using a table that includes steps per execution (s/e), i.e. the
amount by which count changes as a result of the execution of the statement, and
frequency, the number of times a statement is executed.

Statement                                  s/e   Frequency   Total steps
Algorithm sum(list[], n)                   0     -           0
{                                          0     -           0
  tempsum := 0;                            1     1           1
  for i := 1 to n do                       1     n+1         n+1
    tempsum := tempsum + list[i];          1     n           n
  return tempsum;                          1     1           1
}                                          0     -           0
Total                                                        2n+3

Statement                                  s/e   Frequency     Total steps
                                                 n=0   n>0     n=0   n>0
Algorithm rsum(list[], n)                  0     -     -       0     0
{                                          0     -     -       0     0
  if (n <= 0) then                         1     1     1       1     1
    return 0.0;                            1     1     0       1     0
  else                                     0     0     0       0     0
    return rsum(list, n-1) + list[n];      1+x   0     1       0     1+x
}                                          0     0     0       0     0
Total                                                          2     2+x

(Here x is the step count of the recursive invocation rsum(list, n-1).)

Statement                                  s/e   Frequency   Total steps
Algorithm add(a, b, c, m, n)               0     -           0
{                                          0     -           0
  for i := 1 to m do                       1     m+1         m+1
    for j := 1 to n do                     1     m(n+1)      mn+m
      c[i,j] := a[i,j] + b[i,j];           1     mn          mn
}                                          0     -           0
Total                                                        2mn+2m+1

3. Write about Asymptotic Notations.

The complexity of an algorithm M is the function f(n) which gives the running time and/or
storage space requirement of the algorithm in terms of the size n of the input data. Mostly,
the storage space required by an algorithm is simply a multiple of the data size n.
Complexity here shall refer to the running time of the algorithm.
The function f(n), giving the running time of an algorithm, depends not only on the size n of
the input data but also on the particular data. The complexity function f(n) for certain cases:

1. Best Case: the minimum possible value of f(n) is called the best case.
2. Average Case: the average value of f(n).
3. Worst Case: the maximum value of f(n) for any possible input.

The field of computer science which studies the efficiency of algorithms is known as
analysis of algorithms.

Algorithms can be evaluated by a variety of criteria. Most often we shall be interested in
the rate of growth of the time or space required to solve larger and larger instances of a
problem. We associate with the problem an integer, called the size of the problem, which is a
measure of the quantity of input data.

Rate of growth:

The following notations are commonly used in performance analysis to characterize the
complexity of an algorithm.

Asymptotic notation

Big oh notation: O
The function f(n) = O(g(n)) (read as “f of n is big oh of g of n”) iff there exist positive
constants c and n0 such that f(n) ≤ c·g(n) for all n ≥ n0.

The value g(n) is an upper bound on f(n).

Example:
3n+2 = O(n), since 3n+2 ≤ 4n for all n ≥ 2.

Omega notation: Ω
The function f(n) = Ω(g(n)) (read as “f of n is omega of g of n”) iff there exist positive
constants c and n0 such that f(n) ≥ c·g(n) for all n ≥ n0.
The value g(n) is a lower bound on f(n).
Example:
3n+2 = Ω(n), since 3n+2 ≥ 3n for all n ≥ 1.

Theta notation: θ
The function f(n) = θ(g(n)) (read as “f of n is theta of g of n”) iff there exist positive
constants c1, c2 and n0 such that c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0.
Example:
3n+2 = θ(n), since 3n+2 ≥ 3n and 3n+2 ≤ 4n for all n ≥ 2.
Here c1 = 3, c2 = 4 and n0 = 2.

Little oh: o
The function f(n) = o(g(n)) (read as “f of n is little oh of g of n”) iff
lim (n→∞) f(n)/g(n) = 0.
Example:
3n+2 = o(n²), since lim (n→∞) (3n+2)/n² = 0.

4. Explain control abstraction for divide & conquer and give binary search algorithm as an
example.

Given a function to compute on n inputs, the divide-and-conquer strategy suggests splitting
the inputs into k distinct subsets, 1 < k <= n, yielding k subproblems.
These subproblems must be solved, and then a method must be found to combine the
subsolutions into a solution of the whole.

If the subproblems are still relatively large, then the divide-and-conquer strategy can
possibly be reapplied. Often the subproblems resulting from a divide-and-conquer design are
of the same type as the original problem; for those cases the reapplication of the
divide-and-conquer principle is naturally expressed by a recursive algorithm. DAndC is
initially invoked as DAndC(P), where P is the problem to be solved. Small(P) is a
Boolean-valued function that determines whether the input size is small enough that the
answer can be computed without splitting.
If so, the function S is invoked; otherwise the problem P is divided into smaller
subproblems P1, P2, ..., Pk, which are solved by recursive application of DAndC.
Combine is a function that determines the solution to P using the solutions to the k
subproblems. If the size of P is n and the sizes of the k subproblems are n1, n2, ..., nk,
respectively, then the computing time of DAndC is described by the recurrence relation

T(n) = g(n), for n small
T(n) = T(n1) + T(n2) + ... + T(nk) + f(n), otherwise

where T(n) is the time for DAndC on any input of size n, g(n) is the time to compute the
answer directly for small inputs, and f(n) is the time for dividing P and combining the
solutions to the subproblems.
Control abstraction for divide & conquer:

Algorithm DAndC(P)
{
if Small(P) then return S(P);
else
{
divide P into smaller instances P1, P2, ..., Pk, k >= 1;
Apply DAndC to each of these subproblems;
return Combine(DAndC(P1), DAndC(P2), ..., DAndC(Pk));
}
}

The complexity of many divide-and-conquer algorithms is given by a recurrence of the form

T(n) = T(1), n = 1
T(n) = aT(n/b) + f(n), n > 1

where a and b are known constants. We assume that T(1) is known and n is a power of b
(i.e., n = b^k).

BINARY SEARCH
Given a list of n elements arranged in increasing order, the problem is to determine whether
a given element x is present in the list. If x is present, determine its position; otherwise
the position is zero.

Divide and conquer is used to solve the problem. Small(P) is true if n = 1; then S(P) = i if
x = a[i] for the array a[], otherwise S(P) = 0. If P has more than one element, it can be
divided into subproblems: choose an index j and compare x with a[j]. There are three
possibilities: (i) x = a[j]; (ii) x < a[j] (x is searched for in the list a[1]...a[j-1]);
(iii) x > a[j] (x is searched for in the list a[j+1]...a[n]).
The same procedure is applied repeatedly until the solution is found or the sublist is empty.
Algorithm Binsearch(a, n, x)
// Given an array a[1:n] of elements in non-decreasing
// order, n >= 0, determine whether x is present and,
// if so, return j such that x = a[j]; else return 0.
{
low := 1; high := n;
while (low <= high) do
{
mid := [(low + high)/2];
if (x < a[mid]) then high := mid - 1;
else if (x > a[mid]) then low := mid + 1;
else return mid;
}
return 0;
}

The algorithm above describes the binary search method. The recursive version Binsrch has 4
inputs: a[], i, n and x, and is initially invoked as Binsrch(a, 1, n, x). The non-recursive
version given above has 3 inputs: a, n and x. The while loop continues processing as long as
there are more elements left to check. At the conclusion of the procedure, 0 is returned if x
is not present, or j is returned such that a[j] = x. We observe that low and high are integer
variables such that each time through the loop either x is found, or low is increased by at
least one, or high is decreased by at least one.

Thus we have two sequences of integers approaching each other, and eventually low becomes
greater than high and causes termination in a finite number of steps if x is not present.

Example:
1) Let us select 14 entries:
-15, -6, 0, 7, 9, 23, 54, 82, 101, 112, 125, 131, 142, 151.

Place them in a[1:14] and simulate the steps Binsearch goes through as it searches for
different values of x. Only the variables low, high and mid need to be traced as we simulate
the algorithm.

We try the following values for x: 151, -14 and 9, for two successful searches and one
unsuccessful search. The table shows the traces of Binsearch for these three searches.

x = 151    low  high  mid
           1    14    7
           8    14    11
           12   14    13
           14   14    14   found

x = -14    low  high  mid
           1    14    7
           1    6     3
           1    2     1
           2    2     2
           2    1     -    not found

x = 9      low  high  mid
           1    14    7
           1    6     3
           4    6     5    found
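
The traces above can be reproduced with a runnable Python version of Binsearch (a sketch; a dummy element at index 0 keeps the indexing 1-based as in the table):

def binsearch(a, x):
    # a[1..n] sorted non-decreasing; returns j with a[j] == x, else 0
    low, high = 1, len(a) - 1          # a[0] is an unused dummy slot
    while low <= high:
        mid = (low + high) // 2
        print(low, high, mid)          # trace, as in the table above
        if x < a[mid]:
            high = mid - 1
        elif x > a[mid]:
            low = mid + 1
        else:
            return mid
    return 0

a = [None, -15, -6, 0, 7, 9, 23, 54, 82, 101, 112, 125, 131, 142, 151]
print(binsearch(a, 151))   # traces 1 14 7 / 8 14 11 / 12 14 13 / 14 14 14, returns 14
print(binsearch(a, -14))   # not found, returns 0
print(binsearch(a, 9))     # returns 5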

5. Write algorithms for merge sort and its complexity


Merge sort is a classic example of divide and conquer: to sort an array, recursively sort
its left and right halves separately and then merge them. The time complexity of merge sort
in the best case, worst case and average case is O(n log n), and the number of comparisons
used is nearly optimal.

The strategy is simple and efficient, but there is no easy way to merge two adjacent sorted
arrays in place: the result must be built up in a separate array. The fundamental operation
in this algorithm is merging two sorted lists. Because the lists are sorted, this can be done
in one pass through the input if the output is put in a third list.

Algorithm MERGESORT(low, high)
// a[low : high] is a global array to be sorted.
{
if (low < high) then
{
mid := [(low + high)/2]; // finds where to split the set
MERGESORT(low, mid); // sort one subset
MERGESORT(mid + 1, high); // sort the other subset
MERGE(low, mid, high); // combine the results
}
}

Algorithm MERGE(low, mid, high)
// a[low : high] is a global array containing two sorted subsets,
// in a[low : mid] and in a[mid + 1 : high].
// The objective is to merge these sorted sets into a single sorted
// set residing in a[low : high]. An auxiliary array b is used.
{
h := low; i := low; j := mid + 1;
while ((h <= mid) and (j <= high)) do
{
if (a[h] <= a[j]) then
{
b[i] := a[h]; h := h + 1;
}
else
{
b[i] := a[j]; j := j + 1;
}
i := i + 1;
}
if (h > mid) then
for k := j to high do
{
b[i] := a[k]; i := i + 1;
}
else
for k := h to mid do
{
b[i] := a[k]; i := i + 1;
}
for k := low to high do
a[k] := b[k];
}

Example:
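A minimal runnable Python version of the same scheme (a sketch; the data values are arbitrary, and the auxiliary list b plays the role of the array b in MERGE):

def merge_sort(a, low, high):
    # Sorts a[low..high] in place, mirroring MERGESORT above
    if low < high:
        mid = (low + high) // 2
        merge_sort(a, low, mid)
        merge_sort(a, mid + 1, high)
        merge(a, low, mid, high)

def merge(a, low, mid, high):
    # Merge the sorted runs a[low..mid] and a[mid+1..high]
    b, h, j = [], low, mid + 1
    while h <= mid and j <= high:
        if a[h] <= a[j]:
            b.append(a[h]); h += 1
        else:
            b.append(a[j]); j += 1
    b.extend(a[h:mid + 1])        # leftover of the left run, if any
    b.extend(a[j:high + 1])       # leftover of the right run, if any
    a[low:high + 1] = b           # copy back, as the final loop in MERGE does

data = [310, 285, 179, 652, 351, 423, 861, 254, 450, 520]
merge_sort(data, 0, len(data) - 1)
print(data)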

Analysis of Merge Sort or its complexity

We will assume that n is a power of 2, so that we always split into even halves; we solve
for the case n = 2^k.
For n = 1 the time to merge sort is constant, which we denote by 1. Otherwise, the time to
merge sort n numbers is equal to the time to do two recursive merge sorts of size n/2, plus
the time to merge, which is linear. The equations say this exactly:

T(1) = 1
T(n) = 2T(n/2) + n

This is a standard recurrence relation, which can be solved several ways. We solve it by
continually substituting the recurrence into its own right-hand side.

Substituting T(n/2) = 2T(n/4) + n/2 into the main equation gives

T(n) = 2(2T(n/4) + n/2) + n = 4T(n/4) + 2n.

Again, substituting T(n/4) = 2T(n/8) + n/4, we see that

T(n) = 4(2T(n/8) + n/4) + 2n = 8T(n/8) + 3n.

Continuing in this manner, we obtain:

T(n) = 2^k · T(n/2^k) + k·n

As n = 2^k, we have k = log2 n; substituting this into the above equation gives
T(n) = n + n·log2 n. Representing this in O notation:

T(n) = O(n log n)

We have assumed that n = 2^k. The analysis can be refined to handle cases when n is not a
power of 2; the answer turns out to be almost identical.

Although merge sort's running time is O(n log n), it is hardly ever used for main memory
sorts. The main problem is that merging two sorted lists requires linear extra memory, and
the additional work spent copying to the temporary array and back, throughout the
algorithm, has the effect of slowing down the sort considerably. The best and worst case
time complexity of merge sort is O(n log n).

6. Explain Quick sort and its Algorithm.

The quick sort algorithm partitions the original array by rearranging it into two groups.
The first group contains those elements less than some arbitrary chosen value taken from
the set, and the second group contains those elements greater than or equal to the chosen
value.

The chosen value is known as the pivot element. Once the array has been rearranged in this
way with respect to the pivot, the very same partitioning is recursively applied to each of
the two subsets. When all the subsets have been partitioned and rearranged, the original
array is sorted.

The function partition() makes use of two pointers ‘i’ and ‘j’ which are moved toward each
other in the following fashion:

Repeatedly increase the pointer ‘i’ until a[i] >= pivot.

Repeatedly decrease the pointer ‘j’ until a[j] <= pivot.


If j > i, interchange a[j] with a[i]

Repeat steps 1, 2 and 3 till the ‘i’ pointer crosses the ‘j’ pointer. Once it does, the
position for the pivot is found and the pivot element is placed at the ‘j’ pointer position.

The program uses a recursive function quicksort(), which sorts all elements in an array ‘a’
between positions ‘low’ and ‘high’.

It terminates when the condition low >= high is satisfied. This condition will be satisfied
only when the array is completely sorted.

Here we choose the first element as the ‘pivot’. So, pivot = x[low]. Now it calls the
partition function to find the proper position j of the element x[low] i.e. pivot. Then we
will have two sub-arrays x[low], x[low+1], . . . .. . . x[j-1] and x[j+1], x[j+2], . . .x[high].

It calls itself recursively to sort the left sub-array


x[low], x[low+1], . . . . .. . x[j-1] between positions low and j-1
(where j is returned by the partition function).

It calls itself recursively to sort the right sub-array


x[j+1], x[j+2], . . . . . .. . . x[high] between positions j+1 and high.

Example

A quick sort first selects a value, which is called the pivot value. Although there are many
different ways to choose the pivot value, we will simply use the first item in the list. The
role of the pivot value is to assist with splitting the list. The actual position where the pivot
value belongs in the final sorted list, commonly called the split point, will be used to divide
the list for subsequent calls to the quick sort.
54 will serve as our first pivot value; it will eventually end up in the position currently
holding 31.

The partition process happens next. It finds the split point and at the same time moves the
other items to the appropriate side of the list, either less than or greater than the pivot value.

Partitioning begins by locating two position markers, call them leftmark and rightmark, at
the beginning and end of the remaining items in the list.

The goal of the partition process is to move items that are on the wrong side with respect to the
pivot value while also converging on the split point.
We begin by incrementing leftmark until we locate a value that is greater than the pivot value. We
then decrement rightmark until we find a value that is less than the pivot value.

At this point we have discovered two items that are out of place with respect to the eventual split
point. For our example, this occurs at 93 and 20.

Now we can exchange these two items and then repeat the process again.

At the point where rightmark becomes less than leftmark, we stop. The position of rightmark
is now the split point. The pivot value can be exchanged with the contents of the split
point, and the pivot value is then in place.

In addition, all the items to the left of the split point are less than the pivot value, and all the items to
the right of the split point are greater than the pivot value. The list can now be divided at the split
point and the quick sort can be invoked recursively on the two halves.
Algorithm

Algorithm QUICKSORT(low, high)
// Sorts the elements a[low], ..., a[high], which reside in the global array a[1 : n], into
// ascending order; a[n + 1] is considered to be defined and must be greater than all
// elements in a[1 : n]: a[n + 1] = +∞.
{
if (low < high) then
{
j := PARTITION(a, low, high + 1); // j is the position of the partitioning element
QUICKSORT(low, j - 1);
QUICKSORT(j + 1, high);
}
}

Algorithm PARTITION(a, m, p)
// a[m] is the partition element
{
v := a[m]; i := m; j := p;
do
{
repeat i := i + 1 until (a[i] >= v); // i moves left to right
repeat j := j - 1 until (a[j] <= v); // j moves right to left
if (i < j) then INTERCHANGE(a, i, j);
} while (i < j);
a[m] := a[j];
a[j] := v; // the partition element belongs at position j
return j;
}
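
A runnable Python equivalent (a sketch that uses the first element as pivot; instead of the +∞ sentinel assumed by PARTITION, the left-to-right scan is bounds-checked explicitly):

def quicksort(a, low, high):
    # Sorts a[low..high] in place; pivot is a[low], as in PARTITION above
    if low < high:
        j = partition(a, low, high)
        quicksort(a, low, j - 1)
        quicksort(a, j + 1, high)

def partition(a, low, high):
    v = a[low]
    i, j = low, high + 1
    while True:
        i += 1
        while i <= high and a[i] < v:   # i moves left to right
            i += 1
        j -= 1
        while a[j] > v:                 # j moves right to left
            j -= 1
        if i >= j:
            break
        a[i], a[j] = a[j], a[i]         # INTERCHANGE(a, i, j)
    a[low], a[j] = a[j], a[low]         # the pivot belongs at position j
    return j

data = [54, 26, 93, 17, 77, 31, 44, 55, 20]   # values from the walkthrough above
quicksort(data, 0, len(data) - 1)
print(data)   # [17, 20, 26, 31, 44, 54, 55, 77, 93]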

Analysis of Quick Sort:

Like merge sort, quick sort is recursive, and hence its analysis requires solving a recurrence
formula.

We will do the analysis for a quick sort, assuming a random pivot (and no cut off for small files).
We will take T (0) = T (1) = 1, as in merge sort.

The running time of quick sort is equal to the running time of the two recursive calls plus the linear
time spent in the partition (The pivot selection takes only constant time).

This gives the basic quick sort relation:

T(N) = T(i) + T(N - i - 1) + cN --- (1)

where i = |S1| is the number of elements in S1.

Worst Case Analysis

The pivot is the smallest element


T(N) = T(N-1) + cN, N > 1
Telescoping:
T(N-1) = T(N-2) + c(N-1)
T(N-2) = T(N-3) + c(N-2)
T(N-3) = T(N-4) + c(N-3)
...
T(2) = T(1) + c·2
Adding all equations (the T(N-1), ..., T(2) terms cancel):
T(N) = T(1) + c(2 + 3 + ... + N) = 1 + c(N(N+1)/2 - 1)

Therefore T(N) = O(N²)

Best Case Analysis

The List is divided equally


T(N) = 2T(N/2) + cN
Divide by N:
T(N)/N = T(N/2)/(N/2) + c
Telescoping:
T(N/2)/(N/2) = T(N/4)/(N/4) + c
T(N/4)/(N/4) = T(N/8)/(N/8) + c
...
T(2)/2 = T(1)/1 + c
Adding all equations:
T(N)/N + T(N/2)/(N/2) + ... + T(2)/2 = T(N/2)/(N/2) + T(N/4)/(N/4) + ... + T(1)/1 + c·log N
After crossing out the equal terms:
T(N)/N = T(1) + c·log N = 1 + c·log N
T(N) = N + cN·log N
Therefore T(N) = O(N log N)
Average case analysis: the pivot is equally likely to land in each position, so we average
equation (1) over all splits, giving T(N) = (2/N)(T(0) + T(1) + ... + T(N-1)) + cN, which
solves to T(N) = O(N log N).
7. Explain Strassen's matrix multiplication.

Strassen's matrix multiplication algorithm (1969) is the most dramatic example of the
divide and conquer technique.

The usual way to multiply two n×n matrices A and B, yielding the result matrix C, is as
follows:

for i := 1 to n do
  for j := 1 to n do
  {
    c[i, j] := 0;
    for k := 1 to n do
      c[i, j] := c[i, j] + a[i, k] * b[k, j];
  }

This algorithm requires n³ scalar multiplications (i.e. multiplications of single numbers)
and n³ scalar additions, and we may ask whether we can improve upon it.
We apply divide and conquer to this problem. Partition each matrix into four (n/2)×(n/2)
submatrices:

| A11 A12 |   | B11 B12 |   | C11 C12 |
| A21 A22 | * | B21 B22 | = | C21 C22 |

Then the Cij can be found by the usual matrix multiplication algorithm:

C11 = A11·B11 + A12·B21
C12 = A11·B12 + A12·B22
C21 = A21·B11 + A22·B21
C22 = A21·B12 + A22·B22

This leads to a divide-and-conquer algorithm which performs the n×n matrix multiplication
by partitioning the matrices into quarters and performing eight (n/2)×(n/2) matrix
multiplications and four (n/2)×(n/2) matrix additions:

T(1) = 1
T(n) = 8T(n/2)

which leads to T(n) = O(n³), where n is a power of 2.


Strassen's insight was to find an alternative method for calculating the Cij, requiring
seven (n/2)×(n/2) matrix multiplications and eighteen (n/2)×(n/2) matrix additions and
subtractions:

P = (A11 + A22)(B11 + B22)
Q = (A21 + A22)B11
R = A11(B12 - B22)
S = A22(B21 - B11)
T = (A11 + A12)B22
U = (A21 - A11)(B11 + B12)
V = (A12 - A22)(B21 + B22)

C11 = P + S - T + V
C12 = R + T
C21 = Q + S
C22 = P + R - Q + U

This method is used recursively to perform the seven (n/2)×(n/2) matrix multiplications;
the recurrence for the number of scalar multiplications performed is T(n) = 7T(n/2),
T(1) = 1, which solves to

T(n) = O(n^(log2 7)) = O(n^2.81).

So Strassen's algorithm is asymptotically more efficient than the standard algorithm. In
practice, the overhead of managing the many small matrices does not pay off until n reaches
the hundreds.
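
A compact Python sketch of Strassen's recursion (assuming n is a power of 2 and using NumPy for the block arithmetic; the cutoff of 2 for switching to ordinary multiplication is illustrative):

import numpy as np

def strassen(A, B):
    # Multiply two n x n matrices, n a power of 2, with 7 recursive products
    n = A.shape[0]
    if n <= 2:                        # small case: ordinary multiplication
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    P = strassen(A11 + A22, B11 + B22)
    Q = strassen(A21 + A22, B11)
    R = strassen(A11, B12 - B22)
    S = strassen(A22, B21 - B11)
    T = strassen(A11 + A12, B22)
    U = strassen(A21 - A11, B11 + B12)
    V = strassen(A12 - A22, B21 + B22)
    C = np.empty_like(A)
    C[:h, :h] = P + S - T + V         # C11
    C[:h, h:] = R + T                 # C12
    C[h:, :h] = Q + S                 # C21
    C[h:, h:] = P + R - Q + U         # C22
    return C

A = np.random.randint(0, 10, (8, 8))
B = np.random.randint(0, 10, (8, 8))
print(np.array_equal(strassen(A, B), A @ B))   # True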
