
MAHARAJA INSTITUTE OF TECHNOLOGY MYSORE

BELAWADI, SRIRANGAPATNA Tq, MANDYA-571477


DEPARTMENT OF COMPUTER SCIENCE & BUSINESS SYSTEM

IA1 Question Bank


Subject Name: ANALYSIS & DESIGN OF ALGORITHMS Subject Code: BCS401
Semester: 4th

1. Discuss the characteristics of an algorithm along with its definition.


Answer:
An algorithm is a sequence of unambiguous instructions for solving a problem, i.e.,
for obtaining a required output for any legitimate input in a finite amount of time.
This definition can be illustrated by a simple diagram (Figure 1.1).

Figure 1.1: The notion of the algorithm

The reference to "instructions" in the definition implies that there is something or


someone capable of understanding and following the instructions given. We call this a
"computer," keeping in mind that before the electronic computer was invented, the
word "computer" meant a human being involved in performing numeric calculations.

As examples illustrating the notion of algorithm, we consider in this section three methods for solving the same problem: computing the greatest common divisor of two integers.
For example, gcd(60, 24) can be computed as follows:
gcd(60, 24) = gcd(24, 12) = gcd(12, 0) = 12.
Here is a more structured description of this algorithm:
Euclid's algorithm for computing gcd(m, n)
Step 1: If n = 0, return the value of m as the answer and stop; otherwise,
proceed to Step 2.
Step 2: Divide m by n and assign the value of the remainder to r.
Step 3: Assign the value of n to m and the value of r to n. Go to Step 1.

Alternatively, we can express the same algorithm in a pseudocode:


ALGORITHM Euclid(m, n)
// Computes gcd(m, n) by Euclid's algorithm
// Input: Two nonnegative, not-both-zero integers m and n
// Output: Greatest common divisor of m and n
while n ≠ 0 do
r ← m mod n
m ← n
n ← r
return m
These examples will help us to illustrate several important points:
1. The non-ambiguity requirement for each step of an algorithm cannot be
compromised.
2. The range of inputs for which an algorithm works has to be specified carefully.
3. The same algorithm can be represented in several different ways.
4. Several algorithms for solving the same problem may exist.
5. Algorithms for the same problem can be based on very different ideas and can
solve the problem with dramatically different speeds.
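For concreteness, here is a minimal Python rendering of the pseudocode above (the function name euclid_gcd is our choice; any name would do):

def euclid_gcd(m, n):
    # Computes gcd(m, n) by Euclid's algorithm.
    # Input: two nonnegative, not-both-zero integers m and n.
    while n != 0:
        m, n = n, m % n   # r <- m mod n; m <- n; n <- r
    return m

print(euclid_gcd(60, 24))  # prints 12, matching gcd(60, 24) above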

2. Discuss the way to compute Space and Time Complexity of an algorithm.


Answer:
The space complexity of an algorithm is the amount of memory it needs to run to
completion. The time complexity of an algorithm is the amount of computer time it
needs to run to completion.
Performance evaluation can be loosely divided into two major phases:
(1) a priori estimates and
(2) a posteriori testing.
Space Complexity
The space requirement S(P) of any algorithm P may therefore be written as S(P) = c + Sp(instance characteristics), where c is a constant.

When analyzing the space complexity of an algorithm, we concentrate solely on estimating Sp(instance characteristics). For any given problem, we need first to determine which instance characteristics to use to measure the space requirements. This is very problem-specific, and we resort to examples to illustrate the various possibilities.

Time Complexity
The time T(P) taken by a program P is the sum of the compile time and the run (or execution) time. The compile time does not depend on the instance characteristics. Also, we may assume that a compiled program will be run several times without recompilation. Consequently, we concern ourselves with just the run time of a program. This run time is denoted by tP(instance characteristics).
If we knew the characteristics of the compiler to be used, we could proceed to determine the number of additions, subtractions, multiplications, divisions, compares, loads, stores, and so on, that would be made by the code for P. So, we could obtain an expression for tP(n) of the form

tP(n) = ca·ADD(n) + cs·SUB(n) + cm·MUL(n) + cd·DIV(n) + ...

where n denotes the instance characteristics and ca, cs, cm, cd, and so on, respectively, denote the time needed for an addition, subtraction, multiplication, division, and so on, and ADD, SUB, MUL, DIV, and so on, are functions whose values are the numbers of additions, subtractions, multiplications, divisions, and so on, that are performed when the code for P is used on an instance with characteristics n.
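As an illustrative sketch (ours, not from the text), the operation counts of a simple loop can be tallied by inspection; the constants ca and cm below are hypothetical per-operation times:

ca, cm = 1e-9, 4e-9  # hypothetical seconds per addition / multiplication

def dot(x, y):
    # For vectors of length n this loop does n additions and n
    # multiplications, so ADD(n) = MUL(n) = n and tP(n) = ca*n + cm*n.
    s = 0.0
    for xi, yi in zip(x, y):
        s += xi * yi
    return s

n = 1_000_000
print("a priori run-time estimate:", (ca + cm) * n, "seconds")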

3. Give mathematical analysis of Recursive Algorithms with illustration.


Answer:
General Plan for Analyzing Time Efficiency of Recursive Algorithms
1. Decide on a parameter (or parameters) indicating an input's size.
2. Identify the algorithm's basic operation.
3. Check whether the number of times the basic operation is executed can vary on
different inputs of the same size; if it can, the worst -case, average-case, and best-
case efficiencies must be investigated separately.
4. Set up a recurrence relation, with an appropriate initial condition, for the
number of times the basic operation is executed.
5. Solve the recurrence or at least ascertain the order of growth of its solution.

Illustration EXAMPLE: Compute the factorial function F(n) = n! for an arbitrary nonnegative integer n. Since

n! = 1 · ... · (n - 1) · n = (n - 1)! · n for n ≥ 1

and 0! = 1 by definition, we can compute F(n) = F(n - 1) · n with the following recursive algorithm.
ALGORITHM F(n)
//Computes n! recursively
//Input: A nonnegative integer n
//Output: The value of n!
if n = 0
return 1
else
return F(n - 1) * n

For simplicity, we consider n itself as an indicator of this algorithm's input size. The
basic operation of the algorithm is multiplication, whose number of executions we
denote M(n). Since the function F(n) is computed according to the formula:
F(n) = F(n- 1) · n for n > 0,
the number of multiplications M(n) needed to compute it must satisfy the equality

M(n) = M(n - 1) + 1 for n > 0, M(0) = 0.

Indeed, M(n- 1) multiplications are spent to compute F(n- 1), and one more
multiplication is needed to multiply the result by n.
The last equation defines the sequence M (n) that we need to find. This equation
defines M(n) not explicitly, i.e., as a function of n, but implicitly as a function of its
value at another point, namely n - 1. Such equations are called recurrence relations
or, for brevity, recurrences. Recurrence relations play an important role not only in
analysis of algorithms but also in some areas of applied mathematics.
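A short Python sketch of this analysis (the multiplication counter is ours, added only to make M(n) observable):

mult_count = 0

def F(n):
    # Computes n! recursively while counting multiplications.
    global mult_count
    if n == 0:
        return 1
    mult_count += 1            # one multiplication per recursive step
    return F(n - 1) * n

print(F(5), mult_count)        # 120 5 -> confirms M(n) = n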

4. Give mathematical analysis of Non-Recursive matrix Multiplication Algorithm.


Answer:
Given two n-by-n matrices A and B, find the time efficiency of the definition-based
algorithm for computing their product C = AB. By definition, C is an n-by-n matrix
whose elements are computed as the scalar (dot) products of the rows of matrix A
and the columns of matrix B:
C[i, j] = A[i, 0]B[0, j] + · · · + A[i, k]B[k, j] + · · · + A[i, n - 1]B[n - 1, j] for every pair of indices 0 ≤ i, j ≤ n - 1.

ALGORITHM MatrixMultiplication(A[0 .. n- 1, 0 .. n- 1], B[0 .. n- 1, 0 .. n- 1])


//Multiplies two n-by-n matrices by the definition-based algorithm
//Input: Two n-by-n matrices A and B
//Output: Matrix C = AB
for i ← 0 to n - 1 do
for j ← 0 to n - 1 do
C[i, j] ← 0.0
for k ← 0 to n - 1 do
C[i, j] ← C[i, j] + A[i, k] * B[k, j]
return C

We measure an input's size by matrix order n. In the algorithm's innermost loop there are two arithmetical operations, multiplication and addition, that, in principle, can compete for designation as the algorithm's basic operation. We consider multiplication as the algorithm's basic operation.

Let us set up a sum for the total number of multiplications M(n) executed by the
algorithm.

Obviously, there is just one multiplication executed on each repetition of the algorithm's innermost loop, which is governed by the variable k ranging from the lower bound 0 to the upper bound n - 1. Therefore, the number of multiplications made for every pair of specific values of variables i and j is

Σ_{k=0}^{n-1} 1 = n,

and the total number of multiplications M(n) is expressed by the following triple sum:

M(n) = Σ_{i=0}^{n-1} Σ_{j=0}^{n-1} Σ_{k=0}^{n-1} 1

Now we can compute this sum by using formula (S1) and rule (R1) (see above). Starting with the innermost sum Σ_{k=0}^{n-1} 1, which is equal to n (why?), we get

M(n) = Σ_{i=0}^{n-1} Σ_{j=0}^{n-1} Σ_{k=0}^{n-1} 1 = Σ_{i=0}^{n-1} Σ_{j=0}^{n-1} n = Σ_{i=0}^{n-1} n² = n³.

If we now want to estimate the running time of the algorithm on a particular machine, we can do it by the product

T(n) ≈ cm M(n) = cm n³,

where cm is the time of one multiplication on the machine in question. We would get a more accurate estimate if we took into account the time spent on the additions, too:

T(n) ≈ cm M(n) + ca A(n) = cm n³ + ca n³ = (cm + ca) n³,

where ca is the time of one addition. Note that the estimates differ only by their multiplicative constants, not by their order of growth.
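A direct Python transcription of the definition-based algorithm (a sketch for illustration; production code would use a library such as NumPy):

def matrix_multiplication(A, B):
    # Multiplies two n-by-n matrices by the definition-based algorithm.
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):          # innermost loop: n multiplications
                C[i][j] += A[i][k] * B[k][j]
    return C

print(matrix_multiplication([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# [[19.0, 22.0], [43.0, 50.0]]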

5. Design Sequential Search Algorithm. Also analyse the time efficiency.


Answer:
This is a straightforward algorithm that searches for a given item (some search
key K) in a list of n elements by checking successive elements of the list until
either a match with the search key is found or the list is exhausted. Here is the
algorithm's pseudocode, in which, for simplicity, a list is implemented as an array.

ALGORITHM SequentialSearch(A[0 .. n - 1], K)


//Searches for a given value in a given array by sequential search
//Input: An array A[0 .. n - 1] and a search key K
//Output: The index of the first element of A that matches K
// or -1 if there are no matching elements
i ← 0
while i < n and A[i] ≠ K do
i ← i + 1
if i < n
return i
else
return -1

Clearly, the running time of this algorithm can be quite different for the same list size
n. In the worst case, when there are no matching elements or the first matching
element happens to be the last one on the list, the algorithm makes the largest number
of key comparisons among all possible inputs of size n:
Cworst(n) = n.
The worst-case efficiency of an algorithm is its efficiency for the worst-case input of
size n, which is an input (or inputs) of size n for which the algorithm runs the longest
among all possible inputs of that size.

Determining the worst-case efficiency of an algorithm is, in principle, quite straightforward: we analyze the algorithm to see what kind of inputs yield the largest value of the basic operation's count C(n) among all possible inputs of size n and then compute this worst-case value Cworst(n).

The best-case efficiency of an algorithm is its efficiency for the best-case input of size
n, which is an input (or inputs) of size n for which the algorithm runs the fastest
among all possible inputs of that size. For example, for sequential search, best-case
inputs are lists of size n with their first elements equal to a search key; accordingly,
Cbest(n) = 1.

To analyze the algorithm's average-case efficiency, we must make some assumptions about possible inputs of size n. Let us consider again sequential search. The standard assumptions are that (a) the probability of a successful search is equal to p (0 ≤ p ≤ 1) and (b) the probability of the first match occurring in the ith position of the list is the same for every i.

In the case of a successful search, the probability of the first match occurring in the ith position of the list is p/n for every i, and the number of comparisons made by the algorithm in such a situation is obviously i. In the case of an unsuccessful search, the number of comparisons is n with the probability of such a search being (1 - p). Therefore,

Cavg(n) = [1 · p/n + 2 · p/n + ... + i · p/n + ... + n · p/n] + n · (1 - p)
        = (p/n)(1 + 2 + ... + n) + n(1 - p)
        = p(n + 1)/2 + n(1 - p).
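For reference, the algorithm itself in Python (a direct transcription of the pseudocode above):

def sequential_search(A, K):
    # Returns the index of the first element of A that matches K, or -1.
    i = 0
    while i < len(A) and A[i] != K:
        i += 1
    return i if i < len(A) else -1

print(sequential_search([89, 45, 68, 90, 29, 34, 17], 29))  # 4
print(sequential_search([89, 45, 68, 90, 29, 34, 17], 50))  # -1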

6. Design algorithm for sorting using selection sort technique. Also analyze the
time efficiency.
Answer:
We start selection sort by scanning the entire given list to find its smallest element
and exchange it with the first element, putting the smallest element in its final
position in the sorted list.
Then we scan the list, starting with the second element, to find the smallest among
the last n - 1 elements and exchange it with the second element, putting the second
smallest element in its final position.
Generally, on the ith pass through the list, which we number from 0 to n - 2, the algorithm searches for the smallest item among the last n - i elements and swaps it with A[i]:

After n - 1 passes, the list is sorted.


Here is a pseudocode of this algorithm, which, for simplicity, assumes that the list is
implemented as an array.

ALGORITHM SelectionSort(A[0 .. n -1])


//Sorts a given array by selection sort
//Input: An array A[0 .. n - 1] of orderable elements
//Output: Array A[0 .. n - 1] sorted in ascending order
for i ← 0 to n - 2 do
min ← i
for j ← i + 1 to n - 1 do
if A[j] < A[min] min ← j
swap A[i] and A[min]
As an example, the action of the algorithm on the list 89, 45, 68, 90, 29, 34, 17 is
illustrated in Figure below.
The analysis of selection sort is straightforward.
The input's size is given by the number of elements n; the algorithm's basic operation
is the key comparison A[j] < A[min]. The number of times it is executed depends only
on the array's size and is given by the following sum:

C(n) = Σ_{i=0}^{n-2} Σ_{j=i+1}^{n-1} 1

Whether you compute this sum by distributing the summation symbol or by immediately getting the sum of decreasing integers, the answer, of course, must be the same:

C(n) = Σ_{i=0}^{n-2} (n - 1 - i) = (n - 1)n/2.

Thus, selection sort is a Θ(n²) algorithm on all inputs. Note, however, that the number of key swaps is only Θ(n) or, more precisely, n - 1 (one for each repetition of the i loop).
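In Python (a straightforward transcription of the pseudocode):

def selection_sort(A):
    # Sorts list A in place by selection sort.
    n = len(A)
    for i in range(n - 1):
        min_idx = i
        for j in range(i + 1, n):             # find smallest of A[i..n-1]
            if A[j] < A[min_idx]:
                min_idx = j
        A[i], A[min_idx] = A[min_idx], A[i]   # one swap per pass

lst = [89, 45, 68, 90, 29, 34, 17]
selection_sort(lst)
print(lst)  # [17, 29, 34, 45, 68, 89, 90]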

7. Discuss briefly the Big-Oh, Big-Omega and Big-Theta notations.


Answer:
The efficiency analysis framework concentrates on the order of growth of an
algorithm's basic operation count as the principal indicator of the algorithm's
efficiency. To compare and rank such orders of growth, computer scientists use
three notations: O (big oh), Ω (big omega), and Θ (big theta).

O-notation
DEFINITION 1 A function t(n) is said to be in O(g(n)), denoted t(n) Є O(g(n)), if t(n) is bounded above by some constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n0 such that

t(n) ≤ c·g(n) for all n ≥ n0.

The definition is illustrated in Figure below where, for the sake of visual clarity, n is
extended to be a real number.

Figure: Big-oh notation: t(n) Є O(g(n))


As an example, let us formally prove one of the assertions made: 100n + 5 Є O(n²). Indeed,

100n + 5 ≤ 100n + n (for all n ≥ 5) = 101n ≤ 101n².
Thus, as values of the constants c and n0 required by the definition, we can take 101
and 5, respectively.
Note that the definition gives us a lot of freedom in choosing specific values for
constants c and n0. For example, we could also reason that
100n + 5 ≤ l00n + 5n (for all n ≥ 1) = l05n
to complete the proof with c = 105 and n0 = 1.
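A quick numeric sanity check of these constants (ours, purely illustrative; the assertions hold for all n, but we test a finite range):

# Verify 100n + 5 <= 101*n**2 for n >= 5 (c = 101, n0 = 5),
# and 100n + 5 <= 105*n for n >= 1 (c = 105, n0 = 1).
assert all(100*n + 5 <= 101 * n**2 for n in range(5, 10_000))
assert all(100*n + 5 <= 105 * n for n in range(1, 10_000))
print("both bounds hold on the tested range")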

Ω-notation
DEFINITION 2 A function t(n) is said to be in Ω(g(n)), denoted t(n) Є Ω(g(n)), if t(n) is bounded below by some positive constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n0 such that

t(n) ≥ c·g(n) for all n ≥ n0.
The definition is illustrated in Figure below.

Figure: Big-omega notation: t(n) Є Ω(g(n))

Here is an example of the formal proof that n³ Є Ω(n²):

n³ ≥ n² for all n ≥ 0,

i.e., we can select c = 1 and n0 = 0.

Θ-notation
DEFINITION 3 A function t(n) is said to be in Θ(g(n)), denoted t(n) Є Θ(g(n)), if t(n) is bounded both above and below by some positive constant multiples of g(n) for all large n, i.e., if there exist some positive constants c1 and c2 and some nonnegative integer n0 such that

c2·g(n) ≤ t(n) ≤ c1·g(n) for all n ≥ n0.

The definition is illustrated in the below figure.


For example, let us prove that ½n(n - 1) Є Θ(n²). First, we prove the right inequality (the upper bound):

½n(n - 1) = ½n² - ½n ≤ ½n² for all n ≥ 0.

8. Explain the fundamentals of Algorithmic Problem solving with block diagram.


Answer:
A sequence of steps one typically goes through in designing and analyzing an
algorithm (Figure below).
Understanding the Problem
From a practical perspective, the first thing you need to do before designing an
algorithm is to understand completely the problem given. Read the problem's
description carefully and ask questions if you have any doubts about the problem, do a
few small examples by hand, think about special cases, and ask questions again if
needed.
An input to an algorithm specifies an instance of the problem the algorithm solves. It is very important to specify exactly the range of instances the algorithm needs to handle.

Figure: Algorithm design and analysis process

Ascertaining the Capabilities of a Computational Device


Once you completely understand a problem, you need to ascertain the
capabilities of the computational device the algorithm is intended for. The vast majority
of algorithms in use today are still destined to be programmed for a computer closely
resembling the von Neumann machine-a computer architecture outlined by the
prominent Hungarian-American mathematician John von Neumann.

Choosing between Exact and Approximate Problem Solving


The choice here is between solving the problem exactly or approximately. In the former case, an algorithm is called an exact algorithm; in the latter case, an algorithm is called an approximation algorithm. Why would one opt for an approximation algorithm?
 First, there are important problems that simply cannot be solved exactly for
most of their instances; examples include extracting square roots, solving
nonlinear equations, and evaluating definite integrals.
 Second, available algorithms for solving a problem exactly can be unacceptably
slow because of the problem's intrinsic complexity.

Deciding on Appropriate Data Structures


Some algorithms do not demand any ingenuity in representing their inputs. But others
are, in fact, predicated on ingenious data structures.

Many years ago, an influential textbook proclaimed the fundamental importance of both algorithms and data structures for computer programming by its very title: Algorithms + Data Structures = Programs. In the new world of object-oriented programming, data structures remain crucially important for both design and analysis of algorithms.

Algorithm Design Techniques


Now, with all the components of the algorithmic problem solving in place, how do
you design an algorithm to solve a given problem? This is the main question.

What is an algorithm design technique?


An algorithm design technique (or "strategy" or "paradigm") is a general approach to
solving problems algorithmically that is applicable to a variety of problems from
different areas of computing. Learning algorithm design techniques is of utmost
importance for the following reasons. First, they provide guidance for designing
algorithms for new problems, i.e., problems for which there is no known satisfactory
algorithm. Second, algorithms are the cornerstone of computer science. Every
science is interested in classifying its principal subject, and computer science is no
exception.

Methods of Specifying an Algorithm


Once you have designed an algorithm, you need to specify it in some fashion. Natural language and pseudocode are the two options most widely used nowadays for specifying algorithms.

Using a natural language has an obvious appeal; however, the inherent ambiguity of
any natural language makes a succinct and clear description of algorithms
surprisingly difficult.
A pseudocode is a mixture of a natural language and programming-language-like constructs. A pseudocode is usually more precise than a natural language, and its usage often yields more succinct algorithm descriptions.
In the earlier days of computing, the dominant vehicle for specifying algorithms was a
flowchart, a method of expressing an algorithm by a collection of connected
geometric shapes containing descriptions of the algorithm's steps. This
representation technique has proved to be inconvenient for all but very simple
algorithms.

Proving an Algorithm's Correctness


Once an algorithm has been specified, you have to prove its correctness. That is, you
have to prove that the algorithm yields a required result for every legitimate input in a
finite amount of time.
A common technique for proving correctness is to use mathematical induction
because an algorithm's iterations provide a natural sequence of steps needed for
such proofs.
The notion of correctness for approximation algorithms is less straightforward than it
is for exact algorithms.

Analyzing an Algorithm
We usually want our algorithms to possess several qualities. After correctness, by far
the most important is efficiency. In fact, there are two kinds of algorithm efficiency:
time efficiency and space efficiency. Time efficiency indicates how fast the
algorithm runs; space efficiency indicates how much extra memory the algorithm
needs.

Coding an Algorithm
Most algorithms are destined to be ultimately implemented as computer programs.
Programming an algorithm presents both a peril and an opportunity. The peril lies in
the possibility of making the transition from an algorithm to a program either
incorrectly or very inefficiently. Some influential computer scientists strongly
believe that unless the correctness of a computer program is proven with full
mathematical rigor, the program cannot be considered correct.

9. Give mathematical analysis of Non- Recursive Algorithms with illustration.


Answer:
Let us start with a very simple example that demonstrates all the principal steps
typically taken in analyzing such algorithms.

Consider the problem of finding the value of the largest element in a list of n
numbers. For simplicity, we assume that the list is implemented as an array. The
following is a pseudocode of a standard algorithm for solving the problem.

ALGORITHM MaxElement(A[0 .. n -1])


//Determines the value of the largest element in a given array
//Input: An array A[0 .. n - 1] of real numbers
//Output: The value of the largest element in A
maxval ← A[0]
for i ← 1 to n - 1 do
if A[i] > maxval
maxval ← A[i]
return maxval

The obvious measure of an input's size here is the number of elements in the
array, i.e., n. The operations that are going to be executed most often are in the
algorithm's for loop. There are two operations in the loop's body: the comparison
A[i] > maxval and the assignment maxval ← A[i]. Which of these two operations should we consider basic? Since the comparison is executed on each repetition of the loop and the assignment is not, we should consider the comparison to be the algorithm's basic operation.

Let us denote by C(n) the number of times this comparison is executed for input size n. The algorithm makes one comparison on each execution of the loop, which is repeated for each value of the loop's variable i within the bounds 1 and n - 1 (inclusive). Therefore, we get the following sum for C(n):

C(n) = Σ_{i=1}^{n-1} 1

This is an easy sum to compute because it is nothing else but 1 repeated n - 1 times. Thus,

C(n) = n - 1 Є Θ(n).
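In Python (a direct transcription; Python's built-in max does the same job):

def max_element(A):
    # Determines the value of the largest element in a given list.
    maxval = A[0]
    for i in range(1, len(A)):   # n - 1 comparisons in total
        if A[i] > maxval:
            maxval = A[i]
    return maxval

print(max_element([3, 41, 26, 9]))  # 41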

General Plan for Analyzing Time Efficiency of Nonrecursive Algorithms


1. Decide on a parameter (or parameters) indicating an input's size.
2. Identify the algorithm's basic operation.
3. Check whether the number of times the basic operation is executed depends
only on the size of an input. If it also depends on some additional property, the
worst-case, average-case, and, if necessary, best-case efficiencies have to be
investigated separately.
4. Set up a sum expressing the number of times the algorithm's basic operation
is executed.
5. Using standard formulas and rules of sum manipulation, either find a closed-
form formula for the count or, at the very least, establish its order of growth.

10. Apply Strassen’s algorithm for matrix multiplication to multiply the following matrices and justify how Strassen’s algorithm is better.

[ ] [ ]
Answer:
The principal insight of the algorithm lies in the discovery that we can find the
product C of two 2 × 2 matrices A and B with just seven multiplications as opposed to
the eight required by the brute-force algorithm. This is accomplished by using the
following formulas:

c00 = m1 + m4 - m5 + m7        c01 = m3 + m5
c10 = m2 + m4                  c11 = m1 + m3 - m2 + m6

where
m1 = (a00 + a11)(b00 + b11)
m2 = (a10 + a11)b00
m3 = a00(b01 - b11)
m4 = a11(b10 - b00)
m5 = (a00 + a01)b11
m6 = (a10 - a00)(b00 + b01)
m7 = (a01 - a11)(b10 + b11)
“Solve the problem by yourself using the above formula”


Let us evaluate the asymptotic efficiency of this algorithm. If M(n) is the number of
multiplications made by Strassen’s algorithm in multiplying two n × n matrices (where
n is a power of 2), we get the following recurrence relation for it:
M(n) = 7M(n/2) for n > 1, M(1) = 1.
Since n = 2^k,

M(2^k) = 7M(2^(k-1)) = 7[7M(2^(k-2))] = 7² M(2^(k-2)) = ...
       = 7^i M(2^(k-i)) = ... = 7^k M(2^(k-k)) = 7^k.

Since k = log2 n,

M(n) = 7^(log2 n) = n^(log2 7) ≈ n^2.807,

which is smaller than n³ required by the brute-force algorithm.
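A sketch of the seven-product computation for the 2 × 2 case in Python (ours; a full Strassen implementation would apply these formulas recursively to matrix blocks):

def strassen_2x2(A, B):
    # Multiplies two 2x2 matrices with 7 multiplications instead of 8.
    (a00, a01), (a10, a11) = A
    (b00, b01), (b10, b11) = B
    m1 = (a00 + a11) * (b00 + b11)
    m2 = (a10 + a11) * b00
    m3 = a00 * (b01 - b11)
    m4 = a11 * (b10 - b00)
    m5 = (a00 + a01) * b11
    m6 = (a10 - a00) * (b00 + b01)
    m7 = (a01 - a11) * (b10 + b11)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 + m3 - m2 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# [[19, 22], [43, 50]] -- matches the brute-force product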

11. Explain the concept of Divide and Conquer. Write the recursive algorithm
to perform mergesort on the list of elements.
Answer:
Divide-and-conquer algorithms work according to the following general plan:
1. A problem's instance is divided into several smaller instances of the same
problem, ideally of about the same size.
2. The smaller instances are solved (typically recursively, though sometimes a
different algorithm is employed when instances become small enough).
3. If necessary, the solutions obtained for the smaller instances are combined to
get a solution to the original instance.

The divide-and-conquer technique is diagrammed in Figure 2, which depicts the case of dividing a problem into two smaller subproblems, by far the most widely occurring case.
In the most typical case of divide-and-conquer, a problem's instance of size n is divided into two instances of size n/2. More generally, an instance of size n can be divided into b instances of size n/b, with a of them needing to be solved. (Here, a and b are constants; a ≥ 1 and b > 1.)

Figure 2: Divide-and-conquer technique (typical case)


Assuming that size n is a power of b, to simplify our analysis, we get the following
recurrence for the running time T(n):
T(n) = aT(n/b) + f(n), ---------------- (e1)
where f(n) is a function that accounts for the time spent on dividing the problem into
smaller ones and on combining their solutions. (For the summation example, a = b = 2
and f (n) = 1.) Recurrence (e1) is called the general divide-and-conquer recurrence.

Algorithm for Mergesort:


ALGORITHM Mergesort(A[0 .. n - 1])
//Sorts array A[0 .. n - 1] by recursive mergesort
//Input: An array A[0 .. n - 1] of orderable elements
//Output: Array A[0 .. n - 1] sorted in nondecreasing order
if n > 1
copy A[0 .. ⌊n/2⌋ - 1] to B[0 .. ⌊n/2⌋ - 1]
copy A[⌊n/2⌋ .. n - 1] to C[0 .. ⌈n/2⌉ - 1]
Mergesort(B[0 .. ⌊n/2⌋ - 1])
Mergesort(C[0 .. ⌈n/2⌉ - 1])
Merge(B, C, A)

ALGORITHM Merge(B[0 .. p - 1], C[0 .. q - 1], A[0 .. p + q - 1])

//Merges two sorted arrays into one sorted array
//Input: Arrays B[0 .. p - 1] and C[0 .. q - 1] both sorted
//Output: Sorted array A[0 .. p + q - 1] of the elements of B and C
i ← 0; j ← 0; k ← 0
while i < p and j < q do
if B[i] ≤ C[j]
A[k] ← B[i]; i ← i + 1
else
A[k] ← C[j]; j ← j + 1
k ← k + 1
if i = p
copy C[j .. q - 1] to A[k .. p + q - 1]
else
copy B[i .. p - 1] to A[k .. p + q - 1]
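The same algorithm in Python (a sketch mirroring the pseudocode; slicing plays the role of the explicit copies):

def mergesort(A):
    # Sorts list A in place by recursive mergesort.
    if len(A) > 1:
        mid = len(A) // 2
        B, C = A[:mid], A[mid:]      # copy the two halves
        mergesort(B)
        mergesort(C)
        merge(B, C, A)

def merge(B, C, A):
    # Merges sorted lists B and C into A.
    i = j = k = 0
    while i < len(B) and j < len(C):
        if B[i] <= C[j]:
            A[k] = B[i]; i += 1
        else:
            A[k] = C[j]; j += 1
        k += 1
    A[k:] = B[i:] if i < len(B) else C[j:]   # copy the remaining tail

lst = [8, 3, 2, 9, 7, 1, 5, 4]
mergesort(lst)
print(lst)  # [1, 2, 3, 4, 5, 7, 8, 9]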

12. Explain the concept of Decrease and Conquer. With algorithm and analysis
explain insertion sort.
Answer:
The decrease-and-conquer technique is based on exploiting the relationship between a solution to a given instance of a problem and a solution to a smaller instance of the same problem.
Once such a relationship is established, it can be exploited either top down
(recursively) or bottom up (without a recursion).
There are three major variations of decrease-and-conquer:
 decrease by a constant
 decrease by a constant factor
 variable size decrease
Insertion Sort:
Following the technique's idea, we assume that the smaller problem of sorting the array A[0 .. n - 2] has already been solved to give us a sorted array of size n - 1: A[0] ≤ ... ≤ A[n - 2]. How can we take advantage of this solution to the smaller problem to get a solution to the original problem by taking into account the element A[n - 1]?

Obviously, all we need is to find an appropriate position for A[n - 1] among the sorted
elements and insert it there.

There are three reasonable alternatives for doing this.


First, we can scan the sorted sub-array from left to right until the first element
greater than or equal to A[n -1] is encountered and then insert A[n -1] right before that
element. Second, we can scan the sorted sub-array from right to left until the first
element smaller than or equal to A[n -1] is encountered and then insert A[n -1] right
after that element. These two alternatives are essentially equivalent; usually, it is
the second one that is implemented in practice because it is better for sorted and
almost-sorted arrays (why?). The resulting algorithm is called straight insertion sort
or simply insertion sort. The third alternative is to use binary search to find an
appropriate position for A [ n - 1] in the sorted portion of the array. The resulting
algorithm is called binary insertion sort.

Though insertion sort is clearly based on a recursive idea, it is more efficient to implement this algorithm bottom up, i.e., iteratively. As shown in Figure 2.5 below, starting with A[1] and ending with A[n - 1], A[i] is inserted in its appropriate place among the first i elements of the array that have been already sorted.

Figure 2.5: Iteration of insertion sort: A[i] is inserted in its proper position among the
preceding elements previously sorted.

Here is a pseudocode of this algorithm.


ALGORITHM InsertionSort(A[0 .. n - 1])
//Sorts a given array by insertion sort
//Input: An array A[0 .. n - 1] of n orderable elements
//Output: Array A[0 .. n - 1] sorted in non-decreasing order
for i ← 1 to n - 1 do
v ← A[i]
j ← i - 1
while j ≥ 0 and A[j] > v do
A[j + 1] ← A[j]
j ← j - 1
A[j + 1] ← v

The operation of the algorithm is illustrated in Figure 3 below.


The basic operation of the algorithm is the key comparison A[j] > v.
The number of key comparisons in this algorithm obviously depends on the nature of
the input. In the worst case, A[j] > v is executed the largest number of times, i.e., for
every j = i - 1,..., 0. Since v = A[i], it happens if and only if A[j] > A[i] for j = i- 1, ..., 0.
89 |45 68 90 29 34 17
45 89 |68 90 29 34 17
45 68 89 |90 29 34 17
45 68 89 90 |29 34 17
29 45 68 89 90 |34 17
29 34 45 68 89 90 |17
17 29 34 45 68 89 90
Figure 3: Example of sorting with insertion sort. A vertical bar separates the sorted
part of the array from the remaining elements; the element being inserted is in bold.

Since v = A[i], it happens if and only if A[j] > A[i] for j = i - 1, ..., 0. Thus, for the worst-case input, we get A[0] > A[1] (for i = 1), A[1] > A[2] (for i = 2), ..., A[n - 2] > A[n - 1] (for i = n - 1). In other words, the worst-case input is an array of strictly decreasing values. The number of key comparisons for such an input is

Cworst(n) = Σ_{i=1}^{n-1} Σ_{j=0}^{i-1} 1 = Σ_{i=1}^{n-1} i = (n - 1)n/2 Є Θ(n²).

Thus, in the worst case, insertion sort makes exactly the same number of comparisons
as selection sort.
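In Python (a direct transcription of the pseudocode above):

def insertion_sort(A):
    # Sorts list A in place by straight insertion sort.
    for i in range(1, len(A)):
        v = A[i]
        j = i - 1
        while j >= 0 and A[j] > v:   # scan right to left for v's position
            A[j + 1] = A[j]          # shift larger elements one slot right
            j -= 1
        A[j + 1] = v

lst = [89, 45, 68, 90, 29, 34, 17]
insertion_sort(lst)
print(lst)  # [17, 29, 34, 45, 68, 89, 90]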

13. Obtain the topological sort for the graph given below using source removal
method. Explain.

Answer:
“Solve the problem by yourself”
The source removal method is based on a direct implementation of the decrease (by one)-and-conquer technique: repeatedly, identify in a remaining digraph a source, which is a vertex with no incoming edges, and delete it along with all the edges outgoing from it.
If there are several sources, break the tie arbitrarily. If there is none, stop because the problem cannot be solved. The order in which the vertices are deleted yields a solution to the topological sorting problem.
Note that the solution obtained by the source-removal algorithm is different
from the one obtained by the DFS-based algorithm. Both of them are correct, of
course; the topological sorting problem may have several alternative solutions.
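A sketch of the source-removal method in Python, run on a small hypothetical digraph given as adjacency lists (the graph for this question is not reproduced in this copy, so the example below is ours):

from collections import deque

def topological_sort(graph):
    # graph: dict mapping each vertex to the list of vertices it points to.
    indegree = {v: 0 for v in graph}
    for targets in graph.values():
        for w in targets:
            indegree[w] += 1
    sources = deque(v for v in graph if indegree[v] == 0)
    order = []
    while sources:
        v = sources.popleft()        # remove a source ...
        order.append(v)
        for w in graph[v]:           # ... along with its outgoing edges
            indegree[w] -= 1
            if indegree[w] == 0:
                sources.append(w)
    if len(order) < len(graph):
        raise ValueError("digraph has a cycle; no topological order exists")
    return order

g = {'a': ['c'], 'b': ['c'], 'c': ['d', 'e'], 'd': ['e'], 'e': []}
print(topological_sort(g))  # ['a', 'b', 'c', 'd', 'e']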
14. Explain Quick sort algorithm in detail and arrange the following elements using quick sort: {5, 3, 1, 9, 8, 2, 4, 7}.
Answer:
Quicksort is the other important sorting algorithm that is based on the divide-and-conquer approach. Unlike mergesort, which divides its input elements according to their position in the array, quicksort divides them according to their value. A partition is an arrangement of the array's elements so that all the elements to the left of some element A[s] are less than or equal to A[s], and all the elements to the right of A[s] are greater than or equal to it:

Obviously, after a partition is achieved, A[s] will be in its final position in the sorted
array, and we can continue sorting the two subarrays to the left and to the right of
A[s] independently (e.g., by the same method).

Here is pseudocode of quicksort: call Quicksort(A[0..n − 1]) where


ALGORITHM Quicksort(A[l..r])
//Sorts a subarray by quicksort
//Input: Subarray of array A[0..n − 1], defined by its left and right
// indices l and r
//Output: Subarray A[l..r] sorted in nondecreasing order
if l < r
s ←Partition(A[l..r]) //s is a split position
Quicksort(A[l..s − 1])
Quicksort(A[s + 1..r])

As before, we start by selecting a pivot, an element with respect to whose value we are going to divide the subarray. There are several different strategies for selecting a pivot; here, as in the pseudocode below, the subarray's first element serves as the pivot: p = A[l].

Unlike the Lomuto algorithm, we will now scan the subarray from both ends,
comparing the subarray’s elements to the pivot. The left-to-right scan, denoted
below by index pointer i, starts with the second element. Since we want elements
smaller than the pivot to be in the left part of the subarray, this scan skips over
elements that are smaller than the pivot and stops upon encountering the first
element greater than or equal to the pivot. The right-to-left scan, denoted below by
index pointer j, starts with the last element of the subarray. Since we want elements
larger than the pivot to be in the right part of the subarray, this scan skips over
elements that are larger than the pivot and stops on encountering the first element
smaller than or equal to the pivot.

After both scans stop, three situations may arise, depending on whether or not
the scanning indices have crossed. If scanning indices i and j have not crossed, i.e., i <
j, we simply exchange A[i] and A[j] and resume the scans by incrementing i and
decrementing j, respectively:
If the scanning indices have crossed over, i.e., i > j, we will have partitioned the
subarray after exchanging the pivot with A[j]:

Finally, if the scanning indices stop while pointing to the same element, i.e., i = j,
the value they are pointing to must be equal to p. Thus, we have the subarray
partitioned, with the split position s = i = j:

We can combine the last case with the case of crossed-over indices (i > j) by exchanging the pivot with A[j] whenever i ≥ j.
Here is pseudocode implementing this partitioning procedure.
_______________________________________________________________________________
ALGORITHM Partition(A[l..r])
//Partitions a subarray by Hoare’s algorithm, using the first element
// as a pivot
//Input: Subarray of array A[0..n − 1], defined by its left and right
// indices l and r (l<r)
//Output: Partition of A[l..r], with the split position returned as
// this function’s value
p ← A[l]
i ← l; j ← r + 1
repeat
repeat i ← i + 1 until A[i] ≥ p
repeat j ← j - 1 until A[j] ≤ p
swap(A[i], A[j])
until i ≥ j
swap(A[i], A[j]) //undo last swap when i ≥ j
swap(A[l], A[j])
return j
An example of sorting an array by quicksort is given in Figure 4 below.

We start our discussion of quicksort’s efficiency by noting that the number of key comparisons made before a partition is achieved is n + 1 if the scanning indices cross over and n if they coincide (why?). If all the splits happen in the middle of corresponding subarrays, we will have the best case. The number of key comparisons in the best case satisfies the recurrence

Cbest(n) = 2Cbest(n/2) + n for n > 1, Cbest(1) = 0.
Figure 4: Example of quicksort operation. (a) Array’s transformations with pivots
shown in bold. (b) Tree of recursive calls to Quicksort with input values l and r of
subarray bounds and split position s of a partition obtained.

In the worst case, all the splits will be skewed to the extreme: one of the two
subarrays will be empty, and the size of the other will be just 1 less than the size of
the subarray being partitioned. This unfortunate situation will happen, in particular,
for increasing arrays, i.e., for inputs for which the problem is already solved! Indeed, if
A[0..n − 1] is a strictly increasing array and we use A[0] as the pivot, the left-to-
right scan will stop on A[1] while the right-to-left scan will go all the way to reach
A[0], indicating the split at position 0:

So, after making n + 1 comparisons to get to this partition and exchanging the pivot A[0] with itself, the algorithm will be left with the strictly increasing array A[1..n − 1] to sort. This sorting of strictly increasing arrays of diminishing sizes will continue until the last one A[n − 2..n − 1] has been processed. The total number of key comparisons made will be equal to

Cworst(n) = (n + 1) + n + ... + 3 = (n + 1)(n + 2)/2 − 3 Є Θ(n²).
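A Python sketch of Hoare's partitioning and quicksort, applied to the list from the question (restructured slightly so no undo-swap is needed; first element as pivot):

def partition(A, l, r):
    # Partitions A[l..r] by Hoare's scheme with A[l] as the pivot;
    # returns the split position.
    p = A[l]
    i, j = l, r + 1
    while True:
        i += 1
        while i <= r and A[i] < p:   # left-to-right scan
            i += 1
        j -= 1
        while A[j] > p:              # right-to-left scan (stops at A[l] = p)
            j -= 1
        if i >= j:                   # scanning indices crossed
            break
        A[i], A[j] = A[j], A[i]
    A[l], A[j] = A[j], A[l]          # place the pivot at the split position
    return j

def quicksort(A, l=0, r=None):
    if r is None:
        r = len(A) - 1
    if l < r:
        s = partition(A, l, r)       # s is a split position
        quicksort(A, l, s - 1)
        quicksort(A, s + 1, r)

lst = [5, 3, 1, 9, 8, 2, 4, 7]
quicksort(lst)
print(lst)  # [1, 2, 3, 4, 5, 7, 8, 9]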
15. With algorithm and analysis explain finding of binary tree height.
Answer:
A binary tree T is defined as a finite set of nodes that is either empty or consists of a
root and two disjoint binary trees TL and TR called, respectively, the left and right
subtree of the root. We usually think of a binary tree as a special case of an ordered
tree (Figure 2.14).

Figure 2.14: Standard representation of a binary tree.

Since the definition itself divides a binary tree into two smaller structures of the
same type, the left subtree and the right subtree, many problems about binary trees
can be solved by applying the divide-and-conquer technique. As an example, let us
consider a recursive algorithm for computing the height of a binary tree.

The height is defined as the length of the longest path from the root to a leaf.
Hence, it can be computed as the maximum of the heights of the root’s left and
right subtrees plus 1. (We have to add 1 to account for the extra level of the root.) Also
note that it is convenient to define the height of the empty tree as −1.
Thus, we have the following recursive algorithm.

ALGORITHM Height(T)
//Computes recursively the height of a binary tree
//Input: A binary tree T
//Output: The height of T
if T = ∅
return −1
else
return max{ Height(Tleft), Height(Tright)} + 1

We measure the problem’s instance size by the number of nodes n(T) in a given binary
tree T. Obviously, the number of comparisons made to compute the maximum of
two numbers and the number of additions A(n(T)) made by the algorithm are the same.
We have the following recurrence relation for A(n(T )):
A(n(T )) = A(n(Tleft)) + A(n(Tright)) + 1 for n(T ) > 0,
A(0) = 0.

The number of comparisons to check whether the tree is empty is

C(n) = n + x = 2n + 1,

where x = n + 1 is the number of external (empty) nodes of the extended tree, and the number of additions is

A(n) = n.
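A Python sketch of the algorithm using a minimal node class (the class definition is ours; None represents the empty tree):

class Node:
    def __init__(self, left=None, right=None):
        self.left, self.right = left, right

def height(t):
    # Computes recursively the height of a binary tree;
    # the height of the empty tree is defined as -1.
    if t is None:
        return -1
    return max(height(t.left), height(t.right)) + 1

# A three-level tree: root -> left child -> left grandchild.
root = Node(left=Node(left=Node()))
print(height(root))  # 2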

16. With example explain Knapsack problem. (Exhaustive Search Approach).


Answer:
Given n items of known weights w1, w2, ..., wn and values v1, v2,... , vn, and a
knapsack of capacity W, find the most valuable subset of the items that fit into the
knapsack. If you do not like the idea of putting yourself in the shoes of a thief who
wants to steal the most valuable loot that fits into his knapsack, think about a
transport plane that has to deliver the most valuable set of items to a remote location
without exceeding the plane's capacity. Figure 5a presents a small instance of the
knapsack problem.

Figure 5: (a) Instance of the knapsack problem. (b) Its solution by exhaustive search.
(The information about the optimal selection is in bold.)
The exhaustive-search approach to this problem leads to generating all the subsets of the set of n items given, computing the total weight of each subset to identify feasible subsets (i.e., the ones with the total weight not exceeding the knapsack's capacity), and finding a subset of the largest value among them. As an example, the solution to the instance of Figure 5a is given in Figure 5b. Since the number of subsets of an n-element set is 2^n, the exhaustive search leads to an Ω(2^n) algorithm no matter how efficiently individual subsets are generated.
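A sketch of the exhaustive search in Python (the weights, values, and capacity below are a small instance of our own choosing, since Figure 5 is not reproduced in this copy):

from itertools import combinations

def knapsack_exhaustive(weights, values, W):
    # Tries all 2^n subsets and keeps the most valuable feasible one.
    n = len(weights)
    best_value, best_subset = 0, ()
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            w = sum(weights[i] for i in subset)
            v = sum(values[i] for i in subset)
            if w <= W and v > best_value:    # feasible and better
                best_value, best_subset = v, subset
    return best_value, best_subset

# Example instance: 4 items, knapsack capacity 10.
print(knapsack_exhaustive([7, 3, 4, 5], [42, 12, 40, 25], 10))
# (65, (2, 3)) -> the items with weights 4 and 5, total value 65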
